The SpecFlow Chronicles – Volume 2: In Defense of ATDD

Last week I attended an excellent Rapid Software Testing training taught by Mr. Paul Holland. It was really enlightening, and now I feel like an ass for having said that there isn’t a testing body of knowledge through which testers can continually improve their testing skills.

I say all kinds of stupid stuff that I end up taking back; that’s why this blog isn’t subtitled “Knowing how to test software in the 21st century.”  😀

I’ve spent the past week pondering the huge amount of info from the course; there is a TON I would love to discuss with other testers, and I hope to do that here in the twitterblogoverse over the coming weeks and months.

But first, today, I want to talk about something Paul Holland mentioned in passing that wasn’t directly related to the RST training (it came up in a discussion of UI test automation): “ATDD is garbage.”

The first thing that popped in my head was “Oh SNAP, I think I feel a blog post coming on!”  

What is ATDD?

Acceptance Test Driven Development (ATDD) is a practice in which the whole team collaboratively discusses acceptance criteria, with examples, and then distills them into a set of concrete acceptance tests before development begins. It’s the best way I know to ensure that we all have the same shared understanding of what it is we’re actually building. It’s also the best way I know to ensure we have a shared definition of Done. – Elisabeth Hendrickson

Let’s dig into that workflow a little by drawing a picture:

[Diagram: the ATDD workflow]

Acknowledgements to Jeff Morgan, who whiteboarded this out for me and my testing buddies at TechSmith.

This is what I’m referring to as the ATDD workflow:

  1. A card is pulled.
  2. Acceptance criteria are defined collaboratively by biz, test, and dev.
  3. The acceptance criteria are satisfied through development work:
    • A developer works on making the feature satisfy the acceptance criteria
    • A tester (or test automator) prepares the test automation so it’s ready to integrate with the feature (a sketch of what that can look like follows this list)
  4. The tester and the developer come together when the feature is nearing completion, and finish the test automation that exercises the acceptance criteria.
  5. The feature is now “done.” (According to the ATDD workflow.)
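
To make steps 3 and 4 concrete: in SpecFlow terms, “preparing test automation to be ready for integration” can be as simple as drafting a binding against the agreed criteria and leaving it pending until the feature exists. This is only a sketch; the step wording and class name are hypothetical, and `ScenarioContext.Current.Pending()` is just SpecFlow’s way of flagging a step as not yet implemented.

```csharp
using TechTalk.SpecFlow;

[Binding]
public class OrderSubmissionSteps
{
    // Drafted from the agreed acceptance criteria while the feature is
    // still in development (workflow step 3); the step text mirrors the
    // team's Gherkin, and the body is a placeholder.
    [When(@"the customer submits an order for (\d+) items")]
    public void WhenTheCustomerSubmitsAnOrder(int itemCount)
    {
        // Pending until dev and test wire it up to the finished
        // feature in workflow step 4.
        ScenarioContext.Current.Pending();
    }
}
```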

I asked Paul Holland about his statement that ATDD is garbage, and from his perspective I now understand why he said it. It all depends on what “done” means in your context. Apparently there are organizations creating software with this workflow where, when the automation and the feature cross that line hand in hand, the feature is considered “accepted” and “done,” and by that they mean it is fit for purpose and thoroughly tested. I don’t agree with that, but I am still an ATDD advocate.

Let’s talk about why.

ATDD helps your team understand the scope of a task

When you’re working in a flow-based, light-specification shop like mine, it can be very hard to define the scope of a task. When the requirements are vague (“improve usage data reporting”), it’s very difficult for everybody to understand what is supposed to be delivered. Are we talking about a total overhaul or a tiny incremental improvement?

ATDD helps solve this problem by getting everybody together to define acceptance criteria in terms of specific examples. One way (my favorite way) to capture these criteria is the Gherkin format, as I described in my first post about BDD tools.
[Image: robotSharks, a Gherkin scenario outline pairing counts of spies with counts of dispatched sharks]
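
The original image hasn’t survived, so here’s my reconstruction of roughly what that feature file looked like. The exact step wording is my own, but the numbers line up with the discussion below (the 100-spy row is my guess at how the 20-shark cap was written down):

```gherkin
Feature: Robot shark dispatch

Scenario Outline: Sharks are dispatched when spies enter the submarine bay
  Given the submarine bay is being monitored
  When <spies> spies enter the submarine bay
  Then <sharks> sharks are dispatched

  Examples:
    | spies | sharks |
    | 1     | 1      |
    | 5     | 2      |
    | 10    | 3      |
    | 15    | 4      |
    | 100   | 20     |
```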

In this example, we can see that when spies enter the submarine bay, at least one shark is dispatched. If 5 come in, 2 sharks are dispatched, and the count climbs in steps of 5 spies until we hit 20 sharks; at that point, come on, how many sharks are we supposed to be keeping around? 20 is the max, and the examples illustrate it. This is exactly the kind of thing that gets sussed out during these collaborative meetings.

ATDD gives you a set of reliable facts when the product is ready to be tested

This is where the “done” aspect of acceptance criteria becomes a matter of concern. As I mentioned in an earlier post, an automated test suite is a valuable set of facts about the product. The facts are useful, but they are silent on the feature’s fitness for purpose. Are 2 sharks enough for 5 spies? How about 2 sharks for 9 spies? We agreed that this is enough, but whether it’s really enough is something that still needs to be tested. And I mean really tested, Mr. Bond. >:|

It’s great having those reliable, checked facts available, though. It’s worth a lot. When you’ve got 10 spies in your pool, you can be confident that there are 3 sharks in there with them, because the lights on the tests are green.

ATDD is your best opportunity to introduce test automation

I’ve only been pushing ATDD for a couple of months, but I’ve already seen this happen numerous times: the acceptance criteria cannot be automated because the feature lacks the necessary hooks. Then… voilà! The hooks are created to make that automation possible.

Here’s a hard conversation: “I know you’re working on something else… but I can’t automate this regression test because the testing hooks aren’t there.” How do you convince someone that adding automation hooks to old features is more important than adding new features? That’s a tough one.

If there’s agreement that part of the work of finishing a feature is that those acceptance criteria will be supported by the automated checks, then that conversation is easy: wiring up those criteria is part of the job.
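
Here’s a sketch of what that wiring can look like in SpecFlow. Every name below is hypothetical; `SubmarineBayTestHarness` stands in for whatever seam your product actually exposes:

```csharp
using System;
using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class SharkDispatchSteps
{
    private readonly SubmarineBayTestHarness _bay = new SubmarineBayTestHarness();

    [When(@"(\d+) spies enter the submarine bay")]
    public void WhenSpiesEnterTheSubmarineBay(int spies)
    {
        _bay.SimulateSpyDetection(spies);
    }

    [Then(@"(\d+) sharks are dispatched")]
    public void ThenSharksAreDispatched(int expectedSharks)
    {
        Assert.AreEqual(expectedSharks, _bay.DispatchedSharkCount);
    }
}

// Stand-in for the product's real automation hook (entirely hypothetical):
// a seam that lets the test feed the spy detector and observe dispatches.
public class SubmarineBayTestHarness
{
    public int DispatchedSharkCount { get; private set; }

    public void SimulateSpyDetection(int spies)
    {
        // Hypothetical rule matching the examples: one shark, plus one
        // more per 5 spies, capped at 20.
        DispatchedSharkCount = Math.Min(20, 1 + spies / 5);
    }
}
```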

The artifacts of ATDD create a more testable product

This follows on from the idea that ATDD helps you introduce test automation. Have another look at the shark example. As a tester, what are you thinking? I know what you’re thinking… What happens if there are negative spies? What about no spies? What about somebody who’s not a spy? GOOD THINKING. Now we’re really testing this shark feature.

Here’s the great news: because the testing hooks were put into the pool, we can just try it. Run that automated test with 0 spies instead of 1. What happens? What if you provide no value (the spy detector has been disabled)? What if you provide negative values (spies have hacked the spy detector)? It is easy to throw any numeric input you can imagine into this test harness with no coding ability whatsoever. (If you want to try the non-spy example, you’ll have to either code it in or get someone to help you code it in.)
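
Concretely, “throwing numeric input at the harness” is nothing more than adding rows to the examples table. Something like this, where the expected shark counts are my guesses and the whole point is to run the rows and start a conversation about the answers:

```gherkin
  Examples: Edge cases added by a curious tester
    | spies | sharks |
    | 0     | 0      |
    | -1    | 0      |
    | 9999  | 20     |
```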

Something I’m coming to realize as I study the writings of all you testing experts out there: the automated checking is useful, but what’s *really* nice is the additional power that it gives to a tester wielding those tools. ATDD creates a usable interface that’s easy for testers to exploit in an exploratory way.

To sum up: I personally would never say that a product that merely meets all of its acceptance criteria is ready for release; if that’s your attitude, I wish you luck. Nonetheless, ATDD confers numerous benefits on a software project, and I believe it’s a net positive when it’s used as a means to get those benefits.
