Automated tests: Just the facts

“We’re looking to make the move from manual to automated testing.”

Let me start with a quick bit of autobiography: I got started in the testing profession around 2006. I was working in an insurance company’s tech support department as a 2nd-level support guy and admin of their incident management software. I had been approached a few times about moving into other departments: I declined a move into the development area because everybody there was miserable writing COBOL against a 30-year-old mainframe, and I declined to be moved into the internal IT services division “because I didn’t want to spend the rest of my life figuring out what’s going wrong on someone’s computer.”* My aim was to move into the insurance industry’s business side.

It was not to be. I was approached by the CIO of the company and told that there was an opportunity to become a “business analyst” and that that person would be in charge of testing a new web product. I was informed that I was a good fit for the position because of my admin background (which involved a lot of scripting) and that they were “looking to make a move from manual to automated testing.”* This is what we call “being voluntold.” I had better volunteer for that position, or else. Okay. I had just become the insurance company’s first test engineer.

(Though neither they nor I knew anything about the discipline.)

I bring it up because I am (happily) now in an organization that is much more mature with regard to software development than that insurance company was… yet I still find myself involved in the exact same project I was working on my first day as a tester. We’re looking to make the move from manual to automated testing!

Thankfully, though, I have learned a couple of things in the past 7 years (though not as much as I intend to learn in the next 7). My short-term plans for this blog involve a lot of discussion about automated tests. Make no mistake: I am an automated testing dork. Writing test automation is my happy fun place; I get home from work and then I remote in from home and automate tests from my kitchen table. I’m excited about the techniques, I’m excited about the tools, and I want you to be excited about them too!

But nonetheless I want to state, up front, a warning:

There’s no such thing as a move from manual to automated testing

What is testing, anyways? As I mentioned before, I view testing as software journalism. Let’s use that thought to guide us when we think about test automation. Where does technology figure in, in a journalistic organization? Right away, I’m struck by the many things that reporters do that technology cannot do: Technology cannot make decisions about what’s important to an audience. Technology cannot tailor a message so that it’s more engaging. Technology cannot look at a universe of possible leads and guess correctly which of those leads is going to become an important story. None of that stuff.

But what can technology do for a news organization? Well, for one thing, technology usually provides the medium within which the news is transmitted, but let’s ignore that for now. In general, technology can provide a bunch of dumb, stupid facts. Those facts can be analyzed by an expert and *then* they can become an important story to an audience.
FACT: “The temperature is 70 degrees and sunny.”
FACT: “The Dow Jones Industrial Average is 15101.25.”

Both of those facts are interesting to me because I know what to do with that information.

FACT: “1 US dollar equals 0.76 euros.”
USA! USA! That’s good, right? I have no idea; I’m not an economist or a banker. I need somebody to tell me whether this information affects me at all! And that’s the point. The technology doesn’t know whether I care about a piece of information. It just gives me information; I have to *use* that information before it’s valuable.

An automated test suite is just a list of facts

Calling automated tests “tests” at all is problematic. It makes some intuitive sense because, at least if we’re talking about UI or integration tests, an automated test is usually a test that was originally performed by a human being and is now being performed by an automaton. But is that really what’s happening? Let’s take a real-world case: yesterday I tested a webpage that allows a user to enter a username and password. I played with it for a little while, then gave a bunch of feedback to the developer, we had a long discussion about which permissions would be granted to different types of users, and then I wrote 6 automated tests that verify that, given particular inputs, particular success messages or errors are presented to the user. Two of those tests found bugs! Yay automated testing!

Now, having written those 6 tests, have I automated my testing? NO WAY! Does this represent a “move from manual to automated testing?” NOT AT ALL. The playing around, the feedback, the long discussion, the creation of the test automation… *all of that* is the testing. Those 6 “tests” are just facts. When the tests are re-run, we will know that, given 6 different inputs to those 2 fields, we get 6 specific messages. Nothing more than that.
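To make that concrete, here’s a minimal sketch in Python of what those 6 “facts” look like as code. Everything here is invented for illustration — `login_message` is a hypothetical stand-in for the real page’s response logic, and the usernames, passwords, and messages are made up — but the shape is the point: a table of input/output pairs, each one a dumb, stupid fact that a machine can re-verify on every build.

```python
def login_message(username, password):
    """Hypothetical stand-in for the login page's response logic."""
    if not username or not password:
        return "Username and password are required."
    if username == "admin" and password == "hunter2":
        return "Welcome, admin!"
    if username == "admin":
        return "Incorrect password."
    return "Unknown user."

# Each tuple is one recorded "fact": given these inputs, this message appears.
CASES = [
    (("admin", "hunter2"), "Welcome, admin!"),
    (("admin", "wrong"),   "Incorrect password."),
    (("guest", "hunter2"), "Unknown user."),
    (("", "hunter2"),      "Username and password are required."),
    (("admin", ""),        "Username and password are required."),
    (("", ""),             "Username and password are required."),
]

def run_checks():
    """Re-verify every fact; return (username, password, passed) per case."""
    results = []
    for (user, pw), expected in CASES:
        actual = login_message(user, pw)
        results.append((user, pw, actual == expected))
    return results
```

Nothing in that table decides whether the product is good, or whether those were the right 6 cases to record — that was the playing around and the long discussion. The table just remembers the facts.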

And nothing less than that, too! Here’s why, after 7 years, I’m still excited about test automation: as a member of my team, I want to be able to say with confidence that my product is ready to hand to customers, every day of the week. Ship it! Our product is sweet, go forth and sell it! But as a tester, I get an installer exe and I am possessed by the spirit of David Hume. Yeah, the last build was ready to ship, and we only changed one string, so this build is also… ready… to ship. OK, I GOTTA INSTALL IT AND CHECK. How do you *really* know that the sun is going to rise tomorrow? Well, David Hume couldn’t know, and neither can we. But I can use some automation to look at the other side of the world and see the sun, and some other automation to verify that the world is, in fact, turning, and then I can tell you with confidence that the sun is out there and the world is turning, and we can all be more confident than we were.

By the same token, 3 years ago I’d get an installer exe and know nothing at all about what would happen if I installed and ran it. Each new installer could install Zaxxon on my server for all I knew. I’d have to check it myself. Now the automation suite gives me a long list of facts about the installer, up to and including what happens when I enter those 6 input combinations on that new page. From those facts, I can make inferences about whether the software is ready to go out and delight customers.

So that’s my short warning about test automation: automated tests aren’t automated replacements for manual tests. They’re just facts. Those facts can be quite valuable if you know what to do with them, but they are just facts.

