Big hand’s on 120/little hand’s on E.
– “AKA Driver”, They Might Be Giants
In one of the sessions I attended at CAST this year, there was a discussion of how testers’ roles have evolved on software projects. It was suggested that – in the days of Lean Startup and its emphasis on rapid delivery – it might not make sense to bring a tester on board before a product has been in front of some customers and proven its viability. Testing resources, the argument goes, might be wasted on prototypes that never become real products; testers should instead focus on making sure that existing products maintain their value.
On the one hand, that makes a certain kind of sense. On the other hand, NO NO NO. STOP.
Let’s talk about two terms: Testability and Protoduction.
Testability is the degree of difficulty of testing a system. This is determined by both aspects of the system under test and its development approach.
Higher testability: more better tests, same cost.
Lower testability: fewer weaker tests, same cost.
—Robert V Binder, “Software Testability Pt 1: What is it?”
When we talk about the testability of software, we are talking about the rapidity with which testers can discover new, desired pieces of information about that software. Binder’s brief definition above (he goes into much greater detail; go read his articles on the topic!) describes it as an attribute of the software; I prefer to think of it as a feature of the software. Testability is a customer-facing feature! The customers are internal customers. If quality is value to some person that matters, people that matter are going to find testability valuable.
An Evil Example
Your Evil Software Team is doing mobile development, and you have a new idea for an iOS app: it lets you take a picture of a person, then uses facial recognition and references an Evil Cloud Database* to determine (a) whether that person is a superspy and (b) if they are, what agency they work for and any known recent missions they’ve participated in.
*(probably in Azure. Ha-CHA!)
Now, you’re not sure that supervillains are even gonna want this thing, right? You want to apply Lean Startup principles and get a Minimum Viable Product out the door, in front of some supervillain faces, in as little time as possible. If it proves to be a good idea, *then* you start eating your vegetables and doing things the right way. Maybe TDD can wait; maybe integration testing can wait. Acceptance tests? Who needs those – we just want to see whether this thing is worth our trouble.
One of two things can happen to this project. The worst-case scenario: the idea was bad and nobody wants the thing. Throw it away and think of something else. OK, at least we didn’t waste our time. The best-case scenario: the idea is great! Customers love it and start using it right away. What do you do then?
Well, obviously you throw away all that shitty code and start from scratch! Of course! It was just a prototype. 😀 THAT WILL NEVER, EVER, HAPPEN. You are never throwing that code away. You are going to live with it from now on. Forever. Your production code is now protoduction code: code that pisses you off every time you look at it.
How Testers Can Help
Having testers on the project can help, both in the worst case and in the best case, and here’s why:
Testability leads to modularity, and modularity leads to flexibility.
Consider again our Evil Mobile Spy Detector app. As a tester on this project, the first thing I would want to do is provide a known spy face and a known not-spy face to the system and make sure that piece is working. That seems like the single most interesting piece from a customer perspective.
Thinking about it, that’s really a function of the cloud database model. You’ve got to provide some kind of serialized facial recognition data that would be coming from the facial recognition doodad (I’m a tester so I can use the term “doodad”) and hand it to the cloud API. The API should return Yes to the first, No to the second.
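To make that concrete, here’s a minimal sketch of that first check in Python. Everything in it is hypothetical – `SpyLookupClient`, its `lookup()` method, and the serialized face fixtures are names invented for this example, with an in-memory stub standing in for the real cloud API so the sketch is self-contained:

```python
class SpyLookupClient:
    """Stand-in for the Evil Cloud Database API (hypothetical)."""

    def __init__(self, known_spies):
        # In the real app this would be an HTTP client; here an in-memory
        # set of serialized face signatures keeps the sketch runnable.
        self._known_spies = set(known_spies)

    def lookup(self, face_signature):
        """Return True if the serialized face data matches a known spy."""
        return face_signature in self._known_spies


# Made-up fixtures: serialized face data for a known spy and a bystander.
KNOWN_SPY_FACE = "face:james-bond"
BYSTANDER_FACE = "face:innocent-passerby"

client = SpyLookupClient(known_spies=[KNOWN_SPY_FACE])

# The API should return Yes to the first, No to the second.
assert client.lookup(KNOWN_SPY_FACE) is True
assert client.lookup(BYSTANDER_FACE) is False
```

The point isn’t the stub itself; it’s that the check talks to the spy-detection piece directly, with no camera and no facial recognition in the loop.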
Here we’ve done two really important things: the first is that we’ve started by checking our most important assumption about the product (that it can tell a do-gooder from a bystander), and the second is that we’ve proven separation between the picture-taking tech, the facial recognition tech, and the spy detection tech. We proceed in this way, proving that each of those gadgets works on its own, and also together. If you can’t test the components in isolation, you work to make that possible. You make the product more testable.
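One way to sketch that separation: put each gadget behind a small interface, so any one of them can be exercised – or faked – on its own. The three protocols and the fakes below are all hypothetical names for this example:

```python
from typing import Protocol


class Camera(Protocol):
    def take_picture(self) -> bytes: ...

class FaceRecognizer(Protocol):
    def extract_signature(self, image: bytes) -> str: ...

class SpyDatabase(Protocol):
    def lookup(self, signature: str) -> bool: ...


def detect_spy(camera: Camera, recognizer: FaceRecognizer, db: SpyDatabase) -> bool:
    """Wire the three gadgets together; each can be swapped for a fake."""
    return db.lookup(recognizer.extract_signature(camera.take_picture()))


# Fakes let us check the wiring with no phone, no ML model, no cloud.
class FakeCamera:
    def take_picture(self) -> bytes:
        return b"jpeg-bytes"

class FakeRecognizer:
    def extract_signature(self, image: bytes) -> str:
        return "face:james-bond"

class FakeDatabase:
    def lookup(self, signature: str) -> bool:
        return signature == "face:james-bond"


assert detect_spy(FakeCamera(), FakeRecognizer(), FakeDatabase()) is True
```

If `detect_spy` could only be tested by pointing a real phone at a real spy, the components would be coupled; the fact that fakes slot in cleanly is the testability we’re after.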
This is a huge win in both the worst case and the best case.
In the best case – where you want to keep the protoduction software but replace it with better-designed production software – you’ve got a suite of (hopefully automated) checks to run against each of the components. Those checks can prove that the new, sweet replacement component can work flawlessly with the other two components *before* trying to integrate them, making it much, much easier to move to a better design.
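One way to make that swap cheap is to write the component checks as a contract that any implementation must pass, then run the same suite against the protoduction component and its replacement before integrating. Again, every name here is hypothetical:

```python
def spy_database_contract(db):
    """Checks that any spy-database implementation must pass (made-up fixtures)."""
    assert db.lookup("face:james-bond") is True
    assert db.lookup("face:innocent-passerby") is False


class ProtoductionDatabase:
    """The quick-and-dirty original: a hard-coded table."""

    def lookup(self, signature):
        return {"face:james-bond": True}.get(signature, False)


class ShinyNewDatabase:
    """The better-designed replacement: same behavior, cleaner internals."""

    def __init__(self):
        self._spies = frozenset({"face:james-bond"})

    def lookup(self, signature):
        return signature in self._spies


# Same contract, both implementations; now it's safe to swap one for the other.
spy_database_contract(ProtoductionDatabase())
spy_database_contract(ShinyNewDatabase())
```

The contract proves the replacement behaves like the original *before* integration, which is exactly what makes the move from protoduction to production safe.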
In the worst case – the supervillains don’t want it – you’ve got well-tested components that are probably still valuable IP for your business. Lex Luthor and Dr. Octopus agree: “I don’t need cell phones because I use robot spiders to take pictures of people.” OK, that’s good feedback that kills this product BUT: we can still sell them the facial recognition software or the cloud database service, or both! If we didn’t design testability into the product, it’s likely that the three components would be coupled together; reusing those components could be difficult or impossible.
It can be tempting to leave testers off of a team when you’re trying to start out fast, with a prototype. Don’t do it. Find a tester who can help you make sure that your product is testable. Proving that testability mitigates the damage either way: if the product fails, you can reuse its well-tested components; if it succeeds, you can safely refactor your protoduction code into real production code.