The Confidence Thing

Is software testing all about looking for bugs?

In one of my first blog posts, I stated that I view software testing as a journalistic activity. As a software tester, I aim to get reliable, timely facts to the people that need them. Bugs are an interesting story, but they’re only part of the overall picture of software quality.

There’s been a lot of chat over the years regarding the term “Quality Assurance” and what it implies. Organizations hire people like myself to work in a role they call Quality Assurance… what is it that they’re paying us for? What do they want us to do for them?

In a post about this topic, blogger and test manager Wade Wachs has this to say:

…testing is done to build confidence in a product. At the end of testing, all wrapped up in appropriate safety language and carefully crafted words is a report about the level of confidence in a product, or at the very least information that is meant to affect some stake-holder’s confidence in a product.

Yep yep. I agree. Ultimately, organizations hire testers because they hope to be more confident that their products will be fit for some purpose, and that they create the value they were meant to create. Those things are described generally as “software quality,” and “quality assurance” is a procedure by which an organization gains confidence that it is not shoveling garbage onto its customers.

So that’s that. Wade’s right, I agree with him, let’s all nod to each other and get back to work… wait, what?

Michael Bolton:

It is not the job of testing to build confidence in the product. Confidence is a relationship between the product and some stakeholder. It is much more the job of testing to identify problems in the product—and in people’s perceptions of the product—that are based on or that would lead to unwarranted confidence.

Keith Klain:

So what is the problem with making confidence the mission of testing? Shouldn’t we want to have confidence in our products? Isn’t it a good thing to have confidence in our testing? Of course we want confidence in our products and testing, but if you make gaining that confidence your mission, in my opinion, you are intentionally adding confusion to the decision-making process.

Yikes. OK, this isn’t as settled as I thought. Michael Bolton and Keith Klain are pretty sharp guys; their words are worth considering. What’s at issue here? What’s the source of their disagreement with the idea of testing as a confidence-building activity?

What do we mean by confidence?

In Klain’s post, he states “By definition, confidence is the quality or state of being certain,” and goes on to say (paraphrased) that creating a feeling of certainty is not the mission of a tester, and failing to recognize that will result in confirmation bias as testers try to demonstrate that the product works instead of aiming to detect whether it can fail.

That makes sense, and it’s an important thought to take on board. However: if you scroll down and read more of Webster’s definitions of confidence, another definition is “a feeling or belief that someone or something is good or has the ability to succeed at something.” Additionally, testing-as-bug-hunting might not be the mission either, depending on what we mean by the word “bug.”

What do we mean when we say “bug?”

I see no interesting difference between a “bug” and an “enhancement request.” All I see are opportunities to improve the product. Now, in a contractor-client situation, there may be a contractual difference between a bug and an enhancement, but anyone who’s taken part in conversations where those distinctions were made should be able to attest that drawing them (a) always has something to do with money or deadlines, (b) usually results in or from an antagonistic relationship between roles, and (c) probably has a negative effect on software quality in the long run.

So, bugs, enhancements, bughancements, whatever you want to call them, they represent opportunities to improve the product. That brings us back to the term “quality assurance”… people argue against that term because it is impossible to find all the bugs so nothing is assured. But – stay with me here – let’s imagine for a moment that our coworkers are not immature or stupid, and when they ask for “quality assurance” they are not really asking us whether the product cannot possibly be improved. I can’t jump into my coworkers’ heads, but I think they view testing like this:

Not this:

Given a perfectly skilled set of testers and an infinite amount of time, we will find all of the bugs and be perfectly confident in our release decisions.

But this:

Given an adequately skilled set of testers and an adequate amount of time, we will find an adequate number of bugs and be adequately confident in our release decisions.

Our coworkers understand that people are imperfect and that time is limited. If that weren’t the case, there wouldn’t be any bugs in the first place. They’re paying us to tell them whether they should believe that the product is good or has the ability to succeed.

Staying objective by acting as if we are biased

This is where it gets really tricky, and where my head got stuck for a while. My goal is to provide an unbiased, objective report on my product with regard to its quality. And these guys – whom I respect – are saying that in order to be unbiased you have to keep your mind on finding bugs. But isn’t presuming that a product has bugs a bias of its own? Which is it? Are we being objective or are we looking for bugs?

It turns out that it doesn’t matter. The reason: You cannot provide an adequately thorough technical investigation of a software product without behaving as if you assume that it has bugs. The creation of that objective report on the product’s quality would be incomplete if you didn’t perform the activities of a properly skeptical tester.

Now, I think keeping that objective report in mind is really important: operationally, I want to provide the entire picture, not just the bugs that I found. Ideally, a tester wouldn’t just say that no bugs were present… that statement should be accompanied by a report of what was tried that didn’t find bugs. It may be that all kinds of bugs are *not* present… that’s good news! But I can’t tell you whether a particular bug is present unless I look for it.

Summing up

Testers aim to create an objective evaluation by examining a product to see whether it could be improved. Others use that evaluation to determine their own level of confidence.
