Describing, Not Prescribing – Volume 2: A Rough-Draft Recipe

Last week, I mentioned a desire to find a way to make our testing documentation more lightweight and effective.

I spent some time trying some things, and now I have what I’ll call my first iteration ready to demo.

The Goals

The goals, to reiterate what I wrote last week, were these:

  • To capture test ideas in a way that’s easy for the team to access and understand
  • To capture specific testing activities in a way that makes it easy to see how someone executed a test idea, without the implication that those details prescribe a “test script” that should be followed to the letter
  • To manage these documents such that it’s simple to revisit old testing activities, understand the old ideas, and refer to the old executions, for the purposes of regression testing

Tool Support

With those goals in mind, I decided I wanted:

  • A text-editing tool that would make it easy to document explicit test actions
  • A mind-mapping tool that would allow me to generate and publicize high-level test ideas easily, and associate those ideas to test actions
  • A way to tie the mindmaps to the team’s work in such a way that we could find the mindmaps and their attachments easily

Here are the three I landed on, respectively:

NOTEPAD++

I already use Notepad++ as my tool of choice for generating session reports, so I saw no reason to change. If you have a favorite text editor, use that.

One thing that I really like to have at my fingertips for session reporting is a hotkey to enter a date-time stamp. Notepad++ doesn’t have this by default, but you can add it in yourself.
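For example, here’s one route I know of, sketched under the assumption that you’ve installed the PythonScript plugin (available through the Plugins menu): save a tiny script like the one below into PythonScript’s scripts folder, add it to the menu via the plugin’s Configuration dialog, and bind it to a hotkey in Settings > Shortcut Mapper. The file name and the stamp format here are just my choices:

    # timestamp.py: insert a date-time stamp like "2014-03-07 14:32" at the cursor.
    # "editor" is the object the PythonScript plugin provides to every script.
    import datetime

    editor.addText(datetime.datetime.now().strftime('%Y-%m-%d %H:%M '))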

MINDMUP

MindMup is a free cloud-based mind map editor. It’s a little light on features compared to something like XMind (though it does have one killer feature that I’ll get to in a minute) and the maps look really basic, but: you can read and edit mindmaps on any device with a browser, you can attach text (and screenshots) to nodes, and it’s free. So it’s great for my purposes.
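One more thing worth knowing, since it pays off later: a MindMup map saved to Google Drive is just a JSON file. The sketch below is from memory, so treat the exact field names as approximate rather than gospel, but the shape is a tree where every node has a title and its own child nodes, with colors and attachments hanging off nodes as extra attributes:

    {
      "title": "Task 123 - test ideas",
      "id": 1,
      "ideas": {
        "1": {
          "title": "Functions",
          "id": 2,
          "ideas": {
            "1": { "title": "Saving preserves unsaved edits", "id": 3 }
          }
        }
      }
    }

That plain-file quality is part of what makes copying and reshuffling whole node trees (which comes up again in the anecdote below) so cheap.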

Something I’ll write more about in a future post is MindMup’s real-time collaboration feature. This thing is slick. It creates an opportunity for collaborative testing that’s so rad I can hardly stand it. But that’s a topic for another day.

TRELLO

If we’re being honest: I did not choose Trello. In fact, I fought it tooth and nail for years. My team uses a flow-based, Kanban-y technique where backlog items are decomposed into tasks that move lane to lane from the idea stage through the development stage and the testing stage to the done stage. (The backlog items are defined at the product level; the tasks are at the code level.) We had been using Post-it notes on a corkboard for our tasks, and I liked that a lot, but I eventually lost the war and we moved the whole thing over to Trello, which is okay BECAUSE:

Trello lets you attach all kinds of things to the cards. One of the options is to link to items in Google Drive. MindMup lets you save MindMups to Google Drive. So, Trello lets me link MindMups to backlog items and tasks.
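I do all of this through the Trello UI, but for what it’s worth, Trello also has a REST API that can attach a URL to a card, so this step could be scripted if you were so inclined. A minimal sketch in Python, where the card ID, key/token, and Drive URL are placeholders (you’d also need the requests library):

    # Attach a Google Drive link to a Trello card via Trello's REST API.
    # CARD_ID, the key/token, and the Drive file URL are all placeholders.
    import requests

    resp = requests.post(
        'https://api.trello.com/1/cards/CARD_ID/attachments',
        params={
            'key': 'YOUR_API_KEY',
            'token': 'YOUR_TOKEN',
            'url': 'https://drive.google.com/file/d/FILE_ID/view',
            'name': 'Test ideas mindmap',
        },
    )
    resp.raise_for_status()  # fail loudly if Trello rejected the request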

Wiring It All Together

How does this work in practice?

  1. I pull a Trello card that’s ready for testing.
  2. I open up a new MindMup doc and a fresh tab in Notepad++.
  3. In MindMup, I write out as many test ideas as I can think of in a couple of minutes.
    • I’ve been using the SFDIPOT heuristic (Structure, Function, Data, Interfaces, Platform, Operations, Time) to help me fill out this initial blast of ideas, giving a node to each of the seven “letters.”
    • One thing that’s great about SFDIPOT is that any behavior that was agreed upon by the team slots nicely into the “Functions” node.
    • I try to give each test idea its own node.
  4. I execute the test idea that seems likely to teach me the most, writing down my actions in Notepad++ (with time stamps near the test actions). There’s a sample of what these notes look like just after this list.
  5. When I’m satisfied with my exploration of the test idea, I copy and paste the text description of my actions, plus any screenshots, into the node’s attachment. Then I mark the node with a color based on what I learned:
    • Green if everything turned out hunky-dory.
    • Red if I found a problem. This may end up logged in our bug-tracking process, depending on further discussion afterward.
    • Yellow if the behavior I was looking to explore doesn’t exist yet because it’s just not done, or if I can’t test it due to some kind of blocking constraint.
    • I leave it as the default color if the test idea turned out to be uninteresting for whatever reason.
  6. I add to the mind map any additional test ideas that came up while I was trying the last one.
  7. I repeat the three previous steps until I run out of ideas.
  8. I save the mindmap to Google Drive, link to it in the Trello card, and follow up with any additional things I need to do now that I know what I want to know about the task (e.g., move it to Done, log bugs that I found, talk to a designer about an interface issue, etc.).
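Here’s the sample I promised in step 4: roughly what the Notepad++ side looks like mid-session. This is an invented example rather than a real session from my product, but the shape is right: one line per action, time stamps near the actions, observations inline.

    2014-03-07 10:12 Test idea: export preserves non-ASCII filenames
    2014-03-07 10:13 Created a file named "Ωmega test.txt" and added it to the project
    2014-03-07 10:15 Exported the project to a zip; opened the zip in an archive tool
    2014-03-07 10:16 Filename looks fine inside the archive but is garbled after extracting
    2014-03-07 10:18 Retested, extracting via Windows Explorer instead: same garbling. Marking the node red.

That whole chunk, stamps and all, is what gets pasted into the node’s attachment in step 5.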

How Is It Going?

This is an experiment, and I’ve only been working with it for a few days. My initial experience has been that it’s a rad way to work: not only did I find more bugs than I normally might have (thanks to the heuristic), but using this combination of documents as a description rather than a prescription also felt very natural.

Anecdote: my product has a Windows client and a Mac client. One of the backlog items we’re working on is a decent-sized change to both clients. I did the Windows client first, following the procedure above. The next day, when I started the Mac client, I just copied the whole node tree of ideas from the Windows side and plopped it into the Mac node. It was easy to see which nodes didn’t make sense on the Mac side. BAM – DELETED! Then I added some Mac-specific nodes and got started, referring back to my documented actions on the Windows side but not following them precisely. Again, it felt very natural, and fast. This is a fast way to work, much faster than documenting prescriptive test cases.

It sounds complicated but it’s really not. I could make a screencast video showing how it all works, if anybody is having a hard time visualizing it.

CONCERNS

It feels loosey-goosey compared to scripted regression testing. I can’t lie: I do feel like I’m getting away with something, like it’s not professional or safe or whatever. I do believe, though, that when all is said and done this is more transparent and could result in better testing.

All that transparency comes at a cost. Before, we had a list of short tests that was easy to read. Now we have this mess of test ideas that somebody brainstormed, some of which might not be all that important or interesting for regression testing. For regression testing, we want to somehow elevate the test ideas we find so valuable that they merit exploration every release. I haven’t figured that one out yet; it could be copying the most important ideas into a new map, or marking them with a special color, or something else. Dunno yet.

Mind maps are kind of hard to read. My hope is that this is a fluency problem: we’ll get better at reading and writing them as we do it more often. As of right now, I open a mind map that has a lot of content and I’m like WTF AM I EVEN LOOKING AT o_O? But then I drill down to the nodes I’m interested in, going one child at a time, and I can understand it.

So that’s week 1 of this experiment. Please share your thoughts or questions, and let me know if you’ve tried something similar! This is all roughly similar to xBTM, so please share your experiences with that too; I’d love to hear them.

-Clint (@vsComputer)

4 thoughts on “Describing, Not Prescribing – Volume 2: A Rough-Draft Recipe”

  1. Pingback: Describing, Not Prescribing – Vol 3: Let’s Call It Expeditionary Testing | Tester Vs Computer

  2. Hey Clint, your experience really got me hooked and inspired me to try this myself.
    So far (I’ve just started today, after reading your articles on my commute to work), it’s working great, and I really love the artifacts (mind maps) this technique leaves us with.
    One question, though: do you use your mind maps as a test report for a given feature? And if you need to retest that feature, do you just copy the mind map and do it again, adding new test ideas as they come up?

    I’m working in an agile environment with releases every fortnight, and one thing we use is a checklist of all the basic functionality that must be working or a rollback will be necessary. We run this checklist manually every release, but I’ve always disliked checklists for the same reason you weren’t happy with test scripts. I’ll convert this checklist to a mind map of test ideas so it can grow more organically and doesn’t push testers into the run-and-check mentality that isn’t helpful in this type of testing.

    Thanks again for your inspiration!

    • Hi, thanks for your comment!

      Re: reporting, my managers have never asked for detailed test reporting beyond “do you want to do more testing before we ship, or are you satisfied?”, so I’m pretty fortunate in that regard. I do feel like, if non-testers are interested in looking at test reports, this technique is a bit more transparent than a list of test cases, but only a bit. And the maps can be a bit daunting to look at if you’re not used to them.

      Re: retesting, that’s something we’re experimenting with now. The plan so far: mark a node’s attachment with your initials and the date when you turn it green (at a minimum; more detail might be appropriate), then reset the node colors back to “not tested yet” while leaving the prior attachment(s) in place. That way, each person who touches a node records what they did, and when, to make it green or red. What we’re not sure about is whether we want a new copy of the map for each release, or to reuse the same one over and over. I’m leaning towards the latter.

      I’d love to hear how it goes for you!
      Clint

  3. Hi Clint,

    A great byproduct of this technique is that I’m starting to build up complete feature overviews as I keep using it over several iterations.

    One thing I was struggling with in the product I’m working on is the lack of a product specification that completely describes how all the features should work. We have several user stories spread over several sprints, and we end up not having an overview of the feature, especially after some months of working on it.

    What I found out is that, while I’m creating the mind map to test a specific user story, it’s really easy to also update another mind map I keep with the feature overview. At the same time, the feature overview mind map gives me a base I can start from every time a new user story related to that feature is played.

    Thanks again for this inspiring post series.
