Last week, I mentioned a desire to find a way to make our testing documentation more lightweight and effective.
I spent some time trying some things and now I have what I’ll call my first iteration ready to demo.
The goals, to reiterate what I wrote last week, were these:
- To capture test ideas in a way that’s easy for the team to access and understand
- To capture specific testing activities in a way that makes it easy to see how someone executed a test idea, without the implication that those details prescribe a “test script” that should be followed to the letter
- To manage these documents such that it’s simple to revisit old testing activities, understand the old ideas, and refer to the old executions, for the purposes of regression testing
With those goals in mind, I decided I wanted:
- A text-editing tool that would make it easy to document explicit test actions
- A mind-mapping tool that would allow me to generate and publicize high-level test ideas easily, and associate those ideas to test actions
- A way to tie the mindmaps to the team’s work in such a way that we could find the mindmaps and their attachments easily
Here are the three I landed on, respectively:
I already use Notepad++ as my tool of choice for generating session reports, so I saw no reason to change. If you have a favorite text editor, use that.
One thing that I really like to have at my fingertips for session reporting is a hotkey to enter a date-time stamp. Notepad++ doesn’t have this by default, but you can add it in yourself.
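As a sketch of what I mean, here’s plain Python that produces the kind of stamp I use. The format is just my preference, and wiring it to a hotkey is left to whatever macro mechanism your editor has; Notepad++’s PythonScript plugin could feed a string like this to something like `editor.addText`:

```python
from datetime import datetime

def session_stamp(now=None):
    """Return a date-time stamp for session notes, e.g. '[2024-05-01 14:30] '."""
    now = now or datetime.now()
    return now.strftime("[%Y-%m-%d %H:%M] ")

# In Notepad++'s PythonScript plugin, a script bound to a hotkey could run
# something like editor.addText(session_stamp()); any editor macro that
# emits the same string works just as well.
print(session_stamp(datetime(2024, 5, 1, 14, 30)))
```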
MindMup is a free cloud-based mind map editor. It’s a little light on features compared to something like XMind (though it does have one killer feature that I’ll get to in a minute) and the maps look really basic, but: you can read and edit mindmaps on any device with a browser, you can attach text (and screenshots) to nodes, and it’s free. So it’s great for my purposes.
Something I’ll write more about in a future post is MindMup’s real-time collaboration feature. This thing is slick. It creates an opportunity for collaborative testing that’s so rad I can hardly stand it. But that’s a topic for another day.
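A nice side effect of MindMup being cloud-based: as far as I can tell, the saved .mup files are plain JSON, so a map of test ideas could even be generated programmatically. This is a rough sketch based on my reading of exported files; the field names are my assumption, not a documented schema:

```python
import json

def make_map(title, ideas):
    """Build a minimal MindMup-style map: a root node whose child ideas are
    keyed by rank ("1", "2", ...). Field names are assumptions inferred
    from exported .mup files, not a documented schema."""
    return {
        "title": title,
        "id": 1,
        "ideas": {str(rank): {"title": idea, "id": rank + 1}
                  for rank, idea in enumerate(ideas, start=1)},
    }

# Serialize the map the way a JSON file on Google Drive would store it.
mup_json = json.dumps(make_map("Login change", ["Structure", "Function", "Data"]))
```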
If we’re being honest: I did not choose Trello. In fact, I fought it tooth-and-nail for years. My team uses a flow-based Kanban-y technique where backlog items are decomposed into tasks that move lane to lane, from an idea stage through a development stage through a testing stage to a done stage. (The backlog items are defined at the product level; the tasks are at the code level.) We had been using Post-it notes on a corkboard for our tasks, and I liked that a lot, but I eventually lost the war and we moved the whole thing over to Trello, which is okay BECAUSE:
Trello lets you attach all kinds of things to the cards. One of the options is to link to items in Google Drive. MindMup lets you save MindMups to Google Drive. So, Trello lets me link MindMups to backlog items and tasks.
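In the UI that’s just the card’s attachment button, but Trello also exposes this through its REST API (a POST to a card’s /attachments endpoint with a url parameter). Here’s a sketch that only builds the request rather than sending it; the card id, key, and token are placeholders for your own values:

```python
from urllib.parse import urlencode

TRELLO_API = "https://api.trello.com/1"

def attach_url_request(card_id, drive_url, key, token):
    """Build (method, url) for Trello's attach-a-link call, e.g. to attach
    a MindMup file stored on Google Drive to a card. Nothing is sent here;
    pass the result to your HTTP client of choice."""
    query = urlencode({"url": drive_url, "key": key, "token": token})
    return "POST", f"{TRELLO_API}/cards/{card_id}/attachments?{query}"
```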
Wiring It All Together
How’s this work in practice?
- I pull a Trello card that’s ready for testing
- I open up a new MindMup doc and a fresh tab in Notepad++
- In MindMup, I write out as many test ideas as I can think of in a couple of minutes.
- I’ve been using the SFDIPOT heuristic (Structure, Function, Data, Interfaces, Platform, Operations, Time) to help me fill out this initial blast of ideas, giving a node to each of the 7 “letters.”
- One thing that’s great about SFDIPOT is that any behavior that was agreed upon by the team slots nicely into the “Functions” node.
- I try to give each test idea its own node.
- I execute the test idea that seems likely to teach me the most, writing down my actions in Notepad++ (with time stamps near the test actions)
- When I feel like I’m satisfied with my exploration of the test idea, I copy/paste the text description of my actions, plus any screenshots, into the attachment for the node. Then I mark it with a color based on what I learned:
- Green if everything turned out hunky-dory.
- Red if I found a problem. This may end up logged as a bug in our bug-tracking process, depending on further discussion afterward.
- Yellow if the behavior I was looking to explore doesn’t exist yet because it’s just not done, or if I can’t test it due to some kind of blocking constraint.
- I leave it as the default color if the test idea turned out to be uninteresting for whatever reason.
- I add additional test ideas to the mind map that came up while I was trying the last idea.
- I repeat the three previous steps until I run out of ideas.
- I save the mindmap to Google Drive, link to it in the Trello card, and follow up with any additional things I need to do now that I know what I want to know about the task (e.g. move it to Done, log bugs that I found, talk to a designer about an interface issue, etc.).
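To make the color convention concrete, here’s how I’d model it as data; the outcome names are mine, not anything MindMup defines:

```python
# Outcomes of exploring a test idea, mapped to the node colors described above.
# None means "leave the node its default color."
OUTCOME_COLORS = {
    "fine": "green",        # everything turned out hunky-dory
    "problem": "red",       # found a problem; may become a logged bug
    "blocked": "yellow",    # behavior not built yet, or blocked from testing
    "uninteresting": None,  # idea turned out not to be worth pursuing
}

def color_for(outcome):
    """Return the node color for an outcome, or None for the default color."""
    if outcome not in OUTCOME_COLORS:
        raise ValueError(f"unknown outcome: {outcome!r}")
    return OUTCOME_COLORS[outcome]
```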
How Is It Going?
This is an experiment, and I’ve only been working with it for a few days. My initial experience has been that it is a rad way to work: not only did I find more bugs than I might normally have (thanks to the heuristic), but it also felt very natural to use this combination of documents as a description rather than a prescription.
Anecdote: my product has a Windows client and a Mac client. One of the backlog items we are working on is a decent-sized change to both clients. I did the Windows client first, following the procedure above. The next day, I did the Mac client. When I started the Mac client, I just copied the whole node tree of ideas from the Windows side and plopped it into the Mac node. It was easy to see the nodes that didn’t make sense on the Mac side. BAM – DELETED! Then I added some Mac-specific nodes. Then I got started, referring back to my documented actions on the PC side but not following them precisely. Again, it felt very natural, and fast. This is a fast way to work, much faster than documenting prescriptive test cases.
It sounds complicated but it’s really not. I could make a screencast video showing how it all works, if anybody is having a hard time visualizing it.
It feels loosey-goosey compared to scripted regression testing. I can’t lie, I do feel like I’m getting away with something, like it’s not professional or safe or whatever. I do believe, though, that when all is said and done this is more transparent and could result in better testing.
All that transparency comes at a cost. Before, we had a list of short tests that were easy to read. Now, we have this mess of test ideas that somebody brainstormed, some of which might not be all that important or interesting for regression testing. For regression testing, we want to elevate some of these somehow, and highlight the test ideas we find so valuable that they merit exploration every release. I haven’t figured that one out yet; it could be copying the most important ideas into a new map, or marking them with a special color, or something else. Dunno yet.
Mindmaps are kind of hard to read. My hope is that this is a fluency problem: we’ll get better at reading and writing them as we do it more often. As of right now, I open a mind map that has a lot of content and I’m like WTF AM I EVEN LOOKING AT o_O? But then I drill down to the nodes I’m interested in, go a child at a time, and I can understand it.
So that’s week 1 of this experiment. Please share your thoughts or questions, and let me know if you’ve tried something similar! This is all roughly similar to xBTM, so please share your experiences with that too; I’d love to hear it.

-Clint (@vsComputer)