RiskStorming is a format that helps a whole development team figure out the answer to three important questions:
- What is most important to our system?
- What could negatively impact its success?
- How do we deal with these possible risks?
At TestBash Brighton, Martin Hynie told me about another benefit:
It changes the language
The closer we can bring postmortem language into planning language, the closer we get to having healthier discussions about what testing can offer and where other teams play a role in that ability.
Martin was kind enough to write down his experiences for me and to allow me to publish them:
RiskStorming as a tool to change the language around testing
One of the most fascinating things to observe is how contextual the language becomes when others speak of testing. More specifically:
While a project is ongoing, AND a deadline approaches:
- The general question around testing tends to be about reducing effort
- “How much more testing is left?”
- “When will testing be done?”
- “How can we decrease the time needed for remaining testing?”
- “Testing effort is a risk to the project deadline”
Once a project is done, or a feature is released… AND a bug is found in production:
- “Why was this not caught by testing?”
- “Isn’t testing supposed to cover the whole product footprint?”
- “Why did our test plan not include this scenario?”
- “Who tested this? How did they decide what to test?”
There are two entirely different discussions going on here.
- One views testing as overhead and process to be overcome… because risk is somehow treated as something discrete to be mitigated. This is false, but accepting uncertainty is a hard leap for project planning.
- The second is retrospective, viewing any failure as a missed step. Suddenly the pressure is an expectation that any bug should have been tested for and caught, and the previous expectations and concerns around timelines feel insignificant, now that the team is facing the reality of the bug in production and its impact on customers and brand.
By involving product, engineers, and engineering management in RiskStorming questions, we were able to reframe planning in the following manner:
- Where are the areas of uncertainty and risk?
- What are the ranges of types of issues and bugs that might come from these areas?
- How likely are these sorts of issues? Given the code we are touching… given our dependencies… given our history… given how long it has been since we last truly explored this area of the code base…
- How bad could such an issue be? Which customers might be impacted? How hard could it be to recover? How likely are we to detect it?
- Engineers get highly involved in this discussion… If such an issue did exist, what might we need to do to explore and discover the sorts of bugs we are discussing? How much effort might be needed to safely isolate and fix such issues without impacting the release? What about after the release?
Then we get to the magic question…
Now that we accept that these risks are in fact real, because of trade-offs being made between schedule pressure and testing (and not magically mitigated…):
If THIS issue happened in production, do we feel we can defend
- Our current schedule,
- Our strategy for implementation,
- Our data, and environments for inspecting our solution,
- Our decision on what is enough exploration and testing
when our customers ask: “How did testing miss this?”
What was interesting is that, suddenly, we were using the same language around testing before the release that we had only ever used after we released, once we knew a bug had actually happened in production. We used language around uncertainty. We started using language around the reality that bugs will emerge. We started speaking about methods of implementation that might help us make better use of testing, so we could prioritize our time around the sorts of issues that we could not easily detect or recover from.
We started speaking a language that really felt inclusive of shared responsibility, quality, and outcomes.
I only have one data point involving RiskStorming… but I took a similar approach with another team, simply by interviewing engineers, reporting on uncertainty, building a better sense of reality around the trade-offs regarding these uncertainties, and exploring options to reduce uncertainty. It had similarly positive outcomes to RiskStorming, but required MUCH more explaining and convincing.
With over fifteen years of specialization in software testing and development, Martin Hynie’s attention has gradually focused on embracing uncertainty and redefining testing as a critical research activity. The greatest gains in quality can be found when we emphasize communication, team development, business alignment, and organizational learning.
A self-confessed conference junkie, Martin travels the world incorporating ideas introduced by various sources of inspiration (including Cynefin, complexity theory, context-driven testing, the Satir Model, Pragmatic Marketing, trading zones, agile principles, and progressive movement training) to help teams iteratively learn, to embrace failures as opportunities and to simply enjoy working together.