A TestSphere Expansion

Software Testing, TestSphere

Let’s begin with a special thanks to Benny & Marcel. Where would we ever be without the good help of smart people?


Benny & Marcel making a case for Testing in Production


It’s been 2 years since we launched TestSphere: A card deck that helps testers and non-testers think & talk about Testing.
People keep coming up with wonderful ideas on how to further improve the card deck. Expansions, translations, errata,…

A Security deck! A Performance deck! A Usability deck! An Automation deck!
Well… yes. The possibilities are huge, but it needs to make sense too: Value-wise & business-wise.
The thing TestSphere does extremely well is twofold: Spark Ideas and Spark Conversation – Thinking & Talking


Maja being ‘Business Manager’ for a RiskStorming workshop for DevBridge, Kaunas

RiskStorming is developing into an incredibly valuable format. It combines the two aspects of TestSphere perfectly.
In its essence it makes your whole team have a structured conversation about quality that is loaded with new ideas and strategies. To be blunt: It helps testers figure out what they are being paid for and it helps non-testers find out why they have testers in the first place.

It’s the learnings and insights from running RiskStorming workshops for many different businesses in many different contexts that drive the new TestSphere expansion.

The creation of an expansion is driven not by novelty, but by a clear need.

Here I present the first iteration of all the new concepts on the cards: no explanations or examples yet. We’ll keep the iterations lean. If you have feedback, you can find me on ‘all the channels’.

Five New Cards Per Dimension

In the first version we had 20 cards per dimension. We noticed that some important cards were missing. The new expansion will cover these.

  • Heuristics: Possible ways of tackling a problem.
    • Dogfooding
    • Stress Testing
    • Chaos Engineering
    • Three Amigos
    • Dark Launch


  • Techniques: Clever activities we use in our testing to find possible problems.
    • OWASP Top Ten
    • Peer Reviews
    • Mob Testing
    • Feature Toggles
    • Test Driven Development


  • Feelings: Every feeling that was triggered by your testing should be handled as a fact.
    • Informed
    • Fear
    • Overwhelmed
    • Excited
    • Unqualified


  • Quality Aspects: Possible aspects of your application that may be of interest.
    • Observability
    • Measurability
    • Business Value Capability
    • Scalability
    • Availability


  • Patterns: Patterns in our testing, but also patterns that work against us, while testing such as Biases.
    • Single Responsibility Principle
    • Story Slicing
    • Mutation Testing
    • Strangling Patterns
    • Long Term Load Testing


Two New Dimensions

Dimensions are the categories of cards, each represented by its own color. We felt some important dimensions were missing. Both new ones are mainly operations-related, a part of testing that should not be underestimated.

Hardening: (working title) Concepts that improve the underlying structures of your software. Compare this dimension to muscle building: you need to strain your muscles until the weak parts get small tears; the tissue can then regenerate and build a stronger, more robust muscle. We test, exercise and strain the product so that we can fill the cracks with smarter ideas, better code and stronger software.

  1. Blameless Post Mortem
  2. Service Level Objectives/Agreements
  3. Anti-Corruption Layer
  4. Circuit Breaker
  5. Bulkhead
  6. Caching
  7. Distributed systems
  8. Federated Identity
  9. Eventual Consistency
  10. API Gateway
  11. Container Security Scanning
  12. Static Code Analysis
  13. Infrastructure as Code
  14. Config as Code
  15. Separation of Concerns
  16. Command Query Responsibility Segregation
  17. Continuous Integration
  18. Continuous Delivery
  19. Consumer Driven Contract Testing
  20. Pre Mortem
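To make one of these cards concrete, here is a minimal circuit breaker sketch in Python. The class name, thresholds and timeout are illustrative assumptions, not part of the card deck; the idea is simply to stop calling a failing dependency for a while instead of hammering it.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: opens after max_failures consecutive
    failures, then suspends calls until reset_timeout has passed."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # time the circuit opened; None while closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: dependency calls suspended")
            # half-open: the timeout expired, allow one trial call through
            self.opened_at = None
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

The half-open step is what makes the pattern self-healing: after the timeout one probe call is let through, and its success or failure decides whether the circuit closes or opens again.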

Post-Release: (working title) Tactics, approaches, techniques,… that improve your ability to see what’s going on in your application’s production environment, and to orchestrate safe changes there. When something goes wrong, goes well, brings in money, throws an error, becomes slow,… you can see it and its results.

  1. Fault Injection
  2. Logging
  3. Distributed Tracing
  4. Alerting
  5. Anomaly Detection
  6. Business Metrics
  7. Blackbox Monitoring
  8. Whitebox Monitoring
  9. Event Sourcing
  10. Real User Monitoring
  11. Tap Compare
  12. Profiling
  13. Dynamic Instrumentation
  14. Traffic Shaping
  15. Teeing
  16. On-Call Experience
  17. Shadowing
  18. Zero Downtime
  19. Load Balancing
  20. Config Change Testing
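To ground one of these cards as well: logging only enables the “you can see it” part when the events are machine-readable, so they can be indexed, queried and alerted on. A hypothetical structured-logging sketch using Python’s standard logging module (the field names `order_id` and `latency_ms` are made up for illustration):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each log record as one JSON object per line."""

    def format(self, record):
        event = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # fields passed via logging's `extra=` end up as record attributes
        for key in ("order_id", "latency_ms"):
            if hasattr(record, key):
                event[key] = getattr(record, key)
        return json.dumps(event)

logger = logging.getLogger("checkout")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# one parseable event per line instead of free-form text
logger.info("payment accepted", extra={"order_id": "A-123", "latency_ms": 87})
```

With events in this shape, cards like Alerting, Anomaly Detection and Business Metrics become queries over the log stream rather than manual grepping.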

Wrapping up

I’m out of my water here. There’s so much I need to investigate, learn, put into words for myself before I can make it into a valuable tool for you. I welcome any feedback.
Thank you for being such an amazing part of this journey already.

Knowit, the winning team of the RiskStorming workshop at TestIT in Malmö

Reflecting on the last project

Experience Reports, Software Testing

This is a post written by Geert van de Lisdonk about a project he worked on for one and a half years as a test consultant.

My last project was in the financial sector. The product we made was used by private banks. Our users were the Investment Managers of those banks; they are called the rock stars of the banking world. Those people didn’t have time for us. We could get little information from them, via their secretaries or through some meticulously planned meeting. And in that meeting only one or two of our people could be present, to make sure the managers didn’t spend too much time on us. Getting specs, finding out what to build and building the right thing was not an easy task. Our business analysts had their work cut out for them, but did a very good job with the resources they had. Getting feedback from the users was even more difficult, especially getting it fast enough for us to change the product quickly. Despite all that, we were able to deliver a working product to all clients. This blog post is a reflection on what I think I did well, what I didn’t do well and what I would have changed if I could do it over.

 

What we did well

Handovers

One of the best things we did, in my opinion, were the handovers. Every time something was developed, a handover was done: the developer showed what had been created to me, the tester, and to the product owner.
This moment created an opportunity for the PO to verify whether the correct thing had been built, or to point out possible improvements.
As a tester, this was a great source of information. With both the developer and the PO present, all possible questions could be answered. Technical, functional and everything in between could be reviewed and corrected if necessary.

Groomings

Getting the tester involved early is always a good idea. When the Business Analysts had decided on what needed to be made, a grooming session was called to discuss how we could achieve it.
Most of the time there was already some kind of solution prepared by the Product Manager that would suit the needs of several clients. This general solution would then be discussed.

For me this was a moment I could express concern and point out risks. This information would also be a base for the tests I’d be executing.

Teamwork

The team I was in is what I would describe as a distributed team. We had team members in Belgium, the UK and two places in Italy. Working together wasn’t always easy. In the beginning most mass communication was done using emails sent to the entire team. This didn’t prove very efficient, so we switched to Microsoft Teams.

There was one main channel which we used the most. Some side channels were also set up that would be used for specific cases. People in the team were expected to have Teams open at all times. This sometimes didn’t happen and caused problems. It took some getting used to, but after a while I felt like we were doing a decent job!


 

What we could have done better

Retrospectives

When I first joined the team, the stand-ups happened very ad hoc: you could get a call anywhere between 9 am and 3 pm, or none at all. Later, a recurring meeting was booked with a link to a group Skype conversation, and everybody was expected to join that conversation at 10 am for the stand-up. This was a great improvement! Every sprint we would start with a planning meeting and set out the work we were supposed to do.

But there were also ceremonies missing. At no point in time was there a sprint review or a retrospective. This meant that developers didn’t know from each other what had been finished or what the application was currently capable of.

The biggest missing ritual, in my opinion, was the retrospective. There was no formal way of looking at how we did things and discussing how we could improve. Having a distributed team didn’t help here, and neither did the high pace we were trying to maintain. But if the PM had pushed more for this, I think the team could have benefited a lot.

Unit testing

There was no incentive to write unit tests, so there were only a handful of them. Not because the developers didn’t want to; they even agreed that we should write them! There was just nobody waiting for them, so they didn’t get written.
There were multiple refactorings of code that could have been made safer with unit tests. Many bugs were discovered that wouldn’t have existed if only some unit tests had been written. But since nobody asked for them, and the pace was too high, no time was spent on them.
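For illustration, even a tiny suite like the one below can catch a refactoring bug before it reaches a client. The `fee_in_cents` helper and its rounding rule are hypothetical, not from the project; the sketch uses Python’s built-in unittest:

```python
import unittest

def fee_in_cents(amount_cents, rate):
    """Hypothetical helper: fee rounded to whole cents, rejects negative amounts."""
    if amount_cents < 0:
        raise ValueError("amount must be non-negative")
    return round(amount_cents * rate)

class FeeTests(unittest.TestCase):
    def test_rounds_to_whole_cents(self):
        self.assertEqual(fee_in_cents(1001, 0.015), 15)

    def test_zero_amount_has_zero_fee(self):
        self.assertEqual(fee_in_cents(0, 0.015), 0)

    def test_negative_amount_rejected(self):
        with self.assertRaises(ValueError):
            fee_in_cents(-1, 0.015)
```

Run with `python -m unittest`. Three assertions like these pin down the behaviour a refactoring must preserve, which is exactly the safety net that was missing.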

Less pressure

This project was run at a fast pace. Between grooming and delivery there were sometimes only three sprints: one for analysis, one for development, one for testing/fixing/deploying. This got us in trouble lots of times. When new questions arose or new requirements emerged during development, there was little time for redirection. Luckily we were able to diminish the scope most of the time, but I still feel we delivered lower quality than we would have liked.


What I would have done differently

Reporting

Looking back, it was difficult for the PM to know exactly what I was doing. We used TFS to track our work, but it wasn’t very detailed. The stand-ups did provide some clarity, but only part of the picture.

My testing was documented in OneNote on SharePoint, so he could technically verify what I was doing, although admittedly that would require a lot of energy from him.
I think he would have preferred pass/fail test cases, but I didn’t deem those feasible with the high pace we were trying to maintain.
In hindsight, I could have delivered weekly or sprint reports of what was done and what issues were found or resolved. That would of course take some time at the end of the sprint, which could be an issue. I did look for a decent way to report on my testing, but never found a format that suited me.

Fix more bugs myself

We were working with CRM Dynamics, altered to fit the needs of our customers. Both the system and the product were built in such a way that most settings could be altered in the UI. It took me a while to learn how these things worked, but I managed to resolve bugs myself. Sometimes I didn’t know how to resolve one in the UI. I would then take the opportunity to have the developers explain how to resolve it the next time I encountered something similar.

Since the framework restricted us in some ways, we also made use of C# middleware to deal with more complex things. The middleware issues were harder for me to resolve, so I don’t think I would have been able to fix those by myself. The middleware developers being in Italy also complicated things. Pairing on the bug fixes could have taught me a lot. This did happen from time to time, but not frequently enough for me to dive in and sort things out myself.
Additionally, having more insight into the application would have been a nice luxury. Through tools such as Dynatrace, Application Insights,… I could have provided more information to the developers.


To summarize

Despite the high pace this project was run at, we still managed to do very good things. The people on the development team were very knowledgeable and taught me a lot. Sure, there were some things I would have liked to change, but that will always be the case. To me the biggest problem was that we didn’t reflect on ourselves. This meant we stagnated on several levels and only grew slowly as a team and as a product.
I learned that I value (self-)reflection a lot, more than I previously knew, and I started looking for other ways to reflect. At DEWT I was advised to start a journal for myself. This is something I plan on doing for my next project. Currently I have a notebook that I use for all testing-related events. Not yet a diary, but a start of this self-reflection habit!
I also learned how I like to conduct my testing and where problems might have arisen there: focus on people and software instead of documentation. I would add some kind of reporting to show off my work. I’ve been looking into good ways to do this reporting, but have yet to find a format that suits me.
                           

 

A Changing Mindset

Experience Reports, Software Testing

Ever since I left my short stint at the meat factory, I’ve been a Software Testing Consultant for all of my modest career. Until a few months ago, when fate threw me into a Product Owner role.
5 months in, I feel my priorities, my thinking, my mindset… change.

This is not necessarily a good thing, but it is a necessary thing. First, I was Product Owner of Test Automation. But when that team disbanded, because the overhead was too much for a reasonably small team, I became Product Owner of an 8-person SCRUM team of developer-architects, a tester, a test automation specialist, a DevOps specialist and, soon, a new junior developer.

My previous two blog posts were about helping a relatively small team learn more, move them forwards and become confident.
My new role is again different and it’s providing me insights about myself, how I adapt to these dynamics.

Mindset

My mindset has changed drastically. Where I was focused on risks, oversights and possible problems before, I am now looking at ‘good enough’ and going forward with the things ‘that matter’. Because of my Testing background and my now PO role, I realise that those two things are very different for me than other team members. I don’t know the risks well enough, I don’t know the scope too well (as the product is very new to me) and I can only guess at the value our changes bring.

Yet, this doesn’t seem to stop me forming opinions and making decisions.

It frightens me to take steps forward into this vast uncertainty of unknown unknowns knowing that I’m probably on top of the Dunning-Kruger ‘Mount Stupid’.
I caught myself disregarding several risks people mentioned, just because they interfered with my plans…
Back when I was a tester, I criticised many Product Owners because I could see they had no clue what they were doing or where they were going.

 


I’m beginning to believe that this uncertainty is a big part of the role.
I need a tester to keep my feet on the ground.
I need this done as early as possible.

My priorities lie with keeping the team happy and delivering business value to the stakeholders. Not in risks, maintenance or changes…

Because of that, I’m not thinking of 3 out of 4 types of work.


Four Types of Work

When you find yourself in a situation where you don’t know enough or feel inadequate, start learning, reading and discussing. That’s what I do at least. I needed to ‘up my game’.

One extremely important finding for me were the four types of work featured in ‘The Phoenix Project’: Business Projects, Internal IT, Changes and the highly destructive Unplanned Work.

This connected several frustrations of mine into one model.
My current customer is quite good at pinning down Business Projects. At the very beginning, we do a three-amigo kind of thing where we lay the fundamental vision for the project and immediately try to cut down all the surrounding waste.

Internal IT is handled reasonably, though the responsible people seem to live on a well frequented island. We have two Admins who seem to troubleshoot and fix several major problems a day.

Changes are frequently happening, but are largely unmanaged. I’ve added a blank User Story in our sprints to capture ‘surprise tasks’. This should create a good baseline to see where these change requests come from and how much time they soak up. From there on out we can create procedures to mitigate, ignore, prioritise, escalate… What exactly we’ll do with the data, I don’t know yet, but we’ll have a better idea on how to tackle these changes.

I can finally put into words why I, as a tester, was often a source of frustration for a Product Owner: Unplanned Work. This type of work disrupts your whole flow, motivation and plans, and can ultimately destroy your project. Call it bugs, risks, oversights,… it’s everything that suddenly requires someone’s attention, so that they can no longer do anything that was planned. It eats your plans. It tears apart your flow and energy. It makes sure people get frustrated.

While Work In Progress is often called the silent killer, Unplanned Work is the loud, bloody zombie apocalypse that comes to exterminate your project. It terrifies me.
… enter the jolly testers who tell us we forgot about something important.

We just had two sprints torn up by the walking dead. Project management: ‘oh, we forgot to include these highly crucial features that need to be in production by the end of the month.’
Neither I nor the team was amused.

A Change in Thinking

A year ago, when I was a tester in this situation I would raise many bugs, make them visible and be loud about the frustrations I could notice in the team.
In similar occasions, I’d have given up and watch the train ride into a wall (again) to then see what we could make of the pieces.

Being in this situation as a Product Owner I try to make the best of the situation. Hope for the best and try to plan for the worst.
As a contingency, I put mechanisms in place that will bring more insight:

  1. We will capture the ‘surprise tasks’ that weigh on the team, to manage Changes
  2. We will analyse the bugs found after development (and initial testing) over the past 6 months, to build a checklist that can help us identify Unplanned Work
  3. I will keep a buffer to allow for Unplanned Work

The data from item 1 will be a baseline to come up with certain Change Procedure(s).
From the data in item 2, we can build automated checks, monitoring, alerts and ‘have you thought about/talked to X’ checklists for management.

I’m now in a role where I don’t have to be the 20-something-year-old screaming bloody murder anymore. It might sound strange or unfair, but my words have more impact these days. I won’t complain.
This phenomenon has given me the power to actually strategise and bring change while being very obvious about it. I’m not trying to persuade people to follow my ideas anymore. I’m gathering them by being direct.


I want to avert future disasters like we’re now in. I want the team to be on top of things. Maybe in the future, we’ll simulate our own disasters, while we’re still in control. Just for fun. And learning.


Product Owner of Test Automation 2

Experience Reports, Software Testing

In my previous post, I explained the strategy I envisioned for the team, comparing it to a board game.

What that post thoroughly lacked was a clear focus on team learning. I feel like an idiot for not noticing this earlier.

Learning Objectives as part of the sprint

What has become abundantly clear to me is that the team members are the heart of your team. They need to be nurtured and grown.
In our team, we’ve been actively investing in people to become more confident and knowledgeable. ‘Learning objectives’ have come to make up about 50% of our sprint stories.

I add in Spikes, Proofs of Concept, Blog Posts, Challenges,… to have people work through material and produce reports, concepts, demos or anything else that reproduces the acquired knowledge. After that, they ask for feedback from other team members, discuss, or teach. The aim is to achieve two things: new learnings and something valuable for the team, project or product. This keeps our stakeholders happy and our team in learning mode.

But 50% is a lot of time… how do you explain this to stakeholders?
Test Automation is a valuable endeavour, though in uncertain conditions it can be rendered useless, time-consuming or even time-wasting. That’s where we are now with the team. Many different things are changing: the application, the architecture and the development teams are all getting a good shake. This is not a good moment to invest heavily in UI or even API checks. Instead, I’m shifting the team’s focus in a different direction.

Whereto now?

I see a lot of opportunities to coach, train and pioneer automation strategies, as a team.
Once the dust settles from all the management decision-making and architecture workshops it’ll fall on the automation people to strongly improve our release pipeline.
To achieve this, we need to become better at what we do, need to become more confident in what we say and become more respected for the value we bring.
As a team.

Instead of building more automation, our focus shifts to coaching, training and knowledge sharing. The issue, however, is that we first need to do knowledge gathering and train ourselves. The good thing is, we’re more of a team now than before and we can help each other out. We also have some time to invest in ourselves, which will pay off tenfold in the future. Hopefully.

In parallel with the team building up their skills, I’m monitoring the progress of these changes and looking for opportunities to help out. Whether it’s now or in the future, I want to know where we can add business value fast. Additionally, I’m collecting examples of good practices in our context and using those as a basis for an automation strategy.

Changes are coming our way, but we’re preparing to deal with them.


The Automation team, at sprint kickoff

Session Based Learning

Software Testing

Last week at BREWT1, a peer conference in Belgium, I was talking to Simon Tomes about an idea his new tool TestBuddy had triggered:

Session Based Learning Management.

Let me first introduce you to Simon and his project. Simon is a wonderful human being whose mission in life is to raise the spirits of everyone around him just a little bit higher. With years of experience, he’s become pretty darn effective at it. #GoEncourage is his mission to see things positively and communicate as such. As a tester, he’s found it especially helpful for his developer-tester relationships.

TestBuddy is Simon’s and Rajit Singh‘s brainchild. From the way things are looking, this will become the go-to application for Session Based Test Management: centralising your charters, your missions and your team, and giving an overview of what’s done and what’s still on the stack. I’ve had the privilege to watch their journey and am eager to see it evolve further.


What do we have to learn

The idea that sparked me to talk to Simon was that their tool could very effectively be ‘abused’ to guide team learning.
Imagine being on a team that kept a backlog of ‘what do we have to learn’, the same way we have charters that guide us in ‘what do we have to test’. Same concept, different goals.

Imagine sitting in a planning meeting that would outline skills, information, books, videos,… that must/should/could be explored. Having those ‘learnings’ split up in charters that work the same way as you’d test an application:

Plan your learnings:

  1. As a team, pinpoint a skill, piece of knowledge,… needed by the team
  2. Explore the skill to get a basic overview (a first charter?)
  3. Outline what the absolute basis is to start building from
  4. Identify what outside help/tools/resources you need
  5. Try to plan a step-by-step pathway of learning consisting of several charters

For every charter:

  1. State your mission of learning and describe what a successful session would look like
  2. Whenever you see opportunity to have a sidetrack, create a new charter for it
  3. Use the time of the session to learn in service of the charter mission.
  4. Debrief with the team: What have you learnt and what can you teach?

Debrief:
Using Jon Bach‘s PROOF model (Past, Results, Obstacles, Outlook, Feelings):

P: How did you go about your learning journey?
R: What can you identify as having learnt?
O: What stood in the way of learning?
O: Did you see sidetracks or uncover new steps to explore?
F: Does the learning path still feel valuable? Would you abandon/change/evolve the pathway?

Notice how the process guides you through different learning phases?
Explore, Draw, Internalise, Debate. 

I can see a wonderful learning path for the whole team using this method. In the long term, nothing makes us happier than learning something new.

Working together to gather new insights, collaborating on setting learning goals and sharing acquired knowledge… I imagine this would be an incredibly strong psychological, emotional and fulfilling journey for any team.

What do you think? Could this be something you could apply in your project? How much time would the team be able to invest in this per day, week, sprint?
