The Three-Facet Approach

Experience Reports, Software Testing

As I’m writing this, at least 21 other people are feeling distressed at this very moment and in the hours to come.
The first release of the current project was deployed last Saturday. Tomorrow, Monday, the work life of 600+ people will be impacted tremendously.

These past few months have been a series of accomplishments. I have seen a team time-travel from the late ’90s to the modern-ish age of software development.
When confronted with tight deadlines, an oversold, over-marketed product, extreme expectations and a new way of working, the team showed they could keep on growing.
They’ve shown incredible tenacity, and it is through this persistence that the deployment was made possible.
Tomorrow will show how well we did.

Before starting this project there was one tester present. She used to test mostly informally, doing a final check of whether fixes actually fixed the problem and didn’t break anything else. Development was limited to implementing small changes on a staging environment before deploying them to the live one.

Since that day, five consultant developers and one consultant tester (me) have joined the team. Alongside working on the product, we were asked to implement a methodology that would make the team more agile and bring more structure to the process.

TFS, Test Management and the Three-Facet Approach

How has this ‘improved process’ affected our testing?
Among other things, we implemented a few triggers so that the testers became part of the team. No longer will we hear that quality is only our problem, or that we are responsible if any bugs are found in production. We’ve also made sure we integrate with the TFS environment, embedding ourselves further in the development cycle.

Considering Test Planning:
– We want to be able to participate in the Definition of Done
– We want to achieve some sort of traceability
– We want to have a set of core tests
– Most of all, we want to find problems fast

This led to a compromise approach (a rough sketch in code follows the list):

  1. Test Cases: For each User Story that is put on the scrum board, the testers create a test case or three. These describe core functionality, and when they all pass, our part of the Definition of Done is completed.
  2. Regression Test Set: A checklist is maintained: “Can I sort the grid by clicking the header?” This regression checklist can be used for UAT, as a basis for automation, and for regression testing.
  3. Charters, sessions and test ideas: This facet has yet to be fully clarified and followed; most of our testing so far has been informal. We’ve found 600+ bugs in limited functionality in only a few months by following this approach, yet we have still to fill out a single charter.
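
For what it’s worth, here is a minimal sketch of how these three facets could be modelled in code. The class and field names are my own illustrative assumptions, not our actual TFS work item types:

    # Illustrative sketch only; names are assumptions, not TFS work item types.
    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        """Facet 1: a scripted check tied to one User Story (our slice of the DoD)."""
        user_story_id: int
        title: str
        passed: bool = False

    @dataclass
    class RegressionCheck:
        """Facet 2: one line of the regression checklist."""
        question: str            # e.g. "Can I sort the grid by clicking the header?"
        automated: bool = False  # checklist items are candidates for later automation

    @dataclass
    class Session:
        """Facet 3: one exploratory session, driven by a charter."""
        charter: str               # mission statement for the session
        timebox_minutes: int = 120
        notes: list[str] = field(default_factory=list)
        bugs: list[str] = field(default_factory=list)

    def story_done_for_testing(cases: list[TestCase], story_id: int) -> bool:
        """Our narrow Definition of Done: all of a story's test cases pass."""
        own = [c for c in cases if c.user_story_id == story_id]
        return bool(own) and all(c.passed for c in own)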

The three-facet approach is a made-up name for a methodology that serves our project’s needs. It is a compromise: it gives us the room we need to do good testing while keeping the overhead to a minimum. These last few weeks, people have asked us many questions.

  • “How many ongoing bugs to go?”
  • “What do you think of the quality?”
  • “Do you think go-live is possible this weekend?”
  • “Have you tested the billing functionality already?”

The question of whether all test cases were written and/or passed didn’t make the list.
Should I take that as a lesson learned?
In any case, I plan to further implement Session-Based Test Management in our testing now that the pressure of go-live is off.

One thought on “The Three-Facet Approach”

  1. Hi Beren!
    It’s great to see your blog! I’m not surprised to read good and useful content too 🙂
    You help us testers become better testers!
    Nice post, I liked it a lot.

    Some considerations I made are the following:
    What I was wondering about is the test charters and how to ‘plan for them’ and make them visible. They are mostly not really mappable to user stories, so you can’t always say that you can test a user story fully when it is ready to be tested.
    You want to test the user stories as a whole, and again over different sprints, to check whether they make sense and integrate well with one another. Yet I think, as you say, that as much testing as possible needs to be in the Definition of Done.
    So how would you approach this?

    (Beren): To participate in the DoD, the first facet is used. Only critical-path test cases are written and validated when the User Stories are ready for test. If those appear to work, the DoD is completed as far as the testers are concerned. This is a very narrow definition: if unit tests and integration tests pass, there should normally be no problem for the test cases to ‘pass’.
    However, the third facet revolves around exploratory sessions. I’ve been pondering creating a Test Task per User Story and estimating it at an arbitrary 2 hours. This task would be the first session of potentially many, with the User Story under test as its center. Tests should be focused around the User Story, but the tester should not be prohibited from testing beyond it if he feels the need to. A small illustration follows.
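
    To make that concrete, here is a hypothetical first session reusing the Session sketch from the post above; the story number and charter wording are invented for illustration:

        # Hypothetical first session for such a Test Task; the story number
        # and charter text are invented, not taken from our backlog.
        first_session = Session(
            charter="Explore User Story #123 around its critical paths; "
                    "follow promising leads beyond the story as well.",
            timebox_minutes=120,  # the arbitrary 2-hour Test Task estimate
        )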

    I first thought about adding charters to the product backlog, but in Scrum this is actually not done, as it is a single-discipline job, and a product can perhaps be shipped without a certain aspect of testing being executed.

    (Beren): If it fits your context and everyone in the team has learnt to work with it, why not? I might have misunderstood you. I don’t feel bound to Scrum as a process if that means limiting our thinking and/or ways of working.

    The second idea was to have a general Definition of Done in which the test charter says which testing needs to be conducted before a sprint can be considered done.
    The problem I had here was that we cannot always define what testing needs to be done, as we won’t know where the bugs are. Good test ideas emerge while testing.

    (Beren): Agree

    The third idea was to just take a time-box for exploratory testing in every sprint. That time-box needs to be planned for, making sure that at least x hours of testing during a sprint are spent on exploratory testing.
    The remaining time can be divided among defect retesting, checking of user stories, and preparing test data to check user stories. I kind of got stuck on this one, as I have not had a chance yet to try it out 🙂

    (Beren): You’ve come to the same conclusion as I did. I’ll let you know how it goes! I too am still scratching the surface of this. I wasn’t going to let the DoD be influenced by the first exploratory session, but it could definitely work that way.

    About going live or not: in my opinion, there should be an incentive to go live after every sprint. Many scrum teams are not able to do this. However, you should pay a lot of attention to continuous integration, continuous builds, and unit and integration test coverage. Not being able to release should be a big disappointment for the team, after which the team should stand up, take a bit less scope on the fork ((Beren): not too sure this is English, but I understood 😉) and try to deliver a full package. If you don’t do this, agile becomes chaos and buggy, sloppy software. You start running behind and you will not catch up.
    To deliver quality, especially in scrum, development also needs some extra discipline. Try to convince the lead developer and some dev team members to get this on track. At first you will slow down a bit, but once on track you will deliver better and faster.

    (Beren): This too we’ve been struggling with. Continuous releases into production, apart from bug fixes, are not (yet?) working in our context. I understand your point about it being an incentive, but in the current context it would bring a lot of stress and complexity to the team, which we probably couldn’t handle.

    If you need reporting on checks, maybe you can add test cases to user stories. I split the testing work into checking user stories, where I report at the user story level (does it have outstanding defects or not; was testing completed or not). Then, with all the time I have left, I try to spend time on automating regression tests and on exploratory testing, following session-based testing. I actually gladly hand the checking of user stories to the analysts or users and coach them in the testing, while keeping the technical testing (testing of interfaces, for example) in the test team.

    What do you think?

    (Beren): Looks like we’re climbing the same mountain. In short, the three-facet approach does three things:
    1. Validate User Stories with limited scripted checks (automated or manual);
    2. Cross-User-Story exploratory testing, managed through sessions, on a much deeper AND broader level;
    3. Create a regression checklist that can be used for Product Owner tests, UAT tests,…

    Coaching our UAT users, or any users for that matter, is something we’ll have to pay more attention to. This became apparent in the few weeks after our first release.

    Cheers,

    Mike
