7 things to show your skills at the practical CAT exam

At the practical exam for the Certified Agile Tester, there are 7 product types in which you need to show your mastery. Before my CAT exam I took some time to reflect on the essence of each of the products and how I could best show that I know how to test in an agile team. In this article I’ll list my notes on the 7 things to show your skills at the practical CAT exam. I strongly suggest that you take some time to do that as well.

  1. Sprint Plan
  2. Test Strategy
  3. Task Board
  4. Sprint Burn-down Chart
  5. Session Sheets
  6. Sprint Review
  7. Sprint Retrospective

When you’re done reading through my notes, please leave a comment about how you will deliver your favourite one.

Just because there are seven product types doesn’t mean that you’ll pass if you only deliver seven items – it is said to have been done, but whoever made it with only one of each must have been very exact and to the point. Remember that the more content you deliver, the higher your chances of success, because whatever rubbish you deliver simply doesn’t count. You can never lose marks, only gain them, so train your handwriting and produce as much as you can.

1. Sprint Plan

This is the first thing I did at the exam – even while the instructor and the invigilator were setting up the computers, I was scribbling away just to get it out of my head.

At the practical exam the SUT simulation is delivered in three drops that all need initial testing, regression testing, and eventually some re-testing of the defects that you will find. Apart from that, you’ll spend your time on planning, analysing the material, and creating the products that you’ll deliver (i.e. each of these seven products).

It is crucial to set time boxes and to be strict with yourself about upholding them. Start practising now: how much time will you spend on reading and commenting on this article? I found that my time scheduling slipped when testing the drops, so in retrospect I would have made smaller time boxes than I initially did (have a look at my initial plan below).

At first I thought it was okay to slack a bit on the time boxes during the testing, but it turned out to be where I needed to be the most strict.

Because my native language is Danish and the exam was in English, I was given extra time, so my sprint plan looked like this:

30 min: Planning
* Test Strategy
* Task board
* Initial Burndown

30 min: Drop 1
* Analysis
* Testing
* Reporting

30 min: Drop 2
* Analysis
* Regression & Re-Testing
* Testing
* Reporting

30 min: Drop 3
* Analysis
* Regression & Re-Testing
* Testing
* Reporting

10 min: Review
* List defects (before the exam I planned to list passed and failed stories, but I ended up listing those at the Sprint Retrospective)

20 min: Retrospective
* Listed the passed and failed stories. I was behind my plan, so I didn’t get to test the third story properly. I compensated by explaining which parts of it I did test and noting that, in a real-life situation, it would go into the next sprint. (Before the exam I had actually planned to explain how to mitigate each of the types of failures I saw during the exam.)

How much detail will you put into your Sprint Plan, and how will you practise and enforce the time boxes you set for your tasks? Share your thoughts in the comments.

2. Test Strategy

Start by making a bare-bones strategy with only the most basic statements describing how you will test throughout the practical exam.

After each test session, or when you’re done testing a drop, take a minute to update the strategy by adding any methods you’ve used during the session. Also note which methods you added earlier but have not used yet. This will be valuable information for the Sprint Retrospective later on.

These are examples of the basic methods I wanted to use for my exam. Be sure to use your own favourites, go through those listed in the manual, then take a look at the stories you will need to test and add any that make sense in your specific exam situation.

  • Risk assessment: how to start with the user stories that have the highest business value or the user stories that other stories depend on (this one didn’t make sense in my situation because each drop had a specific story and the order was predefined with the highest business value first. It would, however, make a lot of sense to include this statement in a real-world test strategy)
  • How tests will be conducted: which test activities, types, techniques, regression testing, re-testing, testing levels, etc.

    • Exploratory testing
    • Regression testing after each new drop
    • Re-testing of defects reported in earlier drops
    • A short description of the use of session sheets for documenting the test sessions (more why than how)
    • How acceptance criteria are used to design tests, and that the Definition of Done (DoD) includes that all acceptance criteria must pass before a user story can be accepted at the sprint review (I really struggled with this at the exam, because all of the stories had some AC that didn’t make it. My conclusion is that the exercise here was to make a DoD that made room for technical and testing debt and set a threshold for what was acceptable to the product owner)
    • That the tester will inspect both the front-end (UI) and the back-end (database)
    • That any defects will be reported on the session sheets (i.e. not on separate sheets)

Take a look in the manual at what else a test strategy should contain. These notes reflect my style, and I am looking forward to reading about yours in the comments.

3. Task Board

At the practical exam there is no real opportunity to use a task board, so you are only asked to make a sketch of one. Make sure to clearly show how it would work in a real situation, and include a legend and any additional explanations you need to clearly explain how your Task Board would be used.

Because it is not actively used, it only eats your time, so be very careful not to spend too much of it on the sketch.

What does your favourite Task Board look like? In what genius ways has your team’s task board evolved over time?

4. Sprint Burn-down Chart

Compared to the Task Board, the Sprint Burn-down Chart can actually be used, so here are my notes on how I did it at the exam and what I had planned.

In my first attempt at the exam, I found so many severe defects that I didn’t feel I could pass any of the stories. In hindsight, I should have passed some of the defects on to “the next sprint” as testing debt and passed some of the stories with notes on that in the retrospective. At my second attempt, I only passed two out of three stories, and my Burn-down Chart was more interesting to look at – not just a flat line.

In your planning you will give each of the stories story points on a given scale (in both my cases it was 1 through 6). When you have estimated the stories, create the Sprint Burn-down Chart: plot the initial sum of the points at drop zero and draw a line for your expected velocity. As you finish each drop, plot your progress in the chart.
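
The arithmetic can be sketched in a few lines of Python. The story points, number of drops, and pass/fail outcomes below are made-up examples for illustration, not from my actual exam:

```python
# Made-up example: three stories estimated at 6, 4, and 3 story points.
story_points = [6, 4, 3]
total = sum(story_points)              # 13 points, plotted at drop zero
drops = 3                              # one code drop per story

# Expected velocity line: burn the total evenly across the drops.
ideal = [total - total * d / drops for d in range(drops + 1)]

# Actual progress: subtract a story's points only in the drop where it passed.
completed_per_drop = [6, 4, 0]         # the third story failed, so it never burns down
remaining = [total]
for done in completed_per_drop:
    remaining.append(remaining[-1] - done)

print("ideal:  ", [round(x, 1) for x in ideal])
print("actual: ", remaining)
```

The gap between the two lines at the final drop is exactly the points of the story that didn’t pass – that is what makes the chart “interesting to look at” rather than a flat line.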

5. Session Sheets

The Session Sheets are used throughout the practical exam. What sets them apart from the rest of the products is that there is a special way to name your session sheets so they can be traced to the user stories. Be sure to check the syntax in the practical exercises that you’ll get at the course.

At the exam I also got copies of the user stories with a field for my notes. I was not entirely sure what to do with them, so I simply made some notes related to the acceptance criteria listed on the story cards.

Because the Session Sheets are the core products of the practical exam, I set up some rules for myself before the exam:

  • Refer directly to Acceptance Criteria and make notes on the actions taken to test them.
  • Not all AC are fulfilled in one code drop, so be specific about which AC are and can be tested in a given session/drop.
  • Name defects according to AC, and note any other odd behaviour as Issues (there is a field for defects and a field for issues below a larger field for session notes on each session sheet).
  • Make a note in the strategy that any AC not mentioned have passed. I crossed this out and noted that the danger of this is that the AC will too easily be forgotten. Instead: note which AC are tested and which are left for later drops – even ones as simple as “The site can be accessed”.
  • When AC pass, create a regression test sheet per user story in Drop 2 and 3. On them, list the passed AC and check them off as they pass your regression test.
  • If there is a regression, make a specific note of it and log a defect. This actually happened in my second drop. In my first attempt at the exam I didn’t make any specific regression tests, so I hope this makes a difference.
  • Only create regression test sheets for AC that previously passed and are not implicitly tested by other AC that specifically belong to the current drop. E.g. if an AC says “The site can be accessed”, it will implicitly be regression tested by any new testing that you do.

The point of my rules was to be specific about what to make note of and what not to waste time on. In essence, my rules reflect my weaknesses and act as a declaration to address them. Make sure to reflect on your own weaknesses and how you’ll keep them in check during the exam.

In the comments, share the single most important rule you will set for yourself when taking session notes.

6. Sprint Review

As you can see in my Sprint Plan, the Sprint Review has the smallest time box. All I found necessary was to list the unfixed defects related to the user stories. Then I added a note simulating a product owner’s decision taken on the customer’s behalf.

Any defects not directly related to acceptance criteria, but related to the additional requirement specifications, should also be listed here if they were not fixed. Don’t add any details about the defects. I didn’t even write out the descriptions/titles of the defects, just the IDs that were also listed on the session sheets.

Bear in mind that the Sprint Review should simulate a demonstration of what a team has accomplished during a sprint, so it is an honest report on what happened and what was delivered. The core value to both the customer and the team, in relation to the collaboration, is honesty, because it makes it clearer what can be expected in the next sprint. The same holds true at the exam, where you need to show your professionalism regarding the state of the SUT and show that you have done what was in your power, as the tester, to contribute to the product.

As with the other topics, make sure to read the manual and take some time to reflect on how your sprint review will turn out at the exam. If you already have a clear idea, put it in the comments.

7. Sprint Retrospective

The sprint retrospective sheet contains notes about how the process can be improved in the future. In the exam situation, it should give rational ideas on how you behaved during the exam process and how that could be improved.

The retrospective can point out symptoms of what the simulated team can improve. E.g. if there are many security-related defects, you can suggest bringing in a security expert to test the product. The same goes for any performance or user-experience issues or defects you may find.

If you can see that a majority of the defects could have been prevented with code reviews, unit tests, clearer user stories and acceptance criteria, the Sprint Retrospective is where you should address those.

Most important is to address all the issues noted on the session sheets during the tests – and make sure to note any such issues on the session sheets when you find them.

Explain which issues are most important to the team and the customer and how the team would mitigate these.

For example: if stories are too big, label them as epics and break them down into smaller stories. If acceptance criteria are unclear, explain that the testers should pair up with the product owner to make them testable.

If you have an additional example, post it in the comments and explain what a symptom could be and how the issue can be a problem in the future.

How To Give Constructive Feedback on User Stories

As a participant in an agile project I want to know how to give constructive feedback on User Stories so the team can continuously improve.

In many software development projects, the adoption of agile development principles has had a great start but struggles when it comes to implementation. I’ve seen examples of that in the way features and requirements are broken down into programming tasks and further into testing, deployment, and maintenance tasks. The best way to tie all these tasks together is to describe the end result as a User Story and reference the related tasks to it. That way, whoever does a related task will always have the expected end result in mind and can effortlessly see what other tasks are planned for that story. As with many other things, getting started can be difficult and may need a lot of collaboration, so the best way to begin is to know how to give constructive feedback on User Stories.

  • What is a user story?
  • How can the team prevent feature- or scope-creep using acceptance criteria?
  • How can scenarios put the acceptance criteria back into the context of the user story to make it testable?
  • Who should give and take advice about writing user stories?
  • Who should decide on the level of quality for a story?
  • What makes feedback constructive?

The goal with this article is to describe one way to assess the quality of user stories and provide constructive feedback on how to improve them.

What is a user story?

A user story is a simple way to describe a feature, its target audience, and its merits in a single sentence. It is basically the requirement specification of agile projects.

The simplest form is the strongest, as in the following:

  • As a [role],
  • I can [feature]
  • so that [reason] (optional)

For example:

  • As an [administrator],
  • I can [edit posts by any member]
  • so that [the post can meet the quality standard of the forum]

Check out this great article about writing user stories.

Set boundaries with acceptance criteria

The acceptance criteria are rules that define the user story. Without them, user stories are open-ended and ultimately suffer from feature creep. Consider the following examples:

  • The post can only be changed by the author or an administrator.
  • Members can only use the products they have paid for.
  • The video will start when the page is loaded and the window has focus.

The acceptance criteria are clear, specific, and unambiguous.

This article has some nice practical examples of acceptance criteria.

Add context with scenarios

Scenarios are used to give context to the acceptance criteria of the user story. Some acceptance criteria can occur in multiple scenarios. The scenarios make the user stories testable.

Consider this simple form for using scenarios:

  • Scenario [#]
  • Given [context]
  • When [event]
  • Then [outcome]

Applying the example from the user story:

  • Scenario [1]
  • Given [an administrator is logged in]
  • and [has loaded another user’s post]
  • When [clicking the edit button]
  • Then [the content of the post is editable in a text field]

Adding a variant of the scenario:

  • Scenario [2]
  • Given [a user is logged in]
  • and [has loaded a post]
  • and [is the author of the post]
  • When [clicking the edit button]
  • Then [the content of the post is editable in a text field]
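
The Given/When/Then structure maps naturally onto automated checks. Here is a minimal Python sketch of the two scenarios above – the forum model and function names are hypothetical, purely for illustration:

```python
# Hypothetical in-memory forum model for the scenarios above.
class Post:
    def __init__(self, author):
        self.author = author

def can_edit(user, role, post):
    """AC: the post can only be changed by the author or an administrator."""
    return role == "administrator" or user == post.author

# Scenario 1: Given an administrator, When editing another user's post,
# Then editing is allowed.
post = Post(author="alice")
assert can_edit("admin_bob", "administrator", post)

# Scenario 2: Given the author of the post, When editing it,
# Then editing is allowed.
assert can_edit("alice", "member", post)

# Negative variant: an ordinary member cannot edit someone else's post.
assert not can_edit("carol", "member", post)
```

Each Given clause becomes test setup, the When becomes the call under test, and the Then becomes an assertion – which is exactly what makes scenarios testable.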

The team should agree on the criteria of a good user story. If a story does not meet the criteria, simply ask yourself and each other: “What is missing?” Raise your concern and your preliminary answer with the team – even if your answer is so simple that you are almost embarrassed to say it out loud.

This article has a great example of Acceptance Criteria vs. Scenarios involving baby pets :)

How To Give Constructive feedback on User Stories

The team and you may not agree on the solution, but that’s okay.

The key is that you share your concern in a concise way and show the way forward by communicating a possible solution to the team.

Maybe your answer is really good and it solves the issue with the user story. Great, now move on. Your solution may also be really bad. Also great, if you are part of a team that you trust, and you all know that you simply suggested the solution, lacking a perfect one, to start a collaborative discussion on how to solve the problem. There are many degrees of perfect, especially in the context of a project where time, cost, and quality are the determining factors. The perfect solution could be the dirty hack that moves the project forward.

The idea here is that the culture in your team has to support even the worst suggestions because they can start deep conversations that may lead to genius solutions.

Trust yourself and your team, lead the way to the solution, and know that your solution may not be the best.

Leave a comment: describe pitfalls when creating user stories and how to give constructive feedback in agile teams.

Certified Agile Tester Course Outline – my summary of the CAT course

I recently became a Certified Agile Tester, and I just came across my notes from the course and wanted to share my personal experience with it, in case you are interested in the course but would like some more details before signing up. Fortunately some of my colleagues could give me details before I started the course, but I might be luckier than most. ISQI is not super informative about the course content, so here you go: my summary of the CAT course – the Certified Agile Tester Course Outline. Enjoy :)

If you have any questions or comments, do not hesitate to comment below or contact me.

My CAT course consisted of 4 days of training and one exam day with two tests. Each of the four days of the Certified Agile Tester course has a theme: Day 1: Agile Methods and Process, Day 2: Planning, Day 3: Testing, Day 4: Teams. I’ve organised the article in sections describing each of the training days. The exam day is a tale of its own, so I might write about that another time.

Day 1 – Agile Methods and Process

The first day is about the Agile Methods and Process. These are covered by the modules 1 and 2 in the CAT Manual 3.0.

  • Introduction
  • Daily Scrum
  • Problems with Traditional
  • Agile Manifesto & Principles
  • Agile Methods
  • Agile Process Option
  • Roles
  • Intro to Agile Exercise

We started the course by getting introduced to the instructor, Søren Wassard. After that, we got a short task to write our own User Story, which we all put up on a Task Board that we used for a Daily Scrum every morning during the course. We talked about the concept of the daily scrum, although all of us knew it. Later, as we discussed the variety of uses of the task board, some of us shared our experiences, which proved to be very different.

One of the things students are evaluated on is soft skills, which the instructor assesses throughout the training period.

A rather negative topic is the constant comparison to traditional software development methods like Waterfall and the V-model. In the manual there is an emphasis on the problems with traditional methods.

We were introduced to the most commonly used Agile methods: Scrum, Kanban, XP, and Lean Software Development. The Agile process option that is used for examples in the course is briefly defined and referenced throughout the rest of the course. Within the process, the necessary roles are defined.

We ended the day with a practical group exercise in which we had to build a small town in Lego bricks using the knowledge about agile that we had acquired during the day.

Of course there was some homework as well: 5 rather easy K2 questions. I re-read the part of the manual we’d gone through during the day, taking notes and highlighting some of the key points from the lectures. Finally I skimmed modules 4, 5, and 6 really quickly before passing out.

Day 2 – Planning

The theme of Day 2 was Planning and the topics included in modules 4 and 5, which describe the strategic aspects of agile testing.

  • Requirements & Specifications
  • Iteration Zero
  • Release Planning
  • Task Board
  • Test Strategy
  • Estimation
  • Iteration Planning
  • Burn-down Charts
  • Sprint Practice Exercises

That covers the topics of modules 4 and 5, which are about pre-planning and continuous planning respectively.

The pre-planning theme in Module 4 includes topics mostly on Requirements and Specifications that should be addressed before the team starts continuously delivering features.

Iteration Zero is the framing in which the team can get to know each other, do the initial Release Planning, and decide how to use a Task Board. The team can outline the Test Strategy and do the initial Estimation of any stories and tasks that might already be specified.

Module 5 focuses more on the continuous planning that is repeated and improved between sprints. It includes these topics:

Iteration Planning is done at the very beginning of each sprint, while sprint reviews and retrospectives are used to evaluate the latest iteration for improvements in the following ones. Burn-down Charts on both the release and iteration levels are used throughout the project as an indicator of how things are going.

Building on the Lego exercise from Day 1, the Sprint Practice Exercises were now run as they would be on the exam day, although we collaborated on the exercises.

The homework on Day 2 was very similar to that from Day 1: a handful of K2 questions with answers and marking guidelines.

Day 3 – Testing

Day 3 had more technical topics than the other days. It focuses on the practical aspects of being an agile tester in module 6 and most of module 7. We did the last part of module 7, about Test Automation and Technical and Testing Debt, on day 4.

  • Continuous Integration
  • Version Management
  • Pairing
  • Acceptance Criteria
  • Regression Testing
  • Defect Management

We talked about the very practical techniques of Continuous Integration and Version Management. These should be default tools for anyone in the team.

Another practical technique we discussed was Pairing – the concept of working on tasks in pairs. Basically the discussion was about when to pair and when not to. In general, the message was that any combination of team roles can benefit from pairing.

One of the bigger topics of the day was Acceptance Criteria in relation to the Definition of Done and Acceptance Testing. Building on these, we talked about which basic Test Techniques are useful in different situations. Various test techniques can be applied both when planning the user stories’ acceptance criteria and while conducting exploratory tests.

We talked about how Agile methods strongly advocate automating regression testing, given the continuous integration and deliveries that are a core part of the methods.

Defect management in Agile projects is different from that in traditional projects because often the defects are fixed almost immediately after they are found.

The practical exercises continued from day 2 in the same manner, where we worked in two teams of four people. On day 2 the exercises were stressful because we didn’t know exactly what to do, but on day 3 it was much easier because we had tried it the day before.

The homework consisted of only two K2 example questions. In addition there was one scenario-based question with four specific sub-questions.

Day 4 – Teams

On day 4, the main theme was Agile Teams in different contexts, with module 9 in mind. Additionally, several more technical topics were covered in the last part of module 7 and module 8. Finally the course was summarised with module 10, and the last preparation questions for the exam were reviewed.

  • Test Automation
  • Non Functional Testing
  • Tools Support
  • Technical & Testing Debt
  • TDD
  • Teams
  • Agile For Large Projects
  • Course Summary

We started the day with the last part of module 7 about Test Automation, Non Functional Testing, Tools Support, and Technical & Testing Debt. With regards to Test Automation, the CAT manual mostly covered the why and what, but as part of the course we discussed how automation is implemented in some of the participants’ companies. In a similar manner, the why and what of non-functional testing was briefly reviewed, and we discussed which techniques can be used for testing non-functional requirements.

We looked at different types of tools that can support the many shared roles and their activities in Agile projects, and talked about the pros and cons of using open source versus commercial tools.

Agile theory and practitioners often talk about Technical Debt, whereas Testing Debt is a less covered topic. We reviewed and discussed how to manage Testing Debt. In the same way that debt is discussed mostly in relation to technical issues, TDD is often centred around programming, although, as the name suggests, it is test design before programming. The main message was that test professionals have a lot to contribute in TDD.

Team structures are unique and should be treated as such in practice. That said, we discussed some common patterns that Agile Teams form. We covered distributed and co-located teams and which tools can support distributed teams. We also talked about how soft skills can affect how well teams perform.

Although Agile methods are very common in small teams and start-up companies, certifications such as the CAT are often an answer to the needs of larger companies and enterprises. That makes Agile for Large Projects and Enterprise Projects obvious and very relevant topics for the course.

The final practical exercise consisted of two parts (check out the article about the practical exam and exercises). The first part was done in the same groups as the previous days. The second part was done individually to prepare us for the exam conditions.


Concluding the course, we reviewed each module briefly in a Q&A where the instructor asked questions about them and we, the participants, answered.

I hope you can use my Certified Agile Tester Course Outline. Please leave a comment if anything was unclear – then I’ll try to answer as best I can.


How to deliver test statuses at a daily scrum

As an agile tester, I want to share the status of the project’s testing tasks with my team, so we can solve any issues and impediments before they become bottlenecks.

This article will give you an idea of how to deliver test statuses at a daily scrum. The test-related work in agile teams tends to revolve around user stories and their acceptance criteria. Naturally, this will also be the centre of the discussions in daily standup meetings (or scrums, as they are often called).

Here are some of the basic questions that I will address in this article:

  • What is the basic agenda of a stand-up meeting?
  • Which testing activities should an agile tester be planning for an iteration?
  • What testing constraints should an agile tester be prepared to meet?
  • What obstacles impacting testing should an agile tester be prepared to address?
  • What to consider when communicating in cross-functional teams
  • The beauty of Failing Fast and sharing it based on trust and shared responsibility

How To Deliver Test Statuses at Daily Scrums so Everyone can Understand It

One of the rules of the scrum is that it is time-boxed, so you don’t get to chit-chat – you have to be concise. Another scrum rule is that every member delivers a status, including obstacles and next steps, within the time box. It is pretty much like a turn-based game where each player goes through the turn’s phases:

  1. What were your tasks since the last scrum?
  2. What are your tasks until the next scrum?
  3. What may prevent you from accomplishing them?

A case: You’ve been working with a coder on a user story for an email opt-in form on a website:

“As a non-subscriber,
I can submit my email on a landing page,
so I can receive updates through the email marketing channel”

You’ve established the acceptance criteria and the need to do an exploratory test that includes an integration to an email marketing provider, but you don’t have the customer’s credentials for the service, so you cannot set up a test environment for the integration. You’ve already collaborated with the coder on the tests, and there is a stub that can receive the email.

This is how you’d give your team the status update:

“Yesterday, Coder Bob and I tested the story for the email opt-in form on the landing page with a stub.
Next step is to test that the integration to the email marketing provider actually works.
However, I need the credentials to the email marketing service so I can set up an email list that can catch the output from the form.”

To that, the product owner would naturally answer:

“Sure, I’ll get it for you after the meeting”

There: three breaths of air and everyone on the team knows exactly what you did, what you’re about to do, and what holds you back from doing it. You even got a solution to your problem in less than 30 seconds. (I should add that the example story is pretty short and there would be more of the same kind in your status update. Problems of this size could also be solved outside the standup meeting, but for the sake of the example – and me not being able to come up with a real problem – please play along here.)

In the case, you clearly state your task since the last stand-up meeting, your next step, and the obstacle you’re facing. A team member has the solution to your problem, and it is clear what has to be done and who will take the next step.

However, what if not all team members know what you’re talking about, even though you’re not speaking in riddles?

In agile teams all members have different skill sets, which is good. It is therefore important not only to be concise but also to provide enough information about an obstacle so everyone can understand it and potentially help you clear it.

It is a two-way street: if you find yourself in a scrum not understanding a fellow team member’s problem, it is your job to ask for just enough detail to understand it. Don’t babble on or encourage the speaker to do so. Simply raise your hand and ask concisely. To riff on the example, you might not know what an email list is in the context of email marketing providers. Ask: “What is an email list?” Everyone at the scrum will know the context of the question and know that you ask because you might have a great solution to the problem once you know more about it.

Fear of inadequacy might keep some people from asking that “dumb” question. Fear of inadequacy and “dumb” questions simply do not exist in pure collaborative work. You’re already a team member, so you’ve probably been chosen for a unique skill set that fills the gaps of other team members – perhaps even your ability to ask questions in a way that reveals unforeseen issues, in which case it is your duty to practise that skill until you succeed.

  • Be concise.
  • State what you did,
  • what you’re about to do,
  • and what may hold you back.
  • If in doubt, ask
  • and ask again.

What is your take on how to deliver test statuses at a daily scrum? Do you share similar experiences or is there a side to it that you think is important? Leave a comment and let me know.

Can metrics improve agile testing? Measure, Analyse, Adjust

As a tester in an agile project, I want to make visible how I spend my time, so my team can act accordingly and stakeholders know what to expect.

As you’ve probably experienced, plans and estimates are revised during a sprint. I have never heard of any healthy project that has not done that. In an unhealthy project, stakeholders may stick to a loosely estimated initial plan because they do not know better and cannot act on any historic data.

Consider a scenario with a release date getting closer. A stakeholder, who is constrained by a contractual budget and a promise of timely releases, asks the team when they are done testing the stories. The team can’t answer with any acceptable margin, so the stakeholder starts drilling down into the stories, asking specific questions about how many hours are needed for exploratory testing, test design, and GUI automation – basically taking over the test strategy. At the end of the meeting, the testers have committed to an arbitrary, forced number of hours for each type of testing on the remaining user stories. The only benefit is that the stakeholder is satisfied – until it is discovered at the next status meeting that none of the new estimates are useful.

In this article I provide an answer to the question: can metrics improve agile testing? The best way to answer that is to split it into these four subtopics, which I will address in sequence:

  • Why is collecting metrics helpful?
  • What metrics should be gathered during a sprint?
  • What is the easiest way to gather metrics during a sprint?
  • How can metrics help improve testing?

Everyone must know why they are collecting metrics

Imagine recording data without knowing what it is used for or why it is gathered in the first place. The first thing I can imagine myself doing is – consciously or unconsciously – making up reasons. Adding to that, I would become insecure if the metrics were in any way connected to my performance on the team. Building on that insecurity and the uncertain reasons behind the metrics, some might even alter the numbers in a direction that would benefit no one.

Say, for example, that you are recording how much time you spend on testing a story. You have no idea what the data is used for. You want yourself and the project to look good in the statistics, so you start rounding the numbers down here and there – just a tad. Over the duration of the sprint, that becomes several hours. Then it is time to stack the backlog for the next sprint and, guess what, more stories are taken in than you and your team can test within the sprint.

So be transparent about why and how metrics are gathered. Be vocal about why you measure any performance on the team and project. If you are in any doubt about why you are collecting a metric, ask about it and discuss what it can be used for and how it can benefit the project.

What metrics should be gathered during a sprint?

In other words: what do testers spend their time on? In my experience, the answer is always some version of these three things:

  • Testing (including running tests against the application under test)
  • Investigating, reporting, and re-testing bugs (including collaborating with coders and product owners)
  • Setting up and waiting for environments or other resources such as test data (including designing scripted test cases based on notes from previous ET-sessions)

One of the most efficient test techniques in agile testing is exploratory testing because there is less preparation and setup required compared to scripted tests. It is not always easy to gather metrics while focussing your energy on finding creative ways to test the system, so you will need a simple system for doing so.

What is the easiest way to gather metrics during a sprint?

One such system I’ve been using for gathering metrics during exploratory testing sessions is a light version of session-based test management (SBTM), which was developed by Jon and James Bach. The core idea is simply to take notes of what you are doing during a time-boxed session and mark each entry. The notes have to be really simple so they do not take up all your creative testing energy.

The Wikipedia entry defines session-based testing like so:

Session-based testing is a software test method that aims to combine accountability and exploratory testing to provide rapid defect discovery, creative on-the-fly test design, management control and metrics reporting. wikipedia.org/wiki/Session-based_testing

The genius of the tool is that you take specific note of how much time you use on 1) testing, 2) investigating bugs, and 3) setup – also called the TBS metrics. When you gather these metrics, you can see where you spend your time. If you spend less time testing than investigating bugs or even setting up the environment, you can take action, for example by getting the environment setup automated. If you spend a lot of time investigating bugs, perhaps the programmers should be more involved in testing, or the user stories should be analysed with the product owner to get more specific acceptance criteria.

The important point is that if you collect these three simple TBS metrics, you have come a long way towards being able to analyse your results, and you can continuously improve your testing effort based on those numbers.

The full SBTM may be a big bite to start off with, so I recommend that you start with SBT Lite, which was defined by my mentor Sam Kalman, who worked with Jon Bach at Quardev. SBT Lite is much more free-form, but is still able to capture the metrics you need.

In practice, a tester takes notes on his/her actions during an exploratory test session.
Before the session starts, a session charter is chosen and noted on the session sheet along with the tester’s name. The charter could be one of the acceptance criteria in a user story or an entire user story, depending on their size and scope. A meta charter could also be to come up with session charters for a set of stories. The charter is basically the mission of the session.

During the session, the tester marks his/her notes as T, B, or S, along with a time-stamp and a one-line description of what happened. The result should look much like a commit log from a healthy pair programming session.

When analysing the session logs, it is evident how much time is spent on testing, bugs, and setup respectively. The full SBTM resource links to a Perl tool that can analyse session notes if they are formatted in a specific way, but you should write your own session log parser if you prefer to follow the simpler version or even design your own set of metrics (which I recommend once you’ve tried this for a few sessions).
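To give an idea of how small such a parser can be, here is a sketch in Python. It assumes an entry format of my own invention – a time-stamp, a T/B/S mark, and a one-line description, with an END line closing the session – so adapt it to however your team formats its session sheets:

```python
# Minimal session-log parser: sums minutes per TBS category.
# Assumed entry format: "HH:MM <T|B|S> description", ended by an "END" line.
from collections import defaultdict
from datetime import datetime

def tbs_split(log_lines):
    """Return minutes spent per category (T, B, S) for one session."""
    entries = []
    for line in log_lines:
        stamp, mark, _ = line.split(maxsplit=2)
        entries.append((datetime.strptime(stamp, "%H:%M"), mark))
    totals = defaultdict(int)
    # Each entry lasts until the next time-stamp, so the final "END"
    # line only serves to close the last real entry.
    for (start, mark), (end, _) in zip(entries, entries[1:]):
        totals[mark] += int((end - start).total_seconds() // 60)
    return dict(totals)

session = [
    "09:00 S waiting for the test environment to deploy",
    "09:20 T exploring the signup flow with invalid emails",
    "09:50 B reproducing and reporting a validation bug",
    "10:00 END session closed",
]
print(tbs_split(session))  # {'S': 20, 'T': 30, 'B': 10}
```

Once the numbers are in a dict like this, turning them into percentages or aggregating them across a sprint is trivial.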

How can metrics help improve testing?

As I suggested in the scenario in the introduction, it is not only managers and stakeholders that benefit from the collected metrics. Testers can benefit on a personal (professional) level, the team can benefit, and if there is a cross-team test organisation, it too can benefit from the TBS metrics.

The individual can use the analysis of the metrics to hone his/her skills – for example, by comparing how much time is spent on each task type with other testers. If the numbers differ significantly, have a conversation about why. Maybe you can find new ways to be more efficient, or maybe you have some important advice for one of your colleagues.

The team can use the analysis to find bottlenecks. E.g. if a lot of time is spent on setup, perhaps the testers need a dedicated test environment that is hooked up to the CI server – or if that is already the case, perhaps it should not re-deploy every time there is a push to the development branch.

So, can metrics improve agile testing?

Yes, if you start measuring right now!

My final recommendation regarding metrics is simply to start collecting data immediately.

Start simple by introducing a simplified version of Session-Based Test Management to the testing colleagues on your team – talk to them about it and discuss how your team might benefit. Next, present the idea to the rest of your team and get feedback on how simple it needs to be for everyone to be able to use it.

Now, collect some data, analyse it, present the results, and adjust.

Leave a comment about how you use metrics in your organisation and let me know if this helps you in any way.


Here’s a list of tools you can use to look more into session-based test management and to start a simple measurement of your team’s testing effort:

How to Plan and Document Testing in Agile Projects

As a tester in an agile software development project, I want to plan and document exploratory tests, so I can prove test results to customers and easily re-run the tests for re-testing and regression testing. This article is made to answer the question of how to plan and document testing in agile projects.

The problems are that time is spent documenting tests that are only used once, or that the tests are only run by the same person, so the documentation is never actually read. Other times, the test documentation needs to be handed off to another person who needs to run the tests. In that situation, the person receiving the documentation rarely has time to read it or the background knowledge required to understand it thoroughly without talking to the author anyway.

To solve these problems you should optimise processes within and around your team. Here are some basics that your team needs:

  • Reliable infrastructure that supports testing
  • A shared understanding of basic test techniques
  • Thorough user story design with the core team
  • Work in pairs to spread knowledge and prevent silos
  • Structure manual testing in a simple framework
  • Automate regression tests of business critical features

Reliable infrastructure that supports testing

Every experienced tester I have met has tried sitting and waiting for a test environment to be ready, or has had their test data corrupted. The problem is often that there is a limited number of environments, or that some service or database the environment depends upon is unstable or not working as expected, so the test cannot be completed.

Building and maintaining a reliable infrastructure for testing involves many tiny factors that are easy to solve but just as easy to break if you are unaware of them.

Some of the solutions to these problems are so simple that it is unbelievable that the problems ever existed, but that is easy to say when looking back at them.

Use a dedicated server for testing

In some projects, there is no rule or framework for how servers, database connections, and services are versioned and how they are integrated. Perhaps the team has a few dedicated servers for each phase in the application life cycle.

Talk to your team about configuration management: start by naming the environments and making up rules for how they can integrate, so no-one accidentally wipes your test environment when you are almost done with a long regression test.

Keep a one-to-one ratio between application and databases

When several application instances are using the same database instance, you are begging for trouble. When a change made to the application in one environment involves updating the database schema, naturally all of the dependent application instances need to get that code change as well. The simple solution is to have a one-to-one relation between app instances and db instances. Never let more than one code branch use the same database.

Organise test data

When several people are using the same database for testing – say one is automating regression tests and another is running exploratory tests on some new feature – make sure to stay out of each other’s way in the database. That means you need to talk to each other about what data you need to manipulate. If that is unclear, or if you are not sure whether each other’s data will be affected indirectly, branch out: make a copy of the database and use separate environments. Otherwise you cannot trust your tests anyway.

If part of the requirements is to support loading data from an external service, and furthermore to serve your data along with it, make sure to validate the integrity of the data from the external source before updating your database with it. If corrupt data has already come in, talk to your customer about how to use the external source without breaking your system.
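Such a validation gate can be very small. The sketch below uses made-up field names and rules purely for illustration – the point is that rejected rows are kept for inspection instead of silently reaching the database:

```python
# Sketch of a validation gate for an external feed. The field names
# ("id", "email") and the rules are hypothetical examples.
def validate_records(records):
    """Return (valid, rejected) so bad rows never reach the database."""
    valid, rejected = [], []
    for rec in records:
        if isinstance(rec.get("id"), int) and rec.get("email", "").count("@") == 1:
            valid.append(rec)
        else:
            rejected.append(rec)
    return valid, rejected

feed = [
    {"id": 1, "email": "anna@example.com"},
    {"id": "two", "email": "broken"},  # wrong type and malformed email
]
valid, rejected = validate_records(feed)
print(len(valid), len(rejected))  # 1 1
```

In a real pipeline you would log the rejected rows and raise them with the owner of the external source.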

Mock or stub external integrations

Unavailable and unstable integrations in a test environment can be the source of a lot of unwanted noise.

If you are in the unfortunate situation of depending on an external service, make sure to mock it out, or virtualise it if you must. That way you have complete control over how it performs. There are not many things worse than being benched because some external resource is performing badly.
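As a sketch of what “complete control” means in practice, here is a hypothetical exchange-rate service replaced by a deterministic stub – the class and method names are invented for the example:

```python
# A deterministic stub standing in for an external service, so tests
# are not at the mercy of its uptime. All names here are illustrative.
class ExchangeRateService:
    """The real client would make a network call."""
    def rate(self, currency):
        raise NotImplementedError("network call")

class StubExchangeRateService(ExchangeRateService):
    """Stand-in with fully controlled, repeatable responses."""
    def __init__(self, rates):
        self.rates = rates

    def rate(self, currency):
        return self.rates[currency]

def price_in(amount_eur, currency, service):
    # The code under test only depends on the service interface,
    # so the stub can be injected without any changes.
    return round(amount_eur * service.rate(currency), 2)

stub = StubExchangeRateService({"USD": 1.10})
print(price_in(100, "USD", stub))  # 110.0
```

The same injection point lets you simulate slow or failing responses, which is exactly the noise you want to reproduce on demand rather than suffer at random.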

A shared understanding of basic test techniques

A common pitfall, when available test resources run short as the deadline approaches, is to recruit free hands on the team to test without them knowing how to apply test techniques in practice. Some ways to circumvent this are to share knowledge about test techniques within the team, send team members on courses in the practical use of basic test techniques, and/or take on agile coaches or test mentors who can help the team learn some test techniques.

Another idea is to start a book club on the team, where each team member reads a book and presents the key points to the rest of the team, followed by a discussion that produces a practical action plan for implementing at least one idea from the book. Ideally the book club should not be limited to literature about testing, but cover anything relevant to the team.

Build a shared reference

Build a library of simplified pattern descriptions of the most common test techniques so your team can easily use them in appropriate situations. One place to find relevant information about test techniques is in some of the popular test certification programs; e.g. the ISTQB Advanced Test Analyst Syllabus 2012 has a list of basic techniques that can be used in practice. The following list is taken from the learning objectives of the chapter on testing techniques. As a tester on your team, you should understand each of them and pick a few that you want to teach your colleagues:

Specification-Based Techniques

Defect-Based Techniques

Experience-Based Techniques
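To make one of the specification-based techniques concrete, here is boundary value analysis sketched against a hypothetical rule – an order quantity must be between 1 and 99 inclusive. The test values sit just inside and just outside each boundary, which is where off-by-one defects live:

```python
# Boundary value analysis for a hypothetical rule: order quantity
# must be between 1 and 99 inclusive.
def quantity_is_valid(qty):
    return 1 <= qty <= 99

# Values just inside and just outside each boundary.
boundary_cases = {0: False, 1: True, 2: True, 98: True, 99: True, 100: False}
for qty, expected in boundary_cases.items():
    assert quantity_is_valid(qty) == expected, qty
print("all boundary cases pass")
```

The same pattern-description style – a one-line rule, the boundary table, the check – is what a shared technique library entry could look like.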

Thorough user story design with the core team

Design user stories together. In software development there are always three very different views on user stories: the customer’s, the coder’s, and the tester’s. An agile practice called The Three Amigos has been defined to describe this fact, how to acknowledge it, and how to use it to advantage to design better user stories.

The basics of the practice are that the product owner/business analyst, a coder, and a tester take joint responsibility for making user stories ready for implementation and testing according to the customer’s needs. A clever book has been written about this: Lean-Agile Acceptance Test-Driven Development: Better Software Through Collaboration.

Work in pairs to spread knowledge and prevent silos

Face-to-face interaction is by far the best way to share knowledge. This is even more true when sharing a technique for accomplishing a complex task such as testing an application, or when teaching someone a business subject matter that might not make sense at first.

With testing in focus, all team members can benefit from pair-testing. The testers’ mindset can be shared with the other roles on the team so they can build in quality in their respective areas of responsibility on the project. These are some of the benefits that the team roles can draw from pair-testing:


Tester/tester

Each tester in the pair can learn details of test techniques from the other tester.

More details are discovered, and ideas are generated on two levels while testing, because one tester drives and the other navigates.

If the navigator takes session notes, the driver has free hands to follow the flow of digging out a bug from a complex string of actions.

If the driver discovers unexpected behaviour, the navigator can more easily retrace the steps to reproduce the issue.

Product owner/tester

The tester can pick up some of the business processes that might not make sense without working on them directly with a product owner.

The product owner can learn some structures and techniques around testing that enables him/her to take over some of the testing tasks when time is limited.

The tester contributes the tester’s mindset when identifying acceptance criteria for a user story, so these will be more detailed and accurate when the coder starts implementing them and when the tester later needs to remember which details matter most in the story.


Coder/tester

The coder can explain some of the logic behind the implementations in the application, so the tester can see which areas, modules, or combinations make more sense to test than others.

The tester can show some of the techniques s/he uses when running a functional test, so the coder can use some of the same techniques when writing unit tests.

The coder can teach the tester about how to structure the architecture behind the automation framework.

Coder/coder, product owner/coder, and product owner/product owner

The last three combinations will also bring a lot of value to the product or project in relation to other tasks than testing.

The coder/coder pairs will produce high-quality code with built-in code review and, if done right, full unit test coverage for regression testing in the CI-system.

The product owner/coder pair will take the form of deep discussions of the minute details of the business logic, so the product owner will write more effective acceptance criteria, and the coder will understand more of the undocumented details of the business.

The product owner/product owner pair is rarely done in practice, but the effect of the combination is that the user stories will be more elaborate and accurate.

Structure manual testing in a simple framework

Use light-weight documentation. Once the user stories are ready for implementation and testing according to the customer’s needs, some might think that the documentation is sufficient. Ideally, yes. In reality, no: details are found during implementation and testing, and naturally the decisions around those need to be added to the user story. These might be elaborations on a newly discovered acceptance criterion or an error that occurs as a result of the implementation. In relation to testing, the added documentation will be descriptions of how various testing techniques cover the acceptance criteria in the story.

If using an issue tracking system like Jira to track user stories, I recommend simply adding new information to and grooming existing info in the user story until it is done.

Specific to exploratory testing, a useful tool is Session-Based Test Management (SBTM) or SBT Lite, where the tester notes what the time is spent on while doing exploratory test sessions.

Automate regression tests of business critical features

Regression tests and re-tests are some of the most time-wasting activities in software development, mostly because they do not need creative problem-solving skills, just mere monkey-see-monkey-do skills. Automate the functional tests that you don’t want to repeat ever again.

Think about automation from the beginning of the project. Sometimes a framework under consideration may be very hard to automate against. A common problem in UI automation is some UI frameworks’ use of dynamic element identifiers, which makes it hard to write and maintain a good element locator.
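One common workaround is to locate elements by stable attributes you control, instead of generated ids. The helper below is a small sketch of that idea; the `data-testid` convention is an assumption – it only works if the developers agree to add such attributes:

```python
# When element ids are generated dynamically, build locators from
# stable attributes instead. The data-testid convention assumed here
# is something your team would have to agree on with the developers.
def css_locator(tag="*", **attrs):
    """Build a CSS attribute selector, e.g. button[data-testid="submit"]."""
    return tag + "".join(f'[{k.replace("_", "-")}="{v}"]' for k, v in attrs.items())

# Usable with any WebDriver-style API, roughly:
#   driver.find_element(By.CSS_SELECTOR, css_locator("button", data_testid="submit"))
print(css_locator("button", data_testid="submit"))  # button[data-testid="submit"]
```

The design point is that the locator survives UI refactorings as long as the agreed attribute does, which is exactly what a generated id does not guarantee.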

Re-testing of all defects that are found

When a defect is found, the test for it will be re-run several times: first by the coder while fixing it, then by the tester who found it. It is a waste of time if the exact same test is run manually again and again. If the tester can automate the re-test of the defect as it is found, it will be a great service to the coder who fixes it. This does not mean that the coder does not have to write unit tests around the fix.
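Concretely, the reproduction steps from the defect report can be pinned down as a small automated check. The function, bug number, and behaviour below are hypothetical, just to show the shape of such a re-test:

```python
# Turning a defect report into an automated re-test. The function and
# the bug (surrounding whitespace kept in usernames) are hypothetical.
def normalise_username(name):
    # Fixed behaviour; the reported defect was that "  Alice " and
    # "alice" were treated as different users at login.
    return name.strip().lower()

def test_bug_1234_whitespace_in_username():
    # Reproduction from the defect report, kept as a regression test.
    assert normalise_username("  Alice ") == "alice"

test_bug_1234_whitespace_in_username()
print("re-test of defect 1234 passes")
```

The coder can run this check on every attempted fix, and afterwards it stays in the suite as a regression test for free.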

Regression tests that touch high-value business flows in the application

Regression tests are indispensable. That is especially true when a development team is using a VCS with several branches and integrating them often, as in an agile project. If you manage your automation right, you can make the time spent on manual regression testing count. After all, it is with manual testing that the really nasty bugs are found.

Follow the money when writing automated tests: if your customer is sure to stay in business when you deliver your software, they can pay you. In release planning, sprint planning, and backlog grooming, conscious decisions should be made about which user stories are the most important and highest in value. When automating regression tests, make sure you have good test coverage of those stories.

Validate integrations that may break during development

You always want to know the state of your own code, and that is bounded by knowing the state of any integrations that affect your system. If an external service is down and one of your features depends on it, you have the chance to mock it – or virtualise it if it needs more intelligent behaviour than a mock can provide.

Start criteria for any manual testing should be known

Before giving any human a test-related task, you should know the basic state of the system. It is very annoying to manually test a long flow only to find out that some dependent configuration is not set up correctly and you have to start all over.
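A simple way to enforce such start criteria is a smoke check that runs before anyone picks up a manual test task. The checks below are illustrative stand-ins; in a real environment each lambda would be a ping against a database, a sentinel data row, or a configuration value:

```python
# Smoke check of start criteria before manual testing begins.
# The three checks are illustrative stand-ins for real probes.
def check_environment(checks):
    """Run named checks; return the names of the ones that failed."""
    return [name for name, check in checks.items() if not check()]

checks = {
    "database reachable": lambda: True,   # would open a connection
    "test data loaded":   lambda: True,   # would query a sentinel row
    "feature flag set":   lambda: False,  # would read configuration
}
failed = check_environment(checks)
print(failed)  # ['feature flag set']
```

If the returned list is non-empty, nobody starts a long manual flow – the failing criteria are fixed first.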

CAT Exam

In this article I’ll give you an idea of how my exam for the Certified Agile Tester (CAT) went, how I prepared, if I was nervous, and if I thought it was hard.

How did I do?

The first time I took the exam I didn’t pass. At the risk of sounding like I’m making excuses, I’ll try to explain why I think I failed the first attempt. Perhaps my reflections will help some of you pass the exam on the first try. The iSQI does not provide any information other than whether you pass or fail and the percentage of correct answers in each test (the practical exam and the written exam).

Reasons why I may have failed my first attempt at the CAT Exam:

  • Read too little
  • Wrote too few answers (focused on being right in every answer)
  • Thought more highly of my knowledge of agile testing than was warranted
  • Practical
    • Missed “required” sheets
    • Unclear traceability between strategy, plan, tests, defects, and re-tests
    • Too few defects reported
  • Written
    • Need more correct answers (remember that incorrect answers cannot subtract from your score, only correct answers can add to it)
    • Practice more with the sample questions in the CAT Manual
    • Need to write faster to give more answers

In the time leading up to my second attempt, I worked on these issues, and it seems to have worked, as I passed the exam with much better scores than the first time. You may need to work on the same things, but try to think about which issues you may run into and work on those in advance, so you don’t have to compile a list like this after a first attempt.

How did I prepare for the exam?

I have always hated preparing for exams because there is so much pointless tension connected to it. It really stresses me out. On the other hand, I really love researching a topic for use in a presentation, an article, or similar. Given that I had a bit of extra time in the weeks before the course, I decided to create a website about the CAT course to help others study for the CAT exam. With that decision I added a real purpose to my effort and removed the pointlessness of the tension.

Was I categorising too much?

In the beginning I spent a lot of time grasping the structure of the CAT Manual 3.0. I listed all the learning objectives for each module with their K-levels. I phrased the learning objectives as titles of articles that would become part of the website. For each of the articles/learning objectives, I listed a handful of questions that would cover the key topics of the learning objective. The idea was then to simply answer the questions, supplementing with what I would learn from the CAT Manual and later from the CAT Course itself.

In hindsight, this was a big mistake, and it goes completely against the Agile manifesto and the principles of just-in-time learning. Even though I wanted an overview of the CAT Manual so I could start with what I found most important or hardest to learn, I ended up wasting time on categorisation, prioritisation, and the whole idea of what the website would be like. The CAT Manual is carefully designed for students to follow from start to finish, and I should simply have done so, writing my notes into my articles as I went through it before the course.

As I am writing this, I worry that I am making the same mistake again by spending time reflecting on the exam outcome. However, this will be my only opportunity to do so. The next step after this reflection is to read the CAT Manual again and fill in my notes in the article scaffolds that I have already created. While waiting for the exam results, I wanted to use the opportunity of having them fresh in memory to summarise my experiences of each day on the course. I didn’t expect that I would have to take the exam again.

Was the exam hard?

Before giving my subjective opinion, I will try to retell some of the other participants’ concerns, especially about the practical exam, which we discussed in the lunch break.

There is a lot of handwriting involved in the exam. Like most people in IT, I suspect, I am not practised in pen writing at the level required by the CAT exam. Of course, at the written exam there is a lot of writing to do, but there it is really simple and you only have to concentrate on one thing: answering the question within the given time. At the practical exam, you need to write a lot as well, only using another technique. You also have to juggle a lot of things and be careful to make all your notes traceable, all the way from the bugs you find back to the specifications and the user stories. On top of that, there are the required products that you need to deliver.

At my first attempt at the exam, one of my colleagues had some technical issues with the Linux distribution that was running the assignment. I really felt for him, while I was glad that it wasn’t happening to me. I’m not sure I could have handled the added stress very well. Believe it or not, at my second attempt I found myself in a similar situation, where I wasn’t provided with the credentials for the database. In both instances, it was handled very well by the instructor and the invigilators, and both my colleague and I were given extra time as compensation.

The written exam questions were very much like the sample questions. At least, I did not find them hard to understand at all. That said, I should probably have put more effort into answering them with more variations – I only answered with the required number of answers and no more.

Why is the exam so long?

There is a lot to cover, so it probably cannot be any shorter. Many topics are covered in the CAT Manual and the CAT Course, so to test the participants in as many of them as possible, the exam has to be long.

Was I nervous about the exam?

I wasn’t really nervous, because I was confident that I would pass the exam. I based my confidence on the fact that I work by the principles presented in the CAT Manual and that my first encounter with software testing theory involved a lot of self-study and reading. I felt that I had seen or heard of everything in the CAT Manual and had tried most of it. To say the least, I was very confident that I knew the subject matter.

How was my effort before and during the exam?

Perhaps I lost track of the primary goal of passing the exam when I shifted focus from studying only for the exam to using what I learned to build a website helping others take the exam. My first impression of my effort is that I spent a lot of time and energy on researching and studying the CAT Manual, but thinking back on it, I spent too much of that time and energy grasping the structure of the CAT Manual instead of memorising the actual content.

Leading up to the exam, I let myself be tricked into thinking that the Lean principle of delivering just enough to give value also applies to the exam situation. That is NOT the case here! I answered each question in the written exam with just enough information to satisfy myself. I did not give any variations on the answers, which would have made my answers clearer and increased my chance of gaining extra points.

QA Specialist in Unity Technologies

For a few months I have been working at Unity Technologies as a QA Specialist, handling the publicly submitted bug reports. I am very proud (and even lucky, as I have just graduated) to be part of one of the world’s leading teams in game development.

Our issue tracker, FogBugz, is not feature-loaded, but the FogBugz XML API has quite a few possibilities.

I am learning Perl to get an easy and versatile tool to work the API with. Once I am on top of that, I will look into how the API can be used to fill in some missing features of FogBugz. Then, in the near future, when I have tweaked FogBugz to meet the needs of Unity Technologies, I will put my interests and experience into extending the product itself to make life even easier for game developers.