7 things to show your skills at the practical CAT exam

At the practical exam for the Certified Agile Tester, there are 7 product types in which you need to show your mastery. Before my CAT exam I took some time to reflect on the essence of each product and how I could best show that I know how to test in an agile team. In this article I’ll share my notes on the 7 things to show your skills at the practical CAT exam. I strongly suggest that you take some time to do the same reflection before your own exam.

  1. Sprint Plan
  2. Test Strategy
  3. Task Board
  4. Sprint Burn-down Chart
  5. Session Sheets
  6. Sprint Review
  7. Sprint Retrospective

When you’re done reading through my notes, please leave a comment about how you will deliver your favourite of the seven.

Just because there are seven product types doesn’t mean that you’ll pass by delivering only seven sheets – it is said to have been done, but whoever made it with only one of each must have been very exact and to the point. Remember that the more content you deliver, the higher your chances of success, because any rubbish you deliver simply doesn’t count. You can never lose marks, only gain them, so train your handwriting and produce as much as you can.

1. Sprint Plan

This is the first thing I did at the exam – even while the instructor and the invigilator were setting up the computers, I was scribbling away just to get it out of my head.

At the practical exam the SUT simulation is delivered in three drops that all need initial testing, regression testing, and eventually some re-testing of the defects that you will find. Apart from that, you’ll spend your time on planning, analysing the material, and creating the products that you’ll deliver (i.e. each of these seven products).

It is crucial to set time boxes and to be strict with yourself about upholding them. Start practicing now: how much time will you spend on reading and commenting on this article? I found that my time scheduling slipped when testing the drops, so in retrospect I would have made smaller time boxes than I initially did (have a look at my initial plan below).

At first I thought it was okay to slack a bit on the time boxes during the testing time, but that turned out to be where I needed to be the most strict.

Because my native language is Danish and the exam was in English, I was given extra time, so my sprint plan looked like this:

30 min: Planning
* Test Strategy
* Task board
* Initial Burndown

30 min: Drop 1
* Analysis
* Testing
* Reporting

30 min: Drop 2
* Analysis
* Regression & Re-Testing
* Testing
* Reporting

30 min: Drop 3
* Analysis
* Regression & Re-Testing
* Testing
* Reporting

10 min: Review
* List defects (before the exam I planned to list passed and failed stories, but I ended up listing those at the Sprint Retrospective)

20 min: Retrospective
* Listed the passed and failed stories. I was behind my plan, so I didn’t get to test the third story properly. I compensated by explaining which parts of it I did test and noting that, in a real-life situation, it would go into the next sprint. (Before the exam I had planned to explain how to mitigate any of the types of failures I saw in the exam.)

How much detail will you put into your Sprint Plan, and how will you practice and enforce the time boxes that you set for the tasks? Share your thoughts in the comments.

2. Test Strategy

Start by making a bare-bone strategy with only the most basic statements that describe how you will test throughout the practical exam.

After each test session, or after you’re done testing a drop, take a minute to update the strategy by adding any methods that you’ve used during the session. Also take note of which methods you added earlier but have not used yet. This will be valuable information for the Sprint Retrospective later on.

These are examples of the basic methods I wanted to use in my exam. Be sure to use your own favourites, go through those listed in the manual, then take a look at the stories that you will need to test and add any that make sense in your specific exam situation.

  • Risk assessment: start with user stories that have the highest business value or user stories that have other stories depending on them (this one didn’t make sense in my situation because each drop had a specific story and the order was predefined with the highest business value first. It would, however, make a lot of sense to include this statement in a real-world test strategy)
  • How tests will be conducted: which test activities, types, techniques, regression testing, re-testing, testing levels, etc.

    • Exploratory testing
    • Regression testing after each new drop
    • Re-testing of defects reported in earlier drops
    • A short description of the use of session sheets for documenting the test sessions (more why than how)
    • How acceptance criteria are used to design tests, and that the Definition of Done (DoD) includes that all acceptance criteria must pass before a user story can be accepted at the sprint review (I really struggled with this at the exam because all of the stories had some AC that didn’t make it. My conclusion is that the exercise here was to make a DoD that made room for technical and testing debt and to set a threshold for what was acceptable to the product owner)
    • That the tester will inspect both the front-end (UI) and the back-end (database)
    • That any defects will be reported on the session sheets (i.e. not on separate sheets)

Take a look in the manual at what else a test strategy should contain. These notes reflect my style, and I am looking forward to reading about yours in the comments.

3. Task Board

At the practical exam there is no real opportunity to use a task board, so you are only asked to make a sketch of one. Make sure to clearly show how it would work in a real situation and include a legend and any additional explanations you need to clearly explain how your Task Board would be used.

Because it is not actively used, it only eats your time, so you should be very careful not to spend too much of it on the sketch.

What does your favourite Task Board look like? In what genius ways has your team’s task board evolved over time?

4. Sprint Burn-down Chart

Compared to the Task Board, the Sprint Burn-down Chart can actually be used, so here are my notes on how I did it at the exam and what I had planned.

In my first attempt at the exam, I found so many severe defects that I didn’t feel I could pass any of the stories. In hindsight, I should have passed some of the defects on to “the next sprint” as testing debt and passed (some of) the stories with notes on that in the retrospective. At my second attempt, I only passed two out of three stories, and my Burn-down Chart was more interesting to look at and not just a flat line.

In your planning you will give each of the stories story points on a given scale (in both my cases it was 1 through 6). Once you have estimated the stories, create the Sprint Burn-down Chart: plot the initial sum of the points at drop zero and draw a line for your expected velocity. As you finish each drop, plot your progress in the chart.
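If you want to practise drawing the chart before the exam, here is a minimal sketch in Python using matplotlib; the three-story estimate and the per-drop “actual” values are made-up example numbers, not data from my exam.

```python
import matplotlib.pyplot as plt

# Hypothetical example: three stories estimated at 3, 2, and 1 story points.
total_points = 3 + 2 + 1
drops = [0, 1, 2, 3]

# Expected velocity: a straight line from the total down to zero at the last drop.
expected = [total_points - total_points * d / drops[-1] for d in drops]

# Points remaining after each drop (made-up numbers for illustration).
actual = [6, 4, 3, 1]

plt.plot(drops, expected, "--", label="Expected velocity")
plt.plot(drops, actual, "o-", label="Actual remaining points")
plt.xticks(drops)
plt.xlabel("Drop")
plt.ylabel("Story points remaining")
plt.title("Sprint Burn-down Chart")
plt.legend()
plt.show()
```

On paper you obviously just draw the two lines by hand; the point is to have the total at drop zero, the expected velocity line, and one plotted point per drop.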

5. Session Sheets

The Session Sheets are used throughout the practical exam. The difference from the rest of the products is that there is a special way to name your testing sheets so they can be traced to the user stories. Be sure to check the syntax in the practical exercises that you’ll get at the course.

At the exam I also got copies of the user stories with a field for my notes. I was not entirely sure what to do with them, so I simply made some notes related to the acceptance criteria that were listed on each story card.

Because the Session Sheets are the core products of the practical exam, I set up some rules for myself before the exam:

  • Refer directly to Acceptance Criteria and make notes on the actions taken to test them.
  • Not all AC are fulfilled in one code drop, so be specific about which AC can be, and are, tested in a given session/drop.
  • Name defects according to AC; note any other odd behaviour as an issue (there is a field for defects and a field for issues below a larger field for session notes on each session sheet).
  • Make a note in the strategy that any AC not mentioned has passed. I crossed this out and made a note that the danger of this is that the AC will too easily be forgotten. Instead: note which AC are tested and which are left for later drops – even ones as simple as “The site can be accessed”.
  • When AC pass, create a regression test sheet per user story in Drops 2 and 3. On it, list the passed AC and check them off as they pass your regression test.
  • If there is a regression, make a specific note of it and report a defect. This actually happened in my second drop. In my first attempt at the exam, I didn’t make any specific regression tests, so I hope this makes a difference.
  • Only create regression test sheets for AC that previously passed and are not implicitly tested with other AC that specifically belong to the current drop. E.g. if an AC says “The site can be accessed”, it will implicitly be regression tested with any new testing that you do.

The point of my rules was to be specific about what to make note of and what not to waste time on. In essence, my rules reflect my weaknesses and act as a declaration towards addressing those. Make sure to reflect on your own weaknesses and how you’ll keep them in check during the exam.

In the comments, share the single most important rule that you will set for yourself when taking session notes.

6. Sprint Review

As you can see in my Sprint Plan, the Sprint Review has the smallest time box. All I found necessary was to list the unfixed defects related to the user stories. Then I added a note simulating a product owner’s decision taken on the customer’s behalf.

Any defects not directly related to acceptance criteria but related to the additional requirement specifications should also be listed here if they were not fixed. Don’t add any details about the defects – I didn’t even write out their descriptions or titles, just the IDs that were also listed on the session sheets.

Bear in mind that the Sprint Review should simulate a demonstration of what a team has accomplished during a sprint, so it is an honest report on what happened and what was delivered. The core value to both the customer and the team, in relation to the collaboration, is honesty, because it makes it clearer what can be expected in the next sprint. The same holds true at the exam, where you need to show your professionalism regarding the state of the SUT and show that you have done what was in your power, as the tester, to contribute to the product.

As with the other topics, make sure to read the manual and take some time to reflect on how your sprint review will turn out at the exam. If you already have a clear idea, put it in the comments.

7. Sprint Retrospective

The sprint retrospective sheet contains notes about how the process can be improved in the future. In the exam situation, it should give rational observations on how you behaved during the exam process and how that could be improved.

The retrospective can point to symptoms of what the simulated team could improve. E.g. if there are many security-related defects, you can suggest bringing in a security expert to test the product. Likewise with any performance and user-experience issues or defects you may find.

If you can see that a majority of the defects could have been prevented with code reviews, unit tests, clearer user stories and acceptance criteria, the Sprint Retrospective is where you should address those.

Most important is to address all the issues noted on the session sheets during testing – and make sure to note any issues like these on the session sheets when you find them.

Explain which issues are most important to the team and the customer and how the team would mitigate these.

For example: if stories are too big, label them as epics and break them down into smaller stories; if acceptance criteria are unclear, explain that the testers should pair up with the product owner to make them testable.

If you have an additional example, post it in the comments and explain what a symptom could be and how the issue can be a problem in the future.

Can metrics improve agile testing? Measure, Analyse, Adjust

As a tester in an agile project, I want to make visible how I spend my time, so my team can act accordingly and stakeholders know what to expect.

As you’ve probably experienced, plans and estimates are revised during a sprint. I have never heard of any healthy project that has not done that. In an unhealthy project, stakeholders may stick to a loosely estimated initial plan because they do not know better and cannot act on any historic data.

Consider a scenario with a release date getting closer. A stakeholder, who is constrained by a contractual budget and a promise of timely releases, asks the team when they are done testing the stories. The team can’t answer with any acceptable margin, so the stakeholder starts drilling down into the stories, asking specific questions about how many hours are needed for exploratory testing, test design, and GUI automation – basically taking over the test strategy. At the end of the meeting the testers have committed to some arbitrary, forced number of hours for each type of testing on the remaining user stories. The only benefit is that the stakeholder is satisfied – until the next status meeting, when it is discovered that none of the new estimates are useful.

In this article I provide an answer to the question: can metrics improve agile testing? The best way to answer it is to split it into these four sub-topics, which I will address in sequence:

  • Why collecting metrics is helpful
  • What metrics should be gathered during a sprint?
  • What is the easiest way to gather metrics during a sprint?
  • How can metrics help improve testing?

Everyone must know why they are collecting metrics

Imagine recording data without knowing what it is used for or why it is gathered in the first place. The first thing I could imagine myself doing would be – consciously or unconsciously – making up reasons. Adding to that, I would become insecure if the metrics were in any way connected to my performance on the team. Given that insecurity and the uncertain reasons behind the metrics, some might even skew the numbers in a direction that would benefit no one.

Say, for example, that you are recording how much time you spend on testing a story. You have no idea what the data is used for. You want yourself and the project to look good in the statistics, so you start rounding the numbers down here and there – just a tad. Over the duration of the sprint, that becomes several hours. Then it is time to stack the backlog for the next sprint and, guess what, more stories are taken in than you and your team can test within the sprint.

So be transparent about why and how metrics are gathered. Be vocal about why you measure any performance on the team and project. If you are in any doubt about why you are collecting a metric, ask about it and discuss what it can be used for and how it can benefit the project.

What metrics should be gathered during a sprint?

In other words: what do testers spend their time on? In my experience, the answer is always some version of these three things:

  • Testing (including running tests against the application under test)
  • Investigating, reporting, and re-testing bugs (including collaborating with coders and product owners)
  • Setting up and waiting for environments or other resources such as test data (including designing scripted test cases based on notes from previous ET-sessions)

One of the most efficient test techniques in agile testing is exploratory testing because there is less preparation and setup required compared to scripted tests. It is not always easy to gather metrics while focussing your energy on finding creative ways to test the system, so you will need a simple system for doing so.

What is the easiest way to gather metrics during a sprint?

One such system I’ve been using for gathering metrics during exploratory testing sessions is a light version of session-based test management, or SBTM, which was developed by Jon and James Bach. The core idea is simply to take notes of what you are doing during a time-boxed session and mark each entry. The notes have to be really simple so that note-taking does not take up all your creative testing energy.

The Wikipedia entry defines session-based testing like this:

Session-based testing is a software test method that aims to combine accountability and exploratory testing to provide rapid defect discovery, creative on-the-fly test design, management control and metrics reporting. wikipedia.org/wiki/Session-based_testing

A genius feature of the tool is that you take specific note of how much time you spend on 1) Testing, 2) investigating Bugs, and 3) Setup. These are also called the TBS metrics. When you gather them, you can see where you spend your time: if you see less time spent testing than investigating bugs or even setting up the environment, you can take action, for example by getting the environment setup automated. If you spend a lot of time investigating bugs, perhaps the programmers should be more involved in testing, or the user stories should be analysed with the product owner to get more specific acceptance criteria.

The important point is that if you collect these three simple TBS metrics, you have come a long way towards being able to analyse your results, and you can continuously improve your testing effort based on those numbers.

The full SBTM may be a big bite to start off with, so I recommend that you start with SBT Lite, which was defined by my mentor Sam Kalman, who worked with Jon Bach at Quardev. SBT Lite is much more free-form but is still able to capture the metrics you need.

In practice, a tester takes notes on his/her actions during an exploratory test session.
Before the session starts, a session charter is chosen and noted on the session sheet along with the tester’s name. The charter could be one of the acceptance criteria in a user story or an entire user story, depending on their size and scope. A meta session charter could also be to come up with session charters for a set of stories. The charter is basically the mission of the session.

During the session, the tester marks his/her notes as T, B, or S, along with a time-stamp and a one-line description of what happened. The result should look much like a commit log from a healthy pair programming session.

When analysing the session logs, it becomes evident how much time was spent on testing, bug investigation, or setup respectively. In the full SBTM resource there is a link to a Perl tool that can analyse session notes if they are formatted in a specific way, but you should make your own session log parser if you prefer to follow the simpler version or even design your own set of metrics (which I recommend once you’ve tried this for a few sessions).
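As an illustration of what such a parser could look like, here is a minimal sketch in Python. The note format (a timestamp, a T/B/S marker, and a one-line description per line) and the sample entries are my own assumptions for this sketch, not the format the Perl tool expects.

```python
from datetime import datetime

# Hypothetical note format: "HH:MM MARKER one-line description" per line,
# where MARKER is T (testing), B (bug investigation), or S (setup).
SAMPLE_LOG = """\
10:00 S Deployed drop 2 to the local test environment
10:12 T Explored the login form against AC1 and AC2
10:35 B Investigated and reported DEF-7: login accepts an empty password
10:50 T Regression-tested AC1 from drop 1
11:00 S Wrapped up and filed the session sheet
"""

def tbs_minutes(log: str) -> dict:
    """Sum the minutes spent per TBS marker, assuming each entry lasts until
    the timestamp of the next entry (the final entry marks the end of the session)."""
    entries = []
    for line in log.strip().splitlines():
        stamp, marker, _description = line.split(" ", 2)
        entries.append((datetime.strptime(stamp, "%H:%M"), marker.upper()))

    totals = {"T": 0, "B": 0, "S": 0}
    for (start, marker), (end, _next_marker) in zip(entries, entries[1:]):
        totals[marker] += int((end - start).total_seconds() // 60)
    return totals

if __name__ == "__main__":
    totals = tbs_minutes(SAMPLE_LOG)
    session_length = sum(totals.values())
    for marker, minutes in totals.items():
        print(f"{marker}: {minutes} min ({minutes / session_length:.0%})")
```

With a log like the sample above, the split comes out as roughly 55% testing, 25% bug investigation, and 20% setup – exactly the kind of numbers you can bring to the next retrospective.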

How can metrics help improve testing?

As I suggested in the scenario in the introduction, it is not only managers and stakeholders that benefit from the collected metrics. Testers can benefit on a personal (professional) level, the team can benefit, and if there is a cross-team test organisation, it too can benefit from the TBS metrics.

The individual can use the analysis of the metrics to hone his/her skills. Perhaps compare how much time you spend on each task type with other testers. If the numbers differ significantly, have a conversation about why. Maybe you can find new ways to be more efficient, or maybe you have some important advice for one of your colleagues.

The team can use the analysis to find bottlenecks. E.g. if a lot of time is spent on setup, perhaps the testers need a dedicated test environment that is hooked up to the CI server – or, if that is already the case, perhaps it should not redeploy every time there is a push to the development branch.

So, can metrics improve agile testing?

Yes, if you start measuring right now!

My final recommendation regarding metrics is simply to start collecting data immediately.

Start simple by introducing a simplified version of Session Based Test Management to your testing colleagues on your team – talk to them about it and discuss how your team might benefit. Next, present the idea to the rest of your team and get some feedback on how simple it needs to be for everyone to be able to use it.

Now, collect some data, analyse it, present the results, and adjust.

Leave a comment about how you use metrics in your organisation and let me know if this helps you in any way.

Resources

Here’s a list of the tools you can use to look more into session-based test management and to start a simple measurement of your team’s testing effort: