Anthony Mersino

What is the role of a Tester in Scrum? Is Agile Tester a thing?

Many people are curious about the tester’s role on a Scrum Team. Does Scrum have a tester role? Is the tester a full-fledged member of an agile team? Read on to find out!

This post was inspired by questions from a testing professional on a team adopting Scrum. When I started writing about my experience with the testing function on agile and Scrum teams, I became aware of the many failures and anti-patterns I have seen over the years. Is this a challenge for you as well?

Let’s dive in!

What Does the Scrum Guide Say About Testing?

As a refresher, there are three roles in Scrum — Project Manager, Developer, and Tester. Right?

Wrong! Not even close. The three roles on a Scrum Team are the Product Owner, Scrum Master and Developers.

People who are accustomed to traditional ways of working may be confused by the wording in the Scrum Guide about team members, or “Developers.” That term is used for a reason — Scrum doesn’t recognize any specialty roles on the team. Why is that?

The reason that specialties are avoided is to foster a sense of collective ownership for end-to-end delivery. Specialization causes inefficiencies and bottlenecks that are not always visible unless you step back and look at end-to-end delivery.

Before Scrum, most people used a traditional or waterfall approach. In the traditional approach, specialists perform their task and then hand the work off to others. A business analyst writes the requirements and hands them off to a developer. The developer writes the code and then tosses it over the wall to a tester. Everyone moves on to the next thing and it all works smoothly in a daisy chain of handoffs.

Except when it doesn’t. For example, when there are defects or things don’t work as expected. Or when big queues of partially done work pile up because some roles take longer than others. All that work in process that is partially done (or almost done) should be viewed as waste.

So the Scrum approach is to have the team work together to get small parts of the solution all the way to done within each timeboxed sprint. No queues or handoffs.

This can be puzzling to some so let’s take a look at how testing is performed on a Scrum Team and then explore some of the common anti-patterns for the tester role in Agile and Scrum.

[Check out my related post, One More Time What is the Business Analyst Role in Scrum?]

How Testing Should Be Handled on a Scrum Team

First, testing is an important part of development. No one in their right mind would suggest that testing doesn’t occur in Scrum.

The Scrum Team is responsible for all the work needed to develop the increment of the product. Since that includes testing, the Scrum Team is responsible for testing. Full stop.

There are many ways the team can accomplish this, each with tradeoffs in terms of time and quality:

  • Each person on the team may test their own development
  • Each person on the team may write automated tests for their own development
  • Team members may pair up, with one person developing and one person testing
  • Team members may pair program together, with the result being defect-free code from the beginning
  • Team members may focus just on performing manual tests
  • Team members may focus just on writing automated tests

But make no mistake, the Developers on the team are responsible for testing. The typical approach is two-pronged:

  • Each developer who writes code also writes and executes manual and automated tests
  • Another team member with testing expertise writes and executes manual and automated tests

In the best of cases, acceptance criteria are developed before any testing or development has started. A common set of acceptance criteria agreed upfront goes a long way toward building things correctly the first time.

Even better are test-first approaches that move testing to the front of the development process. The most popular is Test Driven Development (TDD) in which the developer writes a small test before developing the functionality to satisfy the test.
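
To make this concrete, here is a minimal sketch of the red-green TDD cycle in Python with pytest. The `calculate_discount` function and its business rules are hypothetical, invented purely for illustration:

```python
# --- test_discount.py ---------------------------------------------------
# Written BEFORE the implementation exists: the "red" step of the cycle.
# calculate_discount and its rules are hypothetical, for illustration only.
import pytest

from discount import calculate_discount


def test_no_discount_below_threshold():
    assert calculate_discount(order_total=50.00) == 0.00


def test_ten_percent_discount_at_threshold():
    assert calculate_discount(order_total=100.00) == 10.00


def test_negative_total_is_rejected():
    with pytest.raises(ValueError):
        calculate_discount(order_total=-1.00)


# --- discount.py --------------------------------------------------------
# The minimal implementation that makes the tests pass: the "green" step.
def calculate_discount(order_total: float) -> float:
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    return round(order_total * 0.10, 2) if order_total >= 100.00 else 0.00
```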

There are a number of other test-first approaches that extend the thinking about tests to either the full team or the team plus stakeholders. These include Specification by Example (SBE), Behavior Driven Development (BDD), and Acceptance Test Driven Development (ATDD).
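
A lightweight way to get the flavor of Specification by Example, without adopting a full BDD framework, is to capture the examples agreed with stakeholders as a data table and drive a parametrized test from it. A minimal sketch, with hypothetical shipping-fee rules:

```python
# The examples agreed with stakeholders during refinement become the test
# table. The shipping-fee rules below are hypothetical, for illustration.
import pytest

from shipping import shipping_fee

# (order_total, destination, expected_fee) -- one row per agreed example
EXAMPLES = [
    (25.00, "domestic", 5.00),        # small domestic order pays a flat fee
    (75.00, "domestic", 0.00),        # free shipping at or above $50 domestic
    (25.00, "international", 15.00),  # international orders always pay
]


@pytest.mark.parametrize("total,destination,expected", EXAMPLES)
def test_shipping_fee_matches_agreed_examples(total, destination, expected):
    assert shipping_fee(total, destination) == expected
```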

Depending on your product, multiple levels of testing may be needed to make sure that the individual feature performs correctly, doesn’t break anything else, and is integrated properly into the Product Increment. All necessary tests should be part of the team’s definition of done and should be performed within the same sprint.
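
One way to wire those levels into the definition of done is to tag tests by level and require every level to pass inside the sprint. A sketch using pytest markers; the marker names and CI commands are just one possible convention, not a standard:

```python
# Tagging tests by level lets the team run every level inside the sprint.
# The marker names are one possible convention, not a pytest standard;
# register them in pytest.ini so pytest doesn't warn about unknown marks.
import pytest


@pytest.mark.unit
def test_fee_calculation_rounds_to_cents():
    ...  # fast, isolated check of a single function


@pytest.mark.integration
def test_order_is_persisted_to_database():
    ...  # exercises a real dependency


@pytest.mark.acceptance
def test_checkout_meets_acceptance_criteria():
    ...  # end-to-end check tied to the acceptance criteria


# A definition of done might then require all levels to pass in CI:
#   pytest -m unit && pytest -m integration && pytest -m acceptance
```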

Sounds pretty straightforward so far? What could possibly go wrong? Let’s look at some agile tester anti-patterns.

Agile Tester Anti-Patterns

Once I started writing about this topic I was amazed at how quickly I could generate a list of anti-patterns for testing. It seems like this is an area that presents challenges for many people.

#1 — Testing at the end — Testing at the end is the most common anti-pattern. One team member writes the code, does little or no testing, and then tosses the code over the wall to a tester. The person developing may or may not be working from a clear set of requirements. The person writing the code measures their progress by how many things they toss over the wall, and the person executing the tests measures their progress by how many defects they find. Together they are busy, but system-wide, little or nothing is being accomplished.

#2 — No Agreed Acceptance Criteria — I recall a client that was using waterfall development. The tester prided themselves on being independent and on knowing the systems better than the programmers — so much so that they did not share the acceptance criteria or tests that they were using. So it wasn’t surprising that the code often failed the tester’s tests. It would have been better if the tester and developer had discussed the business needs, perhaps with the requestor, and agreed on the acceptance criteria. That way everyone has the same goal.

#3 — “My Job is to Break Your Code” — Similar to the previous item, in many organizations those with testing expertise see it as their job to break the code or find faults. In an agile team, the goal for everyone is to deliver valuable solutions. An effective tester in an agile team will work closely with developers to build quality in and avoid defects, which are waste.

#4 — “I’m the Only One Who Can do the Testing” — Another anti-pattern is to have just one person who is authorized to do testing. Unfortunately, this is the way people act in many organizations. I was recently working with a small team that had one person doing the testing and that person took off the entire month of December for vacation. If only one person can do the testing, does the team simply stop work? No. Do they keep going and stockpile partially completed backlog items? Hell no!

The team has to collectively figure out how to get that testing done.

I’ve also seen organizations take this a step further and enforce role limitations through organizational structures and policies. I’ve been in organizations where they used testing tools like Mercury or HP Test Center (or whatever that is called now), but the licenses for those tools were limited to those who were part of the QA organization. It was their “turf”.

#5 — “The Devs could have completed 10 stories but the Testing Team only got 4 of them done” — This is an oldie but a goodie — when a team acts like it is two different teams. And they actually refer to each other as the Devs and the QAs. It may as well be Crips and Bloods. Us and them.

We could have gotten it done if not for them.


When this is happening you will usually hear it in the sprint reviews. It can also rear its head in sprint planning.

Why? It happens because people want to fall back on those sequential ways of working where they just focus on getting their own tasks done and ignore the larger goal of delivering customer value.

#6 — Separate Development Team and Testing Team — Some organizations take this a step further and they keep separate org structures for programmers and testers. This is even worse than the prior example — worse in terms of speed, outcomes and the number of escalations between the two managers involved.

Craig Larman’s 5th law of organizational behavior speaks directly to this:

Culture Follows Structure

— Craig Larman’s Laws of Organizational Behavior.

The separate organization is going to create separation. Duh!

#7 — We Test in the next Sprint after the Development Sprint — If you have ever coached a new Scrum Team, this is one of the common requests: “Can we test in the next sprint?” No. Absolutely not.

I had a client several years ago that had a separate testing organization, and they prided themselves on using this technique. The results were predictable. Developers were trying to write new code while fielding defects from previous sprints. Partially done work piled up and feedback loops were long and ineffective. Little or nothing was getting done. Forecasts for completion were rosy, and teams consistently failed to meet those forecasts.

#8 — Tester Pools that Support Multiple Scrum Teams — Another ineffective technique that is often coupled with the prior one is to use a pool of testers to support multiple Scrum teams. The pool provides testing as teams need it. Each team piles up code that is not tested and the testing “pool” treats that as a backlog.

Testers are not part of the team and feel little or no ownership for delivery. They also miss out on the backlog refinement sessions and don’t share an understanding of the items they are testing or the customer’s needs.

The results are pretty predictable — lots of back and forth and very little in the way of delivery of customer value.

This is usually less about consolidating expertise than about protecting some testing manager’s job. Like the previous example, the results are plenty of partially done work and churn, poor feedback loops, and an inability to forecast with any sort of accuracy.

#9 — Testing or QA Centers of Excellence — Going another step in the wrong direction are those organizations that set up testing centers of excellence. I’ve seen a lot of these and they are always failures.

I suspect the driver for setting up a Center of Excellence is that there was a quality problem. The response to that quality problem was to make one specific part of the organization responsible for quality. The only problem is, a Center of Excellence cannot change the quality in any way! The best they can do is reveal quality problems that others can fix.

One of the worst cases of this was at a client several years ago. All testing and QA professionals were part of the QA Center of Excellence and they reported to the manager of the CoE. That manager required every tester to submit a weekly report of the number of defects they found. Yup, you read that correctly.

How did that weekly report affect behaviors? Tester 1 found 20 defects this week and Tester 2 found 40. Who is doing a better job? If I didn’t find 20, do I need to look harder? Am I going to be punished or rewarded based on the number of defects I report per week? Should I doctor my report to show more defects? Or fewer? How do obligations to the testing CoE affect my behavior toward the agile team?

#10 — “Our Testers are Sleeping While You Develop” — Another popular idea is to have a team of offshore testers test overnight while your developers are sleeping. The idea of testing while you sleep sounds promising at first blush, especially when you are trying to reduce the hourly cost of testing staff.

It doesn’t work if your testers and developers need to talk.

Outsourcing testing to an offshore team makes little sense to me. Though you can save money on testers on a per-person basis, it is easy to see that it doesn’t actually save money overall, and it introduces lots of misunderstandings, churn, and partially done work.

I see this anti-pattern pretty commonly even in organizations that claim to be using agile ways of working. I even had one client that switched the outsourced testing organization midstream to save a few schmeckels. It was unbelievable how much chaos ensued as the new vendor tried to get up to speed and test during the night! But the overall testing cost did decrease on a weekly basis. Unfortunately, so did quality and delivery. That was a textbook example of local optimization!

To be effective, the entire team including anyone doing testing needs to have a shared understanding of the work requests. The testers that are sleeping while you develop are also missing out on backlog refinement and other discussions that would lead to a shared understanding.

#11 — “I’m the Onshore Rep for the Offshore Testing Team” — Some have actually taken the previous idea and FUBAR’d it. Recognizing that those offshore testers were not able to participate in the agile team conversations, they added someone to help translate back and forth. Communication problem solved!

I had a team that did this exact thing just three short years ago. When I was first introduced to the team, I was surprised to learn there was just one testing professional working with a team of six programmers. That is when the tester told me it was not a problem because he was the onshore rep for the offshore team members. Wait, WTF?

How it worked: the onshore rep attended all the Scrum meetings and took notes on the testing tasks that were required. They stayed up into the evening, Chicago time, and handed off work to three offshore testers. The three testers would work overnight, and then that same group would meet again in the morning at the end of their shift.

The onshore rep would capture the status from each offshore tester and then show up at the daily scrum and relay the information. The onshore rep performed no role other than handing off work to the others. Do you recognize this as the telephone game?

This was set up this way because the client could not afford to hire onshore testers.

This type of arrangement used to be more popular, but I think people have wised up to the fact that having an onshore rep adds cost to the otherwise lower-cost offshoring process. And it gets worse if you have an onshore rep and an offshore rep — two people whose sole job is to play the telephone game and relay information.

#12 — All Test Cases and Results Need to be Written in Our Standard Tool — I am not opposed to using tools to support testing, but I have seen this become a problem. Earlier I referenced Mercury or HP — vendors that used to be known for industrial-strength test management software. I can’t judge the efficacy of the tools, but I do know that:

  1. There were learning curves associated with the tools
  2. The tools were costly and therefore, not everyone had access to them
  3. The tools were overkill for many teams

I had one organization that mandated the use of the standard testing tool for one reason — so that they could run reports out of that tool to show the value the testing team was adding to the process. The mandated tool was simply a way for some managers to demonstrate value by reporting on activities. That misses the point: the value is not in the activities, it is in the solutions being delivered to the customer.

#13 — “Developers Make Shitty Testers” — I was told at one client that they would not let developers have access to the standard testing tool. When I pushed the issue, I was told that “developers make shitty testers”.

Of course, shitty developers make shitty testers, but on the whole, my experience has been the opposite. Developers have pretty good testing skills. When using test-first approaches, they are excellent testers.

That is not to say that someone who has invested in becoming an expert wouldn’t be better at thinking of edge cases or unhappy paths — of course, they would. But if a developer sucks at testing, their code probably sucks as well.

#14 — “As the Tester, I am Accountable for Quality” — Sometimes testing becomes a power play, with the person performing testing acting as the gatekeeper for quality. “Nothing goes out the door unless I say so” is how they act.

And in some organizations, managers and leaders set up the tester for failure by telling them they are accountable or worse, holding them accountable for defects in production. That makes about as much sense as the server in a restaurant being blamed for food poisoning.

I’ve actually heard managers ask who the tester was when there was a production support issue. As if, out of the entire team, the tester is the single throat they want to choke. It is bad behavior that only reinforces other bad behaviors.

#15 — “All Code has Defects. We Just Fix the Critical Defects” — I had a client that argued vociferously that all software has bugs and striving for no defects at the end of the sprint is foolish.

Not surprisingly, their sprints were frequently interrupted by production support crises and fire drills. Hmm, I wonder why? It was a real head-scratcher. Their cumulative velocity after nearly a year made their poor predictability plain to see.

The challenge with this team was that they could almost never plan with any accuracy. Every sprint there would be a major production issue, and the development manager would call the developers into his office and say, “Forget about the sprint, we need to fix this issue.”

Yeah, all software has bugs. Certainly in your organization.

#16 — Trouble Ticket Badminton — Another anti-pattern that I have seen in organizations is the reliance on a defect tracking tool (e.g. Remedy or Jira) to communicate between those developing and those testing. Defect reports get logged and then assigned to the developer. Developers get a notification from the tool and then see the defect report. So far so good, right?

What happens if the organization tracks metrics on closure time? Coupled with poorly written backlog items and missing acceptance criteria, you get a whole bunch of trouble ticket churn. Everyone knows the clock is running, so they push to close or reassign tickets as quickly as possible.

For an interesting exercise, go into the tool you use for development (e.g. Jira). Open up a defect, scroll down, and check the history. Do you see a bunch of back and forth between testing and development? Do you see tickets getting closed as “working as designed” and then later reopened by testing?

Rather than have a conversation, the individuals use the “power” of the tool to communicate by moving the tickets back and forth.
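
If you want to quantify the badminton rather than eyeball it, most trackers expose ticket history programmatically. Here is a hedged sketch against Jira’s REST API; the base URL, credentials, status names, and the `reopen_count` helper are all placeholders you would adapt to your own instance:

```python
# Count how often a defect bounces back to an "open-ish" state -- a rough
# badminton score. A sketch against Jira's REST API v2; the base URL,
# credentials, and status names are placeholders to adapt to your instance.
import requests

JIRA_BASE = "https://jira.example.com"         # placeholder URL
REOPEN_STATES = {"Reopened", "Open", "To Do"}  # adjust to your workflow


def reopen_count(issue_key: str, auth) -> int:
    resp = requests.get(
        f"{JIRA_BASE}/rest/api/2/issue/{issue_key}",
        params={"expand": "changelog"},
        auth=auth,
    )
    resp.raise_for_status()
    status_changes = [
        item
        for history in resp.json()["changelog"]["histories"]
        for item in history["items"]
        if item["field"] == "status"
    ]
    # Every transition back into an open-ish state after the first status
    # change is another round of badminton.
    return sum(1 for t in status_changes[1:] if t["toString"] in REOPEN_STATES)


# Usage (hypothetical): reopen_count("PROJ-123", auth=("user", "api_token"))
```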

#17 — Testing in Production — I saved this for last because if I put it first you might actually think I made it up. I had a client not too many years ago where the developers did not test their code. No, not ever. And there were no testers, so the code was put into production without any testing.

Little imagination is required to predict what happened.

Bottom Line — There is no Agile Tester on an Agile or Scrum Team

For technology teams, testing is a critical activity. People who specialize in testing would do well to step back and think about how customer value is created, the purpose of testing and the waste created by introducing defects.

  • Product Backlog Items that are brought into the sprint should be taken all the way to done in the sprint.
  • There should be exactly zero defects for those items to be considered done.
  • Good practices for testing include test-first approaches.
  • Testing and quality are the responsibility of the entire team, not just a person whose training and title happen to be tester.

There is no Agile Tester or Scrum Tester.

Anthony Mersino is the founder of Vitality Chicago, an Agile Training and Coaching firm devoted to helping Teams THRIVE and Organizations TRANSFORM. He is also the author of two books, Agile Project Management and Emotional Intelligence for Project Managers.
