July 16, 2014

The Tester

Testing has become so important in software development that it is everyone’s responsibility. At every stage of product development, tests are created and run to ensure the code does what it is supposed to do.

The old approach

In the days of waterfall development, all the testing was done at the end of the project. There were dedicated QA teams which ran manual test scripts to see if they could ‘break’ the software and produced a list of defects for the development team to fix.

This approach had many problems. The biggest was that, due to other issues with the waterfall process, the development team rarely finished development on time, which meant the testing phase at the end of the project was squeezed or skipped altogether.

Even when testing did happen, it relied on huge numbers of people and huge amounts of time to try to test every possible code path in an application. In most cases this was not possible. If a bug was fixed, the entire manual testing process had to be restarted, as the fix might have broken something else.

For this reason, software suffered from low quality, and issues were usually discovered when the customer first used the application.

The new approach

The quality of the code, and of the product as a whole, has become the centre of the Agile world. Extreme Programming gives us the core practices of clean, well-tested and refactored code. In the Scrum methodology there is no testing phase: testing is done at the same time as, if not before, writing the code. In the Scaled Agile Framework, teams are described as DBT teams, which stands for Define, Build, Test.

Testing is just as important as the feature code itself. The tester has therefore become a first-class citizen in the Agile world, and there is a large demand for good testers who can write code and understand the Agile nature of testing.

The Principles of Testing

In their book ‘Agile Testing’, Lisa Crispin and Janet Gregory define a set of principles for the tester role. You will see that these principles are drawn from other methodologies, mainly Extreme Programming. This tells us that the tester needs to be able to write good code as well as have a testing mindset.

The principles are:

  • Provide Continuous Feedback
  • Deliver Value to Customer
  • Enable Face to Face communication
  • Have Courage
  • Keep it Simple
  • Practice Continuous improvement
  • Respond to Change
  • Self-Organise
  • Focus on People
  • Enjoy

The emphasis is that testers are not policemen guarding the quality of the code, but integrated members of the team, working and collaborating with all other members to provide the best quality product that they can.

What is a test?

A test is a second chance at getting the code right. If you have a list of numbers you want to add up and you do it once, you may well have made a mistake that you don’t know about. If you then add them up from the bottom up and arrive at the same answer, you have greatly increased the probability that you are right. Tests follow the same idea: your code does something and your tests do the same thing. If the code produces the same result as the test, chances are the code is correct.
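The adding-up analogy can be sketched in a few lines of Python: sum the same list by two independent routes and check that the answers agree. (The function names and the ledger values are invented for illustration.)

```python
def sum_forwards(numbers):
    """Add the numbers from first to last."""
    total = 0
    for n in numbers:
        total += n
    return total

def sum_backwards(numbers):
    """Add the same numbers from last to first - an independent second check."""
    total = 0
    for n in reversed(numbers):
        total += n
    return total

ledger = [120, 45, 300, 17, 9]
# Two routes to the same answer increases our confidence in both.
assert sum_forwards(ledger) == sum_backwards(ledger)
```

The test (summing backwards) does not prove the first sum is correct, but an agreement between two independent computations makes an undetected mistake much less likely.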

Tests also protect against regression problems. If I code something today that works and it is tested, when I change the code to add a new feature, I will know the original intention of the code still works because my tests will still pass. This stops unintentional behaviour appearing in code that I didn’t even touch. This is one of the biggest advantages of tests.

Documentation has always been a big problem in software development. It gets out of date almost as soon as the code is complete, if it was even correct in the first place. Tests give us a great chance to auto-document the code: the tests should describe what the code does. There are frameworks that can turn our tests into wiki pages that describe the functionality in a human language. We know the documentation is correct because the tests pass.

The vast majority of tests should run automatically every time the code base changes. This enables instant feedback of regression and integration issues. It also updates the auto-generated documentation so it is always up to date.

Types of testing

A software application can be viewed in many different ways. As a result, testing frameworks and practices have appeared to test each type of view and at different levels of detail.

Test Driven Development (TDD)

TDD is covered in its own page under best practices, but it is worth including here for completeness. TDD has become one of the most popular ways of writing code. The tests are written first and then the code is written to make the tests pass. TDD usually relates to class-level testing, and these tests, called unit tests, are written using the xUnit frameworks (NUnit, JUnit, JSUnit etc.). They are usually run by the developer before checking their code in to the source repository, and then again on the code repository as part of the check-in process. There is usually a check-in gate that stops any code from being checked in if the tests fail.
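As a sketch of the TDD rhythm, here is the idea in Python’s built-in unittest (standing in for the NUnit/JUnit family named above; the discount function and its tests are invented for illustration). The test is written first and fails; the production code is then written to make it pass:

```python
import unittest

# Step 2: the production code, written only after the test below existed and failed.
def apply_discount(price, percent):
    """Return the price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

# Step 1: the test, written first. Running it before apply_discount
# existed gives the 'red' failure that drives the implementation.
class ApplyDiscountTests(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200, 10), 180)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100, 150)

# A check-in gate would run the suite like this and block the commit on failure.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

The same red-green cycle applies whatever the xUnit port: write a failing test, write just enough code to pass it, then refactor.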

Behaviour (or business) driven development (BDD)

BDD is another test-first practice. One may say BDD is a subset of TDD; however, in practice BDD tests differ from TDD tests in that they are written in some form of human-readable language.

The user stories that make up the product backlog will have acceptance criteria written on the back of them (or electronically added to the story). The acceptance criteria are a way of telling the developer what functionality the feature (story) needs to have to be accepted by the product owner.

The acceptance criteria are owned by the product owner, but the tester often helps with, or completely writes, the criteria alongside the product owner. It is these acceptance criteria that form the acceptance tests.

Acceptance tests are written using a framework such as Cucumber or SpecFlow. The tests are written in a human language, such as English, and the framework translates them into method calls that plug into a code-level test framework such as one of the xUnit frameworks.
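A hand-rolled sketch of that translation in Python (this is not real Cucumber or SpecFlow wiring; the account example, step names and scenario are all invented to show the Given/When/Then-to-code mapping):

```python
# Acceptance criterion, as it might appear in a Gherkin feature file:
#   Given an account with a balance of 100
#   When the customer withdraws 30
#   Then the balance should be 70

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        self.balance -= amount

# Each Given/When/Then line maps to one small function - this binding is
# essentially what Cucumber or SpecFlow do for you from the feature file.
def given_an_account_with_balance(balance):
    return Account(balance)

def when_the_customer_withdraws(account, amount):
    account.withdraw(amount)

def then_the_balance_should_be(account, expected):
    assert account.balance == expected

account = given_an_account_with_balance(100)
when_the_customer_withdraws(account, 30)
then_the_balance_should_be(account, 70)
```

The product owner reads only the commented scenario at the top; the framework keeps that readable text and the executable steps in lockstep.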

These tests usually test the interface (application boundaries) and are often seen as integration tests. No mocking is used here and the tests run end to end.

It is these types of tests that are most likely to be used to create living documentation that is auto-generated and placed in a team wiki, because they are written in English (or another human language).

Non-Functional Requirement Testing (NFR Testing)

NFR testing differs from functional testing in that the tests are usually run in isolation from the code check-in process. This is not always true, and ideally they would be run as part of that process; however, these types of tests may involve extra deployment steps and agreement from other teams as part of resource sharing. For example, in large systems, performance testing may require an entirely separate network segment so that the tests don’t slow down day-to-day business traffic.

The aim is to automate as many tests as possible, but that is not always a reality. In banking it is rare to see these types of tests fully automated. The industry is working towards this goal, and different organisations have made varying degrees of progress towards it. As a tester, it will be your job to further the goal of full automation.

Security testing

Security testing (or Penetration Testing, Pen Tests for short) is usually a specialist skill which involves exploratory testing, whereby a security tester tries different things to hack into and compromise the security and availability of a system. Quite often this type of testing is outsourced to specialist companies. This is an area of high specialisation and once you are competent with the basics of testing, it is something you might like to explore to find a niche for yourself in this role.

Performance testing

Performance testing also requires a fair amount of specialisation. It differs from functional testing in that the process is ongoing, and it is often carried out manually due to the duration of complex tests.

Simple performance tests can measure how long an application takes to respond under load. You might measure a single call on a service and record its time, then start to call the service many times per second and see whether the performance of the call deteriorates. Usually the idea is to find a benchmark for what the service can handle before performance deteriorates. If the performance is good enough then no more work is done; if not, either more hardware is provided to handle the load, or code enhancements must be made to improve performance.
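A minimal sketch of that benchmarking loop in Python (the service call is stubbed out here; in a real test it would be an HTTP or RPC call, and the one-second budget is an invented example):

```python
import time

def call_service():
    """Stand-in for the real service call; here it just burns a little CPU."""
    return sum(i * i for i in range(1000))

def measure_latency(calls):
    """Call the service repeatedly and record each call's duration in seconds."""
    timings = []
    for _ in range(calls):
        start = time.perf_counter()
        call_service()
        timings.append(time.perf_counter() - start)
    return timings

timings = measure_latency(100)
average = sum(timings) / len(timings)
worst = max(timings)
# A benchmark run might then assert the service stays within an agreed budget:
assert worst < 1.0, "single call exceeded the latency budget"
```

Re-running the same loop at increasing call rates shows where the average and worst-case timings start to climb, which is the benchmark the text describes.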

More complex performance testing involves modelling the behaviour of the calling client. This is often human behaviour modelling and can become non-deterministic in nature. In this case, techniques such as Monte Carlo testing are used to give probabilities of performance.
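A Monte Carlo flavour of this might randomise the simulated client’s behaviour and report a percentile rather than a single number. The sketch below is a toy: the latency model and load parameter are invented, and real tools model far richer client behaviour.

```python
import random

random.seed(42)  # fixed seed so runs are reproducible

def simulated_response_time(load):
    """Toy latency model: a fixed base cost plus load-dependent random jitter."""
    return 0.05 + random.expovariate(1.0 / (0.01 * load))

def p95_latency(load, samples=10_000):
    """Estimate the 95th-percentile latency under a given load by sampling."""
    times = sorted(simulated_response_time(load) for _ in range(samples))
    return times[int(0.95 * samples)]

light = p95_latency(load=1)
heavy = p95_latency(load=10)
# More load should push the tail latency up in this model.
assert heavy > light
```

Instead of asserting “the call takes X ms”, the test now makes a probabilistic statement: 95% of simulated calls complete within the reported time, which suits non-deterministic client behaviour.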

User Interface testing

There are several frameworks that simulate user input such as clicking a button in the browser. Selenium and the Silk Testing framework are two such tools which are used to test websites and WPF applications.

These tests can be automated in a similar way to unit tests; however, they require specialist skill in programming these tools, and the tests usually have to be started manually instead of running as part of the check-in process. This is because the browsers or WPF user interfaces have to be opened as part of the tests, which consumes valuable server resources or may require an interactive user to be logged on.
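One common pattern in this kind of testing is the page object: the test talks to a small class that expresses intent (“log in”) while the class wraps the raw driver calls. The sketch below fakes the driver so it can run anywhere; with Selenium, FakeDriver would be a real WebDriver instance, and the element names here are invented.

```python
class FakeDriver:
    """Stand-in for a Selenium WebDriver, so this sketch runs without a browser."""
    def __init__(self):
        self.fields = {}
        self.current_page = "login"

    def type_into(self, element_id, text):
        self.fields[element_id] = text

    def click(self, element_id):
        # Crude fake: a click on the login button with a username 'navigates'.
        if element_id == "login-button" and self.fields.get("username"):
            self.current_page = "dashboard"

class LoginPage:
    """Page object: tests call log_in(), never raw element lookups."""
    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username, password):
        self.driver.type_into("username", username)
        self.driver.type_into("password", password)
        self.driver.click("login-button")

driver = FakeDriver()
LoginPage(driver).log_in("alice", "secret")
assert driver.current_page == "dashboard"
```

Keeping the element IDs inside the page object means a UI redesign changes one class rather than every test, which is why the pattern dominates Selenium-style suites.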

This is another great area to specialise in for a tester who does not want to be a developer, as the skills required overlap with, but differ from, a developer’s.


The different types of tests we have covered are:

  • Acceptance criteria – Owned by the Product Owner, with help given by the Tester.
  • Acceptance tests – Can be written by Tester, PO and Developer roles.
  • Unit Tests – Usually written by Developers.
  • Integration Tests – Can be written by Testers or Developers.
  • Performance Tests – Written by the testers.
  • Security Tests – Specialist testing by security testers.
  • UI tests – Written by testers with special knowledge of UI testing frameworks.
  • Manual Tests – Exploratory testing where the tester performs ad-hoc testing for edge cases.

Definition of done

As part of the team’s commitment to getting their stories completed, the team will have agreed a definition of done for each story. Part of the definition of done will include how, and at what level, the functionality should be tested. The tester should be able to provide input into the definition of done and share this information with the team.

Team makeup

A typical Scrum team will be made up of Developers and Testers (as well as other roles), and we can see that there is a lot of crossover as to who writes the tests. There is plenty of room for testers on the team, especially those with specialist knowledge in the areas of performance, security and UI testing.


As part of the automated build process, there are several metrics that are recommended for measuring code quality. As part of the tester role, it is important that the tester understands and encourages the use of these metrics.

The metrics that are recommended are:

  • Code Coverage Trends
  • Cyclomatic Complexity
  • Defect Metrics Trend

You can measure and view hundreds of metrics by using a tool like NDepend. Metrics should only be used with the end goal of delivering value to the customer and keeping the ability to produce potentially shippable software.


There are literally hundreds of tools available for the tester role. Most of the work will involve tools of some kind. Here are some common tools used in testing in Finance.

Further reading

Agile Testing: A Practical Guide for Testers and Agile Teams (Addison-Wesley Signature) – Lisa Crispin

Microsoft White Paper on Testing .NET

User Story Acceptance Criteria

Definition of Done

For Acceptance criteria in stories:

User Stories Applied: For Agile Software Development (Addison-Wesley Signature Series) – Mike Cohn

Simon Powers
Simon Powers is an Agile Coach specialising in large scale transformations and agile adoption. He has a background in very large enterprise architecture which has led on to organisational design and agile process refinement. Simon is the founder of Adventures with Agile.