In early March I had the blog post ‘7 reasons to skip tests’ published. It looked at reasons why we may need to cut back on testing so that we can release earlier, and at ways to reduce the test load later in the life cycle by testing earlier and introducing more exploratory testing.

However, I only briefly mentioned the importance of prioritising the tests that we decide to add to the regression test suite. A recently published blog post by Stu at Wild Tests goes into this in more depth, and it inspired me to look more closely at how to review and prioritise test cases before regression testing.

What is regression testing?

Regression testing is supposed to confirm that no new, unexpected bugs have been introduced into the software and that existing functionality still works as expected. These tests may be run at various points throughout the development phase, most commonly before a software release. Regression testing normally requires a large regression pack to be created, regularly updated and maintained. Over time, however, the number of tests included can escalate dramatically – hence the need to regularly review and prioritise test cases.

What should we do with test cases when reviewing them?

When reviewing a test case, there are usually three possible outcomes – Keep, Update or Delete.

  • Keep – if the test case is still required then it remains in the regression test suite.
  • Update – if the test is still required but the requirements have changed then the test case is updated so it matches the new requirements.
  • Delete – if the test case is completely out of date and incorrect, or covers functionality that is no longer included in the software, then it should be permanently removed. Another reason for deleting a test might be that a similar test already exists.

Just because I’ve decided to ‘keep’ some tests doesn’t mean they all need to be run. The remaining tests need to be prioritised so that the tests covering the highest-risk items are run first, and lower-risk tests may not need to be run at all. But if a test case covers a feature considered to be low risk, should we be planning to run it at all? Such a test may not need to be deleted, but it might need to be removed.

What is the difference between deleting and removing test cases?

Stu’s Wild Tests blog post makes a distinction between deleting tests and deleting the data tests hold. This means there could be a need for a fourth test case review outcome – Remove. Deleting a test means that the test, including any data it contains, is permanently deleted from the regression test suite. Removing a test means it no longer sits in the regression test suite, so it isn’t run in error, wasting the limited time and resources assigned to testing – but the test case itself is retained. We may wish to remove a test case if it covers a low-risk item: priorities change, and we may need the test case again in the future.
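The four review outcomes can be sketched as a small helper. This is only an illustrative model of the process described above – the dictionary keys and the order in which the rules are checked are my own assumptions, not part of any particular test management tool:

```python
from enum import Enum


class Outcome(Enum):
    KEEP = "keep"      # still valid and worth running
    UPDATE = "update"  # feature still exists, but steps/data are outdated
    DELETE = "delete"  # obsolete or duplicated; permanently deleted
    REMOVE = "remove"  # valid but low risk; taken out of the suite, retained


def review(test):
    """Classify a test case during a regression-suite review.

    `test` is a dict with illustrative keys: covers_removed_feature,
    duplicate, matches_requirements and risk ("low"/"medium"/"high").
    """
    # Obsolete or duplicated tests are deleted outright, data and all.
    if test["covers_removed_feature"] or test["duplicate"]:
        return Outcome.DELETE
    # Still-needed tests that no longer match requirements get updated.
    if not test["matches_requirements"]:
        return Outcome.UPDATE
    # Correct but low-risk tests are removed from the suite, not deleted.
    if test["risk"] == "low":
        return Outcome.REMOVE
    return Outcome.KEEP
```

For example, a correct, high-risk test would come back as `Outcome.KEEP`, while a correct but low-risk one would come back as `Outcome.REMOVE` – ready to be restored if priorities change.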

Things to consider when reviewing tests

What feature does the test cover? Is the test correct and accurate? Is this feature considered high or low risk?

Before doing anything, we need to make sure we know exactly what the test coverage is. If there is something included in the test that is no longer required then it should be removed. Likewise, if there is something not included in the test that is high priority then it must be included.

Furthermore, we must ensure that the data included in the test matches the requirements. If the test and the requirements do not match, time could be wasted reporting non-existent defects.

Finally, we must consider the risks. Risk should be determined from the potential impact on the customer if the feature fails, and the likelihood of that feature failing. The likelihood increases if a change has been made to the software that may cause that feature to fail.
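That impact-times-likelihood idea can be turned into a simple ordering rule. The 1–5 scales and field names below are assumptions for the sake of the sketch; any consistent scoring scheme would do:

```python
def risk_score(impact, likelihood):
    """Risk as impact x likelihood, each on an assumed 1-5 scale."""
    return impact * likelihood


def prioritise(tests):
    """Sort test cases so the highest-risk ones are run first.

    Each test is a dict with illustrative `impact` and `likelihood` keys.
    """
    return sorted(
        tests,
        key=lambda t: risk_score(t["impact"], t["likelihood"]),
        reverse=True,
    )
```

With this ordering, a test whose feature has just changed (raising its likelihood score) naturally moves up the run order, and the lowest-scoring tests at the tail are the candidates for removal.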

Further reading

Kill your darlings – why deleting tests raises software quality
Breaking the test case addiction
7 Reasons to Skip Tests