Tag Archives: Test Optimization

Testing Culture – Excuses, Blame and Fear

I came across an interesting article which discussed common excuses that testers make.

There were three which I found particularly worrying:

  • The other tester on my team missed the bug
  • If I log the bug I found in production, I’ll be asked why I didn’t find it sooner.
  • There wasn’t enough time to test

If a tester needs to use these excuses, then the issue might not be with testing but with the company culture.

A company culture where colleagues are encouraged to make excuses or shift the blame onto someone else, or where there is enough fear that people are afraid to report something critical, is not a healthy one.

“There wasn’t enough time to test”

This particular excuse will always exist and it is a valid one. There will be strict release deadlines which can’t always be pushed back. When the software has to be released, it has to be released. The tester has very little control over this. However, it doesn’t mean that the tester is entirely blameless if a critical bug makes its way to live.

It is the tester's responsibility to ensure that the tests are optimised and prioritised. Tests covering the most essential features should be run first. Tests should also be optimised so that they run as quickly and efficiently as possible.

Occasionally, the tester may be in the unfortunate situation where there isn't enough time to test even the most essential features. It is also the tester's responsibility to communicate the overall test coverage to those making the decision to release. They need to be aware of what was and wasn't tested. This allows them to make an informed decision about whether the software can be released or not.

Provided the tester prioritised and optimised their tests, and accurately communicated what was and was not tested, they can hold their head high. They tested everything to the best of their ability with the resources they were given.

“The other tester on my team missed the bug.”

With the increasing complexity of software applications, it is likely that not all members of the team will have been involved in testing every single aspect. However, is it really worth blaming the sole tester who was testing that particular feature?

Instead of blaming a single tester, we should look at the overall testing process. If there was only one tester validating this particular feature, maybe assigning additional testers to each feature could prevent bugs from slipping through the net.

We may also want to look at how the feature was tested. If the tests were conducted using manual or automated test cases, was this particular scenario covered? If not, then the test case should be reviewed and the test coverage assessed. It may even be worth including some exploratory testing so additional unscripted scenarios are covered.

It should be recognised that we are part of a wider team and we should all be working together to improve the quality of software applications. When a bug slips through the cracks we must remember that testers are human; they are bound to miss things. It is not possible to test every decision a user could make when using the software.

“If I log the bug I found in production, I’ll be asked why I didn’t find it sooner.”

First of all, the person blaming the tester needs to realise that it is better that the bug was found sooner rather than later. It will be much worse if a customer finds and reports the bug.

Second, when a defect is found in production, it still needs to be fixed as soon as possible. Reducing the impact of that bug for the customer should be top priority. When a fix has been released, we can start investigating what went wrong. However, the purpose of the investigation should not be to find out who to blame but to prevent a similar situation happening again.

Finally, if employees are too afraid to report critical issues, what else are they holding back? Employees will work much better if they are not afraid of management. Honesty should be encouraged so any mistakes can be rectified sooner rather than later. Otherwise, it will be worse for the business.

I will repeat what I said earlier: testers are human. A few bugs are likely to slip through the cracks. Software is becoming increasingly complex; there are simply too many ways for it to go wrong.

Final point…

There is a TV series shown in various countries, The Apprentice. It involves several candidates who are competing for a highly paid job or money to invest in a business. The candidates are split into teams and given a business-related task. The team that makes the most money from that task wins the round, and one person from the other team is fired. The firing process involves the candidates being interrogated in a boardroom. The candidates are encouraged to blame others for the failure and defend their own actions. One person is eventually fired.

Personally, I hate this show. It promotes a toxic culture of fear and blame. What does this actually achieve? Isn’t it better to focus on learning from the mistakes, preventing them from being made in the future and improving the process?

If the tester is forced to use any of these excuses, then it might indicate an issue with the overall company culture, not the tester.

Main image taken from publicdomainpictures.net


Do I really need to test this?

In early March I had the following blog post, '7 reasons to skip tests', published on testproject.io. The post looks at reasons why we may need to cut back on testing so that we can release earlier. It also looks at ways to reduce the test load later in the life cycle by testing earlier and introducing more exploratory testing.

However, I only briefly mention the importance of prioritising the tests that we decide to add to the regression test suite. A recently published blog post by Stu at Wild Tests goes into this more deeply. This post inspired me to look more closely at how to review and prioritise test cases before regression testing.

What is regression testing?

Regression testing is supposed to confirm that no new, unexpected bugs have been introduced to the software and that existing functionality still works as expected. These tests may be run at various points throughout the development phase, most commonly before a software release. It normally requires a large regression pack to be created and regularly updated and maintained. However, over time, the number of tests included can dramatically escalate, hence the need to regularly review and prioritise test cases.

What should we do with test cases when reviewing them?

When reviewing a test case, there will usually be three possible outcomes – Keep, Update, Delete.

  • Keep – if the test case is still required then it remains in the regression test suite.
  • Update – if the test is still required but the requirements have changed then the test case is updated so it matches the new requirements.
  • Delete – if the test case is completely out of date and incorrect, or covers functionality that is no longer included in the software, then it should be permanently removed. Another reason for deleting a test might be that a similar test already exists.

Just because I've decided to 'keep' some tests doesn't mean they all need to be run. The remaining tests need to be prioritised so that those covering the highest-risk items are run first. Lower-risk tests may not need to be run at all. But if a test case covers a feature considered low risk, should we be planning to run it at all? Such a test may not need to be deleted, but it might need to be removed.

What is the difference between deleting and removing test cases?

Stu's Wild Tests blog post makes a distinction between deleting tests and deleting the data tests hold. This means there could be a need for a fourth test case review outcome: Remove. Deleting a test permanently erases it from the regression test suite, including any data the test case contains. Removing a test means it no longer sits in the regression test suite, so it's not run in error, which would waste the limited time and resources assigned to testing, but its data is retained. We may wish to remove a test case if it covers a low-risk item; priorities change, and we may need the test case again in the future.
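To illustrate the distinction, here is a minimal Python sketch of the four review outcomes. The `suite`/`archive` structures and sample test case are my own assumptions, not from either blog post; the point is that Delete destroys the test and its data, while Remove merely archives it for possible later use:

```python
from enum import Enum, auto

class Outcome(Enum):
    KEEP = auto()    # still valid: stays in the regression suite
    UPDATE = auto()  # still needed, but must match new requirements
    DELETE = auto()  # obsolete: permanently destroyed, data and all
    REMOVE = auto()  # low risk: taken out of the suite, data retained

def apply_review(suite, archive, test, outcome, new_steps=None):
    """Apply one review outcome to a test case in the regression suite."""
    if outcome is Outcome.UPDATE:
        test["steps"] = new_steps      # bring the test in line with new requirements
    elif outcome is Outcome.DELETE:
        suite.remove(test)             # gone for good, along with its data
    elif outcome is Outcome.REMOVE:
        suite.remove(test)
        archive.append(test)           # no longer run, but recoverable later
    # Outcome.KEEP: nothing to do

suite = [{"name": "login", "steps": ["open page", "sign in"]}]
archive = []
apply_review(suite, archive, suite[0], Outcome.REMOVE)
print(len(suite), len(archive))  # 0 1
```

A removed test can simply be appended back into the suite if its feature becomes high risk again, whereas a deleted test would have to be written from scratch.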

Things to consider when reviewing tests

What feature does the test cover? Is the test correct and accurate? Is this feature considered high or low risk?

Before doing anything, we need to make sure we know exactly what the test coverage is. If there is something included in the test that is no longer required then it should be removed. Likewise, if there is something not included in the test that is high priority then it must be included.

Furthermore, we must ensure that the data included in the test matches the requirements. If the test and requirements do not match, time could be wasted reporting non-existent defects.

Finally, we must consider the risks. Risk should be determined from the potential impact on the customer if the feature fails, and the likelihood of that feature failing. The likelihood increases if a change has been made to the software that may cause that feature to fail.
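That definition lends itself to a simple scoring sketch: score each test as impact multiplied by likelihood and run the highest scores first. This is only an illustration, assuming a 1–5 scale for both factors (the scale, threshold idea, and test names are my own, not from the post):

```python
def risk_score(test):
    """Risk = potential customer impact x likelihood of failure (1-5 each)."""
    return test["impact"] * test["likelihood"]

tests = [
    {"name": "export report", "impact": 2, "likelihood": 2},
    {"name": "checkout",      "impact": 5, "likelihood": 4},  # recent code change
    {"name": "login",         "impact": 5, "likelihood": 1},
]

# Run the highest-risk tests first; anything scoring below some agreed
# threshold is a candidate for removal from the regression run.
for t in sorted(tests, key=risk_score, reverse=True):
    print(t["name"], risk_score(t))
# checkout 20
# login 5
# export report 4
```

Real risk assessment is rarely this mechanical, but even a rough score makes the running order, and the removal candidates, explicit and discussable.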

Further reading

Kill your darlings – why deleting tests raises software quality
Breaking the test case addiction
7 Reasons to Skip Tests

What I read last week (10th March 2019)

I've been continuing my journey through the 30 days of testability challenge so, like last week, the list is going to contain a lot of items related to testability. This challenge has really opened my eyes to the many ways we can improve the quality of our software applications simply by improving their testability.

This week I will be attending the UKStar software testing conference which takes place on Monday 11th and Tuesday 12th March. I am lucky enough to be attending both days. Next week's post is likely to contain a summary of the amazing talks I attended as well as what I read this week.

I had another article published last week by testproject.io. This article, titled "7 reasons to skip tests", is about the importance of test optimization. It was inspired by another article they published, "7 Reasons NOT to skip tests" by Piet Van Zoen. Despite the similar titles, I don't see my article as opposing the views presented in Van Zoen's article. Once tests have been optimized, we should be doing everything possible NOT to skip tests.

7 reasons to skip tests
Article written by myself about the importance of optimizing your regression test suites.

7 reasons NOT to skip tests
Article by Piet Van Zoen which focuses on the importance of making sure all tests are run.

Finally, I want to thank João Farias for sharing my blog post, 'We don't need automation, we need better testing', on his own weekly blog '5 things to read this week'. It was his blog that inspired me to start this blog post series where I write down and share all the interesting articles I read each week. So it's great that he's found one of my blog posts an interesting read.


Test Talks Podcast – Creating Klassi Frameworks with Larry Goddard
Larry Goddard discusses the advantages of WebDriver.io and image comparison tools. I particularly liked his final piece of advice about how test automation should not be replacing testing. Something I completely agree with and have spoken about recently.

Test Automation

Top Free Automation Tools for Testing Desktop Applications
Joe Colantonio shares a list of automation tools for desktop applications. The most talked about automation tool, Selenium, is designed for web applications, but not everyone works with web applications. It's great to see an article focused on promoting alternative tools to Selenium.

How to decide what to automate
Kristin Jacknovy breaks down how test automation should be developed into 12 simple points. We can't and shouldn't automate everything; in this article Jacknovy helps us make the right decision about what should be automated.

10 Best Practices and Strategies for Test Automation
Like Jacknovy’s article, this is about best practices for test automation. However, unlike Jacknovy’s article, this is more about the test automation strategy rather than which tests to choose to automate.


Testing ask me anything – Testability
A really informative video, in which Ash Winter answers a series of questions on the subject of testability. It is worth watching for both experts and novices on testability, and essential if you're taking part in the 30 days of Testing challenge.

Data Generation (Testability)

5 Test Data Generation Techniques You Need to Know
Describes five techniques for generating test data:

  • Manual test data generation
  • Automated test data generation
  • Back-end data injection
  • Third party tools
  • Path wise test data generators

Test Data Generation: What is, How to, Example, Tools
Discusses why we need to generate test data and describes some different testing techniques and the different types of data they require.

Decomposability (Testability)

Circuit Breaker
Martin Fowler introduces the circuit breaker pattern, which can be used to protect applications already in production from a failing dependency or newly introduced code. Calls are wrapped in a breaker that monitors failures; once failures pass a threshold, an alert is sent out and the breaker trips, so alternative code is run instead until the issue is fixed.
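A minimal sketch of the idea, assuming a simple failure-count threshold (Fowler's article also covers timeouts and a half-open state for retrying, which are omitted here; the class and function names below are illustrative, not a real library):

```python
class CircuitBreaker:
    """Wraps a call; after too many failures, trips open and uses a fallback."""

    def __init__(self, func, fallback, threshold=3):
        self.func, self.fallback = func, fallback
        self.threshold = threshold
        self.failures = 0

    def call(self, *args):
        if self.failures >= self.threshold:    # breaker is open: fail fast
            return self.fallback(*args)        # run alternative code instead
        try:
            result = self.func(*args)
            self.failures = 0                  # a success resets the count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                print("alert: circuit open")   # stand-in for a real alert
            return self.fallback(*args)

def flaky(x):
    # Stand-in for a call to an unreliable service or freshly deployed code.
    raise RuntimeError("service down")

breaker = CircuitBreaker(flaky, fallback=lambda x: "cached", threshold=2)
print([breaker.call(1) for _ in range(3)])
```

After two failures the breaker stops even attempting the call, so the customer keeps getting the fallback response while the team investigates.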

Testing In Production The Mad Science Way: Circuit Breakers and Science Experiments
As well as explaining what circuit breakers are, James Espie discusses the science experiment model. This is similar to the circuit breaker pattern, except the original code continues to run while the new code runs in the background and the results are compared. The original code is replaced once we are happy that the new code works as designed.

Other blogs that share lists of test related articles

https://5blogs.wordpress.com/ (daily)
http://blog.testingcurator.com/ (weekly)
http://thatsabug.com/ (weekly)
https://weapontester.com/tea-time (weekly)

Testing Conferences

The Club
A forum for discussing, asking questions, answering questions, and requesting help. Run by the Ministry of Testing.

Feel free to recommend anything that you think is of interest.
Main image taken from http://www.publicdomainpictures.net