When should we stop using Record and Playback in Test Automation Development?

On 17th July 2019, I presented my talk on Record and Playback in test automation. It was the first time I had given this talk, and I was thrilled by the positive response. I also enjoyed answering the many questions that followed. One of my favourite questions, which I want to answer in more detail here, was:

How long did it take for you to get to the stage where you no longer needed to use Record and Playback?

When I started out in test automation I did have some programming experience; however, it was in Java, not the C# I was expected to use. It had also been about a year and a half since I’d done any programming, so I was a little rusty. Still, my previous experience meant I was able to pick up C# quickly. It wasn’t long before I was only using Record and Playback as a guide rather than relying on it completely.

Using Record and Playback is a choice. I consider my C# programming skills sufficient to work without Record and Playback, but that doesn’t mean I’ve stopped using it. As well as being a great learning tool for someone new to test automation, it also allows experienced programmers to develop automated tests quickly.

Not everything can be recorded, so you cannot automate using Record and Playback alone. Sometimes I use Record and Playback, sometimes I don’t. It depends entirely on what I’m trying to automate.

Should we stop using Record and Playback?

Let’s assume you are working in a test team full of experienced test automation developers who are proficient programmers. Does this mean you should stop using Record and Playback?

It is completely your choice. If you have the skills and confidence to develop automated test cases without the use of Record and Playback, then you don’t have to use it. However, there are a few things that need to be considered before disregarding it completely.

Help people who are new to test automation development

Occasionally, new people will join the test team, and there may be an expectation for them to take part in test automation development. It may be some time before their skills align with your own.

Attempting test automation development for the first time can be a daunting prospect. Mistakes will be made and time will be wasted, but this should not put people off. Some practice and learning is required before any value can be gained, and inexperience should not prevent people from developing automated test cases.

For the sake of integrating new team members, we should provide the team with the option to use Record and Playback if they choose. This will help new developers gain the confidence to develop robust, maintainable and reliable automated test cases.

A tool to speed up the creation of automated tests

Another thing to consider is how Record and Playback can be used to make our work more efficient. If there is a tool available that could improve the way you do your job, then there should be no shame in using it.

A calculator is one example of this. It is possible to do basic arithmetic without one, but choosing to use a calculator in no way undermines our own intelligence.

Test automation itself is another example. We can still run these tests manually, but automation can drastically improve our testing efforts.

Record and Playback can be used to create tests quickly by auto-generating code required for the test to run. This code can then be adapted to improve the maintainability and robustness of the tests.
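To make this concrete, here is a minimal C# Selenium sketch of the kind of adaptation I mean. This is a generic illustration, not code from my talk, and the URL and element IDs are made up. The first method is typical recorder-style output; the second replaces the fixed sleep with an explicit wait and pulls the steps into a named helper, which makes the test easier to maintain:

```csharp
// Hypothetical example: recorder-style output vs. an adapted version.
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public class LoginTests
{
    // Typical auto-generated style: raw element lookups and a fixed sleep.
    public void Login_Recorded(IWebDriver driver)
    {
        driver.Navigate().GoToUrl("https://example.com/login");
        driver.FindElement(By.Id("username")).SendKeys("testuser");
        driver.FindElement(By.Id("password")).SendKeys("secret");
        driver.FindElement(By.Id("submit")).Click();
        System.Threading.Thread.Sleep(5000); // brittle fixed wait
    }

    // Adapted version: an intention-revealing helper plus an explicit wait,
    // so the test no longer depends on an arbitrary delay.
    public void Login_Adapted(IWebDriver driver)
    {
        driver.Navigate().GoToUrl("https://example.com/login");
        EnterCredentials(driver, "testuser", "secret");

        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
        wait.Until(d => d.FindElement(By.Id("welcome-message")).Displayed);
    }

    private static void EnterCredentials(IWebDriver driver, string user, string password)
    {
        driver.FindElement(By.Id("username")).SendKeys(user);
        driver.FindElement(By.Id("password")).SendKeys(password);
        driver.FindElement(By.Id("submit")).Click();
    }
}
```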

Your Choice

I’ve always said that Record and Playback must never be the sole method for developing automated test cases. If you do choose to use it, then you must also take the time to examine the auto-generated code and adapt it accordingly. Tests developed using Record and Playback alone will be unreliable and difficult to maintain.

Record and Playback is a tool that can help us develop automated tests. It is your choice whether or not to use it. If you can develop automated tests without it, then you don’t have to.

I will be presenting my talk ‘The Joy of Record and Playback in Test Automation’ at Test Bash Manchester and TestCon Europe later this year.

Main image from http://www.publicdomainpictures.net

Shift Left or Shift Right – Discovering what is in the bottle

“The problem is not that testing is the bottleneck. The problem is that you don’t know what’s in the bottle. That’s a problem that testing addresses”.

Michael Bolton

On 11th June 2019, I watched the Ministry of Testing Ask Me Anything webinar on Shift Left and Shift Right. This blog post is a summary of what I learnt from the webinar and my own interpretation of what shift left and shift right are.

A new feature has been completed and it is ready to be released. But, there is something preventing this from happening – testing!

In a lot of cases, there is this time between feature completion and feature delivery where testing takes place. The problem with this is that it leaves little time for fixing any defects found during testing. If the tester finds a critical defect, the team are faced with a difficult decision – either fix the bug and delay the release, or release the feature with the bug.

[Graph: testing effort peaks just before the release of an application or feature when there is no shift left or shift right.]

If time and testing effort were plotted on a graph, there would be a massive peak towards the end of the development process. Introducing shift left and shift right smooths out that peak through process improvement.

Shift Left

Shift left involves introducing testing tasks earlier, to ease the burden of testing that tends to build up just before a feature is released. These tasks can include earlier reviews of requirements and documentation, plus earlier planning. There is also the addition of earlier testing itself, particularly through other layers of the testing pyramid (service and API tests).

The main purpose of shift left is to improve the testing process by shortening the time between feature completion and feature release. This is made possible through earlier testing, reviews and planning, and the introduction of test automation.

Shift Right

Unlike shift left, shift right is not so much about actual testing. Shift right is about gathering information post release, which is then fed back into the testing process. Before watching the Ask Me Anything webinar, I didn’t know anything about shift right. When I first heard the term, I imagined post-production testing, which I’ve always been wary of because testing with live customer data can cause issues.

In hindsight, it makes sense that shift right isn’t about post-release testing. After all, shift left isn’t just about testing earlier, but about gathering information, planning and preparing for testing earlier. We review requirements, build an understanding of the application and what needs to be tested, and develop test cases. This information can be used to improve the testing process – the tests can be run earlier, quicker and more efficiently.

So, similar to shift left, with shift right we aim to learn more about the application and understand how it is used by actual customers. We ask questions, analyse what users are actually doing and identify what we missed. Shift left will have already improved the overall testing process; shift right can make it even better. We take the information gathered post-release and feed it back into the testing process.

Implementing shift left and shift right

Both concepts require communication. The difference is who we communicate with.

The concept of shift left is already well established throughout the software development industry. It is also something that should be relatively easy to implement, especially compared to shift right. Shift left can be achieved by speaking to other people within the team – developers, testers, designers, product managers.

For shift right, we need to collaborate with colleagues who we may not directly interact with on a daily basis. As a result, they may not fully understand what information we need, or why we need it. The difficulty of engaging with colleagues who are not part of the immediate team is partly why shift right can be so hard to implement.

To implement shift right, we need information about how the end-user is actually using the system. What parts of the application are they using the most? How are they using it? The information is most likely already available, it is just a case of knowing what we need and asking for it (often easier said than done, especially in larger organisations). Customer support or help desk is a good place to start. Help desk technicians speak to customers on a daily basis. They can provide detailed accounts of customer issues. These are issues that were probably missed because they were not covered in the original test plan. Data scientists are also worth speaking to. Data retrieved through analytics and monitoring can be used to identify user behaviour.

The bottleneck

Why do we need shift right? We could probably achieve an efficient testing process with shift left alone. But no matter how good something is, it can always be better. By using new information about user behaviour, we can make an already good process even better.

As Michael Bolton says, testing is not the bottleneck – we just don’t know what is in that bottle.

No amount of testing will ever completely reveal what is in that bottle. With shift left and shift right, we can discover much more than we already know. We can use that information to reveal even more hidden information.

Additional Resources

Ministry of Testing Club – Shift Left and Shift Right discussion
Testing Ask Me Anything – Shift Left, Shift Right – Marcus Merrell

Main image taken from https://www.publicdomainpictures.net

A Stitch In Time Reduces Critical Bugs

On 19th July 2019, I attended the #MidsTest meetup in Coventry where I gave my second 99 second talk. This time, I brought a prop – a block from a quilt I’m currently making. This blog post is based on the talk I gave.

[Tweet about my 99-second talk, including a photo of me giving the talk]

One of my hobbies is sewing. At the moment I’m working on a patchwork quilt, which will be a wedding gift for my sister-in-law who is getting married in August.

A patchwork quilt is made up of hundreds of small pieces of fabric, sewn together to create blocks. These are then sewn together to make the completed quilt. The main image for this post is one of several blocks which will be included in the final quilt.

You’re probably wondering where I’m going with this!

Unit and Integration Testing

Those small pieces of fabric that make up the quilt – rectangles, squares and triangles – have to be unit tested before being used to make the quilt. Any that have not been cut to the correct shape and size could result in a major bug finding its way into the completed quilt.

Once the ‘units’ of fabric have been tested, they are sewn together into smaller blocks. Before sewing the blocks together, they have to be integration tested. Incorrect seam widths or wrong side of the fabric being used are common bugs that can affect the overall design of the quilt.

Saving time by finding defects earlier

These smaller blocks get stitched together to make bigger blocks, which are sewn together to make even bigger blocks. Eventually, all the blocks are sewn together to complete the entire quilt. Each block was integration tested before being used to make a bigger block.

All the testing that takes place early in the quilt’s development helps reduce the risk of more critical defects being introduced later on. Additionally, bugs found in the smaller blocks are a lot easier to fix than ones found in the bigger ones: the stitches have to be unpicked and the pieces of fabric sewn back together. Defects in smaller blocks are quicker to fix because there are fewer stitches that need unpicking – there are fewer dependencies.

All that testing, why are there still bugs?

Unfortunately, no amount of testing will completely eliminate all bugs. It helps drastically reduce the number of defects that find their way into the final product – but doesn’t eliminate them altogether.

No matter how careful I am, the quilts I make all have minor flaws in them. However, these are small issues that don’t significantly affect the design. Any major defects that could have affected the quilt’s design were eliminated early on. If they had been found later, once the quilt was complete, they would have been a lot harder to fix.

Why don’t I fix every defect? If I stopped to fix every one, there is a risk that the quilt won’t be completed in time for my sister-in-law’s wedding. In software development, the risks are normally a lot greater than that. Delaying the release costs the business money, sometimes more than releasing a defect to the live environment would.

It is not always feasible to fix every single defect – especially if they are minor ones. A little more effort on unit and integration testing can reduce the number of bugs that need to be fixed later.

AMA – Shift Left and Shift Right (A quick summary)

Last night I watched Ministry of Testing’s Ask Me Anything about Shift Left and Shift Right. Some great questions were asked, and I learnt a lot from it.

The most enlightening moment for me was when I actually learnt what shift right means. I thought it was just testing in production, but it’s so much more than that. It is about learning about the software post release, based on actual data and user behaviour, and feeding this information back to improve the testing process.

Shift left is already well known and well used by testing teams everywhere. However, without shift right, we are unaware of what is happening post-production. The data collected post-production could be used to make our testing efforts more efficient.

We can make testing great using shift left, we can make it greater using shift right.

The full webinar can be watched here:
https://www.ministryoftesting.com/events/testing-ask-me-anything-shift-left-shift-right-marcus-merrell

Further questions and discussion on the subject can be viewed here:
https://club.ministryoftesting.com/t/ask-me-anything-shift-left-shift-right/26353

I am currently sorting through all my notes from the AMA. I will try and publish a proper post about what I learnt next week.

Main image taken from https://www.publicdomainpictures.net

I’ve completed the TestProject Test Automation Superpowers Challenge!

I was very excited when TestProject announced their test automation challenge. The main reason for this is that it required us to combine API and UI tests. API testing is something I have very little experience in. I’ve been wanting to expand my software testing skills to include API testing for some time.

UI Test

Since most of my test automation experience is with UI testing, I started out by creating a basic UI test using TestProject’s record and playback feature. This test launched Wikipedia, entered the search term ‘Software testing’, pressed return, and then checked the heading of the web page that opened in the browser.

I then adapted the test so that the parameter WikipediaSearchTerm is set at the start of each test. After submitting this search term, the test checks that the firstHeading matches the WikipediaSearchTerm.

[Screenshot: UI test result]
[Screenshot: UI test as shown in the test editor]
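TestProject generated this test for me without any code, but for readers curious what a scripted equivalent might look like, here is a rough Selenium C# sketch of the same flow. This is my own illustration, not the TestProject recording: the firstHeading id is the Wikipedia page-heading element mentioned above, while the search box locator is my assumption and may change as Wikipedia’s markup changes.

```csharp
// A rough scripted equivalent of the recorded UI test (illustration only).
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Support.UI;

public static class WikipediaUiTest
{
    public static void SearchTermMatchesFirstHeading(string wikipediaSearchTerm)
    {
        using IWebDriver driver = new ChromeDriver();
        driver.Navigate().GoToUrl("https://www.wikipedia.org");

        // Enter the search term and submit, as in the recorded test.
        var searchBox = driver.FindElement(By.Name("search"));
        searchBox.SendKeys(wikipediaSearchTerm);
        searchBox.SendKeys(Keys.Return);

        // Check that the page heading matches the search term.
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
        var firstHeading = wait.Until(d => d.FindElement(By.Id("firstHeading")));

        if (!string.Equals(firstHeading.Text, wikipediaSearchTerm, StringComparison.OrdinalIgnoreCase))
        {
            throw new Exception(
                $"Expected heading '{wikipediaSearchTerm}' but found '{firstHeading.Text}'.");
        }
    }
}
```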

API Test

This was a little harder, since I had never created any automated API tests before. I started out by copying what was done in the video (shown in the original TestProject blog post that announced the challenge).

I then adapted the query, JSON path and validation to match what I searched for in the UI test. I changed the query so that the WikipediaSearchTerm parameter was used, the JSON path searched for the title instead of the snippet, and the validation checked that the WikipediaSearchTerm matched the response.

[Screenshot: validation for the API test]
[Screenshot: URL, query and JSON path for the API test]
[Screenshot: API test in the editor]
[Screenshot: API test result]
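And here is a rough C# sketch of an equivalent scripted API check, again my own illustration rather than the TestProject test itself. It assumes Wikipedia’s public MediaWiki search API; the exact endpoint and JSON structure shown here are my assumptions, not something taken from the TestProject addon.

```csharp
// A rough scripted equivalent of the API test (illustration only).
// Assumes Wikipedia's public MediaWiki search API.
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public static class WikipediaApiTest
{
    public static async Task SearchTermMatchesFirstTitle(string wikipediaSearchTerm)
    {
        using var client = new HttpClient();
        var url = "https://en.wikipedia.org/w/api.php?action=query&list=search&format=json" +
                  "&srsearch=" + Uri.EscapeDataString(wikipediaSearchTerm);

        var json = await client.GetStringAsync(url);
        using var doc = JsonDocument.Parse(json);

        // Read the title of the first search result
        // (roughly the JSON path query.search[0].title).
        var title = doc.RootElement
            .GetProperty("query")
            .GetProperty("search")[0]
            .GetProperty("title")
            .GetString();

        if (!string.Equals(title, wikipediaSearchTerm, StringComparison.OrdinalIgnoreCase))
        {
            throw new Exception(
                $"Expected title '{wikipediaSearchTerm}' but the API returned '{title}'.");
        }
    }
}
```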

Combining the API and UI test

By using the same parameter, WikipediaSearchTerm, in both the UI and API tests, I was able to combine the two tests very easily. I was able to confirm that the API response matched the actual result returned when the same search term is entered via the UI.
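In the scripted sketches above, combining the two would simply amount to running both checks with the same parameter, something like:

```csharp
// Run the UI and API checks (from the sketches above) with the same term.
var term = "Software testing";
WikipediaUiTest.SearchTermMatchesFirstHeading(term);
await WikipediaApiTest.SearchTermMatchesFirstTitle(term);
```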

[Screenshot: full test report. The API test, UI test and the combined API and UI test were run (the combined test is cut off but its result is visible on the right).]

Conclusion

The video included in the TestProject blog gave some brilliant basic instructions for creating an API and UI test. Using this as a starting point, I was able to gradually learn more about API testing and how to use TestProject to create basic UI tests. I look forward to learning more about API testing using TestProject in the future.

Do I really need to test this?

In early March I had the blog post ‘7 Reasons to Skip Tests’ published on testproject.io. That post looks at reasons why we may need to cut back on testing so that we can release earlier. It also looks at ways to reduce the test load later in the life cycle by testing earlier and introducing more exploratory testing.

However, I only briefly mentioned the importance of prioritising the tests that we decide to add to the regression test suite. A recently published blog post by Stu at Wild Tests goes into this more deeply, and it inspired me to look more closely at how to review and prioritise test cases before regression testing.

What is regression testing?

Regression testing is supposed to confirm that no new, unexpected bugs have been introduced into the software and that existing functionality still works as expected. These tests may be run at various points throughout the development phase, most commonly before a software release. Regression testing normally requires a large regression pack to be created, regularly updated and maintained. However, over time, the number of tests included can dramatically escalate – hence the need to regularly review and prioritise test cases.

What should we do with test cases when reviewing them?

When reviewing a test case, there will usually be three possible outcomes – Keep, Update or Delete.

  • Keep – if the test case is still required then it remains in the regression test suite.
  • Update – if the test is still required but the requirements have changed then the test case is updated so it matches the new requirements.
  • Delete – if the test case is completely out of date and incorrect, or covers functionality that is no longer included in the software, then it should be permanently removed. Another reason for deleting a test might be that a similar test already exists.

Just because I’ve decided to ‘keep’ some tests doesn’t mean they all need to be run. The remaining tests need to be prioritised so that the tests covering the highest-risk items are run first. Lower-risk tests may not need to be run at all – but if a test case covers a feature considered low risk, should we even be planning to run it? A test may not need to be deleted, but it might need to be removed.

What is the difference between deleting and removing test cases?

Stu’s Wild Tests blog post makes a distinction between deleting tests and deleting the data those tests hold. This suggests the need for a fourth review outcome – Remove. Deleting a test means it is permanently erased from the regression test suite, including any data held in the test case. Removing a test means it is taken out of the regression test suite so it is not run by mistake (which would waste the limited time and resources assigned to testing), but the test case itself is kept. We may wish to remove a test case if it covers a low-risk item; priorities change, and we may need it again in the future.

Things to consider when reviewing tests

What feature does the test cover? Is the test correct and accurate? Is this feature considered high or low risk?

Before doing anything, we need to make sure we know exactly what the test coverage is. If there is something included in the test that is no longer required then it should be removed. Likewise, if there is something not included in the test that is high priority then it must be included.

Furthermore, we must ensure that the data included in the test matches the requirements. If the test and requirements do not match, time could be wasted reporting non-existent defects.

Finally, we must consider the risks. Risk should be determined by the potential impact on the customer if the feature fails, and the likelihood of that feature failing. The likelihood increases if a change has been made to the software that may cause the feature to fail.
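As a toy illustration of that idea (my own sketch, not a formal model), risk could be scored as impact multiplied by likelihood and then used to order the tests we have decided to keep; the scoring scale and threshold below are arbitrary:

```csharp
// A toy sketch of risk-based prioritisation: risk = impact x likelihood.
// The 1-5 scale and the removal threshold are arbitrary illustrations.
using System.Collections.Generic;
using System.Linq;

public record TestCase(string Name, int CustomerImpact, int FailureLikelihood)
{
    // Both factors scored 1 (low) to 5 (high); the product is the risk score.
    public int RiskScore => CustomerImpact * FailureLikelihood;
}

public static class RegressionPlanner
{
    // Highest-risk tests run first; anything below the threshold becomes a
    // candidate for removal from this regression run.
    public static List<TestCase> Prioritise(IEnumerable<TestCase> kept, int removeBelow = 4)
        => kept.Where(t => t.RiskScore >= removeBelow)
               .OrderByDescending(t => t.RiskScore)
               .ToList();
}

// Example: a checkout test (impact 5, likelihood 4) would run before a
// rarely used report (impact 2, likelihood 1), which may be skipped.
```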

Further reading

Kill your darlings – why deleting tests raises software quality
Breaking the test case addiction
7 Reasons to Skip Tests

Are businesses only asking for Test Automation?

Recently I published a blog post on the limitations of test automation and how it should be used to improve our overall test strategy rather than attempt to replace manual testing. I shared this on LinkedIn and the discussion that followed was very interesting.

Generally, most people seemed to agree with me, suggesting that testers broadly understand the importance of manual testing, the limitations of test automation, and how best to utilise both to create the best test strategy.

However, there were also several frustrated responses from people saying that there seemed to be more job adverts for test automation than manual testers. Some questioned if these companies asking for test automation experience actually need test automation, or even know how it can be used.

This may indicate that while software testers recognise the importance of manual testing, the same cannot be said for software engineering companies in general.

Why are businesses recruiting for test automation?

I am not a business leader, or involved in recruitment, so I can only speculate.

Most likely, businesses are recognising the potential value that test automation can bring to a testing project. However, are they even aware of what this value is? Even when a test automation project is successful and delivers proven value, there may still be some disappointment if the expected value is not achieved.

There is a concern that some businesses are choosing to adopt test automation simply because other companies are using it. This would certainly increase the pressure on them to employ testers with experience in test automation.

What about Manual Testers?

I believe that a good test automation developer first needs experience as a manual tester. Without these skills, I don’t think we can expect a test automation developer to make the right decisions when it comes to implementing a test strategy.

Before starting out, the tester needs to gain knowledge and understanding of the software application and how it is used. I don’t think it would be possible for the tester to create valuable automated tests without knowledge of the software. This knowledge is best gained by exploring, experiencing and learning about the software, which allows them to design the tests in the best possible way.

I believe that all manual testers have the potential to become great test automation developers. However, this does not mean they should be forced into it. Businesses should be making use of both manual and test automation in their overall testing strategy. There are just too many limitations to rely solely on test automation.

How is the increase in demand for test automation affecting manual testing?

If we are spending our time developing automated tests, that is less time spent on manual testing. This is OK if the automated tests are bringing value to the project. However, we must still remember that manual testing is just as valuable.

As mentioned in my previous blog post, test automation should be used to enhance testing, not replace it. The overall testing strategy needs to include scripted manual and exploratory testing as well. If there are not as many manual testers being hired, we have to think about how this might be affecting the overall testing effort at these companies.

Test Automation is development, shouldn’t the Software Developers be doing this?

Test automation is most certainly development. Anyone who develops automated tests should be classed as a developer. However, there is a difference between the type of products that software developers and test automation developers work on.

A different mindset is required for each type of developer. As I said earlier, I believe that a test automation developer should first have experience of manual testing. It is this experience that puts them in the correct mindset – a mindset a software developer is unlikely to have.

Conclusion

The increase in demand for test automation means that businesses are clearly aware that there is value to be gained. However, they may also be failing to recognise the costs and limitations. These limitations are the reason why manual testing is still required (and this is unlikely to change for some time).

The decision about whether to use test automation or not should be based on the advice given by software testers. A good testing strategy may or may not include test automation – but MUST include manual testing.

If businesses want to be using test automation as part of their overall test strategy, they should invest in both manual and automated testing.

I would like to thank everyone who commented on the original blog post “We don’t need automation, we need better testing”. It was these comments that inspired this post.

Main image taken from http://www.publicdomainpictures.net