Here is the next blog post where I provide summaries for the talks I attended at the UKSTAR software testing conference. This covers the first 3 talks in the automation track. Peet Michielsen talks about how to fix the ‘leaks’ in the test pipeline, Joep Schuurkes shares a few tips on how to choose or design a test automation framework, and Viv Richards introduces us to visual regression testing.
When you’re moving house and the pipeline needs re-plumbing
Peet Michielsen walks us through his journey from one company, where he set up a release pipeline from scratch, to a new company which already had a pipeline in place but with several ‘leaks’. He talked about some of the challenges he faced at the new company in fixing these leaks. Finally, he gave some tips for improving the release pipeline.
Peet leaned on the plumbing analogy throughout to explain his points. His overall message was the importance of allowing the project to ‘flow’ and not be held up by delays in testing. Replacing anything obsolete and introducing reliable test automation are a couple of ways to improve this flow; it is the use of test automation and continuous integration that makes a software project ‘flow’.
Peet did not refer to any particular tools or technologies that he uses. This helped him demonstrate that his ideas could be applied to most projects regardless of the tools and processes they already use. This seems like a good idea as I am often put off by talks that focus strongly on technologies which are unsuitable for my current work. It can distract people from the actual message. The ideas that were presented in this talk could easily be applied to any test project.
What to look for in a test automation tool
Joep starts off this talk by discussing some of the issues he had with previous test automation tools and why these led him to build his own framework, which solved most of the issues he had been having. His new framework, written in Python, used a mixture of existing commands as well as newly developed ones – gluing well-established tools and libraries together.
Throughout the talk, Joep showed us how he completed some of the following test activities using his framework – Create, Read, Debug, Run and Report. For each activity he provided some great tips that can be used to improve a test automation framework.
Some of my favourite tips include:
- Naming tests well can clarify each test’s intent. It can also make it easier to notice gaps and patterns in the test coverage and when running the tests.
- A test should do one thing only; this keeps things clear and focused.
- When a test fails, can you see the failed step? Do you have the required information? Can you understand the information provided? There is no such thing as too much information, so long as it’s well structured.
- Never trust a test you haven’t seen fail.
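As an illustration of a few of these tips (my own sketch, not code from Joep's framework), here is a well-named, single-purpose test whose assertion message carries the information you would need when it fails. The function under test is a made-up example:

```python
def apply_discount(price, percentage):
    """Toy function under test (illustrative only)."""
    return round(price * (1 - percentage / 100), 2)


def test_ten_percent_discount_reduces_price_by_ten_percent():
    # The name states the intent; the test checks exactly one behaviour.
    actual = apply_discount(100.00, 10)
    expected = 90.00
    # A descriptive assertion message provides the information needed
    # to understand a failure without re-running the test.
    assert actual == expected, (
        f"apply_discount(100.00, 10) returned {actual}, expected {expected}"
    )


test_ten_percent_discount_reduces_price_by_ten_percent()
```

Following the last tip, it is worth temporarily breaking the expected value (say, to 85.00) to confirm the test really does fail, and fails with a readable message.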
And finally, the most important piece of advice: Forget about the shiny, be impressed but ask … Is your tool helping you to do better testing?
Spot the Difference; Automating Visual Regression Testing
Why do we use test automation? It is more reliable because it is performed by tools and scripts, dramatically reducing the risk of human error. However, it does have its issues, especially when testing the UI: a large amount of investment is required, it is only practical for tests that need repeating, and with no human observation there is no guarantee that the UI is user friendly.
One popular pattern used in test automation is the page object model. The issue with this model is that the locations and visual attributes of elements are not usually checked. We played a game of spot the difference with two versions of the same GUI. The audience could easily spot most of the ‘mistakes’. There were about 10 in total, but only 4 would have been picked up by test automation using the page object model. The differences missed included additional spaces between elements, changed text styles and fonts, and changes to colours or images on the page.
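To see why, consider this minimal page-object sketch (entirely hypothetical; the fake driver stands in for something like a Selenium WebDriver). The page object resolves elements by locator and asserts on their text, so styling changes sail straight past it:

```python
class FakeElement:
    """Stand-in for a UI element returned by a real driver."""
    def __init__(self, text, colour, font):
        self.text = text
        self.colour = colour   # visual attributes exist on the element...
        self.font = font       # ...but the page object never reads them


class FakeDriver:
    """Stand-in for a browser driver, mapping locators to elements."""
    def __init__(self, elements):
        self._elements = elements

    def find_element(self, locator):
        return self._elements[locator]


class LoginPage:
    """Typical page object: exposes elements via locators."""
    def __init__(self, driver):
        self.driver = driver

    @property
    def heading_text(self):
        # The usual check: the locator resolves and the text matches.
        return self.driver.find_element("heading").text


# Two builds with very different styling look identical to the page object:
old_build = FakeDriver({"heading": FakeElement("Log in", "black", "Arial")})
new_build = FakeDriver({"heading": FakeElement("Log in", "red", "Comic Sans")})
assert LoginPage(old_build).heading_text == LoginPage(new_build).heading_text
```

The assertion passes even though the heading's colour and font changed, which is exactly the class of regression the audience spotted but the model would not.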
Viv then goes on to demonstrate how a screenshot of a GUI can be compared against previous versions of the GUI as part of test automation, so that the software team can be alerted to minor changes in the software much sooner. These tests, run repeatedly on future versions of the application, can bring additional value to the software project.
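At its core, that comparison is a pixel diff between a baseline screenshot and a fresh one. The sketch below is my own illustration of the idea, not the tool Viv demonstrated: screenshots are represented as 2D lists of RGB tuples (a real implementation would load PNG files), and a small threshold can tolerate rendering noise such as anti-aliasing.

```python
def diff_ratio(baseline, candidate):
    """Return the fraction of pixels that differ between two screenshots."""
    total = len(baseline) * len(baseline[0])
    changed = sum(
        1
        for row_a, row_b in zip(baseline, candidate)
        for px_a, px_b in zip(row_a, row_b)
        if px_a != px_b
    )
    return changed / total


def passes_visual_check(baseline, candidate, threshold=0.0):
    # A threshold above zero tolerates minor rendering differences.
    return diff_ratio(baseline, candidate) <= threshold


WHITE = (255, 255, 255)
RED = (255, 0, 0)

baseline = [[WHITE] * 4 for _ in range(4)]
candidate = [[WHITE] * 4 for _ in range(4)]
candidate[0][0] = RED  # one pixel changed, e.g. a colour tweak

assert diff_ratio(baseline, candidate) == 1 / 16
assert not passes_visual_check(baseline, candidate)
```

A failing check would then surface the before/after images to the team, catching the colour, font and spacing changes that element-level assertions miss.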