Monthly Archives: April 2019

What I read last week (28th April 2019)

It has been another exciting week. I am pleased to announce that I will be giving a talk at SwanseaCon later this year, on Monday 9th September. My talk will be on test automation and how to gain the most value from it. This brings the number of talks I will be giving this year to two – the other being Test Bash Manchester. For someone who is still very new to speaking at conferences, this feels quite overwhelming. I hope I do a good job giving these talks.

This week, on the Ministry of Testing Club forum, there will be a Power Hour event taking place. Abby Bangser will be answering as many questions as possible on:

  • Enabling DevOps delivery
  • Testing on cloud infra team
  • Starting with Observability

I’ve already submitted a couple of questions. If anyone else has any questions they’d like to ask, they should be submitted here.

Social Media Discussions

LinkedIn post – Does Test Automation find bugs?
A post I shared on LinkedIn about whether it's possible for test automation to find bugs. I argue that a failing test doesn't actually find the bug; it takes additional exploratory testing to pin down the exact details. However, test automation does alert the tester to an area of the software that may not be working as required. The post yielded some interesting discussion. Feel free to add your own thoughts.

Podcasts

Test Talks Podcast – Next Generation Agile Automation with Guljeet Nagpaul
In this episode, Guljeet Nagpaul talks about the development of test automation frameworks, their benefits and challenges, and how they will continue to evolve.

Test Talks Podcast – Pushing Security Testing Left, Like a Boss with Tanya Janca
Tanya Janca talks about what security is and why it is important. Security testing is about taking care of and protecting people, and it is important to ensure that there are policies in place to protect them. It is also important that these people are not put into a position where they may have to break those policies.

Articles

Riskstorming Experience Report: A Language Shift by Isle of Testing
This article discusses the benefits of risk-storming and how it changes the questions that we ask during testing, and when we ask them. Questions that are normally asked post-production, when a bug is found, are instead asked before release. There is a link to another article that explains how to run risk-storming sessions.

Kill Your Darlings – Why Deleting Tests Raises Software Quality by WildTests
Stu at WildTests discusses the limits of testing – we cannot test everything. It is important to prioritise and reduce the tests we need to run so that the application can be delivered sooner. Priorities can be determined by getting closer to support and development. Customer support can help us understand customer pains better, while development can teach us about the changes that have been made so we understand where the risks are.

Why Your Test Automation Is Ignored – 5 Steps to Stand Out by Bas Dijkstra, Test Beacon
In this article, Bas Dijkstra talks about the phases of test automation that often lead to failure. He then presents five ways to improve test automation and better demonstrate its value and benefits.

My Automation’s Not Finding Bugs, But That’s OK by Paul Grizzaffi, Responsible Automation
Paul Grizzaffi was kind enough to share this blog post from last year in response to my LinkedIn post about how test automation rarely finds bugs. Grizzaffi states that even if an automated test doesn't find any bugs, that does not mean it is valueless. It can still enable rapid release of the product.

3 Qualities You Must Have in Order to Become a Strong Software Tester by Raj Subramanian, Testim
Unlike similar posts I’ve seen where qualities tend to be focused on skills, this list looks at qualities required for personal development. The qualities listed are communication, motivation and education. These qualities are required for a tester to develop new and existing skills, which in turn makes a better tester.

Observability vs Monitoring by Steve Waterworth, DZone
Monitoring and Observability by Cindy Sridharan
While trying to think of questions to ask for the Ministry of Testing Power Hour session (1st May on The Club) I did a little research into observability. One thing I found interesting was the distinction between observability and monitoring. Here are a couple of articles that discuss this.

Other blogs that share lists of test related articles

https://5blogs.wordpress.com/ (daily)
http://blog.testingcurator.com/ (weekly)
http://thatsabug.com/ (weekly)
https://weapontester.com/tea-time (weekly)
https://www.ministryoftesting.com/feeds/blogs

Testing Conferences
https://testingconferences.org/

The Club
https://club.ministryoftesting.com/
A forum for discussing, asking questions, answering questions, and requesting help. Run by the Ministry of Testing.

Feel free to recommend anything that you think is of interest.
Main image taken from http://www.publicdomainpictures.net

Do I really need to test this?

In early March I had the following blog post, '7 reasons to skip tests', published on testproject.io. The post looks at reasons why we may need to cut back on testing so that we can release earlier. It also looks at ways to reduce the test load later in the life cycle by testing earlier and introducing more exploratory testing.

However, I only briefly mention the importance of prioritising the tests that we decide to add to the regression test suite. A recently published blog post by Stu at Wild Tests goes into this more deeply. This post inspired me to look more closely at how to review and prioritise test cases before regression testing.

What is regression testing?

Regression testing is supposed to confirm that no new, unexpected bugs have been introduced to the software and that existing functionality still works as expected. These tests may be run at various points throughout the development phase, most commonly before a software release. It normally requires a large regression pack to be created, regularly updated and maintained. However, over time, the number of tests included can dramatically escalate – hence the need to regularly review test cases and prioritise them.

What should we do with test cases when reviewing them?

When reviewing a test case, there are usually three possible outcomes – Keep, Update or Delete.

  • Keep – if the test case is still required then it remains in the regression test suite.
  • Update – if the test is still required but the requirements have changed then the test case is updated so it matches the new requirements.
  • Delete – if the test case is completely out of date and incorrect, or covers functionality that is no longer included in the software, then it should be permanently removed. Another reason for deleting a test might be that a similar test already exists.

Just because I’ve decided to ‘keep’ some tests doesn’t mean they all need to be run. The remaining tests need to be prioritised so that the tests covering the highest-risk items are run first. Lower-risk tests may not need to be run at all. However, if a test case covers a feature considered to be low risk, should we be planning to run it at all? A test may not need to be deleted, but it might need to be removed.

What is the difference between deleting and removing test cases?

Stu’s Wild Tests blog post makes a distinction between deleting a test outright and simply taking it out of the regression suite while keeping the data it holds. This suggests a fourth review outcome – Remove. Deleting a test means the test, including any data it contains, is permanently deleted from the regression test suite. Removing a test means it no longer sits in the regression test suite, so it isn’t run in error and doesn’t waste the limited time and resources assigned to testing, but the test and its data are kept. We may wish to remove a test case if it covers a low-risk item; priorities change, and we may need that test case again in the future.
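As a rough illustration of "removing" without deleting, here's a minimal sketch assuming pytest is the test framework (the marker name, test and helper function are hypothetical, not from Stu's post). The test and its data stay in the repository but are deselected from regression runs, and can be reinstated later.

```python
import pytest

# Hypothetical "removed" test: it stays in the codebase (keeping its data)
# but is excluded from regression runs until priorities change again.
# Registering the marker in pytest.ini keeps pytest from warning about it.
@pytest.mark.low_risk
def test_legacy_csv_export():
    rows = [["a", "b"], ["c", "d"]]        # the test data is preserved with the test
    assert to_csv(rows) == "a,b\nc,d"      # to_csv is a hypothetical helper

# Regression run that leaves out the low-risk tests:
#   pytest -m "not low_risk"
```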

Things to consider when reviewing tests

What feature does the test cover? Is the test correct and accurate? Is this feature considered high or low risk?

Before doing anything, we need to make sure we know exactly what the test coverage is. If there is something included in the test that is no longer required then it should be removed. Likewise, if there is something not included in the test that is high priority then it must be included.

Furthermore, we must ensure that the data included in the test matches the requirements. If the test and requirements do not match, time could be wasted reporting non-existent defects.

Finally, we must consider the risks. Risk should be determined from the potential impact on the customer if the feature fails, and the likelihood of that feature failing. The likelihood increases if a change has been made in the software that may cause that feature to fail.
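To make that concrete, here's a minimal sketch of how risk might be scored, assuming impact and likelihood are each rated from 1 (low) to 5 (high). The scale and the test names are my own invention, not taken from any of the articles above.

```python
def risk_score(impact: int, likelihood: int) -> int:
    """Simple risk score: impact on the customer x likelihood of failure."""
    return impact * likelihood

# Hypothetical regression candidates: (name, impact, likelihood)
candidates = [
    ("checkout payment flow", 5, 4),
    ("profile avatar upload", 2, 1),
    ("search results paging", 3, 3),
]

# Run the highest-risk tests first; the lowest scores may not be run at all.
for name, impact, likelihood in sorted(candidates, key=lambda c: -risk_score(c[1], c[2])):
    print(f"{name}: {risk_score(impact, likelihood)}")
```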

Further reading

Kill your darlings – why deleting tests raises software quality
Breaking the test case addiction
7 Reasons to Skip Tests

What I read last week (14th April 2019)

Some really big news, I’ve been selected to give a talk at Test Bash Manchester in October. My talk will be on record and playback features in test automation. I will be discussing how they can be useful for testers with little experience or a great deal of experience in test automation. I will also demonstrate how tests generated using record and playback can be adapted and improved. It is a really exciting opportunity.

I’ve written articles about this before for testproject.io. I look forward to exploring this subject more throughout the year. In particular, I plan to explore how different automation tools use record and playback and how easy it is to adapt them.

Webinar

Ask Me Anything – Whole Team Testing with Lisa Crispin
This week I watched the brilliant webinar in which the testing community was given the opportunity to ask Lisa Crispin anything they wanted about Whole Team Testing.

All questions that could not be answered during the webinar were shared on The Club and answered at a later date. This included an answer to a question that I asked.

The following blog post includes a summary of some of the questions that were asked and answered during the webinar:
https://wildtests.wordpress.com/2019/04/09/lisa-crispin-ama-whole-team-testing/

Podcasts

Test Talks Podcast – What is Programming with Edaqa Mortoray
Do we actually know what programming is? In this podcast episode, Edaqa Mortoray discusses his book ‘What is Programming’ in which he attempts to answer this question. He talks about how programming is not the lone activity that some think it is. It should be a social activity that includes communicating with fellow stakeholders. This is demonstrated in the way the book is split into 3 sections: People, Code and You.

  • People are the reason software exists.
  • At the heart of any software is source Code.
  • Behind the screen is a real person: You.

Test Talks Podcast – Discover The Personality of Your Application with Greg Paskal
In this episode, Greg Paskal suggests that we should not just be looking at the way tests pass and fail. When running daily tests, we should look at their behaviour over time in order to identify issues that are often overlooked. For example, a test may gradually take longer to run. Monitoring this over time could provide an insight into an emerging problem.

Test Talks Podcast – Chaos Engineering with Tammy Bütow
This is something I’d not heard much about before. In this podcast episode, Tammy Bütow explains what chaos engineering is and how it has been used to identify issues in a company’s ability to recover from disaster, using methods like fault injection. One example discussed was Netflix’s Chaos Monkey application.

The Good, the Bad and the Buggy – Season 2 recap
A recap of all the previous episodes. This episode discusses how different technology has improved the user experience and changed the way we do things in certain industries.

The Guilty Tester 11 – 7 ways I sabotaged myself as a tester
This episode was inspired by the UKSTAR 2019 talk by Claire Goss (Testers: Is It Our Own Fault We Are Underrated?). It provided a list of ways in which testers might be sabotaging their own testing efforts. One which interested me was the idea that developers should not be developing a new feature until it has been tested. Allowing demos to take place gives stakeholders the opportunity to provide early feedback – testers aren’t the only ones who should be assessing the quality of the software application.

Articles

Logging, Monitoring and Alerting with Kristin Jackvony
I’ve been looking into how logging can be used to aid our testing efforts. This article defines logging, monitoring and alerting, and discusses how each can be used to benefit the team as a whole as well as testing.

Testing the Adversary Profession! by KimberleyN
KimberleyN decided to reshare this blog post, originally published last year, following Lisa Crispin’s Ask Me Anything webinar. The post discusses the relationship between developers and testers and why some bugs may find their way into production. One of the reasons suggested was fear – fear of starting an argument or making enemies by reporting bugs.

For Just a Few Lego Bricks More by Michael Fritzius
An interesting analogy to explain why modular approaches to programming are recommended. There are usually only a small number of standard designs of Lego brick, and each one fits easily with the others. This allows the same design to be reused millions of times. The same goes for programming: there should only be a small number of standard functions, each of which can easily connect with the others.

Other blogs that share lists of test related articles

https://5blogs.wordpress.com/ (daily)
http://blog.testingcurator.com/ (weekly)
http://thatsabug.com/ (weekly)
https://weapontester.com/tea-time (weekly)
https://www.ministryoftesting.com/feeds/blogs

Testing Conferences
https://testingconferences.org/

The Club
https://club.ministryoftesting.com/
A forum for discussing, asking questions, answering questions, and requesting help. Run by the Ministry of Testing.

Feel free to recommend anything that you think is of interest.
Main image taken from http://www.publicdomainpictures.net

What I read last week (7th April 2019)

It has been a very chaotic month. I’ve not been in the office for 3 weeks because of days off, travelling to Colorado for work, and attending a course in Birmingham. The course I took was the ISTQB Advanced Level Test Manager course. With all my focus on this course, I really didn’t read much last week. Here are a few items I found time to read in the last week.

ISTQB – That’s it! No more certification exams, as they prove nothing
One day this week, while travelling on the train to Birmingham for the ISTQB advanced test manager course, I came across this discussion on the Ministry of Testing Club forum. A few issues were raised about the ISTQB foundation course – the outdated syllabus and the multiple-choice questions that only require the person taking the exam to remember knowledge rather than apply it. My experiences while taking the advanced course were very different. I don’t know how up to date the course is, but the syllabus is a lot more in depth and the questions are harder. For most questions we are given a scenario and the question is based on that scenario, so to get the right answer we actually have to apply our knowledge instead of just memorising it.

The art of the bug report
The opening sentence “Testers are storytellers” definitely rings true. There have been times when a bug hasn’t been taken as seriously as I feel it should have been, and this has usually been down to the way I’ve told the story. In this article, Anneliese Herbosa gives a few pointers about how best to tell the story: ensure that the most important details are included and the entire story is told, and strike a balance between adding substance and detracting from the message. There was one point that I found difficult – know your audience. Several different people might need to read the bug. In my case, it is usually the product owner who reviews the bugs and the developer who has to fix them, and both need different levels of information. Getting that right is not easy.

Rapid Software Testing Guide to Making Good Bug Reports
The previous article referenced this article by James Bach. There are a couple of additional points here that were not included in the previous one: correctly assessing the magnitude of a bug, and common mistakes in bug reporting.

Why test automation is a lot like bubble wrap
When I first saw this title, I was a little confused. I was certain I had heard it before. It was only when looking through my notes from the UKSTAR software testing conference that I realised where I’d heard it: below is a copy of a note I took while listening to one of Bas Dijkstra’s talks. In this article, Bas Dijkstra discusses the comparison in more detail, with a few additional comparisons as well.

[Photo of my UKSTAR note comparing test automation to bubble wrap]


Other blogs that share lists of test related articles

https://5blogs.wordpress.com/ (daily)
http://blog.testingcurator.com/ (weekly)
http://thatsabug.com/ (weekly)
https://weapontester.com/tea-time (weekly)
https://www.ministryoftesting.com/feeds/blogs

Testing Conferences
https://testingconferences.org/

The Club
https://club.ministryoftesting.com/
A forum for discussing, asking questions, answering questions, and requesting help. Run by the Ministry of Testing.

Feel free to recommend anything that you think is of interest.
Main image taken from http://www.publicdomainpictures.net

Jaroslaw Hryszko, Amy Phillips and Bas Dijkstra (UKSTAR talks Day 2, Part 2)

It has been three weeks, but I’ve finally completed the last of the UKSTAR blog posts. The final few summaries were difficult to write; it is amazing how much you can forget in just a few weeks. Fortunately, my note-taking was good enough to keep my memory fresh.

Adept: Artificial Intelligence, Automation and Laziness by Jaroslaw Hryszko

Jaroslaw gave a highly technical talk about automated defect prediction using static analysis tools and machine learning. In real life, more bugs are often found later in the lifecycle. Jaroslaw demonstrated that, using prediction-based QA, more bugs can be found earlier in the lifecycle, which saves a significant amount of money as the cost of fixing them is lower.

I found it very interesting that Jaroslaw gave two different definitions for bugs and defects. Previously I’d always thought of them as being the same:

  • Bug – a mistake in the source code; doesn’t have to result in a defect.
  • Defect – a discrepancy between the user’s expectations and actual behaviour; does not have to be caused by a bug.

I’ve already studied techniques for static analysis so that bugs can be found earlier in the lifecycle, but never really thought much about how machine learning could be applied. This is a subject that I need to read a lot more about. My notes are filled with suggestions for papers, articles and topics which I plan to search for online. This talk was highly technical but provided enough information to use as a basis for further research.
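For my own reference, here's a rough sketch of the general idea, assuming scikit-learn and a table of per-file code metrics (lines of code, complexity, churn) labelled with whether a defect was later found in that file. This is only an illustration of prediction-based QA, not the approach Jaroslaw presented.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy data: one row per source file -> [lines of code, cyclomatic complexity, churn]
X = [[120, 4, 2], [3400, 27, 15], [560, 9, 1], [2100, 18, 8], [80, 2, 0], [1500, 22, 11]]
y = [0, 1, 0, 1, 0, 1]   # 1 = a defect was later reported against this file

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Files predicted as defect-prone can be reviewed and tested earlier in the lifecycle.
print(model.predict(X_test))
```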

Keynote 3* – How to lead successful organisational change by Amy Phillips

We’ve attended this amazing conference, learnt many new facts and developed new ideas that could potentially improve what already takes place at our companies. However, applying these changes is easier said than done.

How do we apply these changes? We can’t just tell everyone this is how we should start doing things. First, we may not have the authority to do this. Second, people don’t like change. In this talk, Amy talks us through a process that could help us gain support from within the organisation. This will increase the chance of the change being embraced instead of rejected.

Steps suggested include:

  • 0. Create foundation
    • Establish credibility so that colleagues are more likely to trust that the change might work
    • Ensure that there is capacity for change. If we attempt to introduce the change at a critical time, like when there is a deadline approaching, the change is more likely to be rejected.
  • 1. Build an emotional connection
  • 2. Identify a north star
    • The north star represents something that we should aim for, a mutual goal.
  • 3. Small steps in the right direction
    • Don’t try and do everything at once.

Originally, this talk was meant to be at the start of the day. I don’t know the reason for moving the keynote, but it seemed to work better this way. The talk was well suited to taking place at the end of the day, giving us a final piece of advice to ensure that we got the most out of the conference.

Deep Dive F – Building Robust Automation Frameworks by Bas Dijkstra

For the final deep dive session, I chose to attend Bas Dijkstra’s session on building automation frameworks. Bas walked us through a series of steps to set up a basic automated test and improve on it. Most of my experience with test automation is self-taught, so it is interesting to see what steps someone else would follow – it confirms that I am following recommended steps and fills in any gaps in my knowledge.

Iteration 1 – creating a basic test using record and playback
Once this was done, Bas highlighted some potential issues such as all steps being in one method, everything being hard coded and no browser management.

Iteration 2 – Better browser management
Ensure that the browser is closed down in a tear down script once the test has been run.
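As a minimal sketch of what that might look like, assuming a Python, pytest and Selenium stack (the session itself may have used a different language and tool):

```python
import pytest
from selenium import webdriver

@pytest.fixture
def browser():
    driver = webdriver.Chrome()   # start a fresh browser for each test
    yield driver                  # hand the driver to the test
    driver.quit()                 # tear down: the browser closes even if the test fails

def test_home_page_title(browser):
    browser.get("https://www.example.com")   # placeholder URL
    assert "Example" in browser.title
```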

Iteration 3 – Waiting and synchronisation
Implement a timeout and waiting strategy, for example “all elements should be visible within 10 seconds”. If this does not happen, a timeout exception should be thrown.
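A sketch of that waiting strategy using Selenium's explicit waits (again assuming Python; the locator in the usage example is hypothetical):

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def wait_for_visible(driver, locator, timeout=10):
    # Waits up to `timeout` seconds for the element to become visible;
    # raises TimeoutException if it never does.
    return WebDriverWait(driver, timeout).until(
        EC.visibility_of_element_located(locator)
    )

# Example usage inside a test:
#   wait_for_visible(browser, (By.ID, "login-button"))
```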

Iteration 4 – Page objects
Make the tests more readable by separating the flow of the tests from the details of the pages they interact with. This makes it easier to update and maintain tests.
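A hedged sketch of a page object – the page and element IDs are invented for illustration:

```python
from selenium.webdriver.common.by import By

class LoginPage:
    """Wraps the login screen so tests describe the flow, not the HTML."""
    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.ID, "submit")

    def __init__(self, driver):
        self.driver = driver

    def login_as(self, username, password):
        self.driver.find_element(*self.USERNAME).send_keys(username)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()

# The test now reads as a flow:
#   LoginPage(browser).login_as("test_user", "secret")
```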

Iteration 5 – Test data management
Each test run will change the data. Therefore there needs to be a way to create and control the required test data. One option is to reset the database. It is worth talking to the developers who could provide something to make this possible.
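One possible sketch of controlling test data, assuming a disposable SQLite database seeded per test (a real project would more likely reset a shared database through something the developers provide):

```python
import sqlite3
import pytest

@pytest.fixture
def seeded_db(tmp_path):
    # Each test gets its own database file seeded with known data,
    # so one run cannot pollute the next.
    db = sqlite3.connect(str(tmp_path / "test.db"))
    db.execute("CREATE TABLE users (name TEXT)")
    db.execute("INSERT INTO users VALUES ('seed_user')")
    db.commit()
    yield db
    db.close()   # the temporary file is discarded along with tmp_path
```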

Iteration 6 – Quick! More tests!
Make the tests data driven so the data can be varied. Using the same values doesn’t really prove much once the test has already been run. Data-driven testing allows alternative data values to be used and more edge cases to be covered.
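A sketch of making the earlier login sketch data-driven with pytest's parametrize (the data values and the `is_logged_in` helper are my own assumptions):

```python
import pytest

@pytest.mark.parametrize("username, password, should_succeed", [
    ("valid_user", "correct_password", True),
    ("valid_user", "wrong_password", False),
    ("valid_user", "", False),                 # edge case: empty password
])
def test_login(browser, username, password, should_succeed):
    page = LoginPage(browser)
    page.login_as(username, password)
    assert page.is_logged_in() == should_succeed   # assumed helper on the page object
```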

Iteration 7 – Narrowing the scope
Run data-driven tests through the API to speed the tests up and make them more efficient.
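A sketch of the same idea at the API level, using the requests library against a hypothetical endpoint:

```python
import requests

def test_create_user_via_api():
    # Driving the scenario through the API avoids the browser entirely,
    # which makes data-driven runs much faster.
    response = requests.post(
        "https://www.example.com/api/users",   # hypothetical endpoint
        json={"username": "test_user", "email": "test@example.com"},
        timeout=10,
    )
    assert response.status_code == 201
```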

Iteration 8 – Service Virtualisation
Dependencies aren’t always accessible, which can affect robustness. Use a fake or virtual process to keep the test environment under control.
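A small sketch of stubbing out a dependency in Python using the `responses` library (the payment endpoint is invented; dedicated service virtualisation tools such as WireMock or Hoverfly do this at a much larger scale):

```python
import requests
import responses

@responses.activate
def test_checkout_with_stubbed_payment_provider():
    # The third-party payment API is faked, so the test stays robust
    # even when the real dependency is unavailable.
    responses.add(
        responses.POST,
        "https://payments.example.com/charge",   # hypothetical dependency
        json={"status": "approved"},
        status=200,
    )
    reply = requests.post("https://payments.example.com/charge", json={"amount": 10})
    assert reply.json()["status"] == "approved"
```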

Bas Dijkstra, Poornima Shamareddy and Wayne Rutter (UKSTAR talks Day 2, Part 1)

We are now on the second and final day of the UKSTAR talks. Thankfully, I’d been able to get a decent night’s sleep despite attending the networking evening that had been arranged at a local pub. I arrived nice and early, in time for breakfast and the Lean Coffee event at the huddle area. This was followed by another day of brilliant talks.

Keynote 4* – Why do we automate?
Bas Dijkstra

This is a brilliant question that many neglect to ask. In this talk, Bas discusses the common mistakes that occur in test automation, such as attempting to automate everything. Instead, we should be asking questions about why we want to automate, so we are sure that we are doing the right thing. It is not a case of one size fits all; the approach to test automation will be different for each project. Automation is wonderful, but only if done right.

It is important to, every now and then, step back and ask “why?”

  • Why do we need automation?
  • Why this tool?
  • Why these tests?
  • Why this percentage of tests?

Only with a satisfactory answer should we proceed.

Bas used the following quote from the movie Jurassic Park to summarise his point:

“Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

Ian Malcolm, Jurassic Park

Bas also used another brilliant analogy: test automation is like bubble wrap. It’s fun to play with, but it has its limits and can give a false sense of security. He discusses this analogy more here.

*This was originally scheduled to take place at the end of the day not the beginning. I’ve labelled this as keynote 4 to match the program even though it took place before keynote 3.

Cognitive Testing Talks
Cognitive Testing – Insight Driven Testing Powered By ML by Poornima Shamareddy
Cognitive QA by Wayne Rutter

The next 2 talks were on very similar topics from the modern tester track.

The first talk, Cognitive Testing – Insight Driven Testing Powered By ML by Poornima Shamareddy, covered the development of a self-learning system that used data mining and pattern recognition to rank and prioritise test cases. During the talk, Poornima walked us through the process of developing the application, including the benefits achieved.

The second talk, Cognitive QA by Wayne Rutter, discussed an investigation into ways to identify the amount of test resource required for each area of an application. Wayne went into great detail about some of the different machine learning methods, including supervised learning, such as classification, and unsupervised learning, such as clustering. This talk was especially impressive as Wayne was part of the Speak Easy programme and had never given the talk before.

Artificial intelligence and machine learning are subjects I’ve not covered since university. My memory of them is a little vague, but both talks provided enough basic information about machine learning techniques to be informative to anyone, regardless of prior knowledge. It was interesting to see practical examples of artificial intelligence, natural language processing and machine learning applied from a testing perspective.