Monthly Archives: March 2019

What I read the last few weeks (31st March 2019)

The last time I published a ‘what I read last week’ blog post was 3 weeks ago; for this I must apologise. During this time I attended the UKSTAR software testing conference and travelled to Colorado for work. I didn’t have as much free time as I thought I would for reading and researching.

However, I have been slowly writing up summaries of the talks I attended at the UKSTAR conference, and working my way through the 30 days of testability challenge. I wasn’t expecting to complete the challenge in 30 days. My aim is to complete it by the end of April (60 days instead of 30).

Here is a small list of articles, blogs and podcast episodes that I read or listened to recently.


Test Talks Podcast – Episode 244 – Fast Forward Your Entire Development Cycle with Israel Rogoza and Avishai Moshka
This episode discusses the issues surrounding compromise. We are often required to deliver software applications fast, or don’t have enough resources to deliver and test everything that is required. Compromises often lead to less testing. Therefore, we need ways to fast forward and optimize the development cycle.

Test Talks Podcast – Episode 194 – The Reality of Testing in an Artificial World with Angie Jones
Angie Jones talks about why it is important to test machine learning applications. An older podcast episode which I decided to listen to after hearing Angie Jones speak at UKSTAR. It includes most of what was said in the conference talk, so if you didn’t get a chance to attend the conference you can still listen to most of the content here.

The Good, The Bad and the Buggy – Episode 19 – Lockdown
This episode looks at software applications and devices that are making their way into prisons. Alex and Bria discuss why it is important for inmates to have access to these applications, and some things that need to be considered while designing these applications.

The Good, The Bad and the Buggy – Episode 20 – March Madness
I’d never heard of March Madness before listening to this episode. This could be because it is something that hasn’t reached the UK, or because I don’t watch much basketball. However, this episode is still worth listening to. Alex and Bria discuss various bugs related to the March Madness sporting event, and the implications of these bugs when not found and fixed soon enough. Sports fans are incredibly passionate and vocal, and the smallest bug will usually find its way around Twitter in seconds.


Standing Against #paytospeak
Lee Marshall, aka the Growing Tester, is one of the organisers of the #MidsTest meetup that takes place monthly in Birmingham, Solihull and Coventry. He is excellent at encouraging new speakers, like myself, by giving them the opportunity to speak at the test meetups. In this article he highlights the issue of conferences that only allow those with experience, or those willing to pay, to speak.

Michael Larson – various blog posts about 30 days of testability
As you know, I’ve been taking part in the 30 days of testability challenge. Travelling and other commitments have meant I’ve been unable to complete the challenge in time. Michael Larson is one of the better-organised people who has not only completed the challenge in time but also managed to write a blog post for each activity. I’ve not read through all his posts yet, but it has been very encouraging to read someone else’s experiences of the challenge. I must congratulate Michael Larson for completing the task within the 30 days.

Tyranny of the backlog
Allan Kelly talks about common issues that make a backlog impossible to complete. We’ve all been there: bugs are found and added to the backlog (or abyss, as I often refer to it). A client asks for another ‘small’ feature, and it gets added to the backlog. The backlog just grows and grows and never ends.

Creating a Bunch of Test Automation Scripts is a Waste of Money
What questions should we ask before starting test automation? It is important to ask the right questions so that the right tests are automated and there is a decent return on investment at the end. This is something I’ve also discussed in the past, particularly in the OnlineTestConf talk I gave in November last year.

These are my confessions … (as a huddler)
Chris Armstrong did a brilliant job as host of the huddle area. Here he talks about his own experiences managing huddle spaces at conferences. I’ve previously written 2 blog posts about some of the discussions that took place. There were also competitions (thanks for my UKSTAR water bottle), stickers to decorate our passes with, board games in case we just wanted a break, and an area to just sit down and chat. It was a nice addition to the event.

Other blogs that share lists of test related articles


Testing Conferences

Feel free to recommend anything that you think is of interest.


Gerie Owen and Fiona Charles (UKSTAR talks Day 1, Part 3)

Here is the final blog post for day 1 of the UKSTAR conference. This includes Gerie Owen’s talk on wearable technology and Fiona Charles’ keynote on the positives and negatives of disruptive technologies.

A Wearable Story: Testing the human experience
Gerie Owen

Gerie starts the talk by sharing her experiences of running the Boston Marathon. After she completed the gruelling race, it was found that her time had not been logged correctly. The chip she was wearing was defective. It is easy to imagine how frustrating this must have been for someone who has just run 26 miles.

Gerie uses this story to explain what a wearable is and the importance of ensuring that the user gains some value from a wearable technology. In this example, it is clear that no value was gained from the wearable chip used in the Boston Marathon.

Wearables require some kind of human interaction for value to be achieved. Gerie demonstrates how to set up personas that can be used for testing a wearable device. These personas contain details about the life, goals and expectations of someone who is likely to use the device, providing an understanding of what they need from it. This information is used to create user value stories.

The best outcome of a user story is one where someone gets value. The worst outcome is where no value is achieved at all.
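As a rough illustration of the idea (the names and fields below are hypothetical, invented by me rather than taken from Gerie’s talk), a persona can be captured as a small data structure and turned into a user value story:

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """A hypothetical persona for testing a wearable device."""
    name: str
    age: int
    goals: list = field(default_factory=list)
    expectations: list = field(default_factory=list)

    def to_user_value_story(self, device: str) -> str:
        # Turn the persona's first goal into a simple user value story.
        goal = self.goals[0] if self.goals else "use the device"
        return f"As {self.name}, I use the {device} so that I can {goal}."

# A marathon runner persona, echoing the Boston Marathon example.
runner = Persona(
    name="Maya",
    age=34,
    goals=["see my official finish time recorded accurately"],
    expectations=["the timing chip works for the whole race"],
)
print(runner.to_user_value_story("race timing chip"))
```

Testing against the story then becomes a question of whether the persona actually gets that value from the device.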

This is the second time I’d heard Gerie Owen speak. The first time was at the spring 2018 Online Test Conf where she gave another brilliant talk on continuous testing.

Technology’s feet on society’s ground
Fiona Charles

The second keynote, and final talk of the day, was given by Fiona Charles, who asks the question: Is ‘disruptive’ a good term?

At its best, disruption can lead to positive change. However, it can also lead to unintended negative outcomes. Technology reaches everywhere into society but often reflects and favours the privileged. Biases and discrimination can lead to some of these negative outcomes.

It is common for students to have to receive and submit homework via the internet. This is convenient for both the students and the teachers, but what about students who don’t have access to a computer?

Self-service checkouts have led to a more efficient shopping experience for both staff and customers. However, this has led to more fruit and vegetables being stored in plastic packaging so that they can be scanned more easily.

We can now buy items online and return them just as easily. But in a lot of cases the returns are just being sent to landfill. In addition, more parcels mean more delivery vehicles. This has resulted in an increase in traffic congestion.

Fiona also includes some more dangerous examples, like an aircraft which almost crashed because the auto-pilot malfunctioned and the pilot was not able to override it easily. As technology advances, we need to think about how much human intervention should be retained.

There are potential ethical implications of technology, especially as Artificial Intelligence starts to gain prominence. We must question assumptions, biases, objectives and decisions. We must be asking:

  • Should we build this?
  • Is it right to build this?
  • What could go wrong?

Peet Michielsen, Joep Schuurkes and Viv Richards (UKSTAR talks Day 1, Part 2)

Here is the next blog post where I provide summaries for the talks I attended at the UKSTAR software testing conference. This covers the first 3 talks in the automation track. Peet Michielsen talks about how to fix the ‘leaks’ in the test pipeline, Joep Schuurkes shares a few tips on how to choose or design a test automation framework, and Viv Richards introduces us to visual regression testing.

When you’re moving house and the pipeline needs re-plumbing
Peet Michielsen

Peet Michielsen walks us through his journey from one company, where he set up a release pipeline from scratch, to a new company which already had a pipeline in place, but where he found several ‘leaks’. He talked about some of the challenges he faced at the new company to fix these leaks. Finally, he gave some tips for improving the release pipeline.

This talk used the plumbing analogy a lot to explain his points. Generally, he was demonstrating the importance of allowing the project to ‘flow’ and not be held up by delays in testing. Replacing anything obsolete and introducing reliable test automation are a couple of ways to improve this flow. The use of test automation and continuous integration is what makes a software project ‘flow’.

Peet did not refer to any particular tools or technologies that he uses. This helped him demonstrate that his ideas could be applied to most projects regardless of the tools and processes they already use. This seems like a good idea as I am often put off by talks that focus strongly on technologies which are unsuitable for my current work. It can distract people from the actual message. The ideas that were presented in this talk could easily be applied to any test project.

What to look for in a test automation tool
Joep Schuurkes

Joep starts off this talk by discussing some of the issues he had with previous test automation tools and why this led to him building his own framework. This helped solve most of the issues he had been having. His new framework, created using Python, used a mixture of existing commands as well as newly developed ones – gluing well-established tools and libraries together.

Throughout the talk, Joep showed us how he completed some of the following test activities using his framework – Create, Read, Debug, Run and Report. With each activity he provided some great tips that can be used to improve a test automation framework.

Some of my favourite tips include:

  • Naming tests well can clarify the test’s intent. It can also make it easier to notice gaps and patterns in the test coverage and when running the tests.
  • A test should do one thing only; this keeps things clear and focused.
  • When a test fails, can you see the failed step? Do you have the required information? Can you understand the information provided? There is no such thing as too much information, so long as it’s well structured.
  • Never trust a test you haven’t seen fail.

And finally, the most important piece of advice: Forget about the shiny, be impressed but ask … Is your tool helping you to do better testing?
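A couple of these tips can be sketched in plain Python. The `login` function and test names below are hypothetical, invented purely to illustrate; they are not from Joep’s framework:

```python
# Hypothetical function under test, used only to illustrate the tips above.
def login(username, password):
    return username == "alice" and password == "secret"

# Tip: name the test after its intent, so a failure report reads like a spec.
def test_login_succeeds_with_valid_credentials():
    assert login("alice", "secret")

# Tip: one behaviour per test, so a failure points at exactly one cause.
def test_login_fails_with_wrong_password():
    assert not login("alice", "wrong")

# Tip: never trust a test you haven't seen fail - temporarily break the
# input (e.g. change "secret" to "wrong") and check the test goes red.
for test in (test_login_succeeds_with_valid_credentials,
             test_login_fails_with_wrong_password):
    test()
    print(f"{test.__name__}: passed")
```

Even this tiny example shows how well-named, single-purpose tests make gaps in coverage easy to spot from the names alone.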

Spot the Difference; Automating Visual Regression Testing
Viv Richards

Why do we use test automation? It is more reliable as it’s performed by tools and scripts, meaning that the risk of human error is dramatically reduced. However, it does have its issues, especially when testing the UI. A large amount of investment is required, it’s only practical for tests that need repeating, and with no human observation there is no guarantee that the UI is user-friendly.

One popular pattern used in test automation is the page object model. The issue with this model is that the locations and visual attributes of elements are not usually checked. We played a game of spot the difference where there were 2 versions of the same GUI. The audience could easily spot most of the ‘mistakes’. There were about 10 in total, but only 4 would have been picked up by test automation using the page object model. The differences missed included additional spaces between elements, text styles and fonts, and changes to colours or images on the page.
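A minimal sketch of why the page object model misses these differences (the class, selectors and driver below are hypothetical, not from the talk): the page object drives and checks elements, but asserts nothing about their visual presentation.

```python
# A minimal page object sketch with hypothetical names; no real browser driver.
class LoginPage:
    def __init__(self, driver):
        self.driver = driver

    def submit(self, username, password):
        # The page object finds elements and drives them...
        self.driver.type("#username", username)
        self.driver.type("#password", password)
        self.driver.click("#submit")
        # ...but nothing here checks fonts, colours, spacing or position,
        # which is exactly the gap the spot-the-difference game exposed.

class FakeDriver:
    """Stand-in driver that just records the actions performed."""
    def __init__(self):
        self.actions = []
    def type(self, selector, text):
        self.actions.append(("type", selector, text))
    def click(self, selector):
        self.actions.append(("click", selector))

driver = FakeDriver()
LoginPage(driver).submit("alice", "secret")
print(driver.actions)
```

As long as the three elements exist and respond, these tests pass, no matter what the page actually looks like.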

Viv then goes on to demonstrate how a screenshot of a GUI can be compared against previous versions of the GUI as part of test automation so that the software team can be alerted to minor changes in the software a lot sooner. These tests, run repeatedly on future versions of the application, can bring additional value to the software project.
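A toy sketch of the underlying idea (assuming screenshots have already been loaded as grids of RGB pixel values; real tools of the kind Viv demonstrated do far more, such as ignoring dynamic regions and producing diff images):

```python
def diff_screenshots(baseline, current):
    """Return coordinates of pixels that differ between two equal-sized
    'screenshots', represented here as 2D grids of RGB tuples."""
    changed = []
    for y, (row_a, row_b) in enumerate(zip(baseline, current)):
        for x, (px_a, px_b) in enumerate(zip(row_a, row_b)):
            if px_a != px_b:
                changed.append((x, y))
    return changed

white, red = (255, 255, 255), (200, 0, 0)
baseline = [[white] * 4 for _ in range(3)]   # the approved screenshot
current = [row[:] for row in baseline]
current[1][2] = red  # a one-pixel visual change the page object model misses

changes = diff_screenshots(baseline, current)
print(changes)  # a visual regression test would fail here and flag the pixels
```

The point is simply that comparing pixels catches the category of change that element-level checks cannot.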

The Risk of Forgotten Knowledge

What is the most important thing in your possession right now? What would the implications be if you were to lose it?

Yesterday, I took a flight to Colorado. This is the first time I’ve been to the USA since 2008, and the first time I’ve travelled abroad for work. I am a little nervous, which doesn’t help, given that I am naturally a paranoid traveller. I am the sort of person who checks every minute that I have not lost anything. I will panic if I put my passport in the wrong pocket of my coat or bag and can’t find it later.

While waiting at the airport for the shuttle bus, we noticed a discarded pair of glasses. We all wear glasses, so we understand how essential they are. This then led to a discussion of what is most important, and what item we’d be most devastated about losing.

I’ve purposefully not taken anything sentimental with me on this trip so I don’t have to worry about losing it. Glasses are an obvious answer, but I have brought a spare pair, so it wouldn’t be the end of the world if I lost these. I am not overly attached to my phone either; phones can be replaced and any photos on it have been backed up. Losing my passport would be problematic, but arrangements could be made to get me home safely at the end of my trip. There are items in my possession whose loss would bring me a great deal of hardship, but in most cases this could be fixed, not always easily, and things would get better.

There is something I would be devastated to lose, and which could never be replaced with any amount of money: my notebooks, one of which includes notes from the UKSTAR conference I attended last week. I still haven’t written up or analysed all my notes from the talks. This is knowledge that is currently only stored in two places, my notebook and my memory. Memories fade; this has already started, as it has been a week now.

Knowledge is not just information, it is a representation of our own personal experiences and interpretations of that information. It will differ from person to person, but each person will develop new and different ideas. New ideas develop into new knowledge.

Knowledge is the most important possession we have and must be shared for two reasons. So that others can learn and develop new ideas from it and so that it is not lost and forgotten, even if the original source has not been preserved.

While on my trip I will be continuing my write-ups of my UKSTAR conference notes, which will be shared in a series of blog posts. I’ve also completed several tasks on the 30 days of testability challenge but have not yet completed the write-ups. The next ‘what I read last week’ post will be published on Sunday. I didn’t do much reading last week because of the conference and preparing for my trip to Colorado.


Angie Jones and Anne-Marie Charrett (UKSTAR talks Day 1, Part 1)

Here is the first blog post where I discuss the talks I attended at the UKSTAR 2019 conference. This covers the first keynote by Angie Jones, and the Deep Dive session run by Anne-Marie Charrett.

Keynote 1 – The Reality of Testing in an Artificial World
Angie Jones

This first keynote of the conference was given by the amazing Angie Jones. I confess, I’d already watched this talk once before at the STAREAST techwell conference last year. They make a selection of their talks available online to watch on demand for a few months after the event and this was one of the talks I chose to watch. Angie is such an engaging speaker that, even though I knew the story she was going to tell, I was still on the edge of my seat wondering what was going to happen next.

Angie challenges the misconception that an application doesn’t need testing because the “AI is doing it!”. She produces several examples where machine learning has gone wrong, which may indicate that the application was not tested adequately. This is especially worrying as there may come a point where AI is incorporated into applications where reliability is essential. For example, an application that predicts if a patient is likely to get cancer. Some applications are too important not to test, so we cannot rely on them just working.

So how do we test it? Angie walks us through the process she followed when testing an AI application for the first time.

  • First she learnt how the application works. This is really important as AI will have no pre-determinable results. Therefore, we need to know how it got to this unknown result and test that the AI is calculating the result correctly.
  • Second, we train the system to see if the outcome is correct. Using test automation, we generate large amounts of data. We then test the outcome and see if that matches what we expect to appear based on the data we fed into the system. We repeat this multiple times with different sets of training data to see if the outcome matches what we expect.

AI is all about calculating results that cannot be pre-determined. We should be testing the method for calculating that result, not the outcome itself. If we are putting all our faith in an AI application and relying on it to get the correct result, how can we NOT test it? People often ask if AI is something to be feared. Without testing, the answer is yes, AI is something that should be feared.
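As a rough, self-contained illustration of the two steps above (the ‘model’ here is a deliberately trivial threshold learner of my own invention, not a real ML system): generate training data, train, then feed in fresh generated data and check the outcomes match what we expect.

```python
import random
random.seed(0)

def generate_data(n):
    # Hypothetical labelled data: a measurement is 'risky' above 5.0.
    data = []
    for _ in range(n):
        size = random.uniform(0.0, 10.0)
        data.append((size, size > 5.0))
    return data

def train(data):
    # Toy 'learner': pick the threshold that best separates the labels.
    best, best_acc = 0.0, 0.0
    for t in [x / 10 for x in range(0, 101)]:
        acc = sum((size > t) == label for size, label in data) / len(data)
        if acc > best_acc:
            best, best_acc = t, acc
    return best

threshold = train(generate_data(500))

# Test the *method*: fresh generated data, then check outcomes match
# the expectations baked into the data we fed in.
test_data = generate_data(200)
accuracy = sum((size > threshold) == label
               for size, label in test_data) / len(test_data)
print(f"learned threshold={threshold:.1f}, accuracy={accuracy:.2f}")
```

The assertion is on the method (did training recover the rule the data encodes?), not on any single pre-determined output.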

Deep Dive – API Exploratory Testing
Anne-Marie Charrett

When attending this conference, my main focus was on test automation. However, I also wanted to learn something new. I was attracted to this deep dive session because API testing is something I’ve not done much before and really think I should start doing. Also, with so much focus on test automation, it was nice to learn something new about exploratory testing for a change. After all, both are equally important.

This talk started by examining how the use of mind maps can encourage testers to take a more systematic approach when doing exploratory testing. She started with the GUI and then went on to demonstrate how this same idea can be used to explore the API. She walked us through examples of tests that might be carried out.

This talk was especially useful for API testing novices, as she taught us how basic API commands work, and how they can be used to test the API layer within an application. It was very basic, but useful for getting us started. Exploratory testing is all about learning and gaining experience, and it is an excellent method for learning more about both the application being tested and API testing itself.
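To give a flavour of the kind of basic commands involved (this sketch is my own, not from Anne-Marie’s session; it spins up a throwaway local API so the request has something to hit):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A tiny stand-in API so the exploratory request below has something to hit.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"path": self.path, "status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep the output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# The exploratory loop: state a hypothesis, send the request, then
# compare the status code and body against what you expected.
with urlopen(f"http://127.0.0.1:{port}/users/1") as response:
    assert response.status == 200          # hypothesis: the resource exists
    payload = json.loads(response.read())
    print(payload)

server.shutdown()
```

In an exploratory session each request like this is a small experiment, and the interesting part is where the response differs from the hypothesis.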

Having never attended a deep dive session, I didn’t know what to expect. I really enjoyed the interactivity of the session. Anne-Marie is very good at encouraging audience participation. We were encouraged to suggest tests to run. During each ‘test’, before actually revealing the outcome, she would ask the audience “What’s the hypothesis?”, encouraging us to think and talk about what we’d expect the outcome to be.

Women in Tech, Diversity and Inclusion (UKSTAR Huddle area discussion)

In this post, I will be continuing to share my experiences at the UKSTAR software testing conference. Previously I wrote about the lean coffee event that took place at the UKSTAR huddle area. This was an area designated for chilling out, playing games, meeting new people and discussing various test related topics.

Another discussion event on diversity and inclusion, mainly focused on International Women’s Day, was organised to take place in the huddle area. It was a popular event; the number of attendees was so high that we had to move to one of the other rooms where it was quieter.

The session took a similar format to the lean coffee session. We were given post-it notes and told to write down a few topics and stick them on the whiteboard. We then voted on the topics we wanted to discuss. There were so many people involved that we spent more time on each topic. Normally lean coffee allows about 5 minutes per topic; we spent 15 minutes per topic during the 30-minute session.

Similar to my lean coffee blog post, in this post I am attempting to give a summary of the discussion that took place rather than just presenting my own opinions.

Why do our peers not think diversity is important?

The person who suggested this topic started out by stating why diversity is important. It has been suggested that diverse teams outperform non-diverse teams. She also talked about some unfortunate experiences. For example, she once told some male colleagues about a ‘Women Who Code’ event. The response was “You mean women who can’t code?”. While it was clearly meant as a joke, it was something that has stuck with her.

In my experience, I feel that I am well respected by my male colleagues. Despite this, I do often feel that the issue is not taken seriously. Sometimes it is even joked about, although I’ve never heard any jokes as unfortunate as the one described previously.

I think that the reason some may not see this as an issue is because it doesn’t affect them as much. They don’t know what it’s like to be a woman in a team dominated by men. If someone has never experienced a situation where they are not in a privileged group, how can they understand the situation and its issues?

It can be really scary for a woman starting work in a new team, especially if it is a male-dominated team. However, someone also pointed out that this situation can be just as scary for men. There can be this fear of saying the wrong thing, not knowing what to talk about, or not knowing how to treat women. While some may not understand the importance of diversity, most men don’t want to be seen as ‘macho’ or deliberately exclude women.

The discussion ended with the question “Why do we have to win the respect of men?”, the response was another question: “What is the alternative?”.

I don’t want to be seen as being here because of a quota

This topic was kicked off with the question, “How would you feel if you found out you only got a job because of a quota?” The answer given was ‘insulted’, a sentiment shared by most people present. This is unsurprising as I believe most would prefer to get a job on their own merit.

If I ever found myself in this position, I would strive to prove myself and earn the respect of my peers. I would show that my skills and experience alone show that I was worth hiring.

I then asked the question “Would you turn down a job if you found out you were offered it because of a quota?” I stated that, while I would be offended, it would depend entirely on how desperate I was for that job. Several people in the room agreed that they would be unlikely to turn down an opportunity if they were offered it to meet a quota.

Someone then pointed out that men have advantages that women don’t have. It is often easier for men to progress in certain areas. It is not a level playing field. If there is just one thing that gives women an advantage, why should we not use it?

The discussion then moved on to why there might need to be a quota in certain cases. Sometimes, excuses are made like “No one else applied” or “Don’t know any!”. This is probably where the problem lies. Why are women not applying for these roles? Could there be someone putting them off? No one likes quotas, however sometimes they can encourage employers to actually seek out specific candidates who meet a certain criteria and have the skills required for the role.

Someone suggested the possibility that recruiters may be biased when pre-screening CVs. We are all aware of the infamous AI recruiting tool used by Amazon to screen applicants that was biased against women. It is now becoming more and more common for CVs to have personal details that could reveal a person’s name, gender or race removed before being passed on to employers.


It was great to attend such a lively discussion on gender diversity in certain industries. It was encouraging to have men and women among those who attended. I remember talking to someone afterwards who said that debates like this can go on forever. This is definitely true, as both of these topics could easily have gone on for several hours. We could have continued the discussions all afternoon if the room wasn’t required for another talk. Just the 15 minutes of discussion for each topic was enough to give me a lot to think about.

Attending Conferences and Testers learning to code (UKSTAR Huddle Area – Lean Coffee)

The UKSTAR conference is sadly over, but what an experience it was. As well as attending some amazing talks, I also took the time to see all the exhibitors, meet and speak to so many fellow attendees, and visit the huddle area.

The huddle area included a ‘duck pond’ where anyone could enter a competition to win a UKSTAR water bottle (I managed to win one 😊), several board games, and opportunities to discuss a variety of testing topics. One particular event I took part in was the lean coffee session on the Tuesday morning.

About 6 people were at the lean coffee event; apparently it is suggested that there be no more than 5, but we seemed to be ok with the extra person. To start, we were given post-it notes and asked to write down a few topics and stick them on the whiteboard. We then voted on which topics to talk about. Each discussion lasted about 5 minutes, with the option to extend this if everyone agreed that they’d like to continue the discussion. Because of the small group, everyone was given a chance to say something about each topic.

In this post, I am attempting to give a summary of the discussion that took place rather than just present my own opinions.

Why do we attend conferences? How do we know if they are worth it?

The first topic chosen was “How do we know if teams are learning from conferences?”, although this was merged with another suggestion, “What conference do you plan to attend next?”.

Not everyone can communicate with confidence what they’ve learnt or what their experiences were when attending a conference. So how do managers and businesses know that the investment was worth it? Most mentioned that they were required to either write a report or do a presentation showing what they’ve learnt. I’ve never shied away from presenting my findings to a team – from either personal research, experiences at work, or attending events like conferences.

I suggested that attending a conference can add to a colleague’s personal development, which can improve the way they work. Networking, discussions, and being outside their comfort zone can add to their confidence and communication skills. I’m sometimes worried that I’ll not gain anything from attending a conference, which would be disastrous for myself, the business, and my colleagues who may wish to attend similar events in the future. Fortunately, this has never happened.

We all discussed reasons for attending a conference and how we managed to get support from our managers to attend. I focused on specific talks and what I expected to gain from attending them. The reasons were mainly focused on learning and networking. Andrew Brown, one of the speakers at the conference, was among those present and said that he can often only attend a conference if he is speaking. This highlights the issue where people often only have the opportunity to attend conferences if they work at a company willing to invest in its employees that way.

Andrew gave some brilliant advice during the discussion: “Never go to a conference session where you already know the answer”.

How can we find out how we learn?

This question was the basis for our second topic. It is definitely one that is hard to answer. It is something that a lot of people can take years to figure out.

It was mentioned how it can be difficult to engage with certain colleagues, especially when giving presentations. There is often that one person who struggles to understand, maybe because they are uninterested or don’t find presentations the best way to learn.

Some prefer to read a book or article, some like to listen to podcasts or watch videos, some like to discuss. The preferred methods for learning can be very diverse.

Do Testers need to learn to code?

This was a topic I suggested, so I was asked to open the discussion. I suggested 2 different avenues for discussion – coding for test automation and coding for manual testing.

With test automation it can be easily argued that coding and programming skills are essential. However, with the existence of ‘code-less’* automation tools it may be easy to suggest that we don’t even need to know how to code for that anymore.

With manual testing, technically a tester does not need to know how to code. However, knowledge of basic programming constructs can make it easier for them to understand the changes being made to the application, and therefore improve their testing process.

A couple of people suggested that, even though it isn’t essential for testers to know how to code, having that skill can be good for their career development. Times are changing, which means their job is also likely to change significantly. Knowing how to code can keep a tester’s options open and allow them to keep up with the times.

Someone made the distinction between reading code and writing code. It was suggested that there were huge benefits in including testers in code reviews. For this, the tester only needs to know how to read code.

It is a topic discussed often throughout the testing community, and one that can go in any direction.


This was the first lean coffee session I’d ever attended. Discussions are short and quick and, with no pre-planning, can go in any direction. With a group of attendees who have never met, there can be an interesting mix of diverse opinions. I will definitely attend one again in the future if the opportunity arises.

This is the first of a series of blog posts that will cover my experiences at the UKSTAR conference. Feel free to comment with your own thoughts on some of the topics we discussed at the Lean Coffee session.

*I see the term code-less as a misnomer when it comes to test automation. The code may be invisible to the tester, but it still exists in the background. The code is generated automatically as the automated tests are developed.  
