
SwanseaCon – Experiences of a first time speaker

On Monday 9th September, I attended SwanseaCon. This event was particularly significant because it was also the first conference that I was given the opportunity to speak at in person.

Pre-SwanseaCon meet-up

A 20 minute walk from the hotel took me to the Three Lamps, a restaurant in the centre of Swansea – and the venue for the pre-SwanseaCon meet-up.

My lightning talk was an extension of a 99 second talk I originally gave at a #MidsTest meetup. I compared the testing pyramid to the process of making a patchwork quilt.

The meet-up included 4 lightning talks (one of which was mine) and an opportunity to ask questions after each one. This was a bit of a shocker, as I’d never attended lightning talk rounds that included a Q&A section before.

Fortunately, I love asking questions (and answering them). A surprising range of questions were asked after my talk, some about testing and others about quilting.

I’m not entirely sure why, but I was a lot more nervous about giving the lightning talk than I was about giving the conference talk the next day. Luckily, everyone was friendly and supportive.

Once I was back at the hotel, I wrote up my notes from the lightning talks and published them on my blog. They can be viewed here.

The main event

After a good night’s sleep, it was time for SwanseaCon. The Village Hotel was a wonderful venue, providing plenty of tea, coffee and soft drinks to get us through the day.

A good selection of conference swag was provided, including a t-shirt, notebook (essential for any conference), drinks bottle, stickers, and a rubber duck (not surprising, as one of the sponsors was Rubber Duck Consulting).

The conference included opening and closing keynotes, plus 5 sessions (with 4 concurrent tracks).

The talks I attended were:

  • Flow. The worst software development methodology in history. Ever! (opening keynote)
  • Testing, Agile and DevOps – their roots and evolution
  • Quality not Quantity – Getting value out of test automation (my talk)
  • You’re the Tech Lead… you fix it!
  • Embracing Change: From Tester to Quality Coach
  • The Good, the Bad and the Avoidable SQL Practices
  • Sustaining Remote-First Teams (closing keynote)

I will be publishing my notes from each talk over the next week (with the exception of my own talk, of course).

My own talk

I am really pleased with how my talk went, although there was a slight hiccup at the start. My laptop was out of commission, so I had to use an older laptop, which did not have an HDMI port.

Important lesson – bring your own cables, and highlight what you need to the organisers in advance.

Fortunately, everything was sorted in the end. I suppose the initial technical issues gave me something different to worry about. My relief at getting the problem fixed meant I was less nervous about giving the talk.

Here is the download link for the slides from my talk.

I had some great conversations about my talk while networking. People were also keen to share their own ideas and similar experiences.

It’s also interesting to see other people’s notes and what they got out of the talk. Lizzie Lane was nice enough to share her own notes from the conference, which included my own talk.

Thanks for the wonderful messages…

For the first time, I decided to create a front page for my notes and encourage others to leave a message and sign it. This was signed by people who attended my talk and others who I chatted to during the conference and the pre-conference meet-up.


Fantom Programming, Event Sourcing and Mobile Phones – Pre-SwanseaCon meetup

The day is finally here. It is time for SwanseaCon, the agile development and software crafting conference. It should be quite the event, especially since I am one of the speakers.

Last night I attended the pre-SwanseaCon meetup – an evening full of eating, drinking and 4 wonderful lightning talks. I have already written up my notes from the lightning talks in the form of sketch-notes:

The future of sentient buildings with Fantom programming – Steve Eynon

We were introduced to the Fantom programming language by Steve Eynon in this opening lightning talk. Fantom is the language used to develop SkySpark by SkyFoundry.

We were shown how Fantom was used to develop automated systems, starting with automation in buildings and then moving on to building analytics.

With the way things are growing, it is becoming apparent that artificial intelligence will need to be considered for future development.

How evil is event sourcing? – by Jordon Collier

I’ve not really heard much about Event Sourcing, so this talk by Jordon Collier was definitely an enlightening one.

In his talk, he explained what event sourcing is, why it is used, and its pros and cons.

The question he was addressing, ‘Is it Evil?’ was given a very simple answer of ‘yes’. However, he did go on to say when it could be useful.

The mobile phone is dead! What does this mean for us? – by Leigh Rathbone

What is the problem with the mobile phone? It has been years since we have seen any significant innovation. In fact, even the latest iPhone cannot be defined as cutting edge technology.

Most future ideas about how technology could improve our way of life rarely include mobile phones. There is usually a stronger focus on usability and personalisation. This shows how we need to actually involve customers and get feedback so that this technology can be developed right.

Finally…

Yes, I did say there were 4 lightning talks. No, you have not miscounted, I’ve only included 3 lightning talks on the page of sketch-notes.

The 4th talk was delivered by myself, where I compared the process of making a quilt to the testing pyramid. This was an extension of a 99 second talk I originally gave at a #MidsTest meet-up, which I later wrote up as a blog post, A Stitch In Time Reduces Critical Bugs.

I feel this idea is better demonstrated as a longer lightning talk, especially when I have a block from a quilt I’m making to use in the demonstration.


Communities of Practice – Notes from Ask Me Anything Webinar

I’ve decided to take some time off from writing to enjoy the rest of the summer. Don’t worry, I’m still reading blogs, watching webinars and currently working through a couple of courses at the Test Automation University. I’ve also got to prepare for SwanseaCon, where I will be giving my talk ‘Quality not Quantity – Getting value out of Test Automation‘. There may be the occasional blog post, if inspiration hits, like this one about the Ministry of Testing’s latest Ask Me Anything.

Lee Marshall is an excellent advocate for communities of practice. He runs the #MidsTest meetup, and graciously permitted me to give 2 talks there – one in January and another in July of this year. I was very excited to find out that he was taking part in one of the Ministry of Testing’s Ask Me Anything webinars.

I’ve recently started running my own community of practice sessions. I’ve referred to them as discussion sessions. It is still early days, so the structure still needs some refining. This talk gave me some great ideas for improvements.

I’ve recently tried out sketch-noting. My first attempt was for Angie Jones’ talk, “What’s that Smell? Tidying up our Test Code”. I like having all the key points from a talk on a single A4 page, but my first attempt was rather messy. This attempt went a lot better – thanks to me investing in a ruler – although my handwriting could still do with some improvement.


When should we stop using Record and Playback in Test Automation Development?

On 17th July 2019, I presented my talk on Record and Playback in test automation. It was the first time I had given this talk, and I was thrilled by the positive response. I also enjoyed answering the many questions that followed. This is one of my favourite questions, which I want to answer in more detail.

How long did it take for you to get to the stage where you no longer needed to use Record and Playback?

When I started out using test automation I did have some programming experience; however, this was in Java, not the C# I was expected to use. It had also been about a year and a half since I’d done any programming, so I was a little rusty. My previous experience meant I was able to pick up C# quickly. It wasn’t long before I was only using Record and Playback as a guide rather than relying on it completely.

Using Record and Playback is a choice. I consider my C# programming skills sufficient to not have to use Record and Playback; however, this doesn’t mean I’ve stopped using it. As well as being a great learning tool for someone new to test automation, it also allows experienced programmers to develop automated tests quickly.

Not everything can be recorded. You cannot automate using Record and Playback alone. Sometimes I use Record and Playback, sometimes I don’t. It depends entirely on what I’m trying to automate.

Should we stop using Record and Playback?

Let’s assume you are working in a test team that is full of experienced test automation developers who are proficient programmers. Does this mean you should stop using Record and Playback?

It is completely your choice. If you have the skills and confidence to develop automated test cases without the use of Record and Playback, then you don’t have to use it. However, there are a few things that need to be considered before disregarding it completely.

Help people who are new to test automation development

Occasionally, new people may join the test team, and there may be an expectation for them to take part in test automation development. It may be some time before their skills align with your own.

Attempting test automation development for the first time can be a daunting prospect. Mistakes will be made and time will be wasted, but this should not put people off. Some practice and learning is required before any value can be gained. Inexperience should not prevent people from developing automated test cases.

For the sake of integrating new team members, we should be providing the development team with the option to use record and playback if they choose. This will help encourage new developers to gain the confidence to develop robust, maintainable and reliable automated test cases.

A tool to speed up the creation of automated tests

Another thing to consider is how Record and Playback can be used to make our work more efficient. If there is a tool available that could improve the way you do your job, then there should be no shame in using it.

A calculator is one example of this. It is possible to solve basic arithmetic without one. However, choosing to use a calculator in no way undermines our own intelligence.

Test automation itself is another example. We can still run these tests manually, but automation can drastically improve our testing efforts.

Record and Playback can be used to create tests quickly by auto-generating code required for the test to run. This code can then be adapted to improve the maintainability and robustness of the tests.
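
To illustrate the kind of adaptation I mean, here is a minimal sketch, assuming a Selenium-style recorder in C#. The page, locators and helper method are illustrative, not taken from a real project:

```csharp
// A sketch of adapting auto-generated record-and-playback output.
// Assumes a Selenium-style recorder; all names here are illustrative.
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Support.UI;

class LoginTest
{
    static void Main()
    {
        using IWebDriver driver = new ChromeDriver();
        driver.Navigate().GoToUrl("https://example.com/login");

        // Raw recorded steps tend to look like this: brittle absolute
        // XPaths and hard-coded values, repeated in every recorded test.
        // driver.FindElement(By.XPath("/html/body/div[2]/form/input[1]")).SendKeys("testuser");
        // driver.FindElement(By.XPath("/html/body/div[2]/form/input[2]")).SendKeys("pa55word");
        // driver.FindElement(By.XPath("/html/body/div[2]/form/button")).Click();

        // Adapted version: stable locators, an explicit wait, and a
        // reusable method that other tests can call.
        LogIn(driver, "testuser", "pa55word");
    }

    static void LogIn(IWebDriver driver, string username, string password)
    {
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
        wait.Until(d => d.FindElement(By.Id("username"))).SendKeys(username);
        driver.FindElement(By.Id("password")).SendKeys(password);
        driver.FindElement(By.Id("login-button")).Click();
    }
}
```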

Your Choice

I’ve always said that Record and Playback must never be the sole method for developing automated test cases. If you do choose to use it, then you must also take the time to examine the auto-generated code and adapt it accordingly. Tests developed using record and playback alone will be unreliable and difficult to maintain.

Record and Playback is a tool that can be used to help us develop automated tests. It is your own choice if you decide to use it or not. If you can develop automated tests without it then you don’t have to use it.

I will be presenting my talk ‘The Joy of Record and Playback in Test Automation’ at Test Bash Manchester and TestCon Europe later this year.

Main image from http://www.publicdomainpictures.net


Testers, Please Speak to the Developers!

This blog post is based on a 99 second talk I gave on 17th July 2019 at the Birmingham #MidsTest meetup.

Developers and testers working together – www.publicdomainpictures.net

Once, a change had to be made to the software. This particular change was something that could not be controlled via the user interface. It was also not possible to observe the change via the user interface.

When we think about testability, if something cannot be observed or controlled via the user interface, this would mean that the change was untestable – right?

There are other ways.

Don’t rely on the User Interface

Sometimes a change might be made to the application that isn’t user facing. It might be something that the user should not change or view for safety or confidentiality reasons.

From a usability and user experience point of view, being able to see and interact with the user interface is essential. However, there is much more happening beneath the surface that the user never sees. There is likely to come a time where this needs to be tested.

Understanding Requirements

Contrary to popular belief, developers and testers should not be enemies. They can help each other. A Whole Team Testing approach is being discussed throughout the testing community. Testers can help developers and developers can help testers.

When there is a new work item or change request, encouraging testers and developers to discuss the requirements early on can avoid any misunderstandings later on. A failed test because either the developer or tester did not understand the requirements wastes time.

Making an application testable, making a defect fixable

Communication can also be used so that both tester and developer fully understand what is required to complete a work item. A work item should not be considered complete until it has passed testing by a tester.

If something is not easily testable, then the developer needs to make it testable. To do this, the developer needs to know what the tester needs.

If a defect is found while testing, then the tester needs to provide enough information for the developer to fix the defect. To do this, the tester needs to know what information the developer needs.

Introducing extra logging

When told a change had to be made to the application that could not be controlled or observed via the user interface, I started out by talking with the developer.

We discussed the requirements to make sure we both understood the change and why it was needed.

We then discussed what we needed to fully implement this change. We decided that adding some additional logging would help with testing, and discussed what information was required and when.

Extra logging helps both the developer and the tester; both benefit from the information it provides. The tester gains a better understanding of what is happening beneath the user interface. The developer can also use this information to help fix any defects found.
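
As a minimal sketch of the idea (the class, log format and file name are my own illustration, not the actual application):

```csharp
// A minimal sketch of logging a change that is invisible at the UI level,
// so that a tester can observe it from the log file. Names are illustrative.
using System;
using System.IO;

class CalibrationService
{
    public void ApplyCalibrationOffset(double newOffset)
    {
        double oldOffset = ReadCurrentOffset();

        // The change itself cannot be seen or triggered from the UI...
        WriteOffset(newOffset);

        // ...so log enough detail for the tester (did it happen, and with
        // what values?) and the developer (what state preceded a defect?).
        File.AppendAllText("calibration.log",
            $"{DateTime.UtcNow:O} calibration offset changed {oldOffset} -> {newOffset}{Environment.NewLine}");
    }

    private double ReadCurrentOffset() => 0.0;  // stubbed for the sketch
    private void WriteOffset(double value) { }  // stubbed for the sketch
}
```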

Mutual Understanding

By speaking to the developer before the change was implemented, we were able to reach a mutual understanding. We both agreed on the requirements and what was needed to make the change testable.

Agreeing on requirements early on can reduce delays later on. Making sure that something is testable improves the quality of the application. Working together improves the efficiency of the entire development process.


Shift Left or Shift Right – Discovering what is in the bottle

“The problem is not that testing is the bottleneck. The problem is that you don’t know what’s in the bottle. That’s a problem that testing addresses”.

Michael Bolton

On 11th June 2019, I watched the Ministry of Testing Ask Me Anything Webinar on Shift Left and Shift Right. This blog post is a summary of what I learnt from this webinar and my own interpretation of what shift left and shift right is.

A new feature has been completed and it is ready to be released. But, there is something preventing this from happening – testing!

In a lot of cases, there is this time between feature completion and feature delivery where testing takes place. The problem with this is that it leaves little time for fixing any defects found during testing. If the tester finds a critical defect, the team are faced with a difficult decision – either fix the bug and delay the release, or release the feature with the bug.

Graph showing how testing effort peaks just before the release of an application or feature. This is a scenario where there is no shift left or shift right.

If time and testing effort were plotted on a graph, there would be a massive peak towards the end of the development process. Introducing shift left and shift right allows that peak to be smoothed out through process improvement.

Shift Left

Shift left involves introducing earlier testing tasks that ease the burden of testing that tends to build up just before feature release. Tasks can include earlier reviews of requirements and documentation, plus earlier planning. There is also the addition of earlier testing, particularly through other layers of the testing pyramid (service, API).

The main purpose of shift left is improving the testing process – speeding up the time between feature completion and feature release. This can be made possible through earlier testing, reviews and planning, and the introduction of test automation.

Shift Right

Unlike shift left, shift right is not so much about actual testing. Shift right is about gathering information post-release. This information is then fed back into the testing process. Before watching the Ask Me Anything webinar, I didn’t know anything about shift right. When first hearing the term, I imagined post-production testing. I’ve always been wary of this, as there can be issues when testing with live customer data.

In hindsight, it makes sense that shift right isn’t about post-release testing. After all, shift left isn’t just about testing earlier but about gathering information, planning and preparing for testing earlier. We review requirements, build an understanding of the application and what needs to be tested, and develop test cases. This information can be used to improve the testing process – the tests can be run earlier, quicker and more efficiently.

So, similar to shift left, with shift right we are aiming to learn more about the application and understand how it is used by actual customers. We ask questions, analyse what users are actually doing and identify what we missed. Shift left will have already improved the overall testing process; shift right can make it even better. We take the information gathered post-release and feed it back into the testing process.

Implementing shift left and shift right

Both concepts require communication. The difference is who we communicate with.

The concept of shift left is already well established throughout the software development industry. It is also something that should be relatively easy to implement, especially when compared to shift right. Shift left can be achieved by speaking to other people within the team – developers, testers, designers, product managers.

For shift right, we need to collaborate with colleagues who we may not directly interact with on a daily basis. As a result, they may not fully understand what information we need, and why we need it. The difficulties with engaging with colleagues who are not part of the immediate team is partly why shift right can be so hard to implement.

To implement shift right, we need information about how the end-user is actually using the system. What parts of the application are they using the most? How are they using it? The information is most likely already available, it is just a case of knowing what we need and asking for it (often easier said than done, especially in larger organisations). Customer support or help desk is a good place to start. Help desk technicians speak to customers on a daily basis. They can provide detailed accounts of customer issues. These are issues that were probably missed because they were not covered in the original test plan. Data scientists are also worth speaking to. Data retrieved through analytics and monitoring can be used to identify user behaviour.

The bottleneck

Why do we need shift right? We could probably achieve an efficient testing process with shift left alone. But no matter how great something is, it can always be better. By using new information about user behaviour, we can make an already effective process even more effective.

As Michael Bolton says, testing is not the bottleneck – we just don’t know what is in that bottle.

No amount of testing will ever completely reveal what is in that bottle. With shift left and shift right, we can discover much more than we already know. We can use that information to reveal even more hidden information.

Additional Resources

Ministry of Testing Club – Shift Left and Shift Right discussion
Testing Ask Me Anything – Shift Left, Shift Right – Marcus Merrell

Main image taken from https://www.publicdomainpictures.net


Automating BDD Scenarios using SpecFlow (London Tester Gathering Workshop 2019)

On 26th June 2019, I attended the London Tester Gathering workshops. The workshop I’d chosen was ‘Automate Scenarios with SpecFlow’. I chose it because I’m hoping to start using SpecFlow on my current test automation project.

The workshop was run by Gáspár Nagy, the creator of SpecFlow, self-proclaimed BDD addict and editor of the BDD monthly newsletter (I’ve already subscribed).

My current project

Testing pyramid: UI testing is only the tip; more testing levels exist. https://martinfowler.com/bliki/TestPyramid.html

I’m not new to test automation. I’ve already developed a series of automated end-to-end UI tests using Ranorex. These tests are designed to provide broad test coverage of the main features in the application. The steps are designed to mimic the process that a typical user is likely to follow. Running these tests allow us to find out if the most commonly used features in the application work as expected.

We also run a series of manual tests that cover individual features more deeply. Ranorex is great for end-to-end testing, but automating more in-depth test cases was inefficient and brought little value to the project. I believe that these tests would be better suited to behaviour driven development, which in turn can be automated using SpecFlow.

By attending the workshop, my main aim was to learn how to use SpecFlow. In addition to this, I hoped to understand how it can be used to improve my current testing strategy. I didn’t want to just include automated end-to-end UI tests. I wanted to dig a little deeper into the test pyramid and cover other testing levels with test automation.

What is BDD?

In order to use SpecFlow, you need to understand what BDD is, so the workshop started with a discussion around exactly that.

The scenarios are written and agreed on before the development takes place

BDD stands for behaviour driven development. It encourages collaboration between testers, developers and other stakeholders. All requirements should be fully understood and agreed on before any development takes place. This allows for an earlier feedback loop, where any questions or confusion are cleared up before the scenarios are formalised, ensuring that all parties fully understand what work needs to be done.

The advantage of BDD is that its tests are designed to show how the expected behaviour aligns with the product. The development of a feature is designed to focus on the user’s expectations.

The scenarios are written in a common language that allow anyone in the team to write tests, making it easier to document and verify the tests. It also ensures that there is a shared understanding of the requirements across the entire team, not just those who understand code.

How is BDD used in SpecFlow?

SpecFlow uses the Gherkin language in its scenarios, which are designed to show the feature’s expected behaviour. Gherkin breaks down the scenario into Given, When, Then steps. This language ensures that the scenarios can be understood by anyone.

The format used in SpecFlow will look something like this:

Example of a scenario in SpecFlow. Taken from one of the exercises used in the workshop.

Then, for each statement in the scenario, some code is written that will run the steps required for that statement.
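
To give a flavour of how the two halves fit together, here is a minimal sketch of a scenario and its step bindings. The scenario and names are my own illustration, loosely modelled on the workshop’s pizza theme, not the actual exercise:

```csharp
// A sketch of a SpecFlow step definition class. The Gherkin scenario
// lives in a separate .feature file and might read:
//
//   Scenario: The menu lists all available pizzas
//     Given the GeekPizza menu page is open
//     When I count the pizzas on the menu
//     Then 12 pizzas should be displayed
//
using TechTalk.SpecFlow;
using Xunit;

[Binding]
public class MenuSteps
{
    private int _displayedPizzas;

    [Given("the GeekPizza menu page is open")]
    public void GivenTheMenuPageIsOpen()
    {
        // Open the menu page, e.g. via a WebDriver - omitted for brevity.
    }

    [When("I count the pizzas on the menu")]
    public void WhenICountThePizzas()
    {
        _displayedPizzas = 12; // would be read from the page in a real test
    }

    [Then(@"(\d+) pizzas should be displayed")]
    public void ThenPizzasShouldBeDisplayed(int expected)
    {
        // The (\d+) capture group is passed in as the 'expected' parameter.
        Assert.Equal(expected, _displayedPizzas);
    }
}
```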

Automation

After covering the basics of BDD and scenarios, we went through a series of exercises designed to encourage us to use SpecFlow and understand how it works. Each exercise had its own Visual Studio solution, containing all the resources needed to complete it. The application being tested was a pizza website called GeekPizza.

We first created a basic test that checked the number of pizzas displayed on the menu. There was an additional bonus exercise to try out at home. We were also encouraged to think about how we would test that the automation worked correctly.

The second exercise introduced a data table containing a list of items which need to be checked while the test is being run, as in the sketch below. The third exercise was designed to show how to split up feature files and step definition classes.
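
As a rough sketch of how a Gherkin data table reaches the step definition code (again with illustrative names, not the actual exercise):

```csharp
// A sketch of consuming a Gherkin data table in a SpecFlow step.
// The feature file side might read:
//
//   When the order contains the following items
//     | Name       | Quantity |
//     | Margherita | 2        |
//     | Pepperoni  | 1        |
//
using System.Collections.Generic;
using TechTalk.SpecFlow;

[Binding]
public class OrderSteps
{
    private readonly List<(string Name, int Quantity)> _order = new();

    [When("the order contains the following items")]
    public void WhenTheOrderContainsItems(Table table)
    {
        // SpecFlow passes the table as a Table object; each row is
        // accessible by its column header.
        foreach (var row in table.Rows)
        {
            _order.Add((row["Name"], int.Parse(row["Quantity"])));
        }
    }
}
```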

The next set of exercises showed us how to use SpecFlow for web automation. We only had time to work on the first exercise, but we have enough information to help us with the remaining exercises.

Final thought…

This workshop provided an excellent introduction to Behaviour Driven Development, which is essential for SpecFlow. All the exercises, even the ones we’d already done, included bonus tasks so there is plenty to work on at home. The workshop provided everything we needed to really practice and understand SpecFlow.

Gáspár is definitely the person to go to if you need help with writing BDD scenarios or automating them using SpecFlow. I strongly recommend going to one of his talks, workshops or courses if you need to learn more.

Main image taken from http://www.publicdomainpictures.net


Collaborate Bristol Part 4 – Talks by Catalina Butnaru and Eriol Fox

What stands in the way of Ethical AI?
Catalina Butnaru

Who do we design for? It is probably not who you think (or want). We want to design for the end-user, but we are often promoting the views of the business stakeholders. If they don’t approve of something, then it can’t be delivered.

There are several false beliefs with AI:

  • AI can achieve superhuman intelligence
  • AI can be ethical

These are both false, and any attempt to achieve them will produce ethical zombies – something that cannot think for itself.

Designers need to account for the ethical design of AI applications. To achieve this, several ethical principles need to be established:

  • Privacy
    The ability to be switched off at the request of the user
  • Well being
    Deploying the application doesn’t harm a human (physically or mentally)
  • Accountability
    The user is able to report on unfair outcomes
  • Transparency
    It must be clear to the user that the application uses AI. It must also be clear how the AI makes its decisions
  • Awareness of Misuse
    It should be clear that the system can be misused, how it can be misused and the user should be able to report this when it has happened

Only when these principles have been implemented can an MEP be achieved – a Minimum Ethical Product.

Diverse representations in design and awkward conversations with colleagues
Eriol Fox

There is no such thing as a completely neutral tool. Everyone is guilty of unconscious bias, which can have an effect on the design of products. Lack of representation of certain demographics can also lead to misunderstandings. To avoid this, we need to start having these awkward conversations so that there is a more accurate representation. Reach out to users and include them, so that there is a better understanding of what they want and need.

Stock photos that don’t represent real people, forms that only allow official names or male/female genders, providing the option for ‘doctor male’ and ‘doctor female’ instead of just ‘doctor’ (why?). The list goes on.

These have become known as edge cases – people we decide we don’t care about, or who don’t represent the main users. They are excuses we make when we don’t want to discuss certain people. Instead, we should use the term ‘stress cases’ – cases that need more attention.

Applications should be made for anyone to use, not just those who we see as ‘normal’.

Several books were recommended. I’m currently reading ‘Technically Wrong’. A lot of the examples used in the talk are mentioned in this book, and I strongly recommend reading it. I’ve already ordered ‘The Politics of Design’ on Amazon.

Final Thoughts…

Positive user experience and collaboration are essential in software testing. With all the software testing events that I take part in, it is good to step back and think things through a little differently. Collaborate Bristol 2019 gave me an opportunity to do just that. I now have new avenues of research to explore, which will help expand my knowledge and experience in software testing.

Thank you Simon Norris and the other organisers at Collaborate Bristol for an enjoyable and informative day.


Collaborate Bristol Part 2 – Talks by Jon Fisher and Georgia Rakusen

Falling between the cracks
Jon Fisher

Could a product have the capability of killing someone?

Three real life examples were given:

  1. Chernobyl
    A mixture of poor design and human behaviour led to the core in one of the nuclear reactors exploding. This was caused by an optimising violation, where someone attempts to break the rules with the intent of achieving something good. In this example, the engineers wanted the safety test to pass, so they broke crucial safety rules to do so. In total 31 people died (if you believe the official statistics).
  2. Railway Safety
    Unfortunately, I did not write enough down to fully remember or understand this particular scenario, but it involved someone working at a computer performing repetitive tasks. A chain of events led to the person at the computer making a mistake due to the repetitiveness of his work. The railway line became fully electrified while an engineer was doing maintenance work. Fortunately, no one died. Unfortunately, the engineer had to have both his hands amputated.
  3. Ethiopian Airlines Flight 302
    The cause of this plane crash is still under investigation; however, it is generally believed to have been caused by a sensor recording the wrong flight angle. The computer decided to dip the plane to correct this angle. The pilot noticed this and attempted to stop the plane from dipping. The pilot and the computer were fighting each other – the computer won the fight, and 157 people died.

The Swiss cheese model was mentioned as a way to show that there will always be several holes in the design. Accidents can happen when those holes are perfectly aligned.

When designing a product, the desired outcome is to deliver value to the customer. Is there an obsession with value? Are we even aware of the potential risks and pains involved when delivering that value?

Humans will try to do things the tech team believed they shouldn’t and wouldn’t do. Humans are unpredictable – they will probably do the unexpected. No matter how many levels of defence there are, there will always exist that perfect chain of events that can result in catastrophe.

I have one observation about the examples mentioned above. We have 2 situations where a human was trying to fight the system. In one case the human won, resulting in Chernobyl. In the other, the system won, resulting in the Ethiopia plane crash. Do we design to allow a human to take over when the computer has got it wrong? Or do we design to prevent a human taking over, so they don’t do something stupid?

Web 3.0: How blockchain will change the way we interact with one another
Georgia Rakusen

The world is full of centralised systems that control everything we do. What is the problem with centralisation? Everything is controlled by a central organisation, who have all the power. All information is controlled by the central power, which can create questionable integrity.

Centralized vs decentralized vs distributed processing

Blockchains allow information to be stored across a network of computers. Because the information is not stored at a central location, it is not owned by a single person or company. Multiple people are encouraged to cooperate to verify that the information and transactions are valid. Since the information is stored and checked by multiple sources, the overall system has better integrity.

A few examples were given where such a system has been beneficial.

Civil – The journalism industry is reliant on ad revenue, which influences content. As a result, we have no idea what information is correct or not. A decentralised system of co-ownership and participation can help build an industry with more integrity. Members have to follow a code of conduct and can be voted out if that code is broken.

OpenLaw – Normally, legal contracts are controlled by a lawyer, which can make any legal process slow and cumbersome. Instead, legal agreements are created and signed on a blockchain. Without any central lawyer, it is easier to raise disputes. Ultimately, all parties involved have to agree.

uPort – an open identity system where personal information can be easily transferred to new platforms. The user has better control over what information they want to share, and what information they want to hide.

I found this great video, which explains what blockchain is better than I can.

I hope you enjoy reading my summaries. I find it’s a great way to review my notes and record my own interpretation of the talks. The next post will be about the talks by Gavin Strange and Hilary Brownlie.


Collaborate Bristol Part 1 – Talks by Onkardeep Singh MBE and Juliana Martinhago

On Friday 21st July 2019, I attended Collaborate Bristol, a UX and design conference. This is the second time I’ve attended this conference and, like last year, I learnt a great deal from it. I am normally so focused on software testing that it is easy to forget the importance of the user experience. I definitely encourage others to research alternative subject areas that may offer a different outlook on their main interests.

I was pleasantly surprised to find myself on the front cover of the program – in a photo taken of the audience last year, you can just see me on the second row.

In total there were 8 talks on varying topics. In this blog post, I am going to start by writing up what I learnt from the first 2 talks – by Onkardeep Singh and Juliana Martinhago.

Being passionate, not precious, about your work
Onkardeep Singh MBE

Passion – intense desire or enthusiasm for something
Precious – something that is of great value that must not be wasted.

This first talk explored the workings of the mind. As someone who has always struggled to understand the basic concepts of psychology, I fear this talk may have gone a little over my head. However, it was still an interesting talk and I’m going to do my best to provide my own interpretation.

During this talk, Onkardeep asked the audience a couple of questions:

  • Thoughts and feelings come before an action – true or false?
  • Humans are unique because we are in control of our thoughts and actions – true or false?

The responses to these questions were mixed. The truth is there is no concrete answer. It is quite common for someone to consciously think before they act, however there often comes a time where that same person might run on autopilot. Sometimes we have control over our actions, but not always – mistakes can happen.

Our actions may be better explained by what is most important to us. If we detach ourselves then our actions aren’t affected as much by our thoughts. When we see something as precious, we see it as being of great value. If we see something as valuable, then we are more likely to have strong thoughts and feelings about it. These thoughts and feelings can affect the way we act. By distancing ourselves from something, not seeing it as precious, we are less likely to have that strong reaction.

We need to be passionate about our work, and have that intense desire for things to go well. But we should avoid being precious about it, so that we don’t react too negatively when things go wrong.

Building great products and successful teams
Juliana Martinhago

Juliana is a product designer at Monzo – a banking app which I’ve never used and knew very little about until this talk.

Monzo was presented as a bank that aims to make banking easier, removing the frustration normally associated with traditional banks. This is achieved by having a strong focus on improving the user experience.

At Monzo, the teams are made up of ‘squads’ – small teams with a shared goal. They are formed around outcomes instead of features. This seems like a good idea, as a feature may fail to achieve the desired outcome. Focusing on an outcome means that alternative ideas can be explored.

I can’t remember if the Spotify model was mentioned or if it’s used at Monzo; however, I do know that this also uses ‘squads’.

They start each stand-up by asking the question: what is the most impactful thing we can do today to achieve X? This allows a backlog of ideas to be developed that could be used to achieve whatever X is (the outcome).

One feature available in Monzo is ‘labs’, which is used to test new features. The user is able to switch specific features which are still in development on or off. Customers are aware that the feature is still a work in progress, but are given the opportunity to test it out early and provide early feedback.

The entire model used at Monzo is aimed at providing something meaningful for the customer, which provides a banking app with a vastly improved user experience.

I will continue publishing my write ups of the Collaborate Conference talks over the next couple of weeks. Next up will be ‘Falling between the cracks’ by Jon Fisher and ‘Web 3.0’ by Georgia Rakusen.


A Stitch In Time Reduces Critical Bugs

On 19th July 2019, I attended the #MidsTest meetup in Coventry where I gave my second 99 second talk. This time, I brought a prop – a block from a quilt I’m currently making. This blog post is based on the talk I gave.

Tweet about my 99 second talk, including a photo of me giving the talk

One of my hobbies includes sewing. At the moment I’m working on a patchwork quilt which will be a wedding gift for my sister-in-law who is getting married in August.

A patchwork quilt is made up of hundreds of small pieces of fabric, sewn together to create blocks. These are then sewn together to make the completed quilt. The main image for this post is one of several blocks which will be included in the final quilt.

You’re probably wondering where I’m going with this!

Unit and Integration Testing

Those small pieces of fabric that make up the quilt – rectangles, squares and triangles – have to be unit tested before being used to make the quilt. Any that have not been cut to the correct shape and size could result in a major bug finding its way into the completed quilt.

Once the ‘units’ of fabric have been tested, they are sewn together into smaller blocks. Before the blocks are sewn together, they have to be integration tested. Incorrect seam widths or the wrong side of the fabric being used are common bugs that can affect the overall design of the quilt.

Saving time by finding defects earlier

These smaller blocks get stitched together to make bigger blocks, which are sewn together to make even bigger blocks. Eventually, all the blocks are sewn together to complete the entire quilt. Each block was integration tested before being used to make a bigger block.

All the testing that takes place early in the quilt’s development helps reduce the risk of more critical defects being introduced later on. Additionally, bugs found in the smaller blocks are a lot easier to fix than ones found in the bigger ones. The stitches have to be unpicked and the pieces of fabric sewn back together. Defects in smaller blocks are quicker to fix because there are fewer stitches that need unpicking – there are fewer dependencies.

All that testing, why are there still bugs?

Unfortunately, no amount of testing will completely eliminate all bugs. It helps drastically reduce the number of defects that find their way into the final product – but doesn’t eliminate them altogether.

No matter how careful I am, the quilts I make all have minor flaws in them. However, these are minor issues that don’t significantly affect the design. Any major defects that could have affected the quilt’s design were eliminated early on. If they had been found later, once the quilt was complete, they would have been a lot harder to fix.

Why don’t I fix every defect? If I stopped to fix every defect, there is a risk that the quilt won’t be completed in time for my sister-in-law’s wedding. In software development, the risks are normally a lot greater than that. Delaying a release costs the business money, sometimes more than if a defect was released to the live environment.

It is not always feasible to fix every single defect – especially if they are minor ones. A little more effort on unit and integration testing can reduce the number of bugs that need to be fixed later.


AMA – Shift Left and Shift Right (A quick summary)

Last night I watched the Ministry of Testing’s Ask Me Anything about Shift Left and Shift Right. Some great questions were asked, and I learnt a lot from it.

The most enlightening moment for me was when I actually learnt what shift right means. I thought it was just testing in production, but it’s so much more than that. It is learning about the software post-release, based on actual data and user behaviour, and feeding this information back to improve the testing process.

Shift left is already well known and well used by testing teams everywhere. However, without shift right, we are unaware of what is happening post-production. The data collected post-production could be used to make our testing efforts more efficient.

We can make testing great using shift left, we can make it greater using shift right.

The full webinar can be watched here:
https://www.ministryoftesting.com/events/testing-ask-me-anything-shift-left-shift-right-marcus-merrell

Further questions and discussion on the subject can be viewed here:
https://club.ministryoftesting.com/t/ask-me-anything-shift-left-shift-right/26353

I am currently sorting through all my notes from the AMA. I will try and publish a proper post about what I learnt next week.

Main image taken from https://www.publicdomainpictures.net


I’ve completed the TestProject Test Automation Superpowers Challenge!

I was very excited when TestProject announced their test automation challenge. The main reason for this is that it required us to combine API and UI tests. API testing is something I have very little experience in. I’ve been wanting to expand my software testing skills to include API testing for some time.

UI Test

Since most of my test automation experience is with UI testing, I started out by creating a basic UI test using TestProject’s record and playback feature. This test launched Wikipedia, entered the search term ‘Software testing’, clicked return, and then checked the heading of the web page opened in the browser.

I then adapted the test so that the parameter WikipediaSearchTerm is set at the start of each test. After submitting this search term, the test checks that the firstHeading matches the WikipediaSearchTerm.

UI Test result
UI Test as shown in the test editor

API Test

This was a little harder since I’ve never created any automated API tests before. I started out by copying what was done in the video (shown in the original TestProject blog post that announced the challenge).

I then adapted the query, JSON path and validation to match what I searched for in the UI test. I changed the query so the search term WikipediaSearchTerm was used, the JSON path searched for the title instead of the snippet, and the validation checked that the WikipediaSearchTerm matched the response.

Validation for the API Test
URL, Query and Json path for the API Test
API Test in the editor
API Test Result

Combining the API and UI test

By using the same parameter, WikipediaSearchTerm, in both the UI and API tests, I was able to combine the 2 tests very easily. I was able to confirm that the API response matched the actual result returned when the same search term is entered via the UI.

Full test report. The API test, UI test and the combined API and UI test were run (the combined test is cut off but the result is visible on the right).
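
For anyone curious what the combined check boils down to, here is a rough sketch of the equivalent logic in plain C#, outside TestProject’s editor. It assumes the public MediaWiki search API; the UI half, which TestProject handled for me, is only indicated in a comment:

```csharp
// A sketch of the API half of the combined test: query Wikipedia's search
// API and check the first result's title against the shared search term.
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class WikipediaSearchCheck
{
    static async Task Main()
    {
        string wikipediaSearchTerm = "Software testing"; // the shared parameter

        using var client = new HttpClient();
        string url = "https://en.wikipedia.org/w/api.php?action=query&list=search&format=json"
                   + "&srsearch=" + Uri.EscapeDataString(wikipediaSearchTerm);
        string body = await client.GetStringAsync(url);

        // The equivalent of the JSON path used in the TestProject API test:
        // query.search[0].title
        using JsonDocument doc = JsonDocument.Parse(body);
        string firstTitle = doc.RootElement
            .GetProperty("query")
            .GetProperty("search")[0]
            .GetProperty("title")
            .GetString();

        // The UI half would open the article and compare its firstHeading
        // element; here we simply assert the API result matches the term.
        Console.WriteLine(firstTitle == wikipediaSearchTerm
            ? "PASS: first result title matches the search term"
            : $"FAIL: expected '{wikipediaSearchTerm}', got '{firstTitle}'");
    }
}
```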

Conclusion

The video included in the TestProject blog gave some brilliant basic instructions for creating an API and a UI test. Using this as a starting point, I was able to gradually learn more about API testing and how to use TestProject to create basic UI tests. I look forward to learning more about API testing using TestProject in the future.


Testing Culture – Excuses, Blame and Fear

I came across an interesting article which discussed common excuses that testers make:
http://thethinkingtester.blogspot.com/2019/05/seven-excuses-software-testers-need-to.html

There were 3 which I found particularly worrying:

  • The other tester on my team missed the bug
  • If I log the bug I found in production, I’ll be asked why I didn’t find it sooner.
  • There wasn’t enough time to test

If a tester needs to use these excuses, then the issue might not be with testing but with the company culture.

A company culture where colleagues are encouraged to make excuses, attempt to shift the blame on someone else, or instil enough fear so someone is afraid to report something critical is not a good one.

“There wasn’t enough time to test”

This particular excuse will always exist and it is a valid one. There will be strict release deadlines which can’t always be pushed back. When the software has to be released, it has to be released. The tester has very little control over this. However, it doesn’t mean that the tester is entirely blameless if a critical bug makes its way to live.

It is the tester’s responsibility to ensure that the tests are optimised and prioritised. Tests covering the most essential features should be run first. Tests should also be optimised so that they run as quickly and efficiently as possible.

Occasionally, the tester may be in the unfortunate situation where there isn’t enough time to test even the most essential features. It is also the tester’s responsibility to communicate what the overall test coverage is to those making the decision to release. They need to be aware of what was and wasn’t tested. This allows them to make an informed decision about whether the software can be released or not.

Provided the tester prioritised and optimised their tests, and accurately communicated what was and was not tested, they can hold their head high. They tested everything to the best of their ability with the resources they were given.

“The other tester on my team missed the bug.”

With the increasing complexity of software applications, it is likely that not all members of the team will have been involved in testing every single aspect. However, is it really worth blaming the sole tester who was testing that particular feature?

Instead of blaming a single tester, we should look at the overall testing process. If there was only 1 tester validating this particular feature, maybe assigning additional testers to each feature can prevent bugs slipping through the net.

We may also want to look at how the feature was tested. If the tests were conducted using manual or automated test cases, was this particular scenario covered? If not, then the test case should be reviewed and the test coverage assessed. It may even be worth including some exploratory testing so additional unscripted scenarios are covered.

It should be recognised that we are part of a wider team and we should all be working together to improve the quality of software applications. When a bug slips through the cracks, we must remember that testers are human. They are bound to miss things. It is not possible to investigate every possible decision that could be made when using the software.

“If I log the bug I found in production, I’ll be asked why I didn’t find it sooner.”

First of all, the person blaming the tester needs to realise that it is better that the bug was found sooner rather than later. It will be much worse if a customer finds and reports the bug.

Second, when a defect is found in production, it still needs to be fixed as soon as possible. Reducing the impact of that bug for the customer should be top priority. When a fix has been released, we can start investigating what went wrong. However, the purpose of the investigation should not be to find out who to blame but to prevent a similar situation happening again.

Finally, if there are employees terrified enough not to report critical issues, then what else are they holding back on? Employees will work much better if they are not afraid of management. Honesty should be encouraged so any mistakes can be rectified sooner rather than later. Otherwise, it will be worse for the business.

I will repeat what I said earlier, testers are human. A few bugs are likely to make it through the cracks. Software is becoming increasingly complex, there are just too many ways for the software to go wrong.

Final point…

There is a TV series shown in various countries – The Apprentice. It involves several candidates who are applying for a highly paid job or money to invest in a business. The candidates are split into teams and are given a business-related task. The team that makes the most money from that task wins the round, and one person from the other team is fired. The firing process involves the candidates being interrogated in a boardroom, where they are encouraged to blame others for the failure and defend their own actions. One person is eventually fired.

Personally, I hate this show. It promotes a toxic culture of fear and blame. What does this actually achieve? Isn’t it better to focus on learning from the mistakes, preventing them from being made in the future and improving the process?

If the tester is forced to use any of these excuses, then it might indicate an issue with the overall company culture, not the tester.

Main image taken from publicdomainpictures.net


Abuse Cases – Understanding Motives

Before identifying how a user might misuse an application, we need to understand why a user might misuse the application.

“Sometimes when I try to understand a person’s motives I play a little game. I assume the worst. What’s the worst reason they could possibly have for saying what they say and doing what they do?”

Petyr Baelish, Game of Thrones Season 7 Episode 7, The Dragon and the Wolf

I was not able to attend Test Bash Brighton this year, however I do enjoy reading about other people’s experiences. Today, I came across a blog post by Nicola Owen containing a write up of her notes. Included in this post were the notes for the talk by Claire Reckless and Jahmel Harris, titled ‘United by Security: The Test that Divides Us’. Nicola included a photo of the following slide:


Slide from Test Bash Brighton talk ‘United by Security: The Test that Divides Us’ by Claire Reckless and Jahmel Harris. Photo taken by Nicola Owen.

Abuse Cases vs. Use Cases

I’m already familiar with the concept of use cases, where a specific scenario is created that suggests a way in which the application may be used. The use case demonstrates a method by which the user may interact with a product, and then goes on to demonstrate how the user gains value from this interaction.

An abuse case is a similar concept, except that it suggests a situation that would be undesirable for the business. This is a specific scenario that suggests a way the application may be used, but in a way that would benefit a malicious user. What reasons might a user have for interacting with the application in an unintended way? How might they do this? Would they be successful if they tried?

One thing I really like about the idea of Abuse Cases is that they don’t just look at how a user might misuse the application. They also think about why a user might misuse the application.

Creating Abuse Cases

I decided to have a think about how a user might attempt to abuse the software applications I’m currently working on. I test applications designed to control scientific instruments and analyse data collected from running measurements with those instruments. We often sell to the pharmaceutical industry, which has strict rules about how experiments are conducted and how the data is used. The settings used for a measurement must be logged, and the data cannot be manipulated or deleted.

A user who works for a pharmaceutical company may be under pressure to get a drug approved by the FDA. They may choose to manipulate result data, or change the way a measurement is run so that the results better support the product for FDA approval. They may also attempt to generate fake data so that the FDA can approve the drug sooner.

By understanding the motivation of a malicious user, we were able to identify potential ways a user might attempt to misuse the application. We then investigated whether it was possible for the user to do this.

Unintended Consequences

“Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”

Ian Malcolm, Jurassic Park

During her keynote at the UKSTAR testing conference, Fiona Charles constantly asked the question ‘What could go wrong?’

In this talk, there were many examples of the unintended consequences of technology. Some were caused by a human and directly affected other humans. In these examples, the human will have deliberately misused the application.

There were others that had a negative effect on society and the environment. These were not caused directly by other humans, and wouldn’t usually have been done on purpose. One example was self-service checkouts: to make it easier for a customer to quickly scan items at the checkout, there has been an increase in the amount of plastic packaging used, which has had a negative impact on the environment.

Abuse cases like these usually don’t carry a motive. This makes them a lot harder to understand or anticipate. Despite this, they are still something to think about. The best method would be to question everything – assumptions, biases, objectives and decisions.

Summary

Unlike Use Cases, which need to be successful, an Abuse Case must be unsuccessful.
When people talk about the importance of understanding the end-user, this should apply to both good and bad users. We should also think about the ethical implications of the application and the potential impact it might have on society or the environment.

By anticipating abuse cases in advance, we can create software that is safer and more secure.

Main image taken from publicdomainpictures.net


Star East Lightning Talks 2019

I love lightning talks. I probably get most of my inspiration from watching them. They are short, with no time for anecdotes or elaboration. The speaker has to be brief, to the point. The message needs to be explained in a clear and concise manner, but within a time limit. The listener can take that message and interpret it in any way they want. From that message comes new ideas.

The Star East conference took place between April 28th and May 3rd 2019 in Orlando, Florida. I’ve never been to one of the Star conferences, including this one. However, they always make a selection of talks available to watch online for about 3 months after the event. This includes the lightning talks.

Seretta Gamba

Seretta was one of the speakers at UKSTAR this year, where she delivered her talk “Why Cats are the Best Test Automators”. I’d opted to attend a different talk during this time slot, however it was nice to hear a highlight of the full talk during the Star East lightning talks.

Why are cats the best automators? Seretta provides a few examples of how the personality traits of a cat can apply to test automation engineers. A few reasons given include:

  • They are lazy – they don’t waste time repeating things; instead, they automate them. However, they do recognise the need for manual and exploratory testing when required.
  • They are cunning – they don’t just do something. They study and come up with a strategy to make the tests as maintainable and efficient as possible.
  • They provide feedback – cats tell you what they want and need. A good test automation engineer provides good feedback to management about metrics.

Bob Galen

This talk was about changing our language in order to change our minds.

Our language often creates an us and them attitude:

  • Developers vs. Testers
  • Test Automation vs Manual Testers
  • Management and executives vs everyone else
  • The Customer vs Us

Language can influence our mindset and define us and our surrounding culture. Start using team-based language – doing so can completely change the overall tone in an organisation. Together, we are a network of folks trying to achieve a common goal.

Language should include: We, Partnership, Teams, Shared goals aligned with customer

Angie Jones

Angie asked the internet for unpopular opinions related to software testing.

One thing that came up was “You don’t have to argue with people all the time to be a good tester”.

Testers can have a bad guy persona. This shouldn’t be the case. The best testers understand they are not meant to just find bugs. The best testers integrate with the team. The best testers start testing as early as the design phase, analyzing the requirements for potential defects before any coding has taken place.

There were other unpopular opinions discussed, but this was one that stood out for me.

Jason Arbon

Jason provides the perfect interview question:

A text box and a button labelled ‘count’. The purpose of the button is to count the number of A’s in the string entered in the text box. The tester can run only 10 test cases before release.

How would a good tester approach this scenario? Jason provides several examples of test cases, things to consider and questions to ask – the number of possibilities is infinite. However, it is important for the tester to ask questions, be critical of the requirements and test for the long term.
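
As an illustration – my own sketch, not Jason’s answer – here is one way a tester might spend those 10 cases, assuming the button’s logic is exposed as a `count_a()` function and that only upper-case ‘A’ should count. That assumption is exactly the kind of requirement a tester should question:

```python
# Ten illustrative test cases for the 'count' button. count_a() is a
# stand-in for the real logic; the expectations assume only upper-case
# 'A' counts, which is itself a requirement worth challenging.
import pytest

def count_a(text: str) -> int:
    return text.count("A")

@pytest.mark.parametrize("text, expected", [
    ("", 0),                    # 1. empty input
    ("A", 1),                   # 2. simplest positive case
    ("a", 0),                   # 3. lower case - should it count? ask!
    ("BCD", 0),                 # 4. no matches at all
    ("AAA", 3),                 # 5. nothing but matches
    ("bAnAnA", 3),              # 6. matches mixed with other characters
    (" A\tA\nA ", 3),           # 7. whitespace and line breaks
    ("ÀÂÄ", 0),                 # 8. accented characters - clarify this!
    ("A" * 100_000, 100_000),   # 9. a very long input
    ("1!@aÀA", 1),              # 10. digits, symbols and near-misses
])
def test_count_button(text, expected):
    assert count_a(text) == expected
```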

Isabel Evans

“You could speak at this conference!”

As someone who is just starting out with conference speaking, I found this talk particularly interesting. Isabel talks about how anyone could speak at conferences, and should do so to build confidence. She also gives some great advice for writing submissions that will be accepted:

  • Actually apply – you won’t get accepted unless you apply. If you are unsuccessful, ask for feedback.
  • Spell check – you won’t be accepted if you spell qualtiy wrong. (Yes, I’ve included a typo on purpose.)
  • Make sure that you present evidence to back up your idea.
  • Do not use bad language.
  • Do not assume we know who you are.
  • Most importantly, tell your own story. This is what makes your idea compelling.

In your conference submissions and talks, you should be engaging, interactive and inclusive.

Chandra Arutta

Chandra told the story of the QA Transformation Journey at his organisation, undertaken with the aim of bringing the team together.

The change was centered around building BDD scenarios. The team were taught about the new technology, with additional training provided to help with building domain knowledge. There was a focus on developing a common language which would make it easier for colleagues to provide feedback. The overall aim was to integrate QA practices with developer culture to encourage team collaboration.

Adam Auerbach

Adam answered a few questions that he’d been asked during the conference:

  • What are the best tools for continuous testing?
    • Whatever you have. If you want change, start with the tools you already have, identify their issues, then choose new tools that solve those issues.
  • Should UI tests be included in the pipeline?
    • Yes, but be aware that they are slow, that you need a robust framework, and that time will be needed to maintain them. This is why we have the testing pyramid: triage the tests so that there are more robust and reliable unit tests and fewer slow and expensive UI tests.
  • Do we need Manual Testing?
    • We will always need manual testing. A recommended book is ‘How Google Tests Software’ – even Google has manual tests. Think about how you balance automation and manual testing.
  • Issues with Test Data Management?
    • Test data management is a big constraint. You need to set up synthetic data, as it may not always be possible to use production data.
  • How to include developers in the process?
    • Educate, show them how failed tests and other issues slow them down.

As a final piece of advice: at conferences, ask questions. If there is something you want to know, ask at a talk, ask while networking, or find a speaker and ask them.

Lloyd Roden

In this talk, Lloyd suggested a scenario where a group of testers are playing a game of The Weakest Link. They are each asked: how many tests have you run in the last hour?

  • Tester 1 – 300
  • Tester 2 – 30
  • Tester 3 – 2
  • Tester 4 – 0

Tester 4 – you are the weakest link, goodbye!

On paper, it does look like tester 4 is a terrible tester. Is this the case? Each of these testers had a different view of the perfect test case.

Tester 1 preferred running short tests that took less than a minute each – in some cases, as simple as clicking a button. Testers 3 and 4 preferred running end-to-end tests, which take more time; in some cases, a single test could take longer than the hour.

With this in mind, is tester 1 really the better tester because more tests were run? Is tester 4 the worst tester because they ran fewer tests?

As a metric, number of test cases run is meaningless without context.

Dorothy Graham

Does change always solve problems? When embracing the new, we sometimes forget the old. Just because something has been around for a while does not make it useless.

Dorothy sang us a wonderful song to the tune of “My Favourite Things” from The Sound of Music.

In this song, Dorothy talks about her favourite testing techniques. Just because some of these are old doesn’t mean they should no longer be used. The full lyrics can be found here.

The range of topics that appear in a lightning talk round can be really diverse. With only 5 minutes to speak, it’s amazing how much information you can gain from them. I hope you enjoyed reading my summaries.

Main image taken from https://www.publicdomainpictures.net/en/

Featured post

What I read last week (28th April 2019)

It has been another exciting week. I am pleased to announce that I will be giving a talk at SwanseaCon later this year, on Monday 9th September. My talk will be on test automation and how to gain the most value from it. This brings the number of talks I will be giving this year to 2 – the second being Test Bash Manchester. For someone who is still very new to speaking at conferences, this feels quite overwhelming. I hope I do a good job giving these talks.

This week, on the Ministry of Testing Club forum, there will be a power hour event taking place. Abby Bangser will be answering as many questions as possible on:

  • Enabling DevOps delivery
  • Testing on cloud infra team
  • Starting with Observability

I’ve already submitted a couple of questions. If anyone else has any questions they’d like to ask, they should be submitted here.

Social Media Discussions

LinkedIn post – Does Test Automation find bugs?
A post I shared on LinkedIn about whether it’s possible for test automation to find bugs. I argue that a failed test doesn’t actually find the bug – it takes additional exploratory testing to pin down the exact details. However, test automation does alert the tester to an area of the software that may not be working as required. The post yielded some interesting discussions. Feel free to add your own thoughts.

Podcasts

Test Talks Podcast – Next Generation Agile Automation with Guljeet Nagpaul
In this episode, Guljeet Nagpaul talks about the development of test automation frameworks – their benefits, their challenges, and how they will continue to develop.

Test Talks Podcast – Pushing Security Testing Left, Like a Boss with Tanya Janca
Tanya Janca talks about what security is and why it is important. Security testing is about taking care of and protecting people, and it is important to ensure that there are policies in place to protect them. It is also important that people are not put into a position where they may have to break those policies.

Articles

Riskstorming Experience Report: A Language Shift by Isle of Testing
This article discusses the benefits of risk-storming and how it changes the questions we ask during testing – and when we ask them. Questions that are normally asked post-production, when a bug is found, are instead asked before release. There is a link to another article that explains how to run risk-storming sessions.

Kill Your Darlings – Why Deleting Tests Raises Software Quality by WildTests
Stu at WildTests discusses the limits of testing – we cannot test everything. It is important to prioritize and reduce the tests we need to run to allow the application to be delivered sooner. Priorities can be determined by getting closer to support and development. Customer support can help us understand customer pains better. Development can teach us about the changes that have been made so we understand where the risks are.

Why Your Test Automation Is Ignored – 5 Steps to Stand Out by Bas Dijkstra, Test Beacon
In this article, Bas Dijkstra talks about the phases of test automation that often lead to failure. He then presents 5 ways to improve test automation and better demonstrate the values and benefits.

My Automation’s Not Finding Bugs, But That’s OK by Paul Grizzaffi, Responsible Automation
Paul Grizzaffi was kind enough to share this blog post from last year in response to my LinkedIn post about how test automation rarely finds bugs. Grizzaffi states that even if an automated test doesn’t find any bugs, that does not mean it is valueless. It can still enable rapid release of the product.

3 Qualities You Must Have in Order to Become a Strong Software Tester by Raj Subramanian, Testim
Unlike similar posts I’ve seen where qualities tend to be focused on skills, this list looks at qualities required for personal development. The qualities listed are communication, motivation and education. These qualities are required for a tester to develop new and existing skills, which in turn makes a better tester.

Observability vs Monitoring by Steve Waterworth, DZone
Monitoring and Observability by Cindy Sridharan
While trying to think of questions to ask for the Ministry of Testing Power Hour session (1st May on The Club), I did a little research into observability. One thing I found interesting was the distinction between observability and monitoring. Here are a couple of articles that discuss this.

Other blogs that share lists of test related articles

https://5blogs.wordpress.com/ (daily)
http://blog.testingcurator.com/ (weekly)
http://thatsabug.com/ (weekly)
https://weapontester.com/tea-time (weekly)
https://www.ministryoftesting.com/feeds/blogs

Testing Conferences
https://testingconferences.org/

The Club
https://club.ministryoftesting.com/
A forum for discussing, asking questions, answering questions, and requesting help. Run by the Ministry of Testing.

Feel free to recommend anything that you think is of interest.
Main image taken from http://www.publicdomainpictures.net

Featured post

Do I really need to test this?

In early March I had the following blog post, ‘7 reasons to skip tests‘, published on testproject.io. The post looks at reasons why we may need to cut back on testing so that we can release earlier. It also looks at ways to reduce the test load later in the life cycle by testing earlier and introducing more exploratory testing.

However, I only briefly mentioned the importance of prioritising the tests that we decide to add to the regression test suite. A recently published blog post by Stu at Wild Tests goes into this more deeply, and inspired me to look more closely at how to review and prioritise test cases before regression testing.

What is regression testing?

Regression testing is supposed to confirm that no new, unexpected bugs have been introduced to the software and that existing functionality still works as expected. These tests may be run at various points throughout the development phase, most commonly before a software release. It normally requires a large regression pack to be created, regularly updated, and maintained. Over time, the number of tests included can escalate dramatically – hence the need to regularly review and prioritise test cases.

What should we do with test cases when reviewing them?

When reviewing a test case, there will usually be 3 possible outcomes – Keep, Update or Delete.

  • Keep – if the test case is still required then it remains in the regression test suite.
  • Update – if the test is still required but the requirements have changed then the test case is updated so it matches the new requirements.
  • Delete – if the test case is completely out of date and incorrect, or covers functionality that is no longer included in the software, then it should be permanently removed. Another reason for deleting a test might be that a similar test already exists.

Just because I’ve decided to ‘keep’ some tests doesn’t mean they all need to be run. The remaining tests need to be prioritised so that the tests covering the highest-risk items are run first. Lower-risk tests may not need to be run at all. But if a test case covers a feature considered low risk, should we be planning to run it at all? A test may not need to be deleted, but it might need to be removed.

What is the difference between deleting and removing test cases?

Stu’s Wild Tests blog post makes a distinction between deleting tests and deleting the data tests hold. This means there could be a need for a 4th review outcome – Remove. Deleting a test means it is permanently erased from the regression test suite, including any data included in the test case. Removing a test means it no longer sits in the regression test suite, so it isn’t run in error (which would waste the limited time and resources assigned to testing) – but it still exists. We may wish to remove a test case if it covers a low-risk item; priorities change, and we may need the test case again in the future.
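
As a sketch of how ‘removed, not deleted’ might look in practice – assuming a pytest suite, and not something taken from Stu’s post – a custom marker can keep a test and its data in version control while excluding it from regression runs:

```python
# conftest.py - a sketch of 'removed, not deleted', assuming pytest.
# Marked tests stay in version control with their data intact, but are
# deselected from regression runs unless explicitly requested.
import pytest

def pytest_addoption(parser):
    parser.addoption("--run-removed", action="store_true",
                     help="also run tests removed from the regression suite")

def pytest_configure(config):
    config.addinivalue_line(
        "markers", "removed: kept for reference, excluded from regression runs")

def pytest_collection_modifyitems(config, items):
    if config.getoption("--run-removed"):
        return
    skip = pytest.mark.skip(reason="removed from the regression suite")
    for item in items:
        if "removed" in item.keywords:
            item.add_marker(skip)
```

A low-risk test would then simply gain a `@pytest.mark.removed` decorator instead of being deleted outright.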

Things to consider when reviewing tests

What feature does the test cover? Is the test correct and accurate? Is this feature considered high or low risk?

Before doing anything, we need to make sure we know exactly what the test covers. If something included in the test is no longer required, it should be removed. Likewise, if something high-priority is not included in the test, it must be added.

Furthermore, we must ensure that the data included in the test matches the requirements. If the test and requirements do not match, time could be wasted reporting non-existent defects.

Finally, we must consider the risks. Risk should be determined from the potential impact to the customer if the feature fails, combined with the likelihood of that feature failing. The likelihood increases if a change has been made to the software that may cause the feature to fail.
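
A toy sketch of that idea – the 1-to-5 scales and example scores below are my own illustrative assumptions, not a standard:

```python
# Risk-based prioritisation sketch: risk = impact x likelihood.
# The scales (1-5) and example scores are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    impact: int      # 1 (minor annoyance) .. 5 (customer cannot work)
    likelihood: int  # 1 (area untouched) .. 5 (heavily changed code)

    @property
    def risk(self) -> int:
        return self.impact * self.likelihood

suite = [
    TestCase("export report as PDF", impact=2, likelihood=1),
    TestCase("save measurement data", impact=5, likelihood=4),
    TestCase("change UI colour theme", impact=1, likelihood=2),
]

# Run the highest-risk tests first; very low scores may be candidates
# for removal from the regression run entirely.
for tc in sorted(suite, key=lambda t: t.risk, reverse=True):
    print(f"{tc.risk:>2}  {tc.name}")
```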

Further reading

Kill your darlings – why deleting tests raises software quality
Breaking the test case addiction
7 Reasons to Skip Tests

Featured post

What I read last week (14th April 2019)

Some really big news: I’ve been selected to give a talk at Test Bash Manchester in October. My talk will be on record and playback features in test automation. I will discuss how they can be useful for testers with either very little or a great deal of test automation experience, and demonstrate how tests generated using record and playback can be adapted and improved. It is a really exciting opportunity.

I’ve written articles about this before for testproject.io. I look forward to exploring this subject more throughout the year. In particular, I plan to explore how different automation tools use record and playback and how easy it is to adapt them.

Webinar

Ask Me Anything – Whole Team Testing with Lisa Crispin
This week I watched the brilliant webinar in which the testing community was given the opportunity to ask Lisa Crispin anything they wanted about Whole Team Testing.

All questions that could not be answered during the webinar were shared on The Club and answered at a later date, including an answer to a question that I asked.

The following blog post includes a summary of some of the questions that were asked and answered during the webinar:
https://wildtests.wordpress.com/2019/04/09/lisa-crispin-ama-whole-team-testing/

Podcasts

Test talks podcast – What is Programming with Edaqa Mortoray
Do we actually know what programming is? In this podcast episode, Edaqa Mortoray discusses his book ‘What is Programming’, in which he attempts to answer this question. He talks about how programming is not the lone activity that some think it is – it should be a social activity that includes communicating with fellow stakeholders. This is reflected in the way the book is split into 3 sections: People, Code and You.

  • People are the reason software exists.
  • At the heart of any software is source Code.
  • Behind the screen is a real person: You.

Test talks podcast – Discover The Personality of Your Application with Greg Paskal
In this episode, Greg Paskal suggests that we should not just be looking at the way tests pass and fail. When running daily tests, we should look at their behaviour over time in order to identify issues that are often overlooked. For example, a test may gradually take longer to run. Monitoring this over time could provide an insight into an emerging problem.

Test talks podcast – Chaos Engineering with Tammy Bütow
This is something I’ve not heard much about before. In this podcast episode, Tammy Bütow explains what chaos engineering is and how it has been used to identify issues in a company’s ability to recover from disaster, using methods like fault injection. One example discussed was Netflix’s Chaos Monkey application.

The Good, the Bad and the Buggy – Season 2 recap
A recap of all the previous episodes. This episode discusses how different technology has improved the user experience and changed the way we do things in certain industries.

The Guilty Tester 11 – 7 ways I sabotaged myself as a tester
This episode was inspired by the UKSTAR 2019 talk by Claire Goss (Testers: Is It Our Own Fault We Are Underrated?). It provided a list of ways in which testers might be sabotaging their own testing efforts. One that interested me was the idea that developers should not move on to the next feature until the current one has been tested. Allowing demos to take place gives stakeholders the opportunity to provide early feedback – testers aren’t the only ones who should be assessing the quality of the software application.

Articles

Logging, Monitoring and Alerting with Kristin Jackvony
I’ve been looking into how logging can be used to aid our testing efforts. This article defines logging, monitoring and alerting, and discusses how each can benefit testing and the team as a whole.

Testing the Adversary Profession! by KimberleyN
KimberleyN decided to reshare this blog post, originally published last year, following Lisa Crispin’s Ask Me Anything webinar. The post discusses the relationship between developers and testers and why some bugs may find their way into production. Reasons suggested included fear – fear of starting an argument or making enemies by reporting bugs.

For Just a Few Lego Bricks More by Michael Fritzius
An interesting analogy for why modular approaches to programming are recommended. There are usually only a small number of standard Lego brick designs, and each one fits easily with the others, allowing the same design to be reused millions of times. The same goes for programming: there should only be a small number of standard functions, each of which can easily connect with the others.

Other blogs that share lists of test related articles

https://5blogs.wordpress.com/ (daily)
http://blog.testingcurator.com/ (weekly)
http://thatsabug.com/ (weekly)
https://weapontester.com/tea-time (weekly)
https://www.ministryoftesting.com/feeds/blogs

Testing Conferences
https://testingconferences.org/

The Club
https://club.ministryoftesting.com/
A forum for discussing, asking questions, answering questions, and requesting help. Run by the Ministry of Testing.

Feel free to recommend anything that you think is of interest.
Main image taken from http://www.publicdomainpictures.net

Featured post

Jaroslaw Hryszko, Amy Phillips and Bas Dijkstra (UKSTAR talks Day 2, Part 2)

It has been 3 weeks, but I’ve finally completed the last of the UKSTAR blog posts. The final few summaries were difficult to write – it is amazing how much you can forget in just a few weeks. Fortunately, my note-taking was good enough to keep my memory fresh.

Adept: Artificial Intelligence, Automation and Laziness by Jaroslaw Hryszko

Jaroslaw gave a highly technical talk about automated defect prediction using static analysis tools and machine learning. In real life, more bugs are often found later in the lifecycle; Jaroslaw demonstrated that using prediction-based QA, more bugs can be found earlier, when the cost to fix is lower, saving a significant amount of money.

I found it very interesting that Jaroslaw gave 2 different definitions for bugs and defects. Previously, I’d thought of them as being the same:

  • Bug – a mistake in the source code; it doesn’t have to result in a defect.
  • Defect – a discrepancy between the user’s expectations and the actual behaviour; it does not have to be caused by a bug.

I’d already studied static analysis techniques for finding bugs earlier in the lifecycle, but never really thought much about how machine learning could be applied. This is a subject I need to read a lot more about – my notes are filled with suggestions for papers, articles and topics which I plan to search for online. The talk was highly technical but provided enough information to use as a basis for further research.

Keynote 3* – How to lead successful organisational change by Amy Phillips

We’ve attended this amazing conference, learnt many new facts and developed new ideas that could potentially improve what already takes place at our companies. However, applying these changes is easier said than done.

How do we apply these changes? We can’t just tell everyone this is how we should start doing things. First, we may not have the authority to do this. Second, people don’t like change. In this talk, Amy talks us through a process that could help us gain support from within the organisation. This will increase the chance of the change being embraced instead of rejected.

Steps suggested include:

  • 0. Create foundation
    • Establish credibility so that colleagues are more likely to trust that the change might work
    • Ensure that there is capacity for change. If we attempt to introduce the change at a critical time, like when there is a deadline approaching, the change is more likely to be rejected.
  • 1. Build an emotional connection
  • 2. Identify a north star
    • The north star represents something that we should aim for, a mutual goal.
  • 3. Small steps in the right direction
    • Don’t try and do everything at once.

Originally, this talk was meant to be at the start of the day. I don’t know the reason for moving the keynote, but it seemed to work better this way. The talk was well suited to the end of the day, giving us a final piece of advice to ensure that we got the most out of the conference.

Deep Dive F – Building Robust Automation Frameworks by Bas Dijkstra

For the final deep dive session, I chose to attend Bas Dijkstra’s session on building automation frameworks. Bas walked us through a series of steps to set up a basic automated test and improve on it. Most of my experience with test automation is self-taught, so it was interesting to see what steps someone else would follow – it confirmed that I am following recommended steps and filled in gaps in my knowledge.

Iteration 1 – Creating a basic test using record and playback
Once this was done, Bas highlighted some potential issues, such as all steps being in one method, everything being hard-coded, and no browser management.

Iteration 2 – Better browser management
Ensure that the browser is closed in a tear-down step once the test has been run.
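
A minimal sketch of what this iteration might look like with pytest and Selenium – my reconstruction, not Bas’s actual code:

```python
# Iteration 2 sketch: a pytest fixture guarantees the browser is closed
# after every test, even when the test fails. URL is a placeholder.
import pytest
from selenium import webdriver

@pytest.fixture
def browser():
    driver = webdriver.Chrome()
    yield driver       # the test body runs here
    driver.quit()      # tear-down: always close the browser

def test_homepage_title(browser):
    browser.get("https://example.com")
    assert "Example" in browser.title
```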

Iteration 3 – Waiting and synchronisation
Implement a timeout and waiting strategy – for example, “all elements should be visible within 10 seconds”. If this does not happen, a timeout exception should be thrown.
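
Sketching the same idea with Selenium’s explicit waits (again my own reconstruction, with a placeholder URL and locator):

```python
# Iteration 3 sketch: wait up to 10 seconds for an element to become
# visible; a TimeoutException is raised if it never does.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
driver.get("https://example.com")            # placeholder URL
element = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.TAG_NAME, "h1")))
driver.quit()
```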

Iteration 4 – Page objects
Page objects make tests more readable by separating the flow of the test from the details of each page. This also makes tests easier to update and maintain.
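
For example, a login page object might look like this (the page and its locators are illustrative, not from the session):

```python
# Iteration 4 sketch: a page object hides locators and page flow,
# so tests read as intent. All names below are illustrative.
from selenium.webdriver.common.by import By

class LoginPage:
    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.ID, "submit")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username, password):
        self.driver.find_element(*self.USERNAME).send_keys(username)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()

# In the test, only the intent remains visible:
#   LoginPage(browser).log_in("user", "secret")
```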

Iteration 5 – Test data management
Each test run will change the data, so there needs to be a way to create and control the required test data. One option is to reset the database; it is worth talking to the developers, who could provide something to make this possible.
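
One hedged sketch of where that conversation might land: an autouse fixture that calls a reset endpoint the developers provide. The endpoint here is entirely hypothetical.

```python
# Iteration 5 sketch: reset the test data before every test. The
# '/test-support/reset' endpoint is hypothetical - the real mechanism
# would be whatever the development team can provide.
import pytest
import requests

@pytest.fixture(autouse=True)
def clean_test_data():
    requests.post("https://example.com/test-support/reset", timeout=10)
    yield  # each test starts from a known database state
```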

Iteration 6 – Quick! More tests!
Make the tests data-driven so the data can be varied – using the same values doesn’t prove much once the test has already been run. Data-driven testing allows alternative data values to be used and more edge cases to be covered.
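
Continuing the sketch, pytest’s parametrisation turns the single login test into a data-driven one. This reuses the fixture and page object from the earlier sketches; `login_succeeded()` is a hypothetical helper:

```python
# Iteration 6 sketch: one test, many data sets, including edge cases.
import pytest

@pytest.mark.parametrize("username, password, should_succeed", [
    ("alice", "correct-password", True),
    ("alice", "wrong-password", False),
    ("", "", False),                 # empty credentials
    ("a" * 256, "x", False),         # oversized input
])
def test_login(browser, username, password, should_succeed):
    page = LoginPage(browser)        # page object from iteration 4
    page.log_in(username, password)
    assert page.login_succeeded() == should_succeed  # hypothetical helper
```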

Iteration 7 – Narrowing the scope
Run data-driven tests through the API to speed them up and make them more efficient.
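
The same check pushed down to the API layer might look like this, using the requests library against a placeholder endpoint whose 401 response is an assumption:

```python
# Iteration 7 sketch: the data-driven checks run through the API,
# skipping the slow browser entirely. Endpoint is a placeholder.
import requests

def test_login_rejects_wrong_password():
    response = requests.post(
        "https://example.com/api/login",
        json={"username": "alice", "password": "wrong-password"},
        timeout=5)
    assert response.status_code == 401  # assumed behaviour of the API
```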

Iteration 8 – Service virtualisation
Dependencies aren’t always accessible, which can affect robustness. Use a fake or virtual process to keep the test environment under control.

Featured post

Bas Dijkstra, Poornima Shamareddy and Wayne Rutter (UKSTAR talks Day 2, Part 1)

We are now on the 2nd and final day of the UKSTAR talks. Thankfully, I’d been able to get a decent night’s sleep despite attending the networking evening that had been arranged at a local pub. I arrived nice and early, in time for breakfast and the lean coffee event in the huddle area. This was followed by another day of brilliant talks.

Keynote 4* – Why do we automate?
Bas Dijkstra

This is a brilliant question that many neglect to ask. In this talk, Bas discusses the common mistakes made in test automation, such as attempting to automate everything. Instead, we should be asking questions about why we want to automate, so we can be sure we are doing the right thing. It is not a case of one size fits all – the approach to test automation will be different for each project. Automation is wonderful, but only if done right.

It is important to, every now and then, step back and ask “why?”

  • Why do we need automation?
  • Why this tool?
  • Why these tests?
  • Why this percentage of tests?

Only with a satisfactory answer should we proceed.

Bas used the following quote from the movie Jurassic Park to summarise his point:

“Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

Ian Malcolm, Jurassic Park

Bas also used another brilliant analogy: test automation is like bubble wrap. It’s fun to play with, but it has its limits and can give a false sense of security. He discusses this analogy more here.

*This was originally scheduled to take place at the end of the day not the beginning. I’ve labelled this as keynote 4 to match the program even though it took place before keynote 3.

Cognitive Testing Talks
Cognitive Testing – Insight Driven Testing Powered By ML by Poornima Shamareddy
Cognitive QA by Wayne Rutter

The next 2 talks were on very similar topics from the modern tester track.

The first talk, ‘Cognitive Testing – Insight Driven Testing Powered by ML’ by Poornima Shamareddy, covered the development of a self-learning system that used data mining and pattern recognition to rank and prioritise test cases. During the talk, Poornima walked us through the process of developing the application, including the benefits achieved.

The second talk, ‘Cognitive QA’ by Wayne Rutter, discussed an investigation into ways to identify the amount of test resource required for each area of an application. Wayne went into great detail about some of the different machine learning methods, including supervised learning (such as classification) and unsupervised learning (such as clustering). The talk was especially impressive as Wayne was part of the SpeakEasy programme and had never given the talk before.

Artificial intelligence and machine learning are concepts I’ve not covered since university. My memory of them is a little vague, but both talks provided enough basic information about machine learning techniques to be informative regardless of prior knowledge. It was interesting to see examples of practical applications of artificial intelligence, natural language processing and machine learning from a test perspective.

Featured post

Gerie Owen and Fiona Charles (UKSTAR talks Day 1, Part 3)

Here is the final blog post for day 1 of the UKSTAR conference. This includes Gerie Owen’s talk on wearable technology and Fiona Charles’ keynote on the positives and negatives of disruptive technologies.

A Wearable Story: Testing the human experience
Gerie Owen

Gerie started the talk with her experience of running the Boston Marathon. After she completed the gruelling race, it was found that her time had not been logged correctly – the chip she was wearing was defective. It is easy to imagine how frustrating this must be for someone who has just run 26 miles.

Gerie uses this story to explain what a wearable is and the importance of ensuring that the user gains some value from a wearable technology. In this example, it is clear that no value was gained from the wearable chip used in the Boston Marathon.

Wearables require some kind of human interaction for value to be achieved. Gerie demonstrated how to set up personas that can be used when testing a wearable device. These personas contain details about the life, goals and expectations of someone who is likely to use the device. This information is then used to create user value stories.

The best outcome of a user story is one where someone gets value. The worst outcome is where no value is achieved at all.

This is the second time I’d heard Gerie Owen speak. The first time was at the spring 2018 Online Test Conf where she gave another brilliant talk on continuous testing.

Technology’s feet on society’s ground
Fiona Charles

The second keynote, and final talk of the day, was given by Fiona Charles, who asked the question: is ‘disruptive’ a good term?

When it is, disruption can lead to positive change. However, it can also lead to unintended negative outcomes. Technology reaches everywhere in society, but often reflects and favours the privileged, and biases and discrimination can lead to some of these negative outcomes.

It is common for students to have to receive and submit homework via the internet. This is convenient for both students and teachers – but what about students who don’t have access to a computer?

Self-service checkouts have led to a more efficient shopping experience for both staff and customers. However, this has led to more fruit and vegetables being sold in plastic packaging so they can be scanned more easily.

We can now buy items online and return them just as easily, but in a lot of cases the returns are simply sent to landfill. In addition, more parcels mean more delivery vehicles, resulting in an increase in traffic congestion.

Fiona also includes some more dangerous examples, like an aircraft which almost crashed because the auto-pilot malfunctioned and the pilot was not able to override it easily. As technology advances, we need to think about how much human intervention should be retained.

There are potential ethical implications of technology, especially as Artificial Intelligence starts to gain prominence. We must question assumptions, biases, objectives and decisions. We must be asking:

  • Should we build this?
  • Is it right to build this?
  • What could go wrong?
Featured post

Peet Michielsen, Joep Schuurkes and Viv Richards (UKSTAR talks Day 1, Part 2)

Here is the next blog post where I provide summaries for the talks I attended at the UKSTAR software testing conference. This covers the first 3 talks in the automation track. Peet Michielsen talks about how to fix the ‘leaks’ in the test pipeline, Joep Schuurkes shares a few tips on how to choose or design a test automation framework, and Viv Richards introduces us to visual regression testing.

When you’re moving house and the pipeline needs re-plumbing
Peet Michielsen

Peet Michielsen walked us through his journey from one company, where he set up a release pipeline from scratch, to a new company which already had a pipeline in place – but one with several ‘leaks’. He talked about some of the challenges he faced at the new company in fixing these leaks, and finished with some tips for improving a release pipeline.

Peet used the plumbing analogy throughout to explain his points. Generally, he was demonstrating the importance of allowing the project to ‘flow’ and not be held up by delays in testing. Replacing anything obsolete and introducing reliable test automation are a couple of ways to improve this flow; the use of test automation and continuous integration is what makes a software project ‘flow’.

Peet did not refer to any particular tools or technologies that he uses. This helped him demonstrate that his ideas could be applied to most projects regardless of the tools and processes they already use. This seems like a good idea as I am often put off by talks that focus strongly on technologies which are unsuitable for my current work. It can distract people from the actual message. The ideas that were presented in this talk could easily be applied to any test project.

What to look for in a test automation tool
Joep Schuurkes

Joep started this talk by discussing some of the issues he had with previous test automation tools and why this led him to build his own framework, which solved most of the issues he had been having. His new framework, created in Python, used a mixture of existing commands as well as newly developed ones – gluing well-established tools and libraries together.

Throughout the talk, Joep showed us how he completed the following test activities using his framework – Create, Read, Debug, Run and Report. With each activity, he provided some great tips that can be used to improve a test automation framework.

Some of my favourite tips include:

  • Naming tests well can clarify a test’s intent. It can also make it easier to notice gaps and patterns in the test coverage and when running the tests.
  • A test should do one thing only – this keeps things clear and focused.
  • When a test fails, can you see the failed step? Do you have the required information? Can you understand the information provided? There is no such thing as too much information, so long as it’s well structured.
  • Never trust a test you haven’t seen fail.

And finally, the most important piece of advice: forget about the shiny. Be impressed, but ask… is your tool helping you to do better testing?

Spot the Difference; Automating Visual Regression Testing
Viv Richards

Why do we use test automation? It is more reliable, as tests are performed by tools and scripts, dramatically reducing the risk of human error. However, it has its issues, especially when testing the UI: a large amount of investment is required, it is only practical for tests that need repeating, and with no human observation there is no guarantee that the UI is user friendly.

One popular pattern used in test automation is the page object model. The issue with this model is that the locations and visual attributes of elements are not usually checked. We played a game of spot the difference with 2 versions of the same GUI. The audience could easily spot most of the ‘mistakes’ – there were about 10 in total, but only 4 would have been picked up by test automation using the page object model. The ones missed included additional spaces between elements, text styles and fonts, and changes to colours or images on the page.

Viv then went on to demonstrate how a screenshot of a GUI can be compared against previous versions of the GUI as part of test automation, so that the software team can be alerted to minor changes a lot sooner. These tests, run repeatedly on future versions of the application, can bring additional value to the software project.
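
The core of such a comparison can be surprisingly small. Here is a minimal sketch using Pillow – my own illustration of the technique, not Viv’s implementation:

```python
# A sketch of visual regression checking with Pillow: compare a fresh
# screenshot against an approved baseline, pixel by pixel.
from PIL import Image, ImageChops

def images_match(baseline_path: str, current_path: str,
                 tolerance: int = 0) -> bool:
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")
    if baseline.size != current.size:
        return False
    diff = ImageChops.difference(baseline, current)
    # getbbox() is None when the images are pixel-identical; otherwise
    # allow per-channel differences up to the given tolerance.
    if diff.getbbox() is None:
        return True
    return all(band_max <= tolerance for _, band_max in diff.getextrema())

# Usage (paths are placeholders):
#   assert images_match("baseline/home.png", "current/home.png", tolerance=5)
```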

Featured post

The Risk of Forgotten Knowledge

What is the most important thing in your possession right now? What would the implications be if you were to lose it?

Yesterday, I took a flight to Colorado. This is the first time I’ve been to the USA since 2008, and the first time I’ve travelled abroad for work. I am a little nervous, which doesn’t help given that I am naturally a paranoid traveller. I am the sort of person who checks every minute that I have not lost anything, and I will panic if I put my passport in the wrong pocket of my coat or bag and can’t find it later.

While waiting at the airport for the shuttle bus, we noticed a discarded pair of glasses. We all wear glasses, so we understand how essential they are. This led to a discussion of what is most important to us, and which item we’d be most devastated to lose.

I’ve purposefully not taken anything sentimental with me on this trip, so I don’t have to worry about losing it. Glasses are an obvious answer, but I have brought a spare pair, so it wouldn’t be the end of the world if I lost them. I am not overly attached to my phone either – phones can be replaced, and any photos on mine have been backed up. Losing my passport would be problematic, but arrangements could be made to get me home safely at the end of my trip. There are items in my possession whose loss would bring me a great deal of hardship, but in most cases this could be fixed – not always easily, but things would get better.

There is one thing I would be devastated to lose, which could never be replaced with any amount of money: my notebooks, one of which includes notes from the UKSTAR conference I attended last week. I still haven’t written up or analysed all my notes from the talks. This is knowledge that is currently only stored in two places – my notebook and my memory. Memories fade; mine has already started to, as it has been a week now.

Knowledge is not just information, it is a representation of our own personal experiences and interpretations of that information. It will differ from person to person, but each person will develop new and different ideas. New ideas develop into new knowledge.

Knowledge is the most important possession we have and must be shared for two reasons. So that others can learn and develop new ideas from it and so that it is not lost and forgotten, even if the original source has not been preserved.

While on my trip, I will continue writing up my UKSTAR conference notes, which will be shared in a series of blog posts. I’ve also completed several tasks in the 30 days of testing challenge, but have not yet completed the write-ups. The next ‘what I read last week’ post will be published on Sunday – I didn’t do much reading last week because of the conference and preparing for my trip to Colorado.

Main image from http://www.publicdomainpictures.net

Featured post

Angie Jones and Anne-Marie Charrett (UKSTAR talks Day 1, Part 1)

Here is the first blog post where I discuss the talks I attended at the UKSTAR 2019 conference. This covers the first keynote by Angie Jones, and the Deep Dive session run by Anne-Marie Charrett.

Keynote 1 – The Reality of Testing in an Artificial World
Angie Jones

This first keynote of the conference was given by the amazing Angie Jones. I confess, I’d already watched this talk once before, at the STAREAST TechWell conference last year – they make a selection of their talks available to watch on demand for a few months after the event, and this was one of the talks I chose to watch. Angie is such an engaging speaker that, even though I knew the story she was going to tell, I was still on the edge of my seat wondering what was going to happen next.

Angie challenges the misconception that an application doesn’t need testing because “the AI is doing it!”. She gives several examples where machine learning has gone wrong, which may indicate that the application was not tested adequately. This is especially worrying as there may come a point where AI is incorporated into applications where reliability is essential – for example, an application that predicts whether a patient is likely to get cancer. Some applications are too important not to test; we cannot rely on them just working.

So how do we test it? Angie walks us through the process she followed when testing an AI application for the first time.

  • First, she learnt how the application works. This is really important, as AI has no pre-determinable results; we need to know how it arrived at a result and test that the AI is calculating it correctly.
  • Second, we train the system to see if the outcome is correct. Using test automation, we generate large amounts of data, then check whether the outcome matches what we expect based on the data we fed into the system. We repeat this multiple times with different sets of training data.

AI is all about calculating results that cannot be pre-determined, so we should be testing the method for calculating the result, not just the outcome itself. If we are putting all our faith in an AI application and relying on it to get the correct result, how can we NOT test it? People often ask if AI is something to be feared. Without testing, the answer is yes – AI is something that should be feared.
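
A toy illustration of that second step – my own sketch, not Angie’s actual example: generate labelled data from a known rule, let a trivial ‘model’ learn from it, then check on freshly generated inputs that its outputs agree with the rule.

```python
# Toy sketch: the expected outcome is computed from the same rule that
# generated the training data, so the model's behaviour can be checked
# even though individual results were never written down in advance.
import random

def rule(x: float) -> int:
    return 1 if x > 0.5 else 0          # ground truth behind the data

def train(samples):
    """A stand-in 'AI': learn a decision threshold from the data."""
    return min(x for x, label in samples if label == 1)

def test_model_agrees_with_the_data_it_was_fed():
    training = [(x, rule(x)) for x in (random.random() for _ in range(1000))]
    threshold = train(training)
    model = lambda x: 1 if x >= threshold else 0
    fresh = [random.random() for _ in range(1000)]
    mismatches = sum(model(x) != rule(x) for x in fresh)
    assert mismatches / len(fresh) < 0.01   # near-perfect agreement
```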

Deep Dive – API Exploratory Testing
Anne-Marie Charrett

When attending this conference, my main focus was on test automation. However, I also wanted to learn something new. I was attracted to this deep dive session because API testing is something I’ve not done much before and really think I should start doing. Also, with so much focus on test automation, it was nice to learn something new about exploratory testing for a change. After all, both are equally important.

The talk started by examining how the use of mind maps can encourage testers to take a more systematic approach to exploratory testing. Anne-Marie started with the GUI and then went on to demonstrate how the same idea can be used to explore the API, walking us through examples of tests that might be carried out.

This talk was especially useful for API testing novices, as she taught us how basic API commands work and how they can be used to test the API layer within an application. It was very basic, but useful for getting us started. Exploratory testing is all about learning and gaining experience, which makes it an excellent method for learning more about both the application being tested and API testing itself.

Having never attended a deep dive session, I didn’t know what to expect. I really enjoyed the interactivity of the session – Anne-Marie is very good at encouraging audience participation, and we were encouraged to suggest tests to run. During each ‘test’, before actually revealing the outcome, she would ask the audience “What’s the hypothesis?”, encouraging us to think and talk about what we’d expect the outcome to be.

Featured post

Women in Tech, Diversity and Inclusion (UKSTAR Huddle area discussion)

In this post, I will be continuing to share my experiences at the UKSTAR software testing conference. Previously I wrote about the lean coffee event that took place at the UKSTAR huddle area. This was an area designated for chilling out, playing games, meeting new people and discussing various test related topics.

Another discussion event, on diversity and inclusion and mainly focused on International Women’s Day, was organised to take place in the huddle area. It was a popular event – the number of attendees was so high that we had to move to one of the other rooms, where it was quieter.

The session took a similar format to the lean coffee session. We were given post-it notes and told to write down a few topics and stick them on the whiteboard. We then voted on the topics we wanted to discuss. Because so many people were involved, we spent more time on each topic: lean coffee normally allows about 5 minutes per topic, but we spent 15 minutes per topic during the 30-minute session.

Similar to my lean coffee blog post, in this post I am attempting to give a summary of the discussion that took place rather than just presenting my own opinions.

Why do our peers not think diversity is important?

The person who suggested this topic started out by stating why diversity is important. It has been suggested that diverse teams outperform non-diverse teams. She also talked about some unfortunate experiences. For example, she once told some male colleagues about a ‘Women Who Code’ event. The response was “You mean women who can’t code?”. While it was clearly meant as a joke, it was something that has stuck with her.

In my experience, I feel that I am well respected by my male colleagues. Despite this, I do often feel that the issue is not taken seriously. Sometimes it is even joked about, although I’ve never heard any jokes as unfortunate as the one described previously.

I think the reason some may not see this as an issue is that it doesn’t affect them as much. They don’t know what it’s like to be a woman in a team dominated by men. If someone has never experienced a situation where they are not in the privileged group, how can they understand the situation and its issues?

It can be really scary for a woman starting work in a new team, especially if it is a male-dominated one. However, someone also pointed out that this situation can be just as scary for men: there can be a fear of saying the wrong thing, not knowing what to talk about, or not knowing how to treat women. While some may not understand the importance of diversity, most men don’t want to be seen as ‘macho’ or to deliberately exclude women.

The discussion ended with the question “Why do we have to win the respect of men?”; the response was another question: “What is the alternative?”.

I don’t want to be seen as being here because of a quota

This topic was kicked off with the question, “How would you feel if you found out you only got a job because of a quota?” The answer given was ‘insulted’, a sentiment shared by most people present. This is unsurprising as I believe most would prefer to get a job on their own merit.

If I ever found myself in this position, I would strive to prove myself and earn the respect of my peers. I would show, through my skills and experience alone, that I was worth hiring.

I then asked the question “Would you turn down a job if you found out you were offered it because of a quota?” I stated that, while I would be offended, it would depend entirely on how desperate I was for that job. Several people in the room agreed that they would be unlikely to turn down an opportunity if they were offered it to meet a quota.

Someone then pointed out that men have advantages that women don’t have. It is often easier for men to progress in certain areas. It is not a level playing field. If there is just one thing that gives women an advantage, why should we not use it?

The discussion then moved on to why there might need to be a quota in certain cases. Sometimes excuses are made, like “no one else applied” or “we don’t know any!”. This is probably where the problem lies. Why are women not applying for these roles? Could something be putting them off? No one likes quotas, but sometimes they can encourage employers to actually seek out candidates who meet certain criteria and have the skills required for the role.

Someone suggested the possibility that recruiters may be biased when pre-screening CVs. We are all aware of the infamous AI recruiting tool used by Amazon to screen applicants, which turned out to be biased against women. It is now becoming more and more common for CVs to have certain personal details removed – details that could reveal a person’s name, gender or race – before being passed on to employers.

Summary

It was great to attend such a lively discussion on gender diversity. It was encouraging to have both men and women among those who attended. I remember talking to someone afterwards who said that debates like this can go on forever. This is definitely true – both of these topics could easily have gone on for several hours, and we could have continued the discussions all afternoon if the room hadn’t been required for another talk. Even just 15 minutes of discussion on each topic was enough to give me a lot to think about.

Featured post

Attending Conferences and Testers learning to code (UKSTAR Huddle Area – Lean Coffee)

The UKSTAR conference is sadly over, but what an experience it was. As well as attending some amazing talks, I also took the time to see all the exhibitors, meet and speak to many fellow attendees, and visit the huddle area.

The huddle area included a ‘duck pond’, where anyone could enter a competition to win a UKSTAR water bottle (I managed to win one 😊), several board games, and opportunities to discuss a variety of testing topics. One particular event I took part in was the lean coffee session on the Tuesday morning.

About 6 people were at the lean coffee event – apparently no more than 5 is suggested, but we seemed to be OK with the extra person. To start, we were given post-it notes and asked to write down a few topics and stick them on the whiteboard. We then voted on which topics to talk about. Each discussion lasted about 5 minutes, with the option to extend if everyone agreed they’d like to continue. Because of the small group, everyone was given a chance to say something about each topic.

In this post, I am attempting to give a summary of the discussion that took place rather than just present my own opinions.

Why do we attend conferences? How do we know if they are worth it?

The first topic chosen was “How do we know if teams are learning from conferences?”, although this was merged with another suggestion, “What conference do you plan to attend next?”.

Not everyone can communicate with confidence what they’ve learnt or what their experiences were when attending a conference. So how do managers and businesses know that the investment was worth it? Most mentioned that they were required to either write a report or give a presentation on what they’d learnt. I’ve never shied away from presenting my findings to a team – whether from personal research, experiences at work, or attending events like conferences.

I suggested that attending a conference can add to a colleague’s personal development, which can improve the way they work. Networking, discussions, and being outside their comfort zone can add to their confidence and communication skills. I’m sometimes worried that I’ll gain nothing from attending a conference, which would be disastrous for myself, the business, and any colleagues who may wish to attend similar events in the future. Fortunately, this has never happened.

We all discussed our reasons for attending conferences and how we managed to get support from our managers to attend. I focused on specific talks and what I expected to gain from attending them; the reasons were mainly focused on learning and networking. Andrew Brown, one of the speakers at the conference, was among those present and said that he can often only attend a conference if he is speaking. This highlights the issue that people often only have the opportunity to attend conferences if they work at a company willing to invest in its employees that way.

Andrew gave some brilliant advice during the discussion: “Never go to a conference session where you already know the answer”.

How can we find out how we learn?

This question was the basis for our second topic. It is definitely one that is hard to answer. It is something that a lot of people can take years to figure out.

It was mentioned how difficult it can be to engage certain colleagues, especially when giving presentations. There is often that one person who struggles to understand – maybe because they are disinterested, or because presentations are not their best learning tool.

Some prefer to read a book or article, some like to listen to podcasts or watch videos, some like to discuss. The preferred methods for learning can be very diverse.

Do Testers need to learn to code?

This was a topic I suggested, so I was asked to open the discussion. I suggested 2 different avenues for discussion – coding for test automation, and coding for manual testing.

With test automation, it can easily be argued that coding and programming skills are essential. However, with the existence of ‘code-less’* automation tools, it may be easy to suggest that we don’t even need to know how to code for that anymore.

With manual testing, technically a tester does not need to know how to code. However, knowledge of basic programming constructs can make it easier for them to understand the changes being made to the application, and therefore improve their testing process.

A couple of people suggested that, even though it wasn’t essential for testers to know how to code, having that skill can be good for their career development. Times are changing, which means their job is also likely to change significantly. Knowing how to code can keep a testers options open and allow them to keep up with the times.

Someone made the distinction between reading code and writing code. It was suggested that there were huge benefits in including testers in code reviews. For this, the tester only needs to know how to read code.

It is a topic discussed often throughout the testing community, and one that can go in any direction.

Summary

This was the first lean coffee session I’d ever attended. Discussions are short and quick and, with no pre-planning, can go in any direction. With a group of attendees who have never met, there can be an interesting mix of diverse opinions. I will definitely attend one again if the opportunity arises.

This is the first of a series of blog posts that will cover my experiences at the UKSTAR conference. Feel free to comment with your own thoughts on some of the topics we discussed at the Lean Coffee session.

*I see the term code-less as a misnomer when it comes to test automation. The code may be invisible to the tester, but it still exists in the background. The code is generated automatically as the automated tests are developed.  

Main image from https://www.publicdomainpictures.net

Featured post

Are businesses only asking for Test Automation?

Recently I published a blog post on the limitations of test automation and how it should be used to improve our overall test strategy rather than attempt to replace manual testing. I shared this on LinkedIn and the discussion that followed was very interesting.

Generally, most people seemed to agree with me, demonstrating that most testers know the importance of manual testing, the limitations of test automation, and how best to utilise both to create the best test strategy.

However, there were also several frustrated responses from people saying that there seemed to be more job adverts for test automation than manual testers. Some questioned if these companies asking for test automation experience actually need test automation, or even know how it can be used.

This may indicate that while software testers recognise the importance of manual testing, the same cannot be said for software engineering companies in general.

Why are businesses recruiting for test automation?

I am not a business leader, or involved in recruitment, so I can only speculate.

It is most likely that businesses are recognising the potential value that test automation can bring to a testing project. However, are businesses even aware of what this value is? Even if a test automation project is successful, there may be some disappointment when the expected value is not achieved, despite there being real, proven value.

There is a concern that some businesses are choosing to adopt test automation simply because other companies are using it. This would certainly increase the pressure on them to employ testers with experience in test automation.

What about Manual Testers?

I believe that a good test automation developer first needs experience as a manual tester. Without these skills, I don’t think we can expect a test automation developer to make the right decisions when it comes to implementing a test strategy.

Before starting out, the tester needs to gain knowledge and understanding of the software application and how it’s used. I don’t think it would be possible for the tester to create valuable automated tests without knowledge of the software. This knowledge is best gained by exploring, experiencing and learning about the software. This will allow them to design the tests in the best possible way.

I believe that all manual testers have the potential to become great test automation developers. However, this does not mean they should be forced into it. Businesses should be making use of both manual testing and test automation in their overall testing strategy. There are just too many limitations to rely solely on test automation.

How is the increase in demand for test automation affecting manual testing?

If we are spending our time developing automated tests, that is less time spent on manual testing. This is OK if the automated tests are bringing value to the project. However, we must still remember that manual testing is just as valuable.

As mentioned in my previous blog post, test automation should be used to enhance testing, not replace it. The overall testing strategy needs to include scripted manual and exploratory testing as well. If there are not as many manual testers being hired, we have to think about how this might be affecting the overall testing effort at these companies.

Test Automation is development, shouldn’t the Software Developers be doing this?

Test automation is most certainly development. Anyone who develops automated tests should be classed as a developer. However, there is a difference between the type of products that software developers and test automation developers work on.

A different mindset is required for each type of developer. As I said earlier, I believe that a test automation developer should first have experience of manual testing. It is this experience that puts them into the correct mindset. A software developer is unlikely to have this mindset.

Conclusion

The increase in demand for test automation means that businesses are clearly aware that there is value to be gained. However, they may also be failing to recognise the costs and limitations. These limitations are the reason why manual testing is still required (and this is unlikely to change for some time).

The decision about whether to use test automation or not should be based on the advice given by software testers. A good testing strategy may or may not include test automation – but MUST include manual testing.

If businesses want to be using test automation as part of their overall test strategy, they should invest in both manual and automated testing.

I would like to thank everyone who commented on the original blog post “We don’t need automation, we need better testing”. It was these comments that inspired this post.

Main image taken from http://www.publicdomainpictures.net

Featured post

We Don’t Need Automation, We Need Better Testing

Fry: If your programming told you to jump off a bridge, would you do it?

Bender: I dunno, I’d have to check my programming… yup.

Futurama, Space Pilot 3000

I was testing an application where I needed to check that a popup window appeared on a certain occasion, and it closed down when the close button on the window was clicked. This is a very simple check that had been automated. One day, this test was chosen to be run overnight.

Unbeknown to me, there was a bug where the application would crash when the popup window was closed. For some reason, the test still passed.

Fortunately, the next step in the test could not be run because the software was no longer running. Normally this is not a good thing, as it is still useful to know whether the remaining steps in the test pass. On this occasion, it was a blessing, as otherwise I may not have known about such a critical bug. It also made investigating the bug much easier.

My immediate assumption was that the failed step was where the issue was. I soon found that this was not the case. It didn’t take long to find the bug by rerunning the previous step. The step still passed but I could clearly see that the software was no longer running.

Once the bug was reported, I took a look at the automated test to determine why the test passed despite the application crashing. I soon found that while I included a check that the popup window was closed, I’d neglected to include a check that the software was still running. All the validation checks had been successful, so the test passed.
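For illustration, here is roughly what the corrected check looks like as a pytest-style test. The app fixture and its methods are hypothetical stand-ins for the UI driver we actually used.

    # A minimal sketch of the popup check with the missing assertion added.
    # "app" and its methods are hypothetical stand-ins for the real driver.
    def test_popup_closes(app):
        app.trigger_popup()
        assert app.popup_is_visible(), "popup should appear"

        app.click_popup_close_button()
        assert not app.popup_is_visible(), "popup should close"

        # The check I had originally neglected: a closed popup is not
        # enough evidence on its own, the application must still be running.
        assert app.is_running(), "application crashed when the popup closed"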

An automated test will only do what it’s told, but it still has its uses

This highlights an important issue with test automation. It will only do what its programming tells it to do and will not deviate in any way. If the test had been run manually, a human would have immediately noticed that the software was no longer running.

On the other hand, even though the test passed incorrectly and the remaining steps were stopped from being run, it still provided the tester with enough information to find the bug. With the many other tests that need to be run, it might have been some time before this specific test was run. At least with test automation, the tester was made aware of a particular area within the software where the bug existed.

Automated testing allows scripted tests to be run quicker, freeing up more time for exploratory testing. It can also highlight areas of the software that may not be working as expected, enabling us to target exploratory testing to at-risk areas of the application.

Our aim is not to automate, but to enhance testing

An automated test will only run the test the way it is programmed to, but these tests can still be extremely useful. Michael Bolton wrote a brilliant article on the value and cost of Automated Checking in which he states that:

“Automation is NOT the goal. Tools and automation are a means of advancing some aspects of your test strategy.”

Michael Bolton, Value and Cost in Automated Checking or “Don’t Fall into GeMPuB”

Like software, test automation is not perfect and will contain bugs. We must not be completely reliant on it. Test automation should only be a part of the overall test strategy. I don’t think I’ve ever found a bug using test automation alone. At best, it highlights an area within the software that may not be working as expected. Additional exploratory testing had to be carried out to find out whether there was a bug or not, and to uncover the precise details of the bug.

What is a good testing strategy?

For me, a good test strategy contains a mixture of scripted automated and manual tests, and some exploratory testing.

There should be scripted tests that cover the checks that MUST take place. These are the ones that cover the core features within the software. It is up to the tester to decide if these should be automated or not. There are a lot of benefits to automation, but the value must exceed the cost.

There MUST be exploratory testing. The tester can examine the software without being limited to what the script says. The freedom to explore the software may allow the tester to find hidden issues which could negatively affect the overall user experience. It is also useful to do more rigorous testing on areas of the software that have been changed. Perform extra checks so that we can be certain that the software works.

Automated testing is useful, but it can only enhance the testing, not replace it.

Featured post

Bug Hunting is a Team Sport!

On the 23rd January 2019, I gave my very first 99 second talk at the #MidsTest meetup in Birmingham. This blog post is inspired by the talk I gave. 

I once had a case where an automated test had failed. After some investigation, it was found that this had been due to a new bug that had been introduced to the software. While discussing this new defect, one of the developers jokingly said, “You didn’t find this bug, the automated test did”. In reply, someone else said “Yeah, but Louise designed the test”.

This made me think: Who did find the bug?

Multiple automated test tasks that lead to bug discovery

There are so many tasks involved in test automation, including:

  • Developing the test automation framework
  • Developing automated test cases
  • Maintaining the automated test cases and framework
  • Selecting which automated tests to run
  • Analysing test results and investigating failed tests

Which of these tasks led to the discovery of the bug? If these tasks were carried out by multiple people, then no single person can claim to have found the bug.

If the automated framework did not exist, then the test case itself could not exist. If the test case had not been run, there is a chance that the issue would have remained unknown to the test team. And until a potential error was known to the test team, no additional investigation or exploratory testing would have been carried out to find the precise details of the bug.

In my experience, an automated test rarely ‘finds’ a bug but only highlights areas of concern within the application. The precise details of the bug only become known once someone has analysed the test results and investigated the source of the failure.

How about bugs found using Manual Testing?

If the testing strategy includes a combination of manual and automated testing, then credit should not only be given to the manual tester. The inclusion of automated testing can help free up additional manual testing time. This often leads to increased test coverage, and more defects being found. Just like bugs found due to an automated test failure, it is a combination of testing activities that led to the bug being found.

There are some test teams who rely solely on manual testing. In these cases, effort should be focused on ensuring that all areas of the application receive adequate test coverage instead of finding as many bugs as possible. Bug hunting should be a team sport, and teams are not in competition with each other.

But it is the developers who fix the bugs…

The testers may be the ones who find the bug, but it is the developers who fix the bug. Of course, the developers wouldn’t know about the bug in the first place if it wasn’t for the testers.

Developers and testers should work together to improve the software. There are some who feel that developers are the ones who create the bugs and they need testers to help fix their mistakes. Others feel that testers are just trying to find bugs and problems with the software. These are both very dangerous mindsets to have.

Both parties want the application to work, and to be fit for purpose. This can only be achieved if the developers and testers work together. It is through collaboration between the test team and development team that the quality of the software is improved.

Take responsibility, not credit

Recognition for hard work is essential; however, it should also be recognised that more defects can be found by combining the skills of the testing team.

Quality is the responsibility of the entire team – testers and developers. Each member should make use of their particular skill sets to benefit the team and the software. Finding and fixing defects is one of the best methods for improving the quality of the software.

Featured post

Pizza Delivery: Importance of Good UX Design

Today, on my drive to work, I was listening to an episode of ‘The good, the bad and the buggy’ podcast – specifically the episode titled ‘Food for Thought’. This episode focused on how technology has influenced the way we order food.

It occurred to me that where I choose to buy pizza comes down more to the quality of the online ordering system than the quality of the pizza itself.

A lot of people will have heard of the well-known pizza restaurant Dominos. The pizza they sell there is pretty good, but I’ve always disliked their range of side orders. I’ve found them overpriced and lacking in choice. Dominos is also generally more expensive than other local pizza restaurants.

There was a local pizza restaurant in the town where I went to university. The pizza was delicious, and the prices a lot more reasonable compared to Dominos. However, when a Dominos opened up in the town, I started getting my pizza from there instead. This was not because of the quality of the pizza, which I’ve already stated was inferior to the local restaurant’s. Dominos had a really good online ordering system. It was so easy to use, even if you had a complicated order that required extra pizza toppings. A pizza of my choice could be ordered in minutes. It even had a tracker saying how long it would be before my pizza arrived.

This improved the user experience so much that I preferred to order my pizza from Dominos. The introduction of a good online ordering system has helped improve their customer base, and their profits. It shows how much value can be gained from investing in technology as well as pizza.

Companies are now relying a great deal on developing technology to boost sales. Technology can be used to improve the product or service they sell. However, there does seem to be more focus on user experience. Happy customers are more likely to return and spend more money.

Image taken from http://www.publicdomainpictures.net

Featured post

Report Bugs – or they will not go away!

I’ve just listened to episode 8 of the Guilty Tester podcast. During this episode an interesting bug was described. The marketing team for a shopping website was unable to create promotions or discounts in December.

Why? Because the software did not allow them to. The application supported the creation of promotions in the other 11 months of the year, but not December. In the code, the months had been indexed using 0–11 instead of 1–12. When December was selected, the number 12 was used, which failed the validation check because the month was not between 0 and 11.
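As a hedged reconstruction (the actual code was not shown in the podcast), the validation may have looked something like this:

    # A reconstruction of the off-by-one bug described above, for
    # illustration only; the real application's code was not shown.
    def validate_promotion_month(month):
        # month arrives from the UI as 1 (January) to 12 (December),
        # but the check assumes the internal 0-11 month indexing.
        if not 0 <= month <= 11:
            raise ValueError(f"Invalid month: {month}")
        return month

    validate_promotion_month(11)     # November passes
    # validate_promotion_month(12)   # December always raises ValueError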

Interestingly, this platform had been used for 18 months and the marketing team just accepted that they could not create offers in December. They had not complained or reported the bug.

This made me ask the question: How many bugs found by customers go unreported?

Minor bugs have little impact and are unlikely to cause complete loss of customer loyalty, but they can still devalue the business. Critical bugs, like the one mentioned above, are not just annoying. They prevent the customer from being able to use the application the way they want to. A customer is more likely to report these bugs than the minor ones, but they may not see this as a valuable use of their time. Why should they be the ones reporting these bugs? It is up to the business to produce good quality software. In this case, the bug does not get fixed and the customer either accepts it or stops using the application altogether.

Software applications are not developed for fun, they are developed for the customer. As a software tester, I report every single bug I find, no matter how minor. Unfortunately, despite my best efforts, a bug may occasionally slip through the net. If something is preventing the customer from using the system the way they need to, we want to know about it. Developers cannot fix a bug if they don’t know it’s there.

If there is a bug that prevents you from using the application the way you want to, please report it. This bug may not just be affecting you but other customers as well.

This lesson should not just apply to the use of software applications. I recently walked into a shop and saw that someone had spilled a drink and left it there unreported. I had to step over a small puddle when entering the shop. What if someone had slipped on the drink? The shop would have been legally responsible but how could they have cleaned up the spill if they were unaware of it?

At work, if there is something preventing you from doing your job properly then you should report it. The ability to do your job properly affects your colleagues and any other stakeholders. Anything that could improve your work performance should be taken seriously, but it won’t happen if you don’t report it.

If there is something that can be done to make your life easier, don’t be shy! Tell someone, and something might be done about it.

Featured post

Communicating, speaking and debating

It is nearly the end of the year and I’ve been looking back at how I’ve changed, what I’ve learnt and what I’ve achieved. I think this year has been the most important one yet for my career.

This is the year that I started talking more. I began posting on LinkedIn about software testing, I started writing blog posts and articles for external sites, and I created my own personal blog (although I haven’t written many blog posts here yet).

This all started when I gave a lightning talk at the Spring 2018 OnlineTestConf in June on Women in Testing. My submission was very last minute as I was a little apprehensive about giving a talk to so many people, even if it was only 5 minutes. Fortunately I decided to give it a go. I enjoyed the experience so much that it prompted me to apply to more conferences. One conference submission got accepted for the Fall 2018 OnlineTestConf. I gave a 45 minute talk on automated testing, and another lightning talk.

The confidence I gained from giving these talks has helped me improve as a software tester. I started posting on LinkedIn more, and commenting on other people’s posts. At work, we were asked to write articles for the company blog in order to promote a product launch. I eagerly volunteered, something I probably wouldn’t have had the courage to do previously.

I am always grateful when people respond to my posts. I don’t care if the respondent shares my point of view. The very fact that they have taken the time to respond means that they have taken the time to read what I have to say. If they agree with me, then the acknowledgment of my opinion gives me the confidence to share my ideas more. If they disagree with me, then I am prompted to rethink my opinions. From this my ideas develop further, and sometimes change completely.

The real benefit of this increase in online activity is that I feel able to speak out more, and my skills as a software tester have improved. I have developed both professionally and personally.

Featured post

Online Test Conf – my first conference talk

The day finally came! On 27th November, I gave my talk at the Online Test Conf and what an experience it was. I will not deny that I was nervous, which is to be expected. Not only did I give the 45 minute talk on automated testing, I was also selected to give another lightning talk.

What I really love about the Online Test Conf is the Slack channels that are set up for each talk (including the lightning talks). These help make the conference a lot more interactive. Live conferences can be a lot more engaging, but there isn’t always an opportunity to ask questions and discuss the topic with the speaker once the talk and conference have ended. There may be 5–10 minutes for Q&A, but your question may not always be asked. With the Slack channel, everyone can ask a question, and there is plenty of time for the speaker to answer. I was chatting to people and answering questions for over an hour after my talk had ended.

I’m very pleased with how my main talk went. The feedback on the Slack channel was very positive. A couple of weeks ago I did a run-through of my talk for some colleagues at work, and it took 30 minutes exactly. On this occasion, my talk ran a little longer, meaning I had to rush through the last couple of slides. There were only a few minutes left for Q&A.

I wasn’t as happy with how the Q&A went. I got a little confused while answering one of the questions because it contradicted what I thought I’d said during the talk. I expect I didn’t say it right, or the person misheard me. When I listen to the recording, I plan to answer the questions again to give better answers.

Despite this little setback, I feel the talk was a success. My ideas on test automation, and the message I was trying to communicate, seemed to have been well received and understood. I wanted to demonstrate that there doesn’t always need to be hundreds or thousands of automated test cases. Sometimes a few well-designed automated tests can be just as effective. Several people asked if I was able to share the presentation slides; they agreed with my message and wanted to demonstrate it to their colleagues too.

My lightning talk also went well. I didn’t expect it to be selected as I’d already spoken. The talk was titled “If a tree falls in the forest…” after the famous philosophical riddle. In it, I discussed the impact of bugs that are found too late in the software development life cycle. I found I didn’t need the full 5 minutes, as I was able to explain my idea in just 3.

I reworded the original riddle to the following: “If a bug exists in the software, and there is no software tester to find it, does it make a sound?”

I took the new riddle and made it into a LinkedIn post. There have been some interesting responses. I also took the transcript and made it into a LinkedIn article. The recordings of both my talks will be posted as soon as they become available.

Software Test Automation Power Hour with Angie Jones – What I Read Last Week (4th August 2019)

The latest Ministry of Testing power hour was on Test Automation, with questions answered by Angie Jones.

At the time, I was in the process of writing out my latest blog post. This was a response to a question I was asked while giving a talk on Record and Playback in test automation at the #MidsTest meetup in Birmingham.

Since I am due to give this talk again, I decided this was a great opportunity to find out Angie’s thoughts on the subject. I mentioned how and why I like to use Record and Playback. Crucially, I mentioned that I adapt the generated code to make it more maintainable and robust.

I really liked Angie’s answer. She talked about how she discourages the use of Record and Playback, but was glad that I modified the code. This supports the message I am trying to convey in my talk. It is a matter of choice whether you decide to use Record and Playback. There are many benefits, but only if it is used right. Any code generated from Record and Playback must be adapted and modified.

Angie’s thoughts on the use of Record and Playback in test automation. Answered in the Ministry of Testing Power Hour on test automation.
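To make the point concrete, here is a minimal sketch of the kind of adaptation I mean. The recorded snippet, the login page and the selectors are all invented for illustration.

    # Raw record-and-playback output tends to look like this: brittle,
    # position-based selectors pasted inline into every test.
    #   driver.find_element(By.XPATH, "/html/body/div[2]/form/input[1]").send_keys("user")
    #   driver.find_element(By.XPATH, "/html/body/div[2]/form/input[2]").send_keys("secret")
    #   driver.find_element(By.XPATH, "/html/body/div[2]/form/button").click()

    # After adaptation: stable selectors and a reusable helper, so a page
    # change only needs fixing in one place. (Python + Selenium sketch.)
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def log_in(driver, username, password):
        driver.find_element(By.ID, "username").send_keys(username)
        driver.find_element(By.ID, "password").send_keys(password)
        driver.find_element(By.ID, "login-button").click()

    driver = webdriver.Chrome()
    driver.get("https://example.com/login")   # placeholder URL
    log_in(driver, "user", "secret")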

Related Links

Power Hour event – Angie Jones – Ministry of Testing
Questions about software test automation, with answers provided by Angie Jones.

Open Letter to Codeless Automation Tool Vendors – Angie Jones
A link shared by Angie during the power hour event. I’ve been curious about new codeless automation tools and had been meaning to ask a question about this. However, I forgot to post it before the power hour, so I was glad that she addressed it in her answer to my question. Her open letter addresses the issues with Record and Playback tools, and provides a list of features that a codeless automation tool should include.

What’s That Smell? Tidying up our test code – Angie Jones – talk at SauceCon 2019
Another link shared by Angie during the power hour event. This was a video of a talk given at SauceCon 2019 about refactoring test automation code.

After watching this talk, I made my first attempt at sketch noting. This was not done live, but while reviewing and writing up my notes after watching the talk. It was inspired by the sketch notes of talks given at the Collaborate Bristol conference I attended a couple of months ago. I was struck by how beautiful those sketch notes were. Unfortunately, my attempt was a little messier. However, it gave me an opportunity to review my notes and pick out the most important points. It is nice to have all the main points displayed on a single sheet of A4 paper, which will help jog my memory when revisiting these notes in the future. I am definitely going to give this another go. The next conference will be SwanseaCon on 9th September.

My first (messy) attempt at sketch noting.

Other blogs that share lists of test related articles

https://5blogs.wordpress.com/ (daily)
http://blog.testingcurator.com/ (weekly)
http://thatsabug.com/ (weekly)
https://weapontester.com/tea-time (weekly)
https://www.ministryoftesting.com/feeds/blogs

Testing Conferences
https://testingconferences.org/

The Club
https://club.ministryoftesting.com/
A forum for discussing, asking questions, answering questions, and requesting help. Run by the Ministry of Testing.

Feel free to recommend anything that you think is of interest.
Main image taken from http://www.publicdomainpictures.net

Learning about Personas – What I Read Last Week (28th July 2019)

A few months ago I came across the term ‘Abuse Cases’ in a blog post by Nicola Owen. It was a term I’d never encountered before, and it inspired me to write a blog post providing my own interpretation of Abuse Cases.

Abuse Cases – Understanding Motives

When Ministry of Testing announced their latest Power Hour event on Personas, I was eager to submit a question about Abuse Cases. Gem Hill beat me to it (although she didn’t use the term Abuse Cases). I see Abuse Cases as examples of ways an application could be misused. Her question didn’t just focus on ways the product could be misused, but also on how it could be attacked.

Check out The Club for a full list of questions and answers from the Power Hour.

Events

Personas – Power Hour
Cassandra H. Leung dedicated an entire hour to answering questions about Personas on The Club. Questions were asked about persona templates, edge cases, and personas for those who would misuse an application.

Social Media Discussion

Superhuman discussion (Twitter)
Twitter discussion shared by Cassandra via the Persona Power Hour as an example of a persona created to show how an application could be misused.

Smoke Testing vs Sanity Testing
After finding the discussion about smoke and sanity testing on LinkedIn, I decided to set up another discussion on The Club to see if anyone else had ideas to share.

Articles and Blog Posts

Learning from Failure: The tricky iOS Environment – Melissa Eaden – Testing and Movies and Stuff
This article contains a tale of a mistake that led to iOS issues and the lessons learned from this mistake.
“Issues…can give us an opportunity to change practices, habits, and better understand the system we are working with.”

“Cheating” Is Necessary – Melissa Eaden – Testing and Movies and Stuff
Is it cheating to look things up? Or ask for clarification?

How to form a regression testing plan with these 5 questions – Mike Kelly – Tech Target
There are many things to consider when setting up a regression test plan. Here, we look at questions about goals, coverage, techniques, maintenance, environment and reporting that should be asked while putting together a regression test plan.

It’s Automation Logs! Better Than Bad, They’re Good! – Paul Grizzaffi – Responsible Automation
In this article, we look at the importance of useful logging and what is required to make it useful.

Testers, Please speak to the developers
This week I published my write-up of the 99 second talk I gave at the Birmingham test meetup last week. In this post I talk about the importance of speaking to the developers. Communication ensures that everyone understands the requirements, and helps identify ways to make the application more testable.

Other blogs that share lists of test related articles

https://5blogs.wordpress.com/ (daily)
http://blog.testingcurator.com/ (weekly)
http://thatsabug.com/ (weekly)
https://weapontester.com/tea-time (weekly)
https://www.ministryoftesting.com/feeds/blogs

Testing Conferences
https://testingconferences.org/

The Club
https://club.ministryoftesting.com/
A forum for discussing, asking questions, answering questions, and requesting help. Run by the Ministry of Testing.

Feel free to recommend anything that you think is of interest.
Main image taken from http://www.publicdomainpictures.net