Category Archives: UKSTAR

Jaroslaw Hryszko, Amy Phillips and Bas Dijkstra (UKSTAR talks Day 2, Part 2)

It has been three weeks, but I’ve finally completed the last of the UKSTAR blog posts. The final few summaries were difficult to write; it is amazing how much you can forget in just a few weeks. Fortunately, my note-taking was good enough to keep my memory fresh.

Adept: Artificial Intelligence, Automation and Laziness by Jaroslaw Hryszko

Jaroslaw gave a highly technical talk about automated defect prediction using static analysis tools and machine learning. In real life, bugs are often found late in the lifecycle, when the cost to fix them is high. Jaroslaw demonstrated that, using prediction-based QA, more bugs can be found earlier in the lifecycle, saving a significant amount of money.
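
Jaroslaw’s approach is tool-specific, but the general idea behind prediction-based QA can be sketched. Below is a minimal, hypothetical Python example (the file, metric names and label are my own assumptions, not taken from the talk) that ranks source files by predicted defect risk using metrics exported from a static analysis tool:

```python
# Hypothetical sketch of prediction-based QA: rank files by defect risk.
# Assumes a metrics.csv with one row per file, containing code metrics
# plus a historical "had_defect" label - none of this is from the talk.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("metrics.csv")
features = df[["lines_of_code", "cyclomatic_complexity", "static_warnings"]]
labels = df["had_defect"]

X_train, X_test, y_train, y_test = train_test_split(features, labels)
model = RandomForestClassifier().fit(X_train, y_train)

# Files with the highest predicted risk get tested first,
# pulling bug discovery earlier in the lifecycle.
df["defect_risk"] = model.predict_proba(features)[:, 1]
print(df.sort_values("defect_risk", ascending=False).head(10))
```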

I found it very interesting that Jaroslaw gave two different definitions for bugs and defects. Previously, I’d always thought of them as being the same:

  • Bug – a mistake in the source code; it doesn’t have to result in a defect.
  • Defect – a discrepancy between the user’s expectations and the actual behaviour; it does not have to be caused by a bug.

I’ve already studied techniques for static analysis so that bugs can be found earlier in the lifecycle, but I’d never really thought much about how machine learning could be applied. This is a subject that I need to read a lot more about. My notes are filled with suggestions for papers, articles and topics which I plan to search for online. This talk was highly technical but provided enough information to use as a basis for further research.

Keynote 3* – How to lead successful organisational change by Amy Phillips

We’ve attended this amazing conference, learnt many new facts and developed new ideas that could potentially improve what already takes place at our companies. However, applying these changes is easier said than done.

How do we apply these changes? We can’t just tell everyone this is how we should start doing things. First, we may not have the authority to do this. Second, people don’t like change. In this talk, Amy talks us through a process that could help us gain support from within the organisation. This will increase the chance of the change being embraced instead of rejected.

Steps suggested include:

  • 0. Create foundation
    • Establish credibility so that colleagues are more likely to trust that the change might work
    • Ensure that there is capacity for change. If we attempt to introduce the change at a critical time, like when there is a deadline approaching, the change is more likely to be rejected.
  • 1. Build an emotional connection
  • 2. Identify a north star
    • The north star represents something that we should aim for, a mutual goal.
  • 3. Small steps in the right direction
    • Don’t try and do everything at once.

Originally, this talk was meant to be at the start of the day. I don’t know the reason for moving the keynote, but it seemed to work better this way. This talk was well suited to the end of the day, giving us a final piece of advice to ensure that we got the most out of the conference.

Deep Dive F – Building Robust Automation Frameworks by Bas Dijkstra

For the final deep dive session, I chose to attend Bas Dijkstra’s session on building automation frameworks. Bas walked us through a series of steps to set up a basic automated test and then improve on it. Most of my experience with test automation is self-taught, so it was interesting to see what steps someone else would follow. It confirmed that I am following recommended steps and filled in gaps in my knowledge.

Iteration 1 – creating a basic test using record and playback
Once this was done, Bas highlighted some potential issues, such as all the steps being in one method, everything being hard-coded, and there being no browser management.
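
Bas built his examples up live; something like this naive Selenium script (my own reconstruction in Python, not his actual code) illustrates the iteration-1 starting point:

```python
# Iteration-1 style test: one method, hard-coded values, no browser
# management. A reconstruction for illustration, not Bas's code.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login():
    driver = webdriver.Chrome()
    driver.get("https://example.com/login")  # hypothetical URL
    driver.find_element(By.ID, "username").send_keys("john")
    driver.find_element(By.ID, "password").send_keys("demo123")
    driver.find_element(By.ID, "submit").click()
    assert "Welcome" in driver.page_source
    # Note: the browser is never closed - one of the issues highlighted.
```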

Iteration 2 – Better browser management
Ensure that the browser is closed down in a tear down script once the test has been run.
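
One common way to guarantee this, assuming a pytest-based framework (my example, not necessarily what Bas used), is a fixture whose teardown always runs:

```python
import pytest
from selenium import webdriver

@pytest.fixture
def driver():
    driver = webdriver.Chrome()
    yield driver
    driver.quit()  # teardown runs even if the test fails

def test_login(driver):
    driver.get("https://example.com/login")  # hypothetical URL
```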

Iteration 3 – Waiting and synchronisation
Implement a timeout and waiting strategy, for example “all elements should be visible within 10 seconds”. If this does not happen, a timeout exception should be thrown.
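
With Selenium’s explicit waits, the “visible within 10 seconds or fail” rule looks something like this (a sketch; the locator and URL are hypothetical):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical URL

# Raises TimeoutException if the element is not visible within 10 seconds.
element = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.ID, "submit"))
)
```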

Iteration 4 – Page objects
Makes tests more readable by separating out the flow of the tests. This makes it easier to update and maintain tests.
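
A minimal page object might look like this (my own illustration of the pattern): the locators live in one place and the test reads as a flow.

```python
from selenium.webdriver.common.by import By

class LoginPage:
    def __init__(self, driver):
        self.driver = driver

    def login_as(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()

def test_login(driver):  # driver supplied by a fixture like the one above
    driver.get("https://example.com/login")  # hypothetical URL
    LoginPage(driver).login_as("john", "demo123")
    assert "Welcome" in driver.page_source
```

If the login form changes, only LoginPage needs updating, not every test that logs in.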

Iteration 5 – Test data management
Each test run will change the data, so there needs to be a way to create and control the required test data. One option is to reset the database; it is worth talking to the developers, who could provide something to make this possible.
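
As a sketch of what such a developer-provided hook might look like from the test side (the endpoint is entirely hypothetical):

```python
import pytest
import requests

@pytest.fixture(autouse=True)
def clean_test_data():
    # Reset the database to a known state before every test.
    requests.post("https://example.com/test-api/reset-database")
    yield
```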

Iteration 6 – Quick! More tests!
Make the tests data-driven so the data can be varied. Using the same values doesn’t prove much once the test has already been run. Data-driven testing allows alternative data values to be used, allowing more edge cases to be covered.
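
With pytest this is typically done with parametrize; the same flow runs once per data row (the values here are invented for illustration):

```python
import pytest

@pytest.mark.parametrize("username,password,expected", [
    ("john", "demo123", "Welcome"),
    ("john", "wrong-password", "Invalid credentials"),
    ("", "", "Username is required"),
])
def test_login(driver, username, password, expected):
    driver.get("https://example.com/login")  # hypothetical URL
    LoginPage(driver).login_as(username, password)  # page object from above
    assert expected in driver.page_source
```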

Iteration 7 – Narrowing the scope
Run data driven tests through the API to speed up tests and make the tests more efficient.
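
The same data-driven checks, pushed down to the API layer, avoid the browser entirely (endpoint and payload are hypothetical):

```python
import pytest
import requests

@pytest.mark.parametrize("username,password,expected_status", [
    ("john", "demo123", 200),
    ("john", "wrong-password", 401),
])
def test_login_api(username, password, expected_status):
    response = requests.post(
        "https://example.com/api/login",
        json={"username": username, "password": password},
    )
    assert response.status_code == expected_status
```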

Iteration 8 – Service virtualisation
Dependencies aren’t always accessible, which can affect robustness. Use a fake or virtualised service to keep the test environment under control.
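
At its simplest, this can be sketched in Python with the responses library, which replaces a real HTTP dependency with a canned answer (dedicated tools like WireMock do this at much larger scale; the service URL below is made up):

```python
import requests
import responses

@responses.activate
def test_with_virtualised_dependency():
    # The unreachable third-party service is replaced by a fake response.
    responses.add(
        responses.GET,
        "https://thirdparty.example.com/exchange-rate",  # hypothetical
        json={"rate": 1.25},
        status=200,
    )
    rate = requests.get("https://thirdparty.example.com/exchange-rate").json()["rate"]
    assert rate == 1.25
```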


Bas Dijkstra, Poornima Shamareddy and Wayne Rutter (UKSTAR talks Day 2, Part 1)

We are now on the second and final day of the UKSTAR talks. Thankfully, I’d been able to get a decent night’s sleep despite attending the networking evening that had been arranged at a local pub. I arrived nice and early, in time for breakfast and the Lean Coffee event at the huddle area. This was followed by another day of brilliant talks.

Keynote 4* – Why do we automate?
Bas Dijkstra

This is a brilliant question that many neglect to ask. In this talk, Bas discusses the common mistakes that occur in test automation such as attempting to automate everything. Instead, we should be asking questions about why we want to automate so we are sure that we are doing the right thing. It is not a case of one size fits all, the approach to test automation will be different for each project. Automation is wonderful but only if done right.

It is important to, every now and then, step back and ask “why?”

  • Why do we need automation?
  • Why this tool?
  • Why these tests?
  • Why this percentage of tests?

Only with a satisfactory answer should we proceed.

Bas used the following quote from the movie Jurassic Park to summarise his point:

“Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

Ian Malcolm, Jurassic Park

Bas also used another brilliant analogy: test automation is like bubble wrap. It’s fun to play with, but it has its limits and can give a false sense of security. He discusses this analogy in more detail here.

*This was originally scheduled to take place at the end of the day not the beginning. I’ve labelled this as keynote 4 to match the program even though it took place before keynote 3.

Cognitive Testing Talks
Cognitive Testing – Insight Driven Testing Powered By ML by Poornima Shamareddy
Cognitive QA by Wayne Rutter

The next 2 talks were on very similar topics from the modern tester track.

The first talk, Cognitive Testing – Insight Driven Testing Powered By ML by Poornima Shamareddy, covered the development of a self-learning system that used data mining and pattern recognition to rank and prioritise test cases. Poornima walked us through the process of developing the application, including the benefits achieved.

The second talk, Cognitive QA by Wayne Rutter, discussed an investigation into ways to identify the amount of test resource required for each area of an application. Wayne went into great detail about some of the different machine learning methods, including supervised learning (such as classification) and unsupervised learning (such as clustering). This talk was especially impressive as Wayne was part of the Speak Easy programme and had never given the talk before.
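
Neither speaker shared code, but the two families of methods Wayne described can be illustrated with a scikit-learn toy example (the features, labels and data below are entirely invented):

```python
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

# Invented data: [module size, past bug count] per application module.
features = [[120, 3], [900, 14], [150, 2], [1100, 17]]

# Supervised (classification): learn from labelled history.
labels = ["light", "heavy", "light", "heavy"]  # past testing effort
classifier = DecisionTreeClassifier().fit(features, labels)
print(classifier.predict([[1000, 12]]))  # -> likely "heavy"

# Unsupervised (clustering): no labels, group similar modules.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(features)
print(clusters)  # e.g. [0, 1, 0, 1]
```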

Artificial intelligence and machine learning are concepts I’ve not covered since university. My memory of the subjects is a little vague, but both talks provided enough basic information about machine learning techniques to be informative regardless of prior knowledge. It was interesting to see examples of practical applications of artificial intelligence, natural language processing and machine learning from a test perspective.

Gerie Owen and Fiona Charles (UKSTAR talks Day 1, Part 3)

Here is the final blog post for day 1 of the UKSTAR conference. This includes Gerie Owen’s talk on wearable technology and Fiona Charles’ keynote on the positives and negatives of disruptive technologies.

A Wearable Story: Testing the human experience
Gerie Owen

Gerie started the talk by describing her experiences running the Boston Marathon. After completing the gruelling race, she found that her time had not been logged correctly: the chip she was wearing was defective. It is easy to imagine how frustrating this must be for someone who has just run 26 miles.

Gerie uses this story to explain what a wearable is and the importance of ensuring that the user gains some value from a wearable technology. In this example, it is clear that no value was gained from the wearable chip used in the Boston Marathon.

Wearables require some kind of human interaction for value to be achieved. Gerie demonstrated how to set up personas that can be used for testing a wearable device. These personas contain details about the life, goals and expectations of someone who is likely to use the device. This information is used to create user value stories.

The best outcome of a user story is one where someone gets value. The worst outcome is where no value is achieved at all.

This is the second time I’d heard Gerie Owen speak. The first time was at the spring 2018 Online Test Conf where she gave another brilliant talk on continuous testing.

Technology’s feet on society’s ground
Fiona Charles

The second keynote, and final talk of the day, was given by Fiona Charles, who asks the question: Is ‘disruptive’ a good term?

When it is, it can lead to positive change. However, it could also lead to unintended negative outcomes. Technology reaches everywhere into society but often reflects and favours the privileged. Biases and discrimination can lead to some of these negative outcomes.

It is common for students to have to receive and submit homework via the internet. This is convenient for both the students and the teachers, but what about students who don’t have access to a computer?

Self-service checkouts have led to a more efficient shopping experience for both staff and customers. However, this has led to more fruit and vegetables being stored in plastic packaging so they can be scanned more easily.

We can now buy items online and return them just as easily. But in a lot of cases the returns are just being sent to landfill. In addition, more parcels mean more delivery vehicles. This has resulted in an increase in traffic congestion.

Fiona also includes some more dangerous examples, like an aircraft which almost crashed because the auto-pilot malfunctioned and the pilot was not able to override it easily. As technology advances, we need to think about how much human intervention should be retained.

There are potential ethical implications of technology, especially as Artificial Intelligence starts to gain prominence. We must question assumptions, biases, objectives and decisions. We must be asking:

  • Should we build this?
  • Is it right to build this?
  • What could go wrong?

Peet Michielsen, Joep Schuurkes and Viv Richards (UKSTAR talks Day 1, Part 2)

Here is the next blog post where I provide summaries for the talks I attended at the UKSTAR software testing conference. This covers the first 3 talks in the automation track. Peet Michielsen talks about how to fix the ‘leaks’ in the test pipeline, Joep Schuurkes shares a few tips on how to choose or design a test automation framework, and Viv Richards introduces us to visual regression testing.

When you’re moving house and the pipeline needs re-plumbing
Peet Michielsen

Peet Michielsen walks us through his journey from one company, where he set up a release pipeline from scratch, to a new company which already had a pipeline in place but with several ‘leaks’. He talked about some of the challenges he faced at the new company in fixing these leaks. Finally, he gave some tips for improving the release pipeline.

This talk leaned heavily on the plumbing analogy. Generally, Peet was demonstrating the importance of allowing the project to ‘flow’ and not be held up by delays in testing. Replacing anything obsolete and introducing reliable test automation are a couple of ways to improve this flow. The use of test automation and continuous integration is what makes a software project ‘flow’.

Peet did not refer to any particular tools or technologies that he uses. This helped him demonstrate that his ideas could be applied to most projects regardless of the tools and processes they already use. This seems like a good idea as I am often put off by talks that focus strongly on technologies which are unsuitable for my current work. It can distract people from the actual message. The ideas that were presented in this talk could easily be applied to any test project.

What to look for in a test automation tool
Joep Schuurkes

Joep started off this talk by discussing some of the issues he had with previous test automation tools and why this led him to build his own framework, which solved most of the issues he had been having. His new framework, created in Python, used a mixture of existing commands as well as newly developed ones, gluing well-established tools and libraries together.

Throughout the talk, Joep showed us how he completed some of the following test activities using his framework: Create, Read, Debug, Run and Report. With each activity, he provided some great tips that can be used to improve a test automation framework.

Some of my favourite tips include:

  • Naming tests well can clarify a test’s intent. It can also make it easier to notice gaps and patterns in the test coverage and when running the tests (see the sketch after this list).
  • A test does one thing only; this keeps things clear and focused.
  • When a test fails, can you see the failed step? Do you have the required information? Can you understand the information provided? There is no such thing as too much information, so long as it’s well structured.
  • Never trust a test you haven’t seen fail.
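
Here is what the first two tips might look like in practice (my own sketch; `api_client` is a hypothetical fixture, not part of Joep’s framework):

```python
def test_login_fails_with_expired_password(api_client):
    # The name states the intent; reading the test list makes coverage
    # gaps visible ("what about locked accounts?").
    response = api_client.login("john", "expired-password")  # hypothetical client
    # One thing only: this test checks the expired-password case
    # and nothing else.
    assert response.status_code == 403
```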

And finally, the most important piece of advice: Forget about the shiny, be impressed but ask … Is your tool helping you to do better testing?

Spot the Difference; Automating Visual Regression Testing
Viv Richards

Why do we use test automation? It is more reliable as it’s performed by tools and scripts, meaning that the risk of human error is dramatically reduced. However, it does have its issues, especially when testing the UI. A large amount of investment is required, it’s only practical for tests that need repeating, and with no human observation there is no guarantee that the UI is user-friendly.

One popular pattern used in test automation is the page object model. The issue with this model is that the locations and visual attributes of elements are not usually checked. We played a game of spot the difference with two versions of the same GUI. The audience could easily spot most of the ‘mistakes’. There were about 10 in total, but only 4 would have been picked up by test automation using the page object model. The things missed included additional spaces between elements, text styles and fonts, and changes to colours or images on the page.

Viv then went on to demonstrate how a screenshot of a GUI can be compared against previous versions of the GUI as part of test automation, so that the software team can be alerted to minor changes in the software a lot sooner. These tests, run repeatedly on future versions of the application, can bring additional value to the software project.
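
A bare-bones version of the idea, using Selenium screenshots and Pillow (my own sketch, not the tooling Viv demonstrated):

```python
from PIL import Image, ImageChops
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com")  # hypothetical URL
driver.save_screenshot("current.png")
driver.quit()

baseline = Image.open("baseline.png")  # approved screenshot from an earlier run
current = Image.open("current.png")

# getbbox() returns None when the two images are pixel-identical.
diff = ImageChops.difference(baseline, current)
assert diff.getbbox() is None, "visual difference detected - review needed"
```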

Angie Jones and Anne-Marie Charrett (UKSTAR talks Day 1, Part 1)

Here is the first blog post where I discuss the talks I attended at the UKSTAR 2019 conference. This covers the first keynote by Angie Jones, and the Deep Dive session run by Anne-Marie Charrett.

Keynote 1 – The Reality of Testing in an Artificial World
Angie Jones

This first keynote of the conference was given by the amazing Angie Jones. I confess, I’d already watched this talk once before at TechWell’s STAREAST conference last year. They make a selection of their talks available online to watch on demand for a few months after the event, and this was one of the talks I chose to watch. Angie is such an engaging speaker that, even though I knew the story she was going to tell, I was still on the edge of my seat wondering what was going to happen next.

Angie challenges the misconception that an application doesn’t need testing because the “AI is doing it!”. She presented several examples where machine learning has gone wrong, which may indicate that the application was not tested adequately. This is especially worrying as there may come a point where AI is incorporated into applications where reliability is essential, for example an application that predicts whether a patient is likely to get cancer. Some applications are too important not to test, so we cannot rely on them just working.

So how do we test it? Angie walks us through the process she followed when testing an AI application for the first time.

  • First, she learnt how the application works. This is really important as AI has no pre-determinable results; we need to know how the application reaches a result and test that it is calculated correctly.
  • Second, we train the system and check whether the outcome is correct. Using test automation, we generate large amounts of data, feed it into the system, and test whether the outcome matches what we expect. We repeat this multiple times with different sets of training data (a toy sketch of the idea follows).
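
A toy version of that idea in Python (my interpretation of the approach, not Angie’s code) might look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=42)

def generate_training_data(n):
    # Hypothetical rule the application is supposed to learn:
    # transactions over 1000 are "risky" (label 1).
    amounts = rng.uniform(0, 2000, size=(n, 1))
    labels = (amounts[:, 0] > 1000).astype(int)
    return amounts, labels

X, y = generate_training_data(5000)
model = LogisticRegression().fit(X, y)

# Test the method, not one memorised answer: fresh generated data
# with known outcomes, repeated with different seeds in practice.
X_check, y_check = generate_training_data(1000)
accuracy = model.score(X_check, y_check)
assert accuracy > 0.9, f"model accuracy too low: {accuracy}"
```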

AI is all about calculating results that cannot be pre-determined. We should be testing the method for calculating that result, not the outcome itself. If we are putting all our faith in an AI application and relying on it to get the correct result, how can we NOT test it? People often ask if AI is something to be feared. Without testing, the answer is yes, AI is something that should be feared.

Deep Dive – API Exploratory Testing
Anne-Marie Charrett

When attending this conference, my main focus was on test automation. However, I also wanted to learn something new. I was attracted to this deep dive session because API testing is something I’ve not done much before and really think I should start doing. Also, with so much focus on test automation, it was nice to learn something new about exploratory testing for a change. After all, both are equally important.

This talk started by examining how the use of mind maps can encourage testers to take a more systematic approach to exploratory testing. Anne-Marie started with the GUI and then went on to demonstrate how the same idea can be used to explore the API, walking us through examples of tests that might be carried out.

This talk was especially useful for API testing novices, as she taught us how basic API commands work and how they can be used to test the API layer within an application. It was very basic, but useful for getting us started. Exploratory testing is all about learning and gaining experience, which makes it an excellent method for learning more about both the application being tested and API testing itself.
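
The kind of basic call we explored can be reproduced with Python’s requests library (the URL and ids below are hypothetical):

```python
import requests

# What do we get for a user id that exists?
response = requests.get("https://example.com/api/users/1")
print(response.status_code)
print(response.json())

# Hypothesis: a non-existent id should give 404, not 200 or 500.
response = requests.get("https://example.com/api/users/999999")
print(response.status_code)
```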

Having never attended a deep dive session, I didn’t know what to expect. I really enjoyed the interactivity of the session; Anne-Marie is very good at encouraging audience participation. We were encouraged to suggest tests to run. During each ‘test’, before actually revealing the outcome, she would ask the audience “What’s the hypothesis?”, encouraging us to think and talk about what we’d expect the outcome to be.

Women in Tech, Diversity and Inclusion (UKSTAR Huddle area discussion)

In this post, I will be continuing to share my experiences at the UKSTAR software testing conference. Previously I wrote about the lean coffee event that took place at the UKSTAR huddle area. This was an area designated for chilling out, playing games, meeting new people and discussing various test related topics.

Another discussion event on diversity and inclusion, mainly focused on International Women’s Day, was organised to take place in the huddle area. It was a popular event; the number of attendees was so high that we had to move to one of the other rooms, where it was quieter.

The session took a similar format to the lean coffee session. We were given post-it notes and asked to write down a few topics and stick them on the whiteboard. We then voted on the topics we wanted to discuss. Because so many people were involved, we spent more time on each topic: lean coffee normally allows about 5 minutes per topic, but we spent 15 minutes per topic during the 30-minute session.

Similar to my lean coffee blog post, in this post I am attempting to give a summary of the discussion that took place rather than just presenting my own opinions.

Why do our peers not think diversity is important?

The person who suggested this topic started out by stating why diversity is important. It has been suggested that diverse teams outperform non-diverse teams. She also talked about some unfortunate experiences. For example, she once told some male colleagues about a ‘Women Who Code’ event. The response was “You mean women who can’t code?”. While it was clearly meant as a joke, it was something that has stuck with her.

In my experience, I feel that I am well respected by my male colleagues. Despite this, I do often feel that the issue is not taken seriously. Sometimes it is even joked about, although I’ve never heard any jokes as unfortunate as the one described previously.

I think the reason some may not see this as an issue is that it doesn’t affect them as much. They don’t know what it’s like to be a woman in a team dominated by men. If someone has never experienced a situation where they are not in a privileged group, how can they understand the situation and its issues?

It can be really scary for a woman starting work in a new team, especially if it is a male-dominated team. However, someone also pointed out that this situation can be just as scary for men. There can be a fear of saying the wrong thing, not knowing what to talk about, or not knowing how to treat women. While some may not understand the importance of diversity, most men don’t want to be seen as ‘macho’ or to deliberately exclude women.

The discussion ended with the question “Why do we have to win the respect of men?”, the response was another question: “What is the alternative?”.

I don’t want to be seen as being here because of a quota

This topic was kicked off with the question, “How would you feel if you found out you only got a job because of a quota?” The answer given was ‘insulted’, a sentiment shared by most people present. This is unsurprising as I believe most would prefer to get a job on their own merit.

If I ever found myself in this position, I would strive to prove myself and earn the respect of my peers. I would show that my skills and experience alone show that I was worth hiring.

I then asked the question “Would you turn down a job if you found out you were offered it because of a quota?” I stated that, while I would be offended, it would depend entirely on how desperate I was for that job. Several people in the room agreed that they would be unlikely to turn down an opportunity if they were offered it to meet a quota.

Someone then pointed out that men have advantages that women don’t have. It is often easier for men to progress in certain areas. It is not a level playing field. If there is just one thing that gives women an advantage, why should we not use it?

The discussion then moved on to why there might need to be a quota in certain cases. Sometimes, excuses are made like “No one else applied” or “Don’t know any!”. This is probably where the problem lies. Why are women not applying for these roles? Could there be someone putting them off? No one likes quotas, however sometimes they can encourage employers to actually seek out specific candidates who meet a certain criteria and have the skills required for the role.

Someone suggested the possibility that recruiters may be biased when pre-screening CVs. We are all aware of the infamous AI recruiting tool used by Amazon to screen applicants, which was biased against women. It is now becoming more and more common for personal details that could reveal a person’s name, gender or race to be removed from CVs before they are passed on to employers.

Summary

It was great to attend such a lively discussion on gender diversity in certain industries. It was encouraging to have both men and women among those who attended. I remember talking to someone afterwards who said that debates like this can go on forever. This is definitely true, as both of these topics could easily have gone on for several hours; we could have continued the discussions all afternoon if the room hadn’t been required for another talk. Even just 15 minutes of discussion for each topic was enough to give me a lot to think about.

Attending Conferences and Testers learning to code (UKSTAR Huddle Area – Lean Coffee)

The UKSTAR conference is sadly over, but what an experience it was. As well as attending some amazing talks, I also took the time to see all the exhibitors, meet and speak to many fellow attendees, and visit the huddle area.

The huddle area included a ‘duck pond’ where anyone could enter a competition to win a UKSTAR water bottle (I managed to win one 😊), several board games, and opportunities to discuss a variety of testing topics. One particular event I took part in was the lean coffee session on the Tuesday morning.

About six people were at the lean coffee event; apparently it is suggested that there be no more than five, but we seemed to be OK with the extra person. To start, we were given post-it notes and asked to write down a few topics and stick them on the whiteboard. We then voted on which topics to talk about. Each discussion lasted about 5 minutes, with the option to extend if everyone agreed they’d like to continue. Because of the small group, everyone was given a chance to say something about each topic.

In this post, I am attempting to give a summary of the discussion that took place rather than just present my own opinions.

Why do we attend conferences? How do we know if they are worth it?

The first topic chosen was “How do we know if teams are learning from conferences?”, although this was merged with another suggestion, “What conference do you plan to attend next?”.

Not everyone can communicate with confidence what they’ve learnt or experienced when attending a conference. So how do managers and businesses know that the investment was worth it? Most mentioned that they were required to either write a report or give a presentation on what they’d learnt. I’ve never shied away from presenting my findings to a team, whether from personal research, experiences at work, or attending events like conferences.

I suggested that attending a conference can add to a colleague’s personal development, which can improve the way they work. Networking, discussions, and being outside their comfort zone can add to their confidence and communication skills. I’m sometimes worried that I won’t gain anything from attending a conference, which would be disastrous for myself, the business, and any colleagues who may wish to attend similar events in the future. Fortunately, this has never happened.

We all discussed our reasons for attending a conference and how we managed to get support from our managers to attend. I focused on specific talks and what I expected to gain from each one. The reasons were mainly focused on learning and networking. Andrew Brown, one of the speakers at the conference, was among those present and said that he can often only attend a conference if he is speaking. This highlights the issue that people often only have the opportunity to attend conferences if they work at a company willing to invest in its employees that way.

Andrew gave some brilliant advice during the discussion: “Never go to a conference session where you already know the answer”.

How can we find out how we learn?

This question was the basis for our second topic. It is definitely one that is hard to answer. It is something that a lot of people can take years to figure out.

It was mentioned how it can be difficult to engage certain colleagues, especially when giving presentations. There is often that one person who struggles to understand, maybe because they are disinterested or because presentations aren’t the best learning tool for them.

Some prefer to read a book or article, some like to listen to podcasts or watch videos, some like to discuss. The preferred methods for learning can be very diverse.

Do Testers need to learn to code?

This was a topic I suggested, so I was asked to open the discussion. I suggested two different avenues for discussion: coding for test automation and coding for manual testing.

With test automation it can be easily argued that coding and programming skills are essential. However, with the existence of ‘code-less’* automation tools it may be easy to suggest that we don’t even need to know how to code for that anymore.

With manual testing, a tester technically does not need to know how to code. However, knowledge of the basic programming constructs can make it easier for them to understand the changes being made to the application and therefore improve their testing process.

A couple of people suggested that, even though it isn’t essential for testers to know how to code, having that skill can be good for their career development. Times are changing, which means their job is also likely to change significantly. Knowing how to code can keep a tester’s options open and allow them to keep up with the times.

Someone made the distinction between reading code and writing code. It was suggested that there were huge benefits in including testers in code reviews. For this, the tester only needs to know how to read code.

It is a topic discussed often throughout the testing community, and one that can go in any direction.

Summary

This was the first lean coffee session I’d ever attended. Discussions are short and quick and, with no pre-planning, can go in any direction. With a group of attendees who have never met, there can be an interesting mix of diverse opinions. I will definitely attend one again if the opportunity arises.

This is the first of a series of blog posts that will cover my experiences at the UKSTAR conference. Feel free to comment with your own thoughts on some of the topics we discussed at the Lean Coffee session.

*I see the term code-less as a misnomer when it comes to test automation. The code may be invisible to the tester, but it still exists in the background. The code is generated automatically as the automated tests are developed.  

Main image from https://www.publicdomainpictures.net