What stands in the way of Ethical AI?
Who do we design for? It is probably not who you think (or want). We want to design for the end-user, but we are often promoting the views of the business stakeholders. If they don’t approve of something, then it can’t be delivered.
There are several false beliefs about AI:
- AI has created superhuman intelligence
- AI can be ethical
Both are false, and any attempt to achieve them will produce ethical zombies – something that cannot think for itself.
Designers need to account for the ethical design of AI applications. To achieve this, several ethical principles need to be established:
- The ability to be switched off at the request of the user
- Well-being: deploying the application doesn’t harm a human (physically or mentally)
- The ability for the user to report unfair outcomes
- It must be clear to the user that the application uses AI, and how the AI makes its decisions
- Awareness of misuse: it should be clear that the system can be misused and how, and the user should be able to report misuse when it has happened
Only when these principles have been implemented can a Minimum Ethical Product (MEP) be achieved.
Diverse representations in design and awkward conversations with colleagues
There is no such thing as a completely neutral tool. Everyone is guilty of unconscious bias, which can affect the design of products. A lack of representation of certain demographics can also lead to misunderstandings. To avoid this, we need to start having these awkward conversations so that representation becomes more accurate. Reach out to users and include them so that there is a better understanding of what they want and need.
Examples abound: stock photos that don’t represent real people, forms that only allow official names or male/female genders, providing the options ‘doctor male’ and ‘doctor female’ instead of just ‘doctor’ (why?). The list goes on.
These have become known as edge cases – people we don’t care about, or who don’t represent the main users. These are excuses we make when we don’t want to discuss certain people. Instead, we should use the term ‘stress cases’ – cases that need more attention.
Applications should be made for anyone to use, not just those we see as ‘normal’.
Several books were recommended. I’m currently reading ‘Technically Wrong’; a lot of the examples used in the talk are mentioned in this book, and I strongly recommend reading it. I’ve already ordered ‘The Politics of Design’ on Amazon.
A positive user experience and collaboration are essential in software testing. With all the software testing events I take part in, it is good to step back and think things through a little differently. Collaborate Bristol 2019 gave me the opportunity to do just that. I now have new avenues of research to explore, which will help expand my knowledge and experience in software testing.
Thank you to Simon Norris and the other organisers of Collaborate Bristol for an enjoyable and informative day.