Is it too early to think about responsible AI development?

The author of this text, Minna Mustakallio, oversaw Saidot’s product development. She has 20 years of experience in various design and management roles in the digital industry. She is a key contributor to the Ethics Challenge of Finland’s Artificial Intelligence Programme and a member of the transparency working group in IEEE’s Ethics Certification Program for Autonomous & Intelligent Systems. Like the rest of the Saidot team, Minna has extensive international experience – as a matter of fact, she is an Australian passport holder.

Time for responsible AI development

After you finish training an AI system, the requirement to shed light on the inner workings of the algorithmic system or application you just created is possibly the last thing on your mind. However, you might have a nagging feeling that somebody will ask why your model spits out the results it does. Or that someone would like to know how your team handled possible bias or, heaven forbid, why the whole system exists in the first place and how it answers the original problem. But what if you turned this around and started working through these questions from the beginning?

There is a lot of high-level discussion about AI ethics and responsible AI – it looks like the AI hype is getting a formidable counterforce as cases of not-so-good use of data and algorithms keep appearing. Regulators are also waking up, and the first takes on regulating automated decision-making are following in the footsteps of the GDPR. Canada’s Directive on Automated Decision-Making, the EU guidelines, and the US Algorithmic Accountability Act bill are just the very beginning.

Despite these initiatives, the keys to ethical AI creation are not in the hands of regulators – even though regulation should be regarded as a welcome way to set limits on the use of AI technologies – they are in the hands of the people who create the systems: the teams of data scientists, designers, project owners, domain experts, developers, sociologists…

Ethics is not an outcome; it is thinking about what is right and what is wrong and acting accordingly. AI creation becomes ethical when AI systems are designed and built in a mindful way, making sure that essential questions are raised and that the people who should be included are included. Ways of working should centre on collaboration through asking and answering questions about the system being built. Documentation should be something that happens mostly during the process – increasing the understanding of everyone involved and leading to better, more mindful design and implementation decisions. When everyone is actively thinking about consequences, the work becomes consequence-driven.

Do not wait for regulations or news headlines about irresponsible AI development – establish a culture, processes and ways of working that promote a shared understanding of the impact of the systems you design and train together.

No tool can substitute for commitment, integrity or empathy – but I hope that Saidot’s platform can help you navigate the essential questions and accountabilities needed for collaboration in this age of mindful people building mindless algorithms.

Try it with the next application you build, or run your existing system on the platform. Involve your whole team in making sure you are clear about the purpose, behaviour, consequences and accountabilities of your system. And don’t hesitate to give feedback and input along the way!