This blog post is part of a series where we introduce some basic concepts of responsible AI to SMEs developing AI-based systems.
You may have heard that the most important ethical question is: why? Answering it will help you formulate how you think about, and communicate, the intended purpose of your AI product. You can also use supporting questions, for example about the intended users and the use context, to work through your intended purpose and document it in a way that others can understand.
Lawful AI respects all applicable laws and regulations. While specific AI regulations are being prepared in the EU and many other regions globally, AI is already regulated today by many legally binding rules at the EU, national and international levels. Some of the most important ones are: EU primary law (the Treaties of the European Union and its Charter of Fundamental Rights), EU secondary law (e.g. the GDPR, anti-discrimination directives and the Product Liability Directive), the UN human rights treaties and the Council of Europe's conventions (e.g. the European Convention on Human Rights), and various national laws.
In anticipation of the new EU AI regulation, some use cases are likely to come under stricter regulatory control or even be prohibited in the future. The proposed EU AI Act, for example, would prohibit AI practices such as manipulative subliminal techniques that cause harm, exploitation of the vulnerabilities of specific groups, social scoring by public authorities, and (with narrow exceptions) real-time remote biometric identification in publicly accessible spaces for law enforcement.
Ethical AI respects the values and norms of the environment where it is going to be used and of the people it affects. A 'value' denotes the importance of a thing or an action; values provide ideals and standards against which to evaluate things, choices, actions and events. Without knowing the intended purpose, the use context and the users, it can be difficult to say whether an AI system is ethical or unethical. This is why AI products should not be developed in isolation, but in frequent dialogue with stakeholders beyond the familiar ones. Only by exposing your ideas can you gain the feedback needed to understand how your product fits the norms of the people it affects.
More and more often, organizations have their own ethical AI principles that articulate the values important to them and help guide ethical considerations. If your organization does not have its own, you can use the OECD AI Principles as a checklist: inclusive growth, sustainable development and well-being; human-centred values and fairness; transparency and explainability; robustness, security and safety; and accountability.
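As a minimal illustration of how such a checklist might be used in practice, here is a short Python sketch. The principle names follow the OECD AI Principles; the review function, product name and notes are purely hypothetical examples, not a prescribed process:

```python
# Illustrative sketch only: the principle names follow the OECD AI Principles;
# the product name, notes and review function are hypothetical examples.
OECD_AI_PRINCIPLES = [
    "Inclusive growth, sustainable development and well-being",
    "Human-centred values and fairness",
    "Transparency and explainability",
    "Robustness, security and safety",
    "Accountability",
]

def review_against_principles(product_name, notes):
    """Print a simple checklist showing which principles have been considered."""
    print(f"Ethical review checklist for: {product_name}")
    for principle in OECD_AI_PRINCIPLES:
        if principle in notes:
            print(f"- {principle}: documented ({notes[principle]})")
        else:
            print(f"- {principle}: not yet considered")

# Hypothetical usage: record how each principle is addressed in your product.
review_against_principles(
    "invoice-anomaly-detector",
    {"Transparency and explainability": "model decisions are logged and explained to users"},
)
```

Even a lightweight record like this makes it visible which principles you have actually discussed with stakeholders and which still need attention.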
To summarize, the first ethical consideration in your AI project is to ensure that you are working on an economically, socially and environmentally sustainable mission, taking into account the full range of stakeholders affected by your system.