The Artificial Intelligence Act, also known as the AI Act, is a proposed regulation of the European Union. The Act was introduced by the European Commission on April 21, 2021 with the aim of establishing a unified legal and regulatory framework for artificial intelligence across all sectors. The AI Act adopts a risk-based approach, in which the obligations imposed on a system are proportionate to the level of risk the system poses. The Act distinguishes the following categories of systems: unacceptable-risk systems, which are prohibited in the EU; high-risk systems, which are subject to stricter obligations and conformity assessment requirements; general-purpose AI systems and foundation models, which are subject to tailor-made requirements; limited-risk systems, which are subject to transparency obligations; and minimal- or no-risk systems, which fall outside the regulatory requirements of the Act. The AI Act is currently in its final phase, the trilogue: the concluding negotiations between the European Parliament, the European Commission, and the Council of the European Union. A final version of the AI Act is expected by the end of 2023.
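To make the tiered structure concrete, the following is a minimal sketch of the Act's risk taxonomy expressed as a data structure. The tier names and obligation summaries follow the paragraph above; the example use cases and the `obligations` helper are illustrative assumptions, not part of the Act itself.

```python
# Sketch of the AI Act's risk-based taxonomy. Tiers and obligation levels
# follow the text above; the example use-case mapping is illustrative only.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited in the EU"
    HIGH = "strict obligations and conformity assessment"
    GPAI_FOUNDATION = "tailor-made requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "no obligations under the Act"


# Hypothetical mapping from example use cases to tiers, for illustration.
EXAMPLE_USE_CASES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening tool for recruitment": RiskTier.HIGH,
    "general-purpose text generation model": RiskTier.GPAI_FOUNDATION,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}


def obligations(use_case: str) -> str:
    """Look up the risk tier, and thus the obligation level, for a use case."""
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"


if __name__ == "__main__":
    for case in EXAMPLE_USE_CASES:
        print(obligations(case))
```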
The Act regulates AI systems placed on the market or put into service in the EU. Its main obligations fall on providers and deployers of AI systems. Providers are natural or legal persons, including public authorities and agencies, that develop AI systems in order to place them on the market under their own name or trademark. Deployers, on the other hand, are similar entities that use AI systems under their authority, with the exception of use in the course of personal, non-professional activities.
The majority of the regulatory obligations in the Act are directed at high-risk AI systems. The Act classifies as high-risk certain standalone AI systems, as well as AI systems used as safety components of products, AI systems embedded in products, and systems that are themselves products subject to third-party conformity assessment under sectoral legislation. The list of standalone high-risk areas includes biometric identification and categorisation of natural persons; management and operation of critical infrastructure; education and vocational training; employment, including workers management and access to self-employment; access to essential private and public services; law enforcement; migration, asylum, and border control management; and the administration of justice and democratic processes. High-risk AI systems will be subject to mandatory requirements both before and after they are placed on the market. These requirements include establishing a risk management process to identify and mitigate risks (Art 9); implementing appropriate data governance and management practices (Art 10); producing technical documentation that enables assessment of the system's compliance (Art 11); keeping records through automatic logging of the system's operation (Art 12); providing transparency and information that enables users to understand and confidently use the system (Art 13); enabling human oversight of the system's operation (Art 14); ensuring an adequate level of accuracy, robustness, and cybersecurity (Art 15); and implementing a quality management system, comprising written policies, procedures, and instructions, to ensure compliance with the regulation (Art 17).
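As an illustration, the set of requirements above can be thought of as a compliance checklist keyed by article number. The sketch below models it that way; the article numbers and requirement names follow the paragraph above, while the checklist structure and the `outstanding` helper are illustrative assumptions, not something prescribed by the Act.

```python
# Sketch of the high-risk requirements as a compliance checklist.
# Article numbers follow the text above; the structure is illustrative.
from dataclasses import dataclass, field


@dataclass
class Requirement:
    article: int
    name: str
    satisfied: bool = False


@dataclass
class HighRiskChecklist:
    requirements: list = field(default_factory=lambda: [
        Requirement(9, "risk management system"),
        Requirement(10, "data and data governance"),
        Requirement(11, "technical documentation"),
        Requirement(12, "record-keeping (automatic logging)"),
        Requirement(13, "transparency and information to users"),
        Requirement(14, "human oversight"),
        Requirement(15, "accuracy, robustness and cybersecurity"),
        Requirement(17, "quality management system"),
    ])

    def outstanding(self) -> list:
        """List the requirements still to be evidenced before market placement."""
        return [f"Art {r.article}: {r.name}"
                for r in self.requirements if not r.satisfied]


if __name__ == "__main__":
    checklist = HighRiskChecklist()
    checklist.requirements[0].satisfied = True  # e.g. risk management documented
    for item in checklist.outstanding():
        print(item)
```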
The specific requirements for general-purpose AI systems and foundation models are still under discussion. The European Parliament's revision of the AI Act introduced new obligations for providers of foundation models, including ensuring robust protection of fundamental rights, health, safety, the environment, democracy, and the rule of law. Providers of foundation models would also be required to assess and mitigate risks, comply with design, information, and environmental requirements, and register in an EU database. Generative AI systems would additionally be subject to further transparency requirements. Limited-risk systems, on the other hand, must comply with information and transparency obligations; a sketch of what such a disclosure could look like follows below. Limited-risk systems cover AI systems intended to interact with natural persons, systems capable of emotion recognition or biometric categorisation, and systems that generate or manipulate image, audio, or video content, such as deepfakes. Finally, all systems not falling within the scope of unacceptable-risk systems, high-risk systems, foundation models or generative AI, or limited-risk systems are categorised as minimal- or no-risk. Under this risk-based framework, the majority of AI systems are expected to fall into the minimal- or no-risk category.
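The transparency obligation for limited-risk systems essentially amounts to disclosure: telling people they are interacting with an AI system, and labelling generated or manipulated media. The following is a minimal sketch of one way such disclosures might be implemented; the function names, message wording, and metadata fields are hypothetical, since the Act specifies the duty to disclose rather than an implementation.

```python
# Sketch of limited-risk transparency obligations: AI-interaction disclosure
# and labelling of generated media. Names and wording are illustrative.
def chatbot_reply(model_reply: str, first_turn: bool) -> str:
    """Prepend an AI-interaction disclosure on the first turn of a conversation."""
    disclosure = "[Notice: you are interacting with an AI system.]\n" if first_turn else ""
    return disclosure + model_reply


def label_generated_media(metadata: dict) -> dict:
    """Tag generated or manipulated content (e.g. deepfakes) as AI-generated."""
    return {**metadata, "ai_generated": True,
            "disclosure": "This content was AI-generated."}


if __name__ == "__main__":
    print(chatbot_reply("Hi! How can I help?", first_turn=True))
    print(label_generated_media({"type": "image", "source": "model-x"}))
```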