Artificial Intelligence Act (AI Act)

European Commission

AI Act
Regulatory proposal
Private and public sector

Policy overview

The Artificial Intelligence Act, also known as the AI Act, is an AI law of the European Union. The Act was proposed by the European Commission on April 21, 2021, with the aim of establishing a unified regulatory and legal framework for all sectors and types of artificial intelligence. The AI Act adopts a risk-based approach in which the obligations placed on a system are proportionate to the level of risk that the system poses. The Act distinguishes the following categories of systems:

* Unacceptable systems, which are prohibited in the EU.
* High-risk systems, which are subject to stricter obligations and conformity assessment requirements.
* Generative AI and general-purpose models, which are subject to transparency requirements and compliance with copyright law, with the exception of high-impact general-purpose AI models that might pose systemic risk. Such systems must undergo thorough risk assessment, incident reporting, testing and evaluation, ensure cybersecurity, and provide information on energy consumption.
* Providers of AI systems that interact with natural persons or create synthetic content, and deployers of emotion recognition, biometric categorisation, and deepfake systems, as well as certain AI systems manipulating text, which are subject to transparency obligations.

The AI Act is currently in its final phase: the trilogues have been concluded and political agreement has been reached. The final version of the AI Act is expected to be published in early 2024. The AI Act will transition in the following stages:

* The ban on prohibited AI practices applies 6 months after the AI Act comes into force.
* Obligations concerning general-purpose AI models apply 12 months after the AI Act comes into force.
* Requirements and obligations concerning providers of standalone high-risk systems listed in Annex III apply 24 months after the AI Act comes into force.
* Transparency obligations of providers of AI systems that interact with natural persons or create synthetic content, and of deployers of emotion recognition, biometric categorisation, and deepfake systems, as well as certain AI systems manipulating text, apply 24 months after the AI Act comes into force.
* Obligations for deployers of high-risk systems developed by third-party providers apply 24-36 months after the AI Act comes into force.
* Obligations for providers of high-risk systems subject to Union harmonisation legislation listed in Annex II apply 36 months after the AI Act comes into force.
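The staged timeline above can be sketched as a small date calculation. This is purely an illustrative sketch: the entry-into-force date used below is hypothetical (the final Act had not yet been published), and the stage labels are shorthand for the bullet points above, not legal wording.

```python
from datetime import date

# Application periods from the staged timeline above, in months
# after the (hypothetical) entry into force of the AI Act.
STAGES = {
    "Ban on prohibited AI practices": 6,
    "General-purpose AI model obligations": 12,
    "Standalone high-risk systems (Annex III)": 24,
    "Transparency obligations": 24,
    "High-risk systems under Annex II harmonisation legislation": 36,
}

def add_months(d: date, months: int) -> date:
    """Return the date `months` later, clamping the day to 28 to stay valid."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, min(d.day, 28))

def application_dates(entry_into_force: date) -> dict[str, date]:
    """Map each stage to the date its obligations start to apply."""
    return {stage: add_months(entry_into_force, m) for stage, m in STAGES.items()}

# Hypothetical entry-into-force date, for illustration only.
for stage, when in application_dates(date(2024, 6, 1)).items():
    print(f"{when.isoformat()}: {stage}")
```

The 24-36 month range for deployers of third-party high-risk systems is omitted, since a single offset cannot represent it.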

The Act regulates AI systems placed on or put into service in the EU market and places its main obligations on providers and deployers of AI systems. The AI Act applies to providers who place AI systems on the EU market or put them into service. This means that providers who are not established or located in the European Union can also be subject to the Act if their systems are made available in the EU. Deployers who are established or located in the EU are also subject to the Act. Furthermore, providers and deployers who are established or located outside the EU fall under the scope of the AI Act if the output, outcomes, or results produced by the AI system are used in the EU.

Providers cover natural and legal persons, including public authorities and agencies, developing AI systems in order to place them on the market under their own name or trademark; in practical terms, these are the entities that develop an AI tool. Deployers, on the other hand, cover similar entities using AI systems under their authority, with the exception of use for personal, non-professional activities; deployers are the entities that buy and take into use the tool developed by the provider. In addition to providers and deployers, to whom the majority of the obligations under the AI Act are directed, the AI Act also places certain obligations on importers, distributors, and authorised representatives of AI systems.

The Act defines an AI system as a machine-based system designed to operate with varying degrees of autonomy and that may learn and adjust its behaviour over time. Its purpose is to produce outputs, whether explicitly or implicitly intended, such as predictions, content, recommendations, or decisions that can have an impact on physical or virtual environments. The AI Act's requirements do not apply to research, development, and prototyping activities before the AI system is placed on the market or put into service.
Moreover, AI systems used solely for military, defence, or national security purposes are not subject to the regulation, regardless of who operates them. Lastly, providers of free and open-source AI models are largely exempt from the duties that typically apply to AI system providers. However, this exemption does not cover those who provide general-purpose AI models that carry systemic risks; providers of such models must still fulfil certain obligations.

Most of the regulatory obligations in the Act are directed towards high-risk AI systems. The Act classifies certain standalone AI systems as high-risk, as well as AI systems used as safety components of products, AI systems embedded in products, and systems that are themselves products subject to a third-party assessment under sectoral legislation. The list of standalone high-risk AI systems includes biometric identification and categorisation of natural persons; management and operation of critical infrastructure; educational or vocational training; employment, including workers management and access to self-employment; access to essential private and public services; law enforcement; migration, asylum, and border control management; and administration of justice and democratic processes.

High-risk AI systems will be subject to mandatory requirements both before and after they are introduced to the market:

* A risk management process for identifying and mitigating risks (Art 9).
* Appropriate data governance and management practices (Art 10).
* Technical documentation that facilitates the assessment of the system's compliance (Art 11).
* Ongoing system monitoring throughout its lifecycle (Art 12).
* Transparency that empowers users to understand and confidently utilise the products (Art 13).
* Human oversight of the system's operations (Art 14).
* An adequate level of accuracy, robustness, and cybersecurity (Art 15).
* A quality management system, comprising written policies, procedures, and instructions, to ensure compliance with the regulation (Art 17).

Furthermore, providers of high-risk systems are subject to conformity assessment obligations (Art 43), post-market monitoring obligations (Art 61), and various administrative obligations.

The AI Act also places specific obligations on providers of general-purpose AI models and providers of general-purpose AI models with systemic risks. These obligations include requirements to draft technical documentation, provide transparency to downstream AI system providers, comply with EU copyright law, and document and make publicly available summaries of model training data. Providers of general-purpose AI models with systemic risks are subject to additional supplementary technical documentation obligations, as well as obligations concerning model evaluation and testing, risk assessment and mitigation, incident tracking, and cybersecurity. Furthermore, general-purpose AI systems, including general-purpose AI models, are subject to certain transparency and information obligations, together with other AI systems posing transparency risks. Systems posing transparency risks cover AI systems designed to interact with natural persons, systems capable of emotion recognition or biometric categorisation, and systems that generate or manipulate image, audio, or video content, including deepfakes. Lastly, all other systems not falling into the scope of unacceptable systems, high-risk systems, general-purpose AI models, or systems with transparency risks are categorised as minimal- or no-risk systems. In the AI Act's risk-based framework, the majority of AI systems fall into the minimal- to no-risk category.
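The four-tier, risk-based structure described in this overview can be summarised as a simple lookup. This is an illustrative sketch only: the tier names and one-line obligation summaries paraphrase the text above, not the legal wording of the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers of the AI Act, paraphrased from the overview above."""
    UNACCEPTABLE = "prohibited in the EU"
    HIGH_RISK = "strict requirements (Arts 9-17), conformity assessment (Art 43), post-market monitoring (Art 61)"
    TRANSPARENCY_RISK = "transparency and information obligations"
    MINIMAL_OR_NO_RISK = "no additional obligations under the Act"

def obligations(tier: RiskTier) -> str:
    """Return the one-line obligation summary for a risk tier."""
    return tier.value

print(obligations(RiskTier.HIGH_RISK))
```

General-purpose AI models sit alongside this hierarchy rather than inside it: they carry their own documentation, copyright, and training-data transparency obligations, with extra duties when they pose systemic risk.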
