EU Adopts AI Act
As it did with data privacy when it adopted the GDPR, the European Union is once again at the forefront of regulation. This time the EU Parliament has taken up the regulation of AI. The Parliament started down the path of defining rules for the use of AI in 2021, an effort that culminated in the passage of the AI Act on March 13, 2024 (the “Act”). The Act applies broadly to AI, not just generative AI, by defining an AI system as “a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
The Act regulates AI systems by placing most of the compliance burden on their developers, scaled to the risk the system poses. The Act separates AI systems into the following risk categories:
- Prohibited AI Systems – these are systems that deploy “subliminal techniques” or are “purposefully manipulative” with the objective of causing a person to make a decision they would not otherwise make or of causing the person significant harm. As the name implies, these systems are prohibited from use. Violating this prohibition carries steep penalties, including fines of up to 35,000,000 euros or 7% of total worldwide annual turnover, whichever is higher. Examples of prohibited AI systems include systems that conduct social scoring, facial recognition databases created from untargeted scraping of images, and systems that infer emotions in education or workplace settings (except for medical or safety reasons).
- High Risk AI Systems – these are systems that fall into two groups. The first group includes systems that are both (1) intended to be used as a safety component of a product, or are themselves the product, and (2) required to undergo independent third-party conformity assessment. The second group includes systems that fall into various classes, such as biometrics and critical infrastructure safety, as well as systems that use profiling for decision making with impacts on education, work, or access to essential private and public services. There are exceptions in each category, including for systems that perform a narrow procedural task or merely improve an already completed human decision. The Act focuses the majority of its regulations on these high risk AI systems, requiring developers to fulfill a number of risk management and governance obligations when developing and deploying them. Additionally, the Act requires that specific technical safeguards be built into these high risk AI systems.
- Limited Risk AI Systems – these are systems that do not trigger the high risk category but are still subject to certain transparency obligations. For example, a developer or deployer of a chatbot must ensure that users are aware they are interacting with AI.
- Minimal Risk AI Systems – these systems are unregulated under the Act and would include systems such as spam filters.
The scope of the Act has an extraterritorial component, as it applies to AI systems placed in service or on the market in the EU regardless of the domicile of the provider. The Act will take effect in a phased approach, beginning six months after entry into force with the prohibitions on prohibited AI systems. Full implementation of the Act will not occur until 36 months from March 13, 2024.
The text of the Act can be found HERE.