Artificial Intelligence Act

Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence

Status

EU: In force from 1 August 2024.
EEA: Pending.
Norway: Pending.

Scope

The EU AI Act aims to regulate the use of artificial intelligence across the EU and, once incorporated into the EEA Agreement, the wider EEA. It is designed to ensure AI systems are safe, transparent, and accountable. The Act classifies AI applications into risk categories (unacceptable, high, limited and minimal risk) and sets out specific requirements for each category, with a separate regime for general-purpose AI models with and without systemic risk.
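
Purely as an illustration of this tiered structure (not the Act's legal test), the broad consequence attached to each tier can be sketched as a simple lookup. The tier names follow the common four-tier framing of the Act; the function name and wording below are hypothetical:

    # Hypothetical, simplified sketch of the AI Act's risk tiers and the
    # broad consequence attached to each. The real classification turns on
    # detailed legal criteria (e.g. the Annex III use cases), not a lookup.
    RISK_TIERS = {
        "unacceptable": "prohibited",
        "high": "strict obligations for providers and deployers",
        "limited": "transparency obligations",
        "minimal": "no specific obligations under the Act",
    }

    def broad_consequence(tier: str) -> str:
        # An unknown tier signals that proper legal classification is needed.
        return RISK_TIERS.get(tier, "unclassified - requires legal assessment")

    print(broad_consequence("high"))  # strict obligations for providers and deployers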

The AI Act imposes the most stringent regulatory burden on natural or legal persons developing AI systems and placing them on the market (providers), but it also targets importers, distributors and persons using an AI system under their authority (deployers).

High-risk AI systems encompass systems that may be used for various purposes in different sectors, such as safety components in critical infrastructure, HR (recruitment and decision-making) and education.

Relevance

The AI Act is the first comprehensive law on AI by a major regulator anywhere. Norwegian businesses operating in or entering the EU market will need extensive knowledge of its requirements as they enter into application in stages, particularly around high-risk applications and general-purpose AI models, to capitalize on innovation opportunities while adhering to regulatory expectations. The Act's provision for fines of up to €35 million or 7% of global annual turnover, whichever is higher (the applicable ceiling depends on the severity and type of breach), underscores the EU's serious commitment to ensuring AI is developed and used responsibly.
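
As a minimal illustration of how that upper ceiling scales with turnover, the sketch below computes the cap for the most serious category of breach. The €35 million / 7% figures and the "whichever is higher" rule come from the Act; the function name and the example turnover are hypothetical:

    # Minimal sketch: fine ceiling for the most serious breaches under the
    # AI Act. Figures from the Act; names and example values illustrative.
    def fine_ceiling_eur(global_annual_turnover_eur: float) -> float:
        fixed_cap = 35_000_000                             # EUR 35 million
        turnover_cap = 0.07 * global_annual_turnover_eur   # 7% of turnover
        return max(fixed_cap, turnover_cap)                # whichever is higher

    # An undertaking with EUR 2 billion worldwide annual turnover:
    print(fine_ceiling_eur(2_000_000_000))  # 140000000.0, i.e. a EUR 140m cap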

Key obligations

AI systems posing unacceptable risk are prohibited (such as systems deploying subliminal or manipulative techniques to distort a person's behaviour or decisions, and systems using real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, subject to narrow exceptions).

For high-risk AI systems, providers are, among other things, obliged to implement a risk management system, ensure the quality of training data, and provide information enabling deployers to interpret the system's output and use it appropriately. The system design must allow for human oversight and achieve an appropriate level of accuracy, robustness and cybersecurity.

Deployers of high-risk AI systems must implement measures to ensure and monitor that the systems are used in accordance with their instructions for use, ensure human oversight by competent personnel, ensure that input data is relevant and sufficiently representative and, depending on the intended use, provide information to affected persons and conduct an impact assessment.

General-purpose AI models (models capable of serving a variety of purposes, such as OpenAI's GPT models) will be subject to mandatory transparency requirements, technical documentation, compliance with EU copyright law, and detailed summaries of training data content. General-purpose AI models with systemic risk will face additional obligations, including risk assessments and reporting on serious incidents and energy efficiency.