Artificial Intelligence Liability Directive

Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence (COM(2022) 496 final)

Status

EU

Commission proposal of 28 September 2022.

EEA

Pending.

Norway

Pending.

Scope

The proposed Directive covers non-contractual civil law claims for damages caused by an AI system under fault-based liability regimes, i.e. claims based on a negligent act or omission.

Liability under the proposal primarily attaches to providers of AI systems, but may also extend to distributors, importers, users or other third parties who place a high-risk AI system on the market or put it into service, who modify the intended purpose of a high-risk AI system already placed on the market or put into service, or who make a substantial modification to a high-risk AI system.

Relevance

The proposal responds to challenges identified in existing liability frameworks, which struggle to accommodate claims for damage caused by AI due to the technology's complexity, autonomy and opacity. As a result, victims may be unable to pursue compensation effectively, facing high upfront costs and prolonged legal proceedings.

Implementation in Norway may require amendments to the Norwegian Dispute Act, in particular because the proposal introduces special rules on the burden of proof and the presentation of evidence.

In September 2024, the European Parliament concluded a complementary impact assessment of the proposal. Its recommendations included converting the directive into a regulation and widening the scope of application to also cover general-purpose AI systems, high-impact AI systems and software in general.

Key obligations

The proposal would empower national courts to order the disclosure of evidence relating to specific high-risk AI systems suspected of having caused damage, helping claimants gather the evidence needed to substantiate their claims.

The proposal further introduces rebuttable presumptions to assist claimants in proving their cases, especially concerning the causal link between the defendant's fault and the output (or failure to produce an output) of an AI system that caused the damage. For high-risk AI systems, where the defendant is shown to have breached specific obligations under the AI Act, courts may presume that this fault caused the relevant AI output; and where a defendant fails to comply with a disclosure order, courts may presume non-compliance with a relevant duty of care.

Obligations and presumptions vary depending on whether the AI system in question is classified as high-risk. For AI systems that are not high-risk, courts would apply the presumption of causality only where proving the causal link would otherwise be excessively difficult for the claimant. Where an AI system is used in a personal, non-professional capacity, the presumption of causality applies only if the non-professional user materially interfered with the AI system's operation.