
New EU product liability rules to cover AI

28 September 2022, 23:45 CET

Artificial intelligence - Image by Alejandro Zorrilal Cruz

(BRUSSELS) - The EU Commission proposed Wednesday to modernise product liability rules to cover artificial intelligence (AI) for the first time, making it easier for victims of AI-related damage to get compensation.

The EU executive has adopted two proposals to adapt liability rules. The first modernises the existing rules on the strict liability of manufacturers for defective products, from smart technology to pharmaceuticals.

The revised rules would give businesses legal certainty so they can invest in new and innovative products, and would ensure that victims can get fair compensation when defective products, including digital and refurbished products, cause harm.

Second, the Commission proposes for the first time a targeted harmonisation of national liability rules for AI, making it easier for victims of AI-related damage to get compensation. The new rules would ensure that victims benefit from the same standards of protection when harmed by AI products or services as they would if the harm were caused under any other circumstances.

"Proper standards of protection for EU citizens are the basis for consumer trust and therefore successful innovation," said Justice Commissioner Didier Reynders: "New technologies like drones or delivery services operated by AI can only work when consumers feel safe and protected."

The revised Product Liability Directive modernises and reinforces the current well-established rules, based on the strict liability of manufacturers, for the compensation of personal injury, damage to property or data loss caused by unsafe products, from garden chairs to advanced machinery. It ensures fair and predictable rules for businesses and consumers alike by:

  • Modernising liability rules for circular economy business models: by ensuring that liability rules are clear and fair for companies that substantially modify products.
  • Modernising liability rules for products in the digital age: allowing compensation for damage when products like robots, drones or smart-home systems are made unsafe by software updates, AI or digital services that are needed to operate the product, as well as when manufacturers fail to address cybersecurity vulnerabilities.
  • Creating a more level playing field between EU and non-EU manufacturers: when consumers are injured by unsafe products imported from outside the EU, they will be able to turn to the importer or the manufacturer's EU representative for compensation.
  • Putting consumers on an equal footing with manufacturers: by requiring manufacturers to disclose evidence, by introducing more flexibility in the time limits for bringing claims, and by alleviating the burden of proof for victims in complex cases, such as those involving pharmaceuticals or AI.

The purpose of the AI Liability Directive is to lay down uniform rules for access to information and alleviation of the burden of proof in relation to damage caused by AI systems, establishing broader protection for victims (be they individuals or businesses) and fostering the AI sector by increasing guarantees. It will harmonise certain rules for claims outside the scope of the Product Liability Directive, in cases in which damage is caused by wrongful behaviour. This covers, for example, breaches of privacy, or damage caused by safety issues. The new rules will, for instance, make it easier to obtain compensation if someone has been discriminated against in a recruitment process involving AI technology.

The Directive simplifies the legal process for victims when it comes to proving that someone's fault led to damage, by introducing two main features. First, in circumstances where a relevant fault has been established and a causal link to the AI's performance seems reasonably likely, the so-called 'presumption of causality' will address the difficulties victims face in having to explain in detail how harm was caused by a specific fault or omission, which can be particularly hard when trying to understand and navigate complex AI systems. Second, victims will have more tools to seek legal reparation, through a right of access to evidence from companies and suppliers in cases in which high-risk AI is involved.

Product Liability Directive - background guide
AI Liability Directive - background guide

Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence

Commission White Paper on Artificial Intelligence - A European approach to excellence and trust

Commission Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics

Expert Group report on Liability for artificial intelligence and other emerging digital technologies

Comparative Law Study on Civil Liability for Artificial Intelligence

