Brussels, 28 September 2022

 

Today, the Commission adopted two proposals to adapt liability rules to the digital age, the circular economy and the impact of global value chains. First, it proposes to modernise the existing rules on the strict liability of manufacturers for defective products (from smart technology to pharmaceuticals). The revised rules will give businesses legal certainty so they can invest in new and innovative products, and will ensure that victims can get fair compensation when defective products, including digital and refurbished products, cause harm. Second, the Commission proposes, for the first time, a targeted harmonisation of national liability rules for AI, making it easier for victims of AI-related damage to get compensation. In line with the objectives of the AI White Paper and of the Commission's 2021 AI Act proposal, which sets out a framework for excellence and trust in AI, the new rules will ensure that victims benefit from the same standards of protection when harmed by AI products or services as they would if the harm were caused under any other circumstances.

Revised Product Liability Directive, fit for the green and digital transition and global value chains

The revised Directive modernises and reinforces the current well-established rules, based on the strict liability of manufacturers, for the compensation of personal injury, damage to property or data loss caused by unsafe products, from garden chairs to advanced machinery. It ensures fair and predictable rules for businesses and consumers alike by:

• Modernising liability rules for circular economy business models: by ensuring that liability rules are clear and fair for companies that substantially modify products.

• Modernising liability rules for products in the digital age: by allowing compensation for damage when products like robots, drones or smart-home systems are made unsafe by software updates, AI or digital services that are needed to operate the product, as well as when manufacturers fail to address cybersecurity vulnerabilities.

• Creating a more level playing field between EU and non-EU manufacturers: when consumers are injured by unsafe products imported from outside the EU, they will be able to turn to the importer or the manufacturer's EU representative for compensation.

• Putting consumers on an equal footing with manufacturers: by requiring manufacturers to disclose evidence, by introducing more flexibility in the time limits for bringing claims, and by alleviating the burden of proof for victims in complex cases, such as those involving pharmaceuticals or AI.

AI Liability Directive: easier access to redress for victims

The purpose of the AI Liability Directive is to lay down uniform rules on access to information and on alleviating the burden of proof in relation to damage caused by AI systems, establishing broader protection for victims (whether individuals or businesses) and fostering the AI sector by increasing guarantees. It will harmonise certain rules for claims outside the scope of the Product Liability Directive, in cases in which damage is caused by wrongful behaviour. This covers, for example, breaches of privacy or damage caused by safety issues. The new rules will, for instance, make it easier to obtain compensation if someone has been discriminated against in a recruitment process involving AI technology.

The Directive simplifies the legal process for victims when it comes to proving that someone's fault led to damage, by introducing two main features. First, in circumstances where a relevant fault has been established and a causal link to the AI performance seems reasonably likely, the so-called 'presumption of causality' will address the difficulties victims face in having to explain in detail how harm was caused by a specific fault or omission, which can be particularly hard when trying to understand and navigate complex AI systems. Second, in cases in which high-risk AI is involved, victims will have more tools to seek legal redress through a right of access to evidence from companies and suppliers.

The new rules strike a balance between protecting consumers and fostering innovation: they remove additional barriers for victims seeking compensation, while laying down guarantees for the AI sector by introducing, for instance, the right to contest a liability claim based on a presumption of causality.

Members of the College said

Vice-President for Values and Transparency, Věra Jourová said:

“We want the AI technologies to thrive in the EU. For this to happen, people need to trust digital innovations. With today’s proposal on AI civil liability we give customers tools for remedies in case of damage caused by AI so that they have the same level of protection as with traditional technologies and we ensure legal certainty for our internal market.”

Commissioner for Internal Market, Thierry Breton, said:

“The Product Liability Directive has been a cornerstone of the internal market for four decades. Today’s proposal will make it fit to respond to the challenges of the decades to come. The new rules will reflect global value chains, foster innovation and consumer trust, and provide stronger legal certainty for businesses involved in the green and digital transition.”

Commissioner for Justice, Didier Reynders, said:

“While considering the huge potential of new technologies, we must always ensure the safety of consumers. Proper standards of protection for EU citizens are the basis for consumer trust and therefore successful innovation. New technologies like drones or delivery services operated by AI can only work when consumers feel safe and protected. Today, we propose modern liability rules that will do just that. We make our legal framework fit for the realities of the digital transformation.”

Next steps

The Commission’s proposal will now need to be adopted by the European Parliament and the Council.

It is proposed that, five years after the entry into force of the AI Liability Directive, the Commission will assess the need for no-fault liability rules for AI-related claims.

Background

The current EU rules on product liability, based on the strict liability of manufacturers, are almost 40 years old. Modern rules on liability are important for the green and digital transformation, specifically to adapt to new technologies, like Artificial Intelligence. This is about providing legal certainty for businesses and ensuring consumers are well protected in case something goes wrong.

In her Political Guidelines, President von der Leyen laid out a coordinated European approach on Artificial Intelligence. The Commission has undertaken to promote the uptake of AI and to holistically address the risks associated with its uses and potential damages.

In its White Paper on AI of 19 February 2020, the Commission undertook to promote the uptake of AI and to address the risks associated with some of its uses by fostering excellence and trust. In the Report on AI Liability accompanying the White Paper, the Commission identified the specific challenges posed by AI to existing liability rules.

The Commission adopted its proposal for the AI Act, which lays down horizontal rules on artificial intelligence, focusing on the prevention of damage, in April 2021. The AI Act is a flagship initiative for ensuring safety and trustworthiness of high-risk AI systems developed and used in the EU. It will guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation. Today’s AI liability package complements the AI Act by facilitating fault-based civil liability claims for damages, laying down a new standard of trust in reparation.

The AI Liability Directive adapts private law to the new challenges brought by AI. Together with the revision of the Product Liability Directive, these initiatives complement the Commission’s effort to make liability rules fit for the green and digital transition.

For More Information

Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence

Proposal: Revision of the Product Liability Directive

Questions & Answers: AI Liability Directive

Questions & Answers: Product Liability Directive

Liability rules on Artificial Intelligence

Commission White Paper on Artificial Intelligence – A European approach to excellence and trust

Commission Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics

Expert Group report on Liability for artificial intelligence and other emerging digital technologies

Comparative Law Study on Civil Liability for Artificial Intelligence

Source – EU Commission


Q&A: AI Liability Directive

 

Brussels, 28 September 2022

The AI Liability Directive complements and modernises the EU civil liability framework, introducing for the first time rules specific to damages caused by AI systems.

The new rules will ensure that victims of harm caused by AI technology can access reparation in the same manner as if they were harmed under any other circumstances. The Directive introduces two main measures: the so-called 'presumption of causality', which will relieve victims from having to explain in detail how the damage was caused by a certain fault or omission; and access to evidence from companies or suppliers when dealing with high-risk AI.

1. Why do we need a new Directive?

As technological advances continue to roll out, so must the guarantees put in place to ensure that EU consumers benefit from the highest standards of protection, even in the digital age. The Commission is committed to ensuring that pioneering technological innovation is never at the expense of safeguards for citizens. A harmonised legal framework is required at EU level to avoid the risk of legal fragmentation when filling the gaps created by these unprecedented technological advances.

Current national liability rules are not equipped to handle claims for damage caused by AI-enabled products and services. In fault-based liability claims, the victim has to identify whom to sue, and explain in detail the fault, the damage, and the causal link between the two. This is not always easy to do, particularly when AI is involved. AI systems can often be complex, opaque and autonomous, making it excessively difficult, if not impossible, for the victim to meet this burden of proof. One of the most important functions of civil liability rules is to ensure that victims of damage can claim compensation. If the challenges of AI make it too difficult to access reparation, there is no effective access to justice. By guaranteeing effective compensation, these rules contribute to the protection of the right to an effective remedy and a fair trial, both included in the EU Charter of Fundamental Rights.

The new rules will ensure that any type of victim (whether an individual or a business) has a fair chance of obtaining compensation if harmed by the fault or omission of a provider, developer or user of AI. Furthermore, investing in trust, and establishing guarantees should something go wrong, is investing in the sector and contributes to its uptake in the EU. Effective liability rules also provide an economic incentive to comply with safety rules, thereby contributing to the prevention of damage.

2. How will this Directive help victims?

The new rules cover national liability claims based on the fault or omission of any person (providers, developers, users), for the compensation of any type of damage covered by national law (life, health, property, privacy, etc.) and for any type of victim (individuals, companies, organisations, etc.).

The new rules introduce two main safeguards:

  • First, the AI Liability Directive alleviates the victims' burden of proof by introducing the 'presumption of causality': if victims can show that someone was at fault for not complying with a certain obligation relevant to the harm, and that a causal link with the AI performance is reasonably likely, the court can presume that this non-compliance caused the damage. The liable person can, in turn, rebut such a presumption (for example, by proving that a different cause provoked the damage).
  • Second, when damage is caused because, for instance, an operator of drones delivering packages does not respect the instructions of use, or because a provider does not follow the requirements when using AI-enabled recruitment services, the new AI Liability Directive will help victims to access relevant evidence. Victims will be able to ask the court to order disclosure of information about high-risk AI systems. This will allow them to identify the person who could be held liable and to find out what went wrong. At the same time, disclosure will be subject to appropriate safeguards to protect sensitive information, such as trade secrets.

Together with the revised Product Liability Directive, the new rules will promote trust in AI by ensuring that victims are effectively compensated if damage occurs, despite the preventive requirements of the AI Act and other safety rules.

3. What kind of AI is concerned by the proposal?

The Directive is focused on providing victims with the same standards of protection when harmed by AI systems as they would have if harmed under any other circumstances. The AI Liability proposal thus applies to damage caused by any type of AI system: both high-risk and non-high-risk.

4. How will the new rules contribute to innovation and development in the field of AI?

The proposal for the AI Liability Directive balances the interests of victims of harm related to AI systems and of businesses active in the sector.

To do that, the Commission has chosen the least interventionist tool (rebuttable presumptions) for easing the burden of proof. As such, the AI Liability Directive does not suggest a reversal of the burden of proof, to avoid exposing providers, operators and users of AI systems to higher liability risks, which may hamper innovation of AI-enabled products and services.

Furthermore, by ensuring that victims of AI-related harm enjoy the same level of protection as victims in cases not involving AI systems, the proposal for the AI Liability Directive contributes to strengthening public trust in AI technologies, thereby encouraging AI roll-out and uptake across the Union.

Businesses will be in a better position to anticipate how the existing liability rules will be applied, and thus to assess and insure their liability exposure. This is especially the case for businesses trading across borders, including for small and medium-sized enterprises (SMEs), which are among the most active in the AI sector.

5. What is the relationship with the Product Liability Directive?

The revised Product Liability Directive modernises the existing EU-level strict product liability regime. It will apply to claims against the manufacturer for damage caused by defective products; it covers material losses resulting from loss of life, damage to health or property, and data loss; and it is limited to claims brought by private individuals.

The new AI Liability Directive makes a targeted reform of national fault-based liability regimes. It will apply to claims against any person for fault that influenced the AI system which caused the damage; it covers any type of damage covered under national law (including damage resulting from discrimination or from breaches of fundamental rights such as privacy); and it allows claims by any natural or legal person.

As regards alleviations to the burden of proof, the two Directives introduce similar tools (right to disclosure of evidence, rebuttable presumptions) and use similar wording to ensure consistency, regardless of the compensation route chosen.

6. What is the relationship with the Artificial Intelligence Act?

The AI Act and the AI Liability Directive are two sides of the same coin: they apply at different moments and reinforce each other. Safety-oriented rules aim primarily to reduce risks and prevent damage, but those risks can never be eliminated entirely. Liability provisions are needed to ensure that, in the event that a risk materialises in damage, compensation is effective and realistic. While the AI Act aims at preventing damage, the AI Liability Directive lays down a safety net for compensation in the event of damage.

The AI Liability Directive uses the same definitions as the AI Act, keeps the distinction between high-risk and non-high-risk AI, recognises the documentation and transparency requirements of the AI Act by making them operational for liability through the right to disclosure of information, and incentivises providers and users of AI systems to comply with their obligations under the AI Act. The Directive will apply to damage caused by AI systems, irrespective of whether they are high-risk under the AI Act.

For More Information

Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence

Press release

Questions & Answers: Product Liability Directive

Commission White Paper on Artificial Intelligence – A European approach to excellence and trust

Commission Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics

Expert Group report on Liability for artificial intelligence and other emerging digital technologies

Comparative Law Study on Civil Liability for Artificial Intelligence

Source – EU Commission


Q&A on the revision of the Product Liability Directive

 

Brussels, 28 September 2022

1. Why do product liability rules need to be updated?

For nearly 40 years, the Product Liability Directive (PLD) has provided a legal safety net for citizens to claim compensation when they suffer damage caused by defective products. However, the PLD dates back to 1985 and does not cover categories of products emerging from new digital technologies, like smart products and artificial intelligence (AI). Similarly, the current rules are unclear about how to determine who would be liable for defective software updates, defective machine learning algorithms or defective digital services that are essential for a product to operate. The current rules are also silent on who is liable when a business substantially modifies a product that is already on the market, or when a product has been directly imported from outside the Union by a consumer. This makes it difficult for businesses to assess the risks of marketing innovative products and leaves victims of damage without the possibility of compensation for an increasing number of products. The revision of the PLD will ensure that the new rules for product liability are adapted to new types of products, to the benefit of both businesses and consumers.

2. What products will be covered by the revised rules?

The revised product liability rules will apply to all products, from garden chairs to cancer medicines, from agricultural products to advanced machinery, and also to software updates. Like other products, defective software and AI systems can cause harm, for example if they are embedded in a cleaning robot, or placed on the market as a digital product in its own right, like a medical health app for a smartphone. The new PLD makes explicit that injured people can claim compensation if software or AI systems cause damage.

The new rules also cover products stemming from circular economy business models, namely models in which products are modified or upgraded. The proposal creates the legal clarity that industry needs in order to embrace circular business models. The rules of the PLD (including the possible presumptions) would apply to remanufacturers and other businesses that substantially modify products, should these products cause damage to a person, unless the businesses can show that the defect relates to an unmodified part of the product.

3. How do the new rules ensure a better protection for consumers?

The new rules allow people to claim compensation for harm caused by a defective product, including personal injury, damage to their property or data loss. People can also claim compensation if the property that was damaged was used for professional as well as private purposes, such as a company cargo bike or home office equipment. To reflect the fact that product safety can be affected by software updates, upgrades and digital services, people will now also be able to claim compensation when these are defective and cause harm.

The new rules also help to put people claiming compensation on an equal footing with manufacturers, by requiring manufacturers to disclose information and by alleviating the burden of proof in complex cases, e.g. certain cases involving pharmaceuticals or AI.

4. Are there limits on the level of compensation that can be claimed under the new rules?

The revision modifies the current rules by removing the existing lower threshold and upper ceiling that have prevented people from being fully compensated for the damage they suffer.

5. Can you use the PLD to get compensation for infringements of fundamental rights?

People will be able to bring a claim for damages against the manufacturer if the defective product has caused death, personal injury, including medically recognised psychological harm, damage to property or data loss.

The new rules do not allow compensation for infringements of fundamental rights, for example if someone failed a job interview because of discriminatory AI recruitment software. The draft AI Act currently being negotiated aims to prevent such infringements from occurring. Where they nevertheless do occur, people can turn to national liability rules for compensation, and the proposed AI Liability Directive could assist people in such claims.

6. What are the changes for companies?

Already today, companies are obliged to compensate people injured by defective products. In addition, the new PLD will now require companies to disclose evidence that a claimant would need to prove their case in court. This is to address the asymmetry of information between the manufacturer and consumer: manufacturers know much more than consumers about how the product in question was produced and brought to market.

7. Who is liable for defective products manufactured outside the EU?

The existing PLD makes importers liable for defective products manufactured outside the Union. This is because it would have been too difficult for consumers to seek compensation from companies outside the Union.

Today's global value chains allow consumers to buy products directly from outside the Union, without an importer. That is why the new PLD will allow consumers to seek compensation from the non-EU manufacturer's representative. Thanks to the Market Surveillance Regulation and the upcoming revision of the General Product Safety Regulation, this will mean that there will be an EU-based liable person from whom to seek compensation.

Distributors (offline and online sellers) can also be held liable if they fail to give the name of the EU-based liable person to the injured person on request. This applies to online marketplaces too, but only if they present themselves to the consumer as a distributor.

8. What is the relationship between this new PLD and the AI Act?

In April 2021, the Commission published a proposal for a Regulation on artificial intelligence (AI Act). The AI Act sets out rules to ensure AI systems meet high safety requirements, including logging by design and cybersecurity features. This is similar to what other EU safety legislation does for other products, such as machinery, radio equipment or consumer products in general. The revised PLD makes clear that all these mandatory safety requirements, including those set out in the AI Act, should be taken into account when a court assesses if a product is defective. The revised PLD crucially also makes clear that software, including AI systems, is a product. Therefore, if AI systems are defective and cause death, personal injury, property damage or data loss, injured people can use the PLD to claim compensation. The revised PLD will give businesses the legal certainty and level playing field they need to invest in AI technologies, and will give consumers the protection they need to encourage the uptake of AI-enabled products in the future.

9. What is the relationship between this Directive and the proposed AI liability Directive?

The PLD makes manufacturers strictly liable for harm caused by their defective products – this covers the full range of products, including software and AI systems. However, the PLD does not exist in isolation and victims often have a choice on which legal basis they want to make a claim. All Member States have fault-based liability regimes that require injured people to prove somebody’s fault caused the harm they suffered. If a victim seeks compensation under such national fault-based liability rules (e.g. for harm not covered by the PLD such as infringements of fundamental rights or claims against users of products rather than against the manufacturer) and the claim concerns damage caused by an AI system, the proposed AI Liability Directive could, on certain conditions, help claimants overcome the difficulties they might otherwise face because of the opacity of the AI system involved.


Source – EU Commission