If an AI system causes someone harm, whether through an intentional or a negligent act or omission, will the injured person be able to claim compensation for damages? The European Commission has proposed harmonised civil liability rules - the Artificial Intelligence Liability Directive (AILD) - to ensure that "persons harmed by artificial intelligence systems enjoy the same level of protection as persons harmed by other technologies."
This Directive lays down common rules on:
- Disclosure - '(a) the disclosure of evidence on high-risk artificial intelligence (AI) systems to enable a claimant to substantiate a non-contractual fault-based civil law claim for damages';
- Burden of Proof - '(b) the burden of proof in the case of non-contractual fault-based civil law claims brought before national courts for damages caused by an AI system.'
Here we summarise why AILD is necessary, its key provisions, and its interaction with the EU's Artificial Intelligence Act (AI Act). Whilst AILD may not come into force for a number of years, and there is then a two-year transition period, those who procure, design, deploy and use AI systems should take note: there are significant AI regulations on the near horizon with a clear direction of travel. The framework is relevant not just to EU users and providers but also to those placing on the market or putting into service AI systems in the EU, or where the output produced by AI systems is used in the EU.
Why is AILD necessary?
AILD should be seen as part of the European Commission's broader work on new and emerging technologies. In its White Paper on Artificial Intelligence, the Commission undertook to promote the uptake of artificial intelligence and to address the risks associated with certain of its uses. The Commission proposed a legal framework for AI "which aims to address the risks generated by specific uses of AI through a set of rules focusing on the respect of fundamental rights and safety." The Commission also recognised the need to harmonise liability rules.
Liability is one of the top barriers to the use of AI by European companies, according to an EU survey.
Current national liability rules, in particular based on fault, are not well suited to handling liability claims for damage caused by AI-enabled products and services. That is because victims of harm may need to prove a wrongful action or omission by a specific person who caused the damage. However, the nature of AI - including the complexity, autonomy and opacity of the systems - may make it difficult or impossible for victims to do so.
Further, there is a risk of divergence between Member States if there is no harmonised approach to liability for harm caused by AI systems. Such divergence comes at a cost: a lack of legal certainty for those using AI systems, and reduced trust in those systems. The EU's impact assessment estimates the additional market value of harmonising the liability rules for AI systems at between approximately EUR 500 million and EUR 1.1 billion.
When and to what does AILD apply?
AILD is for non-contractual liability and, potentially, state liability.
AILD does not apply to criminal liability and it does not affect:
- EU law regulating liability in the field of transport;
- any rights which an injured person may have under national laws implementing the liability directive for defective products;
- exemptions from liability and the due diligence obligations as laid down in the Digital Services Act.
In addition, as the Commission explains, 'Beyond the presumptions it establishes, [AILD] does not affect Union or national rules determining, for instance, which party has the burden of proof, what degree of certainty is required as regards the standard of proof, or how fault is defined.'
Member States may adopt national rules that are more favourable for claimants.
Who does AILD apply to?
A claimant is a person bringing a claim for damages who:
- is the injured person;
- has succeeded to, or been subrogated to, the injured person's rights (e.g. an insurance company or the heirs of a deceased victim); or
- is someone acting on behalf of one or more injured persons, in accordance with Union or national law.
The aim is to give persons injured by an AI system more possibilities to have their claims assessed by a court.
A defendant is the person against whom a claim for damages is brought.
Disclosure and preservation of evidence
A court, the claimant and defendant will need evidence for any claim. Whether that evidence is available can be a significant barrier to successfully scoping and starting any claim for damages.
AILD requires that Member States ensure that national courts are empowered to order disclosure of 'relevant evidence':
- about that specific high-risk AI system which is suspected of having caused damage;
- from one or more of a product manufacturer, distributor, importer, or any other third party who has placed that high-risk AI system on the EU market, or from the user of the high-risk AI system;
- for either a claimant, or a potential claimant who has requested such disclosure from one of the above and been refused.
To obtain disclosure, the potential claimant must:
- present facts and evidence sufficient to support the 'plausibility of a claim for damages'; and
- have first made all proportionate attempts to gather the relevant evidence from the defendant.
National courts must also be empowered to order specific measures to preserve evidence.
The court's powers are limited to 'disclosure of evidence which is necessary and proportionate to support a potential claim or a claim for damages and the preservation to that which is necessary and proportionate to support such a claim for damages.'
Proportionality includes considering:
- the legitimate interests of all parties;
- protection of trade secrets and confidential information (such as information related to public or national security), noting that such information can remain subject to protective measures even where it is disclosed.
Failure to disclose or preserve evidence will mean that a court 'shall' presume the defendant's non-compliance with a relevant duty of care. The defendant can rebut that presumption.
In our view: the proposals are balanced and overall positive. By empowering national courts, AILD gives claimants a clear indication that relevant evidence should be preserved and disclosed, and a mechanism by which that evidence can be obtained. Disclosure requirements will be balanced against the legitimate interests of the parties, in particular the protection of trade secrets and confidential information. However, there will still be practical issues in bringing a claim: what counts as 'relevant' evidence may be difficult for a claimant to determine, and it may be unclear which of the various stakeholders in an AI system's lifecycle holds the (potentially fragmented) evidence required to bring a claim.
Rebuttable presumption of a causal link in the case of fault
National courts will presume a causal link between the fault of the defendant and the output produced by the AI system (or the AI system's failure to produce an output) where all of the following conditions are met:
- the claimant has demonstrated or the court has presumed pursuant to Article 3(5) [Defendants' failure to disclose or preserve documents as ordered], the fault of the defendant, or of a person for whose behaviour the defendant is responsible, consisting in the non-compliance with a duty of care laid down in Union or national law directly intended to protect against the damage that occurred;
- it can be considered reasonably likely, based on the circumstances of the case, that the fault has influenced the output produced by the AI system or the failure of the AI system to produce an output; and
- the claimant has demonstrated that the output produced by the AI system or the failure of the AI system to produce an output gave rise to the damage.
Establishing the defendant's fault
How does the claimant demonstrate the first condition above, that the defendant was at fault, for a high-risk AI system?
If damages are claimed from a provider, the claimant must demonstrate that any of the following requirements were not met (taking into account the steps undertaken in, and the results of, the risk management system pursuant to certain obligations under the AI Act), namely that the AI system:
- makes use of techniques involving the training of models with data but was not developed on the basis of the training, validation and testing data required under the AI Act; or
- was not designed and developed in a way that:
- meets the transparency requirements in the AI Act;
- allows for effective oversight by natural persons as required in the AI Act;
- achieves, in the light of its intended purpose, an appropriate level of accuracy, robustness and cybersecurity as required in the AI Act; or
- was not immediately subject to the necessary corrective actions to bring it into conformity, or to withdraw or recall it, in accordance with specific obligations under the AI Act.
If damages are claimed from a user, the claimant must demonstrate that the user:
- did not comply with its monitoring obligations under the AI Act; or
- exposed the AI system to input data under its control which is not relevant in view of the system's intended purpose.
However, the presumption of a causal link does not apply:
- for high-risk AI systems, where the defendant demonstrates that 'sufficient evidence and expertise is reasonably accessible for the claimant to prove the causal link'; or
- for AI systems that are not high-risk, where the court considers it was not excessively difficult for the claimant to prove the causal link.
What if damages are claimed from a defendant who used the AI system in the course of a personal, non-professional activity? Then the presumption of a causal link applies only where the defendant materially interfered with the conditions of operation of the AI system, or where the defendant was required and able to determine the conditions of operation of the AI system and failed to do so.
In our view: again, these proposals are broadly positive. They provide greater legal certainty as to when, and how, the burden of proof shifts. Yet, again, there will be some practical difficulties for claimants and much of the detail is not settled. For example, the AI Act is still the subject of debate and further amendment (as we have written about here), so which obligations under the AI Act trigger the presumptions above, and how easy they will be for a claimant to establish, remains to be seen.
Interaction with the AI Act
AILD complements the AI Act and underlines its importance:
- Failure to comply with the obligations for high-risk AI systems imposed by the AI Act triggers the burden of proof provisions under AILD.
- AILD provides victims routes to compensation for harms caused by AI systems whilst the AI Act imposes potentially significant fines for non-compliant AI systems.
What next?
The Commission has proposed AILD, but it remains subject to debate, amendment and oversight by other EU institutions. The EU AI Act was proposed in April 2021 and, one and a half years later, is still being debated. However, both proposals have significant political support within the EU. How long it will be until the AI Act and AILD are in force is unknown, but it is likely to be a matter of years rather than months (or decades).
However, even once AILD is in force, Member States have another two years to bring into force the laws, regulations and administrative provisions necessary to comply with AILD.
If you would like to discuss how you procure, develop and deploy AI, the liability issues, or what regulation is on the horizon, please contact Tom Whittaker or Brian Wong.