The American Law Institute's (ALI) Council has voted to approve the launch of a principles of law project focussed on civil liability for artificial intelligence (AI) (see here).

The principles of law project is:

  • focussed on civil liability for physical harm, noting that other types of harm, such as copyright infringement and defamation, raise their own distinctive doctrinal questions;
  • prompted by circumstances in which AI systems are alleged to be causing physical harm to persons, for example as a result of chatbot usage, but where specific questions are difficult to answer, such as those arising from the general-purpose nature of AI systems and their black-box decision-making processes;
  • intended to assist in understanding which civil liability regimes apply to AI in the (current) absence of a ‘sufficient body of caselaw that could be usefully restated’.

“This project can help courts, the tech industry, and federal regulators understand the legal implications of AI,” explained Wood. “It focuses on common-law principles of responsibility, which can guide decision-making in the absence of applicable legislation. By identifying these principles, the project can help avoid conflicts between federal and state laws and provide clarity for all involved parties.”

Further, the ALI anticipates that any outcome from the principles project may be useful for the range of AI-related bills being considered by US state legislatures and at the federal level.

"These efforts could benefit from a set of principles, grounded in the common law, for assigning responsibility and resolving associated questions such as the reasonably safe performance of AI systems."

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, David Varney, Martin Cook or any other member of our Technology team.

For the latest on AI law and regulation, see our blog and sign up to our AI newsletter.