On 8 September 2023, two US Senators proposed a comprehensive bipartisan framework to underpin what could become known as the US AI Act. Here we summarise the key aspects of the framework, which should be read in the context of other proposals and calls for AI-specific regulation in the US (at federal and state level), the EU (through the EU AI Act, expected to be enacted in late 2023 / early 2024) and the UK.

What is the proposed bipartisan framework?

Senators Richard Blumenthal (Democrat, Connecticut) and Josh Hawley (Republican, Missouri) proposed a "Bipartisan Framework for U.S. AI Act". The five key parts of the framework are:

  1. Establish a Licensing Regime Administered by an Independent Oversight Body

A licensing regime would apply to companies that deploy “sophisticated general purpose AI models”, such as GPT-4, or AI models used in “high-risk” situations, such as facial recognition. These companies would be required to register with an independent licensing body, which would be authorised to conduct audits of licence applicants.

  2. Ensure Legal Accountability for Harms

Legal accountability for breaches of privacy, violations of civil rights, or other "cognizable harms" caused by AI systems should be enforceable through private rights of action or through an enforcement body. The framework proposes that such claims should not be barred by the immunity in s.230 of the Communications Decency Act 1996, which protects internet platforms from liability arising from third-party content posted on their sites. Congress should also take steps to directly prohibit harms already emerging from AI, such as "non-consensual explicit deepfake imagery" of real people and election interference.

  3. Defend National Security and International Competition

Congress should use sanctions, export controls and other legal restrictions to limit the transfer of AI models, hardware and related equipment to certain countries and to entities engaged in gross human rights violations.

  4. Promote Transparency

Developers must disclose essential information to users, including the “training data, limitations, accuracy and safety” of AI models. This includes notifying users when they are interacting with AI and watermarking AI-generated deepfakes.

  5. Protect Consumers and Kids

This includes:

    • Companies must use “safety brakes”, such as notifying users when AI is aiding decision-making, and impose strict limits on any generative AI involving children.
    • Users should have a right to affirmative notice that they are interacting with an AI model or system.
    • AI system providers should be required to watermark or otherwise provide technical disclosures of AI-generated deepfakes.
    • The new oversight body should establish a public database and reporting regime so that consumers and researchers have easy access to AI model and system information, including when significant adverse incidents occur or failures in AI cause harm.

“This bipartisan framework is a milestone—the first tough, comprehensive legislative blueprint for real, enforceable AI protections. It should put us on a path to addressing the promise and peril AI portends,” said Senator Richard Blumenthal.

These proposals should be seen in the context of Blumenthal and Hawley's other work on AI (see announcement here):

In July, Blumenthal and Hawley held a hearing titled “Oversight of AI: Principles for Regulation”, bringing together academic and industry leaders. In May, they held their first hearing, titled “Oversight of AI: Rules for Artificial Intelligence”, which heard from OpenAI CEO Sam Altman, IBM Chief Privacy & Trust Officer Christina Montgomery, and NYU Professor Gary Marcus.

Further, these proposals come at a time of significant activity on responsible AI in the US, at both state and federal level. For example, the Biden-Harris administration has obtained voluntary commitments from technology companies to manage the risks posed by AI, and the White House has published a voluntary framework to assist the US government and private sector in putting responsible AI principles into practice, known as the 'AI Bill of Rights' (see our explanation here).

However, there are calls in the US (and the UK) for legislation to address the opportunities and risks of AI. What any proposed legislation looks like depends on the jurisdiction, the risks its proposers are concerned with, and the context; proposals are likely to differ, each specific to a sector, technology, use and/or legal issue. The proposed US AI Act is another such example and reflects some of the key issues that various proposals for AI regulation seek to address, albeit often in different ways.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, or any other member in our Technology team.

This article was written by Anousha Al-Masud.