U.S. Representatives Anna Eshoo (D-CA) and Don Beyer (D-VA), who serve as Co-Chair and Vice Chair, respectively, of the Congressional Artificial Intelligence (AI) Caucus, have introduced the AI Foundation Model Transparency Act, described as ‘ambitious legislation to promote transparency in artificial intelligence foundation models’.

Here we pick out the key points:

  • What are foundation models?

According to the bill's accompanying press statement (and as reflected in the draft bill):

Foundation models are ‘artificial intelligence models that are trained on broad data, generally use self-supervision, contain billions of parameters, and are applicable across a wide range of contexts or applications’.

  • Why is the Act necessary?

The concern is that ‘Widespread public use of foundation models has also led to countless instances where the public is being presented with inaccurate, imprecise, or biased information’. The causes are several, including biases and limitations in the data on which a model was trained. In specific use cases, such as healthtech and fintech, there is a significant risk that the use of AI systems may create, perpetuate or worsen discrimination.

  • What does the Act intend to do?

The AI Foundation Model Transparency Act intends to:

  • Direct the FTC [Federal Trade Commission], in consultation with NIST [National Institute of Standards and Technology], the Copyright Office, and OSTP [White House Office of Science and Technology Policy] (which published an ‘AI Bill of Rights’ in 2022), to set transparency standards for foundation model deployers, requiring them to make certain information publicly available to consumers. The standards, with accompanying guidance, are to be published not more than 9 months after the Act’s enactment. Those standards may include information about:
    • sources and retention of training data;
    • the size and composition of the training data;
    • data governance procedures;
    • how training data was labelled;
    • the intended purposes and foreseen limitations of the foundation model;
    • efforts to align with other standards or frameworks, such as the NIST AI Risk Management Framework;
    • performance during evaluation, such as any self- or third-party audit;
    • computational power of the foundation model.
    Note that separate standards may be published for open-source foundation models.
  • Direct companies to provide consumers and the FTC with information on the model’s training data, model training mechanisms, and whether user data is collected during inference; and
  • Protect small deployers and researchers, while seeking responsible transparency practices from the highest-impact foundation models.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, David Varney, Lucy Pegler, Martin Cook or any other member of our Technology team.