The White House has published a ‘Blueprint for an AI Bill of Rights’.  Here we set out the key parts of the Bill of Rights and, although it is not directly analogous to what is happening in the UK, identify some parallels that can be drawn.  That is particularly useful where industry operates across jurisdictions, and important whilst the UK develops its proposals for AI-specific regulation (with a White Paper expected later in 2022).

What is the US AI Bill of Rights?

The White House recognises that AI promises benefits but also poses risks: AI systems used for hiring and credit rating can be biased and discriminatory, whilst social media data collection can risk data privacy breaches.  However, the White House considers that such risks are not inevitable and can be managed.

The Bill of Rights is a:

'framework [which] applies to (1) automated systems that (2) have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services'.  The latter includes: civil rights, liberties, and privacy; equal opportunities; and access to critical resources or services.

'[it] is meant to assist governments and the private sector in moving principles into practice'.

However, the legal disclaimer makes clear that it:

  • is non-binding and does not constitute U.S. government policy;
  • does not amend or affect the interpretation of any existing statute, regulation, policy or international instrument;
  • is not binding guidance for public or Federal agencies and does not require compliance;
  • does not determine what the US government's position will be in any international negotiation.

So, is the Bill of Rights important?  Yes:

  • it shows the view of the US Office of Science and Technology Policy (OSTP) on regulating AI - the OSTP advises the US President and leads US interagency science and technology policy co-ordination;
  • it provides a useful technical companion, ‘From Principles to Practice—a handbook for anyone seeking to incorporate these protections into policy and practice, including detailed steps toward actualizing these principles in the technological design process’; and
  • there are parallels to be drawn based on the principles identified and the need for further guidance, as we summarise in this article.

What are the principles?

The Office of Science and Technology Policy has ‘identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence’.

The following comparison sets out the US Bill of Rights principles against the UK's proposed cross-sectoral principles.  It is a rough comparison - the principles do not match neatly - but it helps to draw out a broad degree of overlap in the high-level principles. However, there is divergence in language, which may in turn lead to differences in interpretation and implementation depending on what guidance is produced.  This is consistent with both the US and the UK stating that their approaches are compatible with the OECD principles for AI ethics, which both countries signed up to in 2019.

  • US: Safe and Effective Systems ('You should be protected from unsafe or ineffective systems')
    UK: 'AI is used safely'

  • US: Algorithmic Discrimination Protections ('You should not face discrimination by algorithms and systems should be used and designed in an equitable way')
    UK: 'embed considerations of fairness into AI' and 'AI is technically secure and functions as designed'

  • US: Data Privacy ('You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used')
    UK: [not a cross-sectoral principle, but provided for by UK data protection legislation]

  • US: Notice and Explanation ('You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you')
    UK: 'AI is appropriately transparent and explainable'

  • US: Human alternatives, consideration and fallback ('You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter')
    UK: 'clarify routes to redress or contestability'

  • US: [no direct equivalent]
    UK: 'legal persons' responsibility for AI governance'


It is not clear whether the Bill of Rights extends to an equivalent of either: the UK's proposed principle 'define legal persons' responsibility for AI governance'; or the proposals in the EU AI Act to make a specific natural or legal person responsible for placing an AI system on the EU market. 

The closest points appear to relate to contestability and accountability.  References in the Bill of Rights to 'fallback' appear to relate to the citizen being able to turn to a human decision-maker instead of the automated decision-making tool.  The explanation of 'what should be expected of automated systems' when applying the principle of 'human alternatives, consideration, and fallback' refers to public reporting of accessibility, timeliness, and effectiveness of human alternatives.  The principle for 'Notice and explanation' says that notices should be accountable, and 'clearly identify the entity responsible for designing each component of the system and the entity using it.' But none of those are the same as a principle of identifying who would be liable and for what.

The importance of sectors and context

The UK policy paper sets out how AI-specific regulation will depend on the context.  The Bill of Rights does the same:

'The appropriate application of the principles set forth in this white paper depends significantly on the context in which automated systems are being utilized. ...'

Notably, sometimes the context means that the principles should not be applied at all:

'... In some circumstances, application of these principles in whole or in part may not be appropriate given the intended use of automated systems to achieve government agency missions. ...' 

And, just as with the UK policy paper, further guidance will be required:

'... Future sector-specific guidance will likely be necessary and important for guiding the use of automated systems in certain settings such as AI systems used as part of school building security or automated health diagnostic systems.'

So how the principles - in the US or the UK - are applied in practice will depend on, amongst other things, the context.  Industry can take some reassurance that there is a growing consensus on what the issues and principles are, but there is uncertainty over how context-dependent the application of any guidance will be. Put another way, to what extent will each case depend on its own facts, providing limited guidance for other AI use-cases?

In the meantime, the Bill of Rights provides:

  • further detail on what should be expected of automated systems for each of the principles.  Similar detail should be expected from the UK's White Paper or subsequent guidance; and
  • a technical companion which 'gives concrete steps that can be taken by many kinds of organizations—from governments at all levels to companies of all sizes—to uphold these values'.

Inevitably these are US-focused, but they remain useful guidance and examples for industry, policymakers and regulators worldwide.  They include a reminder of what the US government has already done to turn principles into practice for AI, such as:

  • 'Executive Order 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government requires that certain federal agencies adhere to nine principles when designing, developing, acquiring, or using AI for purposes other than national security or defense. These principles—while taking into account the sensitive law enforcement and other contexts in which the federal government may use AI, as opposed to private sector use of AI—require that AI is: (a) lawful and respectful of our Nation’s values; (b) purposeful and performance-driven; (c) accurate, reliable, and effective; (d) safe, secure, and resilient; (e) understandable; (f) responsible and traceable; (g) regularly monitored; (h) transparent; and, (i) accountable.' [OSTP considers the Bill of Rights compatible with Executive Order 13960]
  • 'Affected agencies across the federal government have released AI use case inventories and are implementing plans to bring those AI systems into compliance with the Executive Order or retire them.' [Although not all inventories were available at the time of writing; the Department of Commerce link returned a 404 Not Found error.]

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker or Brian Wong.