The UK Centre for Data Ethics and Innovation (CDEI) - part of the Department for Science, Innovation and Technology - commissioned research into the views of the public, early adopters and public sector workers to 'understand attitudes towards foundation models and their use in the public sector'. Here we set out the key findings and recommendations.
The key findings were:
- Participants are open to the use of foundation models within the public sector.
- Participants see foundation models having the biggest positive impact in use cases that help the general public, rather than ‘just’ helping public sector workers, and that deliver a tangible, rather than abstract, benefit.
- Accuracy is the biggest concern. Participants will only support the use of foundation models if they reliably produce accurate outputs.
- Participants are most comfortable with use cases where they feel potential inaccuracy poses the least risk.
- Participants see the lack of emotional intelligence as a limitation of foundation models, which means their use is only felt to be suitable in certain situations.
- Overall, participants want there to be human accountability for decisions and outputs derived from foundation models. They see models as assisting and augmenting human capability, rather than replacing it.
- Participants are concerned about the potential impact of foundation models on the labour market.
Recommendations for using and communicating about foundation models in the public sector:
- Start with perceived ‘low risk’ use cases, those which feel furthest from having a direct negative impact on individuals.
- Focus on use cases where the accuracy of foundation models’ outputs can be guaranteed (or at least, where inaccuracy can be mitigated).
- Ensure clear channels of human accountability for foundation model supported decisions, particularly in cases which directly affect members of the public.
- Focus on the potential of foundation models to enhance human capability as a rationale for their introduction into the public sector.
- Emphasise there will be human accountability for decision making.
- Don’t say that there are no risks to using foundation models.
The CDEI has engaged with the public on AI before, specifically on AI governance. That research, and the research above, is relevant to future AI regulation: the UK White Paper (see our flowchart for navigating the UK's position here) states that the CDEI's research into AI governance 'has informed ... policy development' and supports the view that 'In order to maintain the UK’s position as a global AI leader, we need to ensure that the public continues to see how the benefits of AI can outweigh the risks'. This aligns with other research, such as reports by the Ada Lovelace Institute - an independent UK research body working to 'ensure data and AI work for people and society' - which state that '‘Regulating AI’ means addressing issues that could harm public trust in AI and the institutions using them' (see here).
If you would like to discuss how current or future regulations impact what you do with AI, please contact David Varney, Tom Whittaker, Brian Wong, or any other member in our Technology team.
"This research was commissioned as a topic of general interest to the Centre for Data Ethics and Innovation (CDEI) and other Department for Science, Innovation and Technology teams, whilst also taking into account the increasing interest in the use of foundation models in the public sector across different departments."