The G20 has published a report on ‘Mapping the Development, Deployment and Adoption of AI for Enhanced Public Services in the G20 Members’ (see here).
The report “presents G20 members’ experiences in the development, deployment, and use of AI for public services. It maps existing approaches and methodologies, including frameworks and indicators, used by G20 members to assess and facilitate AI adoption in and by the public sector”. It includes various examples of where AI is being used or tested in the public sector, including examples from the UK, such as work to improve legislative drafting (see our article here).
In summary:
- many governments are already using AI, including for chatbots, virtual assistants and facial recognition, in order to improve operational efficiency, enhance responsiveness, reduce costs and increase citizen engagement;
- G20 members are concerned about ensuring equity and non-discrimination. Fairness and inclusiveness are emphasised as fundamental values in the development, deployment, and use of AI. “To achieve fairness, AI systems should be safe, secure, trustworthy, transparent, and explainable, which needs regulatory mechanisms that enable an assessment to verify if these systems adhere to ethical principles.”
- governments generally pursue two strategies to tackle the development, deployment, and use of AI in the public sector:
  - guidelines - frameworks of ethical standards, regulatory compliance, and operational protocols;
  - experimentation - testing AI applications in controlled environments or pilots, in order to identify potential risks, refine the system(s), and assess practical implications.
However, there is a risk of a lack of coherence and consistency both within governments and across governments in different countries - “most of the countries informed that they do not have a government body responsible for developing and monitoring the implementation of their respective national AI strategy for the public sector”.
The G20 Ministerial Declaration noted:
We reaffirm our commitment to leverage AI for good and our determination to take a balanced approach that unlocks the full potential of AI, promoting an equitable access to and sharing of its benefits. We also underline our engagement to promote the benefits and mitigate risks derived from this technology by committing to risk-based and human-centric, development-oriented, innovation-friendly AI policy and governance approaches that are consistent with applicable legal frameworks on security, privacy and protection of personal data, human rights and intellectual property rights. We also highlight our commitment to work together to promote international cooperation and further discussions on AI for inclusive sustainable development and inequality reduction.
The report also includes a useful compilation of measurement tools and frameworks.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, David Varney, Martin Cook or any other member in our Technology team.
For the latest on AI law and regulation, see our blog and sign-up to our AI newsletter.