On 18 September 2023, the Organisation for Economic Co-operation and Development (OECD) published its paper on 'Initial policy considerations for Generative Artificial Intelligence'.

The overriding message of the paper is that, whilst Generative Artificial Intelligence (AI) offers “transformative potential across multiple sectors such as education, healthcare and scientific research”, these technologies also “pose critical societal and policy challenges” that policy makers must confront: potential shifts in labour markets, copyright uncertainties, risks associated with the perpetuation of societal biases, and the potential for misuse in the creation of disinformation and manipulated content. We summarise the paper’s findings below.

Policy issues raised by Generative AI

The paper highlights the areas that the OECD considers policy makers should address. In summary:

  • Mis- and disinformation. AI amplifies the risks of mis- and disinformation because humans are less capable of differentiating AI-generated from human-generated content. This can cause “material harm at individual and societal levels, particularly on science-related issues, such as vaccine effectiveness and climate change, and in polarised political contexts”. The paper suggests that mitigation measures could include increasing model size, developing models that provide evidence and source material, watermarking, and developing AI systems that help detect synthetic content. However, the paper notes that “these measures have limitations and are widely expected to be insufficient, calling for innovative approaches that can address the scale of the issue.”

  • Echoing and perpetuating biases. AI can echo, automate, and perpetuate social prejudices, stereotypes, and discrimination by replicating biases contained in training data, such as correlating female roles with typical female names. The paper suggests that mitigation approaches include enhanced inclusivity in and curation of training data, research, auditing, and model fine-tuning through human feedback to mitigate the risk of marginalisation or exclusion of specific groups.

  • Implications for intellectual property rights. The paper notes that AI models are trained on data that include copyrighted material, mostly without the authorisation of rights-owners, and highlights the ongoing debate over “whether artificially generated outputs can themselves be copyrighted or patented and if so, to whom”.

  • Impact on labour markets. AI may increase job exposure in high-skilled occupations, but it can also benefit labour markets by creating demand for new tasks and complementary skills, resulting in new jobs for which human labour has a comparative advantage.

Risk mitigation measures

Considering the above challenges, the paper highlights that future risks of AI could demand “solutions on a larger, more systemic scale.” These could include regulation, ethics frameworks, technical AI standardisation, audits, and model release and access strategies, among others. At the same time, the paper recognises that “governments have been quick to recognise the transformative nature of generative AI and are taking action to keep pace with change.” For example, in May 2023, the Group of Seven (G7) countries committed to advancing international discussions on AI governance in pursuit of inclusive and trustworthy AI, and established the Hiroshima AI Process in collaboration with the OECD under the Japanese G7 Presidency to help improve the governance of generative AI.

Conclusion

The paper concludes that generative artificial intelligence, which creates new content in response to prompts, offers “transformative potential across multiple sectors”; however, these technologies also pose “critical societal and policy challenges that policy makers must confront.”

The OECD is committed to helping governments keep up with the rapid change in generative AI, and concludes that “the future trajectories of generative AI are difficult to predict, but governments must explore them to have a hand in shaping them.”

Related articles

Please see our articles ‘Navigating the EU AI Act: flowchart’, ‘The Artificial Intelligence (AI) Law, Regulation and Policy Glossary’ and ‘AI regulation in the UK: Government White Paper published’.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, David Varney, Martin Cook or any other member of our Technology team.

Written by Liz Smith and Nicole Simpson