Recently, GPT-3 - "a cutting edge language model that uses machine learning to produce human like text" - wrote an op-ed in the Guardian. Put another way, AI can write (nearly) as well as humans.
To some, this may appear to be an example of AI replacing humans, a possibility echoed across a variety of industries, from robo-advisers in financial services to self-driving vehicles. But the GPT-3 op-ed actually shows that humans continue to play central, albeit evolving, roles.
Consider a few of the ways in which humans played a crucial role in getting the GPT-3 op-ed published. For GPT-3 to produce lengthy text that can pass as an op-ed, it had to be trained on c.450GB of text: the training sets were selected and prepared by GPT-3's creators, OpenAI. The Guardian's editors commissioned the op-ed by providing the input, which specified the focus of the article, the style and the word count. And before the op-ed was published, GPT-3 produced eight drafts (outputs) which required the Guardian's editors' attention. Humans clearly continue to have roles at each stage of AI being designed, deployed and used, and those roles will continue to change as the AI and its uses change.
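For readers curious about what that workflow looks like in practice, a minimal sketch is set out below, using the legacy OpenAI Python client. The engine name, prompt wording and parameters are illustrative assumptions rather than the Guardian's actual brief; the point is simply where the human decisions sit.

```python
import openai  # legacy (pre-1.0) OpenAI Python client

openai.api_key = "YOUR_API_KEY"  # placeholder credential

# Human decision 1: the editors set the brief - topic, style and length.
prompt = (
    "Write a short op-ed of around 500 words, in clear and concise "
    "newspaper style, on why humans have nothing to fear from AI."
)

# Human decision 2: how many candidate drafts to request from the model.
response = openai.Completion.create(
    engine="davinci",   # illustrative GPT-3 engine name
    prompt=prompt,
    n=8,                # request eight drafts, as the Guardian did
    max_tokens=700,
    temperature=0.7,
)

drafts = [choice.text for choice in response.choices]

# Human decision 3: an editor reviews the drafts, selects and edits them.
for i, draft in enumerate(drafts, start=1):
    print(f"--- Draft {i} ---\n{draft}\n")
```

Even in this stripped-down sketch, the model generates text only within boundaries a human has drawn, and nothing reaches publication without human review.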
This is important because there is a risk that AI goes wrong and causes loss. AI is not a legal entity in itself and cannot be liable for loss suffered, but those who suffer loss will seek to recover it from someone. Determining who is liable when AI goes wrong will raise complex factual and legal questions: what went wrong, and who did (or did not) do what? The answers will be fact-specific to the AI, how it was used and by whom. This means that the roles of humans are becoming ever more important, not just in trying to make the AI work, but in understanding who is liable when the AI goes wrong.
"A robot wrote this entire article. Are you scared yet, human?", The Guardian, 8 September 2020: https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3