The Department for Digital, Culture, Media and Sport (DCMS) recently published a report on “Understanding UK AI R&D commercialisation and the role of standards”. It identifies current routes through which commercialisation takes place as well as several barriers and challenges that are preventing the commercialisation of AI Research and Development (R&D).
By way of background, AI R&D refers to work undertaken by universities and research and technology organisations (RTOs) to collect, analyse and apply knowledge and data in order to better understand and improve AI systems. Commercialisation refers to the implementation of ideas and research into marketable AI products or services which have the potential to generate revenue.
How is AI R&D commercialised?
DCMS identifies four ‘priority routes’ through which AI R&D is currently commercialised in the UK:
- University spinouts: businesses that grow out of a university research project, which attempt to transform research into a commercial product or service, often through an accelerator programme.
- Startups: businesses in the early stages of operations exploring a new business model, product or service, often supported by ‘innovation agencies’ such as UKRI, venture capital investors and development institutions like the British Business Bank;
- Large firms that commercialise AI R&D: ‘Big Tech’ firms (Amazon, Apple, Microsoft, Meta and Google (Alphabet)) and other large technology companies such as ARM, Graphcore, IBM, Netflix and Twitter that operate across multiple sectors and develop technologies with a wider scope of application; and
- Direct hire and joint tenure arrangements: relationships between industry (often large technology firms) and academia that allow AI talent to flow back and forth between the two.
Barriers to commercialisation
The need for trustworthy AI
An obstacle to the successful commercialisation of AI R&D is whether the AI is trustworthy. The need for trustworthy AI is well recognised: a UK AI standards hub was launched as part of the UK AI Strategy; the AI Council Roadmap set out recommendations on improving public trust in AI systems; and the Centre for Data Ethics and Innovation (CDEI) was established to focus primarily on the trustworthy use of data and AI in the public and private sectors.
DCMS recognises the role of standards in developing trustworthy AI:
- The work of Standards Developing Organisations (SDOs) and the creation of technical standards for AI help establish trust amongst consumers, users and businesses in areas such as privacy, security, fairness and the removal of algorithmic bias.
- Technical standards may eventually support interoperability between the products and systems of different businesses, easing uptake of new products and thereby increasing their commercial value.
- However, technical standards are still at an early stage of development, and SDOs are seen to engage only with large technology companies, reflecting those companies’ interests rather than those of wider industry or consumers.
The full report can be accessed here.
This article was written by Marija Nonkovic and Tom Whittaker.