The Department for Digital, Culture, Media and Sport (DCMS) recently published a report on “Understanding UK AI R&D commercialisation and the role of standards”. It identifies current routes through which commercialisation takes place as well as several barriers and challenges that are preventing the commercialisation of AI Research and Development (R&D). 

By way of background, AI R&D refers to work undertaken by universities and research and technology organisations (RTOs) to collect, analyse and apply knowledge and data in order to better understand and improve AI systems. Commercialisation refers to the implementation of ideas and research into marketable AI products or services which have the potential to generate revenue.

How is AI R&D commercialised?

DCMS identifies four ‘priority routes’ through which AI R&D is currently commercialised in the UK:

  • University spinouts: businesses that grow out of a university research project and attempt to transform research into a commercial product or service, often through an accelerator programme;
  • Startups: businesses in the early stages of operations, exploring a new business model, product or service, often supported by ‘innovation agencies’ such as UKRI, venture capital investors and development institutions such as the British Business Bank;
  • Large firms that commercialise AI R&D: ‘Big Tech’ firms (Amazon, Apple, Microsoft, Meta and Google (Alphabet)) as well as other large technology companies, such as ARM, Graphcore, IBM, Netflix and Twitter, that operate across multiple sectors and develop technologies with a wider scope of application; and
  • Direct hire and joint tenure arrangements: relationships between industry (often large technology firms) and academia that allow AI talent to flow back and forth between the two.

Barriers to commercialisation

The report identifies barriers and challenges for each route.

University spinouts

  • Barriers are sector-specific but generally include: the length of time it takes to yield a profit; the choice of a suitable business model; the sector’s level of digitisation; and the level of regulation.
  • For example, in healthcare, high levels of regulation constrain many important enablers of commercialisation: patient confidentiality makes it more difficult to access datasets, and other legal requirements on providers generate higher costs relative to other sectors.

Startups

  • Public funding: grants can take a long time to apply for, with no guarantee of success (an opportunity cost for researchers). When funding is awarded, grants do not tend to incentivise startups to align with market needs.
  • Private funding: it is more difficult to obtain funding for capital-intensive projects such as AI hardware, which require much higher upfront costs and only deliver a substantial return on investment over the longer term, a time frame that often deters private investors.
  • UK venture capital investors show less ambition in funding AI startups than investors in larger markets such as the US and China.

Large firms

  • Catering for functional and non-functional requirements, and managing surges in capacity as the scale of deployment increases.

Direct hire and joint tenure arrangements

  • Academics in joint tenure positions may need to sign extensive Non-Disclosure Agreements (NDAs) to prevent data sharing between the company and the university.
  • Academics are often not allowed to be involved in other startups while working with large companies.


The need for trustworthy AI

An obstacle to the successful commercialisation of AI R&D is whether the AI is trustworthy. The need for trustworthy AI is well recognised: a UK AI standards hub was launched as part of the UK AI Strategy; the AI Council Roadmap set out recommendations for improving public trust in AI systems; and the Centre for Data Ethics and Innovation (CDEI) was established to focus primarily on the trustworthy use of data and AI in the public and private sectors.

DCMS recognises the role of standards in developing trustworthy AI:

  • The work of Standards Developing Organisations (SDOs) and the creation of technical standards for AI help establish trust amongst consumers, users and businesses in areas such as privacy, security, fairness and the removal of algorithmic bias.
  • Technical standards may eventually support interoperability between the products and systems of different businesses, easing uptake of new products and thereby increasing their commercial value.
  • However, technical standards are currently at an early stage of development, and SDOs are seen as engaging mainly with large technology companies, with the risk that standards reflect those companies’ interests rather than those of wider industry or consumers.

The full report can be accessed here.

If you would like to discuss the commercialisation of AI R&D, please contact Tom Whittaker or David Varney.

This article was written by Marija Nonkovic and Tom Whittaker.