The Financial Reporting Council recently reported that 73% of companies discussed AI-related matters in their annual reports (up from 49% last year). The disclosures included information about how companies were using AI within their operations, the opportunities presented and the associated risks.

Regardless of whether they are required to report on it, all boards need to assess the potential risks involved when using generative AI. These include:

  • Privacy risk: in the event that personal data is collected, stored and used, there is a risk of data protection violations;
  • Bias and discrimination risk: AI systems trained on biased data or algorithms, or designed and used in ways that embed bias, can produce biased and discriminatory outputs;
  • Ownership rights risk: establishing who owns the intellectual property connected with the results from AI, or from a combined AI and human effort;
  • “Hallucination” risk: AI models can “hallucinate”, acting in an unpredictable and opaque manner and generating different answers to the same questions; and
  • Poor-decision risk: the organisation may make decisions based on inaccurate outputs, give those outputs improper weight, or exclude other relevant factors due to the use of AI.

Boards must have a clear strategy for the use of AI within the business to maximise its potential whilst managing and mitigating these risks. Some of the key barriers to this have been identified as (i) the lack of AI expertise at board level, (ii) the difficulty in quantifying the return on investment in AI, and (iii) concerns about the ethical and societal implications of using AI (Woodside Capital Partners, Artificial Intelligence and how Global Boards Need to be Prepared (18 October 2024)). 

The way to determine and implement a strategy will of course vary from business to business; however, there are some straightforward, initial steps which all boards should be looking at. For example, steering committees may be a useful way to focus the workstream, with clear goals and metrics. Further, establishing a set of “AI principles” to govern the business’ approach in this area may provide a helpful platform to build upon, as well as a valuable mainstay when challenges arise in the future. Boards may also look to recruit directors or senior management with experience in this area to ensure that the business has the necessary resources to progress its strategy in a proactive, meaningful and responsible manner.

It is clear that companies are already looking at the opportunities, as well as the inherent risks, connected with AI. It is important that boards now develop a corporate governance strategy to enable their businesses to confidently harness current opportunities and evaluate those that will arise in the future.

Further information

If you would like to discuss corporate governance and AI, please speak to your usual contact at Burges Salmon, Tom Whittaker or Charlotte Hamilton.