Earlier this week, the GC100, the association of General Counsel and Company Secretaries working in FTSE 100 companies, published the results of a poll of its members on the use of AI at Board and Committee level. Specifically, the poll focused on what might be considered one of the ‘lower risk’ uses of AI in the Boardroom: Board packs and minute-taking. Although the sample size was small (109 respondents, with not all answering every question), the responses illustrate a growing appetite among GCs and CoSecs to introduce AI, at least in these limited circumstances, provided concerns around confidentiality and information security can be responsibly addressed.
Of those who answered, 92% of respondents have not introduced AI to assist with minute-taking in their organisation. The same proportion do not currently use third-party AI technology to assist with the preparation of Board packs. However, 37% were in favour of the use of AI in the Boardroom (with 27% undecided). Among the 36% against such use (and, in fact, among advocates too), the particular concerns cited were that, while AI may drive efficiency, risks remained around: error (e.g., AI missing nuance or misunderstanding what had been said); confidentiality and loss of privilege in legal advice contained in packs or minutes; the discoverability of the information recorded by AI; and a lack of transparency in how AI determined what should go into the minutes (i.e., how it exercised judgment on content). One respondent noted that their Directors have expressed concerns about the verbatim recording of confidential and sensitive conversations; “Minutes are not and should not be verbatim records of meetings,” noted another.
Partially addressing those concerns, early adopters point out that minutes and transcripts produced by AI are typically reviewed by the CoSec (or a team member). At present, that appears to be an essential element of quality control. Provided confidentiality, privilege and information security concerns can be addressed, AI can take on some of the legwork: for example, putting all Board and committee packs into a standardised format and presentation style; sense-checking for grammatical errors; assisting with the addition of analytical graphics; or even suggesting improvements in the presentation of information. But whether used for preparing Board packs or for producing a first draft of minutes, there seems to be agreement that human input is essential before finalisation.
The reservations expressed are to be expected, but do not seem insurmountable (as is evident from the 8% who have already embraced AI in some way). There are AI tools in which information can be ‘sequestered’ to address concerns about confidentiality and other sensitivities. Likewise, to address the concern of certain respondents that Directors and Committee attendees would not want verbatim records, AI does not necessarily need to take a verbatim recording in order to produce a summarised output. However, the need to address these and other issues points to one of the evident blockers to faster adoption of AI by Boards and their committees: it won’t happen until the majority of those sitting in the room are satisfied that they understand what the AI is doing and how these apparent risks are being mitigated.
This article was written by Lloyd Nail.
If you would like to discuss AI regulation and risk, please contact Lloyd Nail, Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Smith or any other member in our Technology team.