“Could we have foreseen the ethical impacts of the world wide web when it was suggested?”

That was the question posed to the panel by Professor Dame Wendy Hall on 24 January, as the Ada Lovelace Institute presented the findings of its recent research paper, “Looking before we leap? Ethical review processes for AI and data science research”.
The paper was published following recent high-profile examples of controversial and unethical AI research, which prompted widespread concerns around the dangers of AI being used to discriminate against individuals. With AI research growing significantly, issues such as these are being brought into sharper focus and raise complex questions about the ethics of AI research and its impact on society.
Dealing with these issues on a day-to-day basis in industry and academia are research ethics committees (RECs). A REC is a group of people selected to review research proposals and formally assess whether the research is ethical. Against this backdrop, the paper considered the challenges that AI research poses for RECs; how RECs are structured to consider AI research; the limits of research reviews; and what changes are needed to RECs and the wider research ecosystem to address these issues.
The Ada Lovelace Institute summarised some of the key findings of the paper:
- Resource and expertise challenges. Many RECs lack the resources, expertise and training to appropriately address the risks that AI and data science pose. In addition, multi-site and public-private partnerships (often spanning countries) can lead to inconsistent decision-making by RECs.
- Existing principles may not be well suited to AI and data science research. Many sets of AI principles have been developed (over 90 worldwide), but they tend to address AI products (i.e. how to develop an AI product) rather than AI or data science research.
- REC processes do not always evaluate the full range of algorithmic harms. RECs often carry out their reviews ex ante (i.e. before research is carried out and published). However, risks can emerge at different stages of AI and data science research.
- Corporate laboratory ethics review processes have lacked transparency. Most laboratories do not engage with external expertise, and there are concerns that, in laboratories used for commercial purposes, ethics activities are not encouraged to the same degree as they might be in public institutions.
The event also considered some of the paper’s key recommendations:
- Incorporating broader societal impact thinking into REC reviews
- Multi-stage ethics reviews for high-risk research
- Interdisciplinary expertise and training
- Greater transparency from corporate labs about their review processes
- Ecosystem building: funders must incentivise broader impact thinking
A thoughtful panel discussion followed, covering key questions such as when a REC review should begin and end, and what impact it should have; how different actors can incentivise a culture of ethical research; and how RECs can address the broader societal impact of AI and data science research.
The discussion concluded with the panellists considering what key actions they would take in the area of ethical AI and data science research in the next six months. Answers included engaging more directly with the individuals carrying out AI and data science research; including statements in research about the potential societal impacts of the work; sharing case studies within the industry; involving a wider and more diverse group of people in ethical reviews; and seeking independent advice.
The “Looking before we leap?” paper and the topics discussed by the panel further reinforce that AI research is fast-moving and has the potential to affect society in many ways. However, these discussions also highlight that many of those ethical impacts may be unanticipated by, or overlooked during, ethical reviews, and that more needs to be done to consider the wider and longer-term impacts of AI and data science.
Since products and services built with AI and data science research can have substantial effects on people’s lives, it is essential that this research is conducted safely and responsibly, and with due consideration for the broader societal impacts it may have. Further details of the event are available at https://www.adalovelaceinstitute.org/event/looking-before-we-leap/.

As we have commented in a number of recent articles on our AI blog, the ethical, legal and regulatory landscape around AI is subject to significant change and development. If you would like to discuss how current or future regulations and guidance impact what you do with AI, please contact Tom Whittaker or Brian Wong.

This article was written by Helena Sewell.