In a recent Harvard Business Review (HBR) article, two Google employees have emphasised the importance of a collaborative approach to AI development: AI developers and data scientists should partner with the communities, stakeholders and experts who understand how those AI systems will interact with society in practice.
As the authors note, "AI has the power to amplify unfair biases, making innate biases exponentially more harmful." There is a particular risk that data scientists and developers make "causation mistakes", where a correlation is wrongly taken to signal cause and effect. As the article puts it: "This lack of understanding can lead to designs based on oversimplified, incorrect causal assumptions that exclude critical societal factors and can lead to unintended and harmful outcomes."
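By way of illustration only (this sketch and its synthetic data are not drawn from the HBR article), a minimal Python example of how such a causation mistake can arise: a model trained on a feature that merely correlates with the outcome through a hidden societal factor can look accurate while capturing no causal relationship at all.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hidden societal factor (e.g. historical under-investment in an area).
confounder = rng.normal(size=n)

# Observable proxy feature (e.g. a postcode grouping): correlated with
# the hidden factor, but with no causal effect of its own.
proxy = (confounder + rng.normal(scale=0.5, size=n) > 0).astype(int)

# The outcome is driven by the hidden factor alone.
outcome = (confounder + rng.normal(scale=0.5, size=n) > 0).astype(int)

# A model trained only on the proxy still scores well, so it is easy to
# conclude, wrongly, that the proxy causes the outcome.
X = proxy.reshape(-1, 1)
model = LogisticRegression().fit(X, outcome)
print("accuracy using proxy alone:", model.score(X, outcome))

A stakeholder familiar with the affected community would recognise that the proxy is simply standing in for the underlying societal factor, which is precisely the kind of blind spot the authors argue collaboration is needed to catch.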
To address this risk, the authors suggest that the societal context needs to be factored into the AI system through what they call "community-based system dynamics". However, no individual person or algorithm can see society's complexity in its entirety or fully understand it. "So, to account for these inevitable blindspots and innovate responsibly, technologists must collaborate with stakeholders — representatives from sociology, behavioral science, and the humanities, as well as from vulnerable communities — to form a shared hypothesis of how they work."
The article is of particular interest because there are calls in the UK for an Accountability for Algorithms Act, which include "a right for workers to be involved to a reasonable level in the development and application of systems". Such a right is motivated by the need to ensure transparency, but the HBR article shows that such stakeholder involvement can also improve an AI system's performance.
There have been many calls for AI to be developed "ethically" (see, for example, the EU's proposals for an ethical framework); perhaps such calls will carry greater weight if the ethical principles can be shown also to improve technical performance. As the authors say, AI engineers need to think beyond engineering.
From the article: "AI system developers — who usually do not have social science backgrounds — typically do not understand the underlying societal systems and structures that generate the problems their systems are intended to solve. This lack of understanding can lead to designs based on oversimplified, incorrect causal assumptions that exclude critical societal factors and can lead to unintended and harmful outcomes."
https://hbr.org/2020/10/ai-engineers-need-to-think-beyond-engineering