The UK's Responsible Technology Adoption Unit (RTA), part of the Department for Science, Innovation and Technology, has produced a ‘Model for Responsible Innovation’, which it describes as a ‘practical tool … to help teams across the public sector and beyond to innovate responsibly with data and AI’ (here).
Below, we summarise the Model and its ‘Fundamentals’ and ‘Conditions’. Notably, the RTA uses the Model as part of free red-teaming workshops offered to public sector organisations looking to develop data-driven technology, such as AI systems. The Model therefore provides both a ‘vision for what responsible innovation in AI looks like, and the component Fundamentals and Conditions required to build trustworthy AI’ and a practical tool for identifying and mitigating potential risks, although what the Fundamentals and Conditions look like in practice will depend on the context.
The Model
According to the RTA:
The Model builds on existing frameworks and principles for ethical AI, such as the OECD principles for trustworthy AI. It is designed to align with the UK’s existing domain-specific guidance for responsible innovation, such as the Data Ethics Framework.
Objective
The objective of responsible innovation is to build justified trust in the AI and data tools that are developed and used.
Justified trust comes from designing and deploying systems in a way that earns and deserves the trust of the stakeholders who use them.
Fundamentals
The RTA says that the Fundamentals teams should work towards when developing and implementing their systems are:
Transparency - ensuring systems are open to scrutiny, with meaningful information provided to relevant individuals across their lifecycle.
Accountability - ensuring systems have effective governance and oversight mechanisms, with clear lines of appropriate responsibility across their lifecycle.
Human-centred Value - ensuring systems have a clear purpose and benefit to individuals, and are designed with humans in mind.
Fairness - ensuring systems are designed and deployed against an appropriate definition of fairness, and monitored for fair use and outcomes.
Privacy - ensuring systems are privacy-preserving, and that individuals’ rights over their personal data are respected.
Safety - ensuring systems behave reliably as intended, and their use does not inflict undue physical or mental harms.
Security - ensuring systems are measurably secure and resistant to being compromised by unauthorised parties.
Societal Wellbeing - ensuring systems support beneficial outcomes for societies and the planet.
Notably, the RTA states that all of the Fundamentals should be present but do not necessarily need to be maximised, as some will involve trade-offs. ‘For example, some projects could maximise security, at the cost of increased risks that the system is less explainable, transparent and accountable.’
Conditions
Underlying the Fundamentals are the Conditions. These are the technical, organisational and environmental factors that must be satisfied in order for the Fundamentals to be met. Located on the inner ring of the Model, they are:
Meaningful Engagement - engaging effectively with experts, stakeholders, and the general public, using these insights to inform the system in question.
Robust Technical Design - ensuring that both the functional design of a system (how it behaves towards outside agents) and its technical design (how that functionality is implemented in code) are robust.
Appropriate & Available Data - ensuring a system has access to the right data needed to achieve its desired outcomes and effectively monitor performance.
Clear Boundaries - ensuring there are clear boundaries on a system’s intended use, and clear understanding of the consequences of exceeding them.
Available Resources - ensuring the resources (technical, legal, financial, etc.) needed to effectively build and use a system are provided.
Effective Governance - ensuring that the right processes and policies are in place to guide a system’s development and operation, to ensure its adherence to the project’s goals, standards and regulations, and to provide recourse where necessary.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Smith or any other member of our Technology team.