The Parliamentary Office of Science and Technology has produced a useful overview of the importance of interpretability in machine learning decision-making and of the techniques available to assist.
Machine learning is a branch of artificial intelligence that allows a system to learn and improve from examples without all of its instructions being explicitly programmed. It is used to take decisions, or to assist with decisions, in a range of applications, some of which are high risk - for example, in financial services, healthcare and transport.
Various stakeholders have an interest in interpreting what a machine learning system did. The company that developed the system and the company that used it need to understand what went wrong, not least to prevent it happening again. Regulators and governments need to understand whether there are wider, systemic risks that require them to take action. And if a dispute arises, the courts will need to interpret the machine learning to determine liability.
However, experts have raised concerns about a lack of transparency in decisions made or informed by machine learning systems. This is a particular issue for certain complex types of machine learning, such as deep learning, where it may not be possible to explain completely how a decision has been reached.
The report notes that what is required for interpretability (also called "explainability" or "intelligibility") is still open to debate. Take machine learning in the healthcare industry, for example. If machine learning assists in a medical diagnosis, what is meant by interpretability depends on the context: what is the medical issue being diagnosed, and how serious is it; who needs to understand - the patient or the doctor; and what else is important - accuracy of diagnosis may be valued more highly than interpretability.
The report also notes that "there is no UK regulation specific to [machine learning]" but that various laws may apply, including data protection, human rights and the Equality Act 2010. Some of those laws may impose requirements relevant to interpretability. The GDPR provides individuals with the right to receive an explanation of an automated decision made about them, although the extent of this right is debated.
Principles, standards and guidelines are also available. The report notes that a range of these have been produced by public bodies and private sector organisations, and that a 2019 analysis found 84 sets of ethical principles or guidelines for AI published globally; some will be more useful or relevant than others.
This leaves those developing and using machine learning in a potentially difficult position: what reasonable steps should they take so that the machine learning they develop or use is interpretable? What is clear from the report is that there is a variety of tools and techniques to assist, but that what is appropriate will depend on the context; there is no one-size-fits-all approach.