The MIT Schwarzman College of Computing has kicked off an AI Policy Forum to develop frameworks and tools to help governments and companies make practical decisions about AI policy.

There are already various sets of AI principles.  For example, in 2019 the OECD adopted its five Principles on Artificial Intelligence, which it called "the first international standards agreed by governments for the responsible stewardship of trustworthy AI".  Also in 2019, the EU Commission presented its Ethics Guidelines for Trustworthy Artificial Intelligence, which identified seven overarching principles.  There are also the seven (different) GDPR principles, which apply specifically to the processing of personal data.

These principles are a useful starting point.  Common themes can be drawn from them, such as the importance of human oversight, and they show that regulators and governments are taking a keen interest in the development and deployment of AI.  But there is also a need for tools that help put those principles into practice.

As the MIT provost explains: “Moving beyond principles means understanding trade-offs and identifying the technical tools and the policy levers to address them. We created the college to examine and address these types of issues, but this can’t be a siloed effort. We need for this to be a global collaboration and engage scientists, technologists, policymakers, and business leaders.”

The AI Policy Forum is designed as a global collaboration, and its work is likely to have a global impact.  Jurisdictions will no doubt differ in how they regulate and legislate for AI.  But the nature of the technology, and of many of the companies and research efforts involved, means that a lot of the issues to be addressed will be global too.

The AI Policy Forum is designed as a yearlong process, with a summit in spring 2021 and a follow-on event in autumn 2021 to showcase the research to date and what is needed for the future.