The Canadian government recently introduced the draft Digital Charter Implementation Act, 2022 (Bill C-27), which aims, amongst other things, to regulate the development and use of artificial intelligence (AI) in the private sector.

The Act features three pieces of legislation:

  • the Artificial Intelligence and Data Act (AIDA);
  • the Consumer Privacy Protection Act (CPPA) to replace the Personal Information Protection and Electronic Documents Act (PIPEDA); and
  • the Personal Information and Data Protection Tribunal Act.

AIDA is the first piece of Canadian AI legislation that seeks to impose regulatory requirements on those responsible for the creation and use of AI systems. It follows the Canadian Directive on Automated Decision-Making, which specifies how public institutions are to use automated decision-making.

The main purposes of AIDA are to:

  • regulate international and interprovincial trade and commerce in 'high-impact' AI systems by establishing common requirements, applicable across Canada, for the design, development and use of those systems; and
  • prohibit certain conduct in relation to AI systems that may result in serious harm to individuals or their interests.

The key requirements imposed by AIDA on those responsible for AI systems are:

  • Anonymisation. Take measures to protect and maintain anonymised data.
  • Risk assessments. Undertake assessments to determine whether AI systems are 'high-impact' and put in place measures to mitigate the risk of harm or bias. 'High-impact' AI systems will be defined by separate regulation.
  • Transparency. For high-impact systems, publish clear explanations of how the AI systems work and the mitigation measures in place.
  • Record keeping. Maintain clear records of the measures established to address risk and of the reasons supporting a 'high-impact' assessment.
  • Potential audit. If the Canadian Minister has reasonable grounds to believe that a person has contravened the Act, the Minister can require that person to conduct an audit of the AI system, or to engage an independent auditor to do so.

AIDA introduces a monetary penalty scheme, the stated purpose of which is to promote compliance with the regulations rather than to punish. Other potential penalties include fines of up to a maximum of $25,000,000 or, if greater, 5% of the organisation's global gross revenues; for some offences an individual may face a discretionary fine or imprisonment of up to five years. AIDA also envisages a potential Artificial Intelligence and Data Commissioner to assist with enforcement.

There are clear similarities between AIDA and the EU's proposed AI Act. Neither is set in stone, and both may be amended further, so it remains to be seen how they converge or diverge. In particular:

  • Definition of Artificial Intelligence. This is a hot topic in the EU AI Act, and AIDA's definition differs. For example: AIDA refers to a 'technological system' rather than 'software'; AIDA refers to the processing of data related to human activities (the EU Act looks at the techniques used and the outputs produced but does not refer to the data processed); and AIDA does not require that the AI operate for human-defined objectives (whereas the EU definition currently does);
  • What is a high-impact AI system? AIDA leaves 'high-impact' to be defined by separate regulation. What is high-risk for the EU AI Act is the subject of debate but is currently limited to AI used as safety components or in specified systems; and
  • What obligations are imposed for high-impact AI systems? Both AIDA and the EU AI Act intend to subject AI systems to obligations such as impact assessments, governance frameworks and data retention. To what extent will AI developers and users need to comply with two different sets of obligations?

For further insight on and discussions about AI governance and regulation, please contact Tom Whittaker or Martin Cook.

With thanks to Carly Philips-Jones for contributing to this article.