Current status of AI policy developments in Canada and abroad

November 23, 2023 | David Krebs, Amanda Cutinha

Since generative artificial intelligence (“AI”) technologies were adopted on a large scale by both businesses and private users in 2023, regulators have increased the speed and intensity of their efforts to propose regulations targeting the development and use of AI. Certainly, the public eye has never been more focused on the topic than it is today.

In this article, we provide a status update on these efforts, both in Canada and abroad. There is broad agreement that AI requires specific regulation, but there is also significant divergence on how regulation should apply to, for example, foundation models (such as those underlying ChatGPT), what types of systems are “high impact,” and whether certain applications should be prohibited altogether.

Canada’s Artificial Intelligence and Data Act

Canada’s proposed Artificial Intelligence and Data Act (“AIDA”) was introduced as part of the Digital Charter Implementation Act, 2022 (“Bill C-27”), to provide guardrails for the responsible design, development, and deployment of AI systems in Canada.

Since its introduction, AIDA has been under significant scrutiny for a number of reasons, notably that (1) it purports to regulate “high-impact systems” without actually defining what those systems encompass, (2) it was drafted without the necessary public consultation, and (3) it does not cover uses of AI by government agencies or law enforcement.

In recent meetings of the Standing Committee on Industry and Technology (the “Committee”),[1] it was also pointed out that the definition of “artificial intelligence system” in AIDA may not be perfectly aligned with definitions in other jurisdictions and contexts. The following shows how the definitions of an AI system in Canada and the EU compare:

Canada’s AIDA: “Artificial intelligence system” means a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.

EU AI Act: “Artificial intelligence system” (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with.

While the EU AI Act does not limit the definition by degree of autonomy, AIDA applies only to technological systems that process data autonomously or partly autonomously.

Due to the cross-border development and use of AI systems, it is important that what is deemed “AI” in Canada be consistent with the definitions used in the US and the EU in particular. It was also noted that, because the regulation of AI spans many disciplines and actors in the economy, there should be an independent AI regulator in Canada, as opposed to mere oversight by the Ministry.

On October 3, 2023, the Minister of Innovation, Science and Industry of Canada, François-Philippe Champagne, wrote to the Committee suggesting amendments to Bill C-27.[2] The suggestions included the following amendments specifically pertaining to AIDA:

  • defining classes of systems that would be considered high impact, namely seven classes of systems in respect of matters related to:
    • determinations in respect of employment;
    • determinations as to whether to provide an individual with services, the cost for those services and the prioritization of services;
    • using biometric information in respect of identity authentication or determinations of one’s behaviour or state of mind;
    • moderation of, or presentation of, content to individuals;
    • health care;
    • the adjudication of legal proceedings in court or by an adjudicative body; and
    • the exercise and performance of law enforcement powers, duties and functions.
  • specifying distinct obligations for general-purpose generative AI systems, like ChatGPT, requiring that, before the system is placed on the market or put into service:
    • an impact assessment be conducted;
    • measures to mitigate the risk of bias be put in place and tested to ensure their effectiveness;
    • plain-language descriptions of the capabilities and limitations of the system, and of its risk and mitigation measures, be prepared; and
    • compliance with regulations be ensured;
  • clearly differentiating roles and obligations for actors in the AI value chain, including in relation to the high-impact classes outlined above;
  • strengthening and clarifying the role of the proposed AI and Data Commissioner; and
  • aligning with the EU AI Act and the approaches of other advanced economies by making changes to key definitions, such as the definition of an AI system, and by imposing further responsibilities and accountability frameworks on persons developing, marketing or managing large language models.[3]

These proposed amendments make clear that further changes are required before AIDA is enacted.

Canada’s voluntary AI Code of Conduct

As we have previously reported, the Canadian government recognized the need for some guidance and foundational principles in the interim period between the widespread adoption of generative AI and the coming into force of AIDA. To that end, it published the “Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems” (the “Code of Conduct”).[4]

The Code of Conduct sets out voluntary commitments that industry stakeholders can implement to demonstrate responsible development and management of generative AI systems.

It outlines six core principles:

  • Accountability: Organizations will implement a clear risk management framework proportionate to the scale and impact of their activities.
  • Safety: Organizations will perform impact assessments and take steps to mitigate risks to safety, including addressing malicious or inappropriate uses.
  • Fairness and equity: Organizations will assess and test systems for biases throughout the lifecycle.
  • Transparency: Organizations will publish information on systems and ensure that AI systems and AI-generated content can be identified.
  • Human oversight and monitoring: Organizations will ensure that systems are monitored and that incidents are reported and acted on.
  • Validity and robustness: Organizations will conduct testing to ensure that systems operate effectively and are appropriately secured against attacks.[5]

The Code of Conduct provides a temporary solution for the current lack of legislation governing AI. It remains to be seen how the Code of Conduct is perceived by key stakeholders and whether it is widely adopted.

UK’s AI Safety Summit – “The Bletchley Declaration”

On November 1, 2023, governments from around the world, leading AI companies, civil society groups and AI researchers met at the AI Safety Summit, held at Bletchley Park in the UK, to consider the risks of AI and discuss how such risks could be mitigated by the international community.

On the opening day of the summit, a declaration was signed by 28 countries (including Canada, China, the US and the UK) and the EU. The declaration, referred to as the “Bletchley Declaration,” establishes collaboration among these nations to take a common approach to AI and sets out an agenda for addressing AI risk consisting of two action items:

  • identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies; and
  • building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.[6]

The Bletchley Declaration shows a commitment, from the outset, to tackling AI as an international challenge, and it charts a promising path towards an internationally harmonized approach to regulating AI.

EU AI Act

In April 2021, the European Commission introduced the first proposed regulatory framework for AI in the form of the EU AI Act. The proposal entails the assessment and categorization of AI systems across various uses based on the potential risks they present to users; the level of risk determines the extent of the regulatory measures imposed.

The EU AI Act would thus classify AI systems by risk and mandate corresponding development and use requirements, with a focus on strengthening rules around data quality, transparency and accountability.

Initially, it appeared that the EU would be the first jurisdiction to govern AI. However, in June of this year, amendments to the draft AI Act[7] raised concerns. In particular, the June changes included a ban on the use of AI for biometric surveillance and, more controversially, a requirement that generative AI systems (like ChatGPT) disclose AI-generated content.[8] As a result, the EU’s AI Act negotiations are in a deadlock as large EU countries, including France, Germany, and Italy, are pushing back against the proposed approach for regulating foundation models.[9] Foundation models, such as OpenAI’s GPT-4, have become a focal point in the late stages of the legislative process. A tiered approach for regulating these models, with stricter rules for more powerful ones, was initially considered but is now contested by some large European nations. The opposition is driven by concerns from companies like Mistral in France and Aleph Alpha in Germany, which fear being disadvantaged against US and Chinese competitors.[10]

The next meeting is set for December 6, 2023, a significant deadline given the upcoming European elections.[11] If a resolution is not reached, the entire AI Act is potentially at risk.

US Executive Order

On October 30, 2023, the White House issued an Executive Order (“EO”) focusing on the safe and responsible use of AI in the United States.[12] The EO directs key executive departments to develop standards, practices, and potential regulations within three to twelve months, covering the entire AI lifecycle. While immediate regulatory changes are limited, the order urges federal regulators to use their existing authority to assess AI system security, prevent discrimination, address employment issues, counteract foreign threats, and alleviate talent shortages. The federal government has committed resources and authority to ensure the ethical use of AI in various sectors, with guidelines and rules anticipated over the next year that will likely lead to significant new requirements.

Building on the voluntary commitments secured by the US government in July,[13] the EO moves the US closer to comprehensive AI legislation. Unlike prior efforts to govern AI, the EO creates tangible obligations for both governmental bodies and technology companies rather than simply providing general principles and guidelines. For example, it requires that developers of AI systems share safety test results with the government.

We will continue to monitor AI developments in Canada and abroad. In the meantime, should you have questions about the trajectory of AI legislation and how this may impact your organization, please contact our Privacy, Data Protection and Cybersecurity team.


[1] Minutes of Proceedings dated November 7, 2023, Standing Committee on Industry and Technology, https://www.ourcommons.ca/DocumentViewer/en/44-1/INDU/meeting-95/minutes.

[2] Office of the Minister of Innovation, Science and Industry, letter from the Honourable François-Philippe Champagne to Mr. Joël Lightbound, online (pdf): https://www.ourcommons.ca/content/Committee/441/INDU/WebDoc/WD12600809/12600809/MinisterOfInnovationScienceAndIndustry-2023-10-03-e.pdf

[3] Ibid.

[4] Government of Canada, “Minister Champagne launches voluntary code of conduct relating to advanced generative AI systems,” News Release (September 27, 2023) online: https://www.canada.ca/en/innovation-science-economic-development/news/2023/09/minister-champagne-launches-voluntary-code-of-conduct-relating-to-advanced-generative-ai-systems.html

[5] Ibid.

[6] UK Government, “Policy Paper: The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023,” November 1, 2023, online: https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023

[7] European Parliament, Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), June 14, 2023, online: https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.html; see also: European Parliament, “EU AI Act: first regulation on artificial intelligence,” News Release (June 14, 2023) online: https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence#

[8] European Parliament, “EU AI Act: first regulation on artificial intelligence,” News Release (June 14, 2023) online: https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence#

[9] See Open Letter to the Representatives of the European Commission, the European Council and the European Parliament, online (pdf): https://www.igizmo.it/wp-content/uploads/2023/06/Open-Letter-EU-AI-Act-and-Signatories.pdf

[10] Luca Bertuzzi, “EU’s AI Act negotiations hit the brakes over foundation models,” Euractiv, November 10, 2023, online: https://www.euractiv.com/section/artificial-intelligence/news/eus-ai-act-negotiations-hit-the-brakes-over-foundation-models/

[11] Jillian Deutsch, “The EU’s AI Act Negotiations Are Under Severe Strain,” Bloomberg, November 16, 2023, online: https://www.bloomberg.com/news/newsletters/2023-11-16/eu-ai-act-under-strain-as-chatgpt-could-be-exempt

[12] The White House, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” News Release (October 30, 2023) online: https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

[13] The White House, “FACT SHEET: Biden-⁠Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI,” News Release (July 21, 2023) online: https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/

Disclaimer

This publication is provided as an information service and may include items reported from other sources. We do not warrant its accuracy. This information is not meant as legal opinion or advice.

© Miller Thomson LLP. This publication may be reproduced and distributed in its entirety provided no alterations are made to the form or content. Any other form of reproduction or distribution requires the prior written consent of Miller Thomson LLP which may be requested by contacting newsletters@millerthomson.com.