As Canada develops an artificial intelligence (“AI”) regulation framework, two divergent models are emerging: a deregulatory push from the United States, and a risk-based regime from the European Union. One thing the U.S. and EU approaches have in common is wariness of a state-by-state “patchwork” of AI rules. With these approaches as reference points, Canada’s federal AI Strategy Task Force is consulting on a Canadian national AI framework. Notably, while that work is ongoing, provinces such as British Columbia, Ontario, Alberta, Manitoba, and Saskatchewan have either implemented or are developing their own AI policies and guidelines.

This article highlights how AI law is developing internationally, and what Canadian businesses should be watching as they plan for AI adoption and governance.

U.S.: AI deregulation

Statements from the U.S. federal government emphasize the potential economic benefits of AI and the importance of a national-level AI framework. The U.S. is focused on economic “dominance” and reducing “cumbersome regulation” as affirmed in an AI Executive Order signed on December 11, 2025 (the “U.S. Order”).[1]  However, the U.S. Order also warns of compliance challenges associated with “a patchwork of 50 different [state] regulatory regimes” and tasks advisors to “jointly prepare a legislative recommendation establishing a uniform Federal policy framework for AI.”[2]  Ultimately, the U.S. Order calls for the implementation of an AI Litigation Task Force “whose sole responsibility shall be to challenge State AI laws inconsistent with the [minimally burdensome national policy framework for AI].”[3] 

Because no U.S. national AI policy framework yet exists, the near-term impact of the U.S. Order could be a rollback of certain state-level AI laws occurring in a regulatory vacuum. The U.S. Order requires the U.S. administration to:

  • publish an evaluation of state AI laws within 90 days; and
  • identify any “onerous laws” that should be referred to the AI Litigation Task Force.[4] 

Perhaps foreshadowing what is expected to be achieved through this process, the U.S. Order specifically calls out a Colorado state AI law for scrutiny on the grounds that it may “force AI models to produce false results in order to avoid a ‘differential treatment or impact’ on protected groups.”[5]

Thus, although the substance of the future U.S. national-level AI framework remains uncertain, it is clear that the current U.S. administration views a “patchwork” of state-level regulation as a barrier to AI innovation and something to avoid.

Canadian organizations with U.S. operations should anticipate continued emphasis by the U.S. federal government on AI deregulation, along with active discouragement of the implementation and application of state-level AI laws.

EU: Risk-based regulation

Similar to the U.S., the EU was motivated by a desire to avoid “[d]iverging national rules” and “fragmentation of the internal [EU] market” in its development of the EU AI Act.[6] That legislation is applicable across the EU, and imposes compliance requirements designed to progressively increase in accordance with “the intensity and scope of the risks that AI systems can generate.”[7] For example, the EU AI Act prohibits the use of certain AI systems considered to present unacceptable risk, and mandates risk management, data governance, technical documentation, human oversight and quality management practices, among other requirements, for developers of high-risk AI systems.[8] While the EU AI Act was adopted in June 2024, several of its provisions are slated to come into force in August 2026. A more detailed overview of the EU AI Act can be found here.

Notably, despite the EU and U.S. sharing a common “no-patchwork” goal, reports suggest that other disparities between the two approaches have led the U.S. government and the technology industry to apply “fierce pressure” on the EU to water down the EU AI Act.[9] Amendments to the EU AI Act, known as the “Digital Omnibus” package, have been tabled with the aim of reducing the administrative compliance burden on businesses.[10] If adopted, the package would, among other things, push back the date when rules relating to high-risk AI systems come into force, allowing additional time to promulgate compliance guidelines.[11] It is unclear whether the amendments are intended to address the external pressure being exerted on the EU, or merely to refine what is the first legislation of its kind.

Canada: A federal AI framework under construction

Although Canada has federal laws in force that address some AI-related issues (such as federal privacy laws, notably PIPEDA), none is expressly dedicated to AI. The prior AI-focused federal bill died on the Order Paper at the time of the federal election of April 28, 2025, and Canadian AI policy makers are currently working to produce a new proposal for a federal AI framework. As an initial step towards a Canadian AI framework to support Canada’s digital sovereignty, the Minister of Artificial Intelligence and Digital Innovation launched Canada’s AI Strategy Task Force (the “Task Force”) in the fall of 2025, and the input of stakeholders has been sought.[12] The Task Force’s mandate is to develop a federal AI framework focused on Canada’s “vision for AI and digital sovereignty.” Until that process is complete, the shape that Canada’s federal AI framework will ultimately take remains unclear.

In the absence of a federal AI framework, a patchwork of AI approaches has emerged in Canada. Some provinces have developed provincial AI policies and guidelines, including: British Columbia’s Policy on the use of generative AI, and a Digital Code of Practice directed to public servants and contractors; Ontario’s Enhancing Digital Security and Trust Act governing the public sector; and Saskatchewan’s generative artificial intelligence guidelines for Government of Saskatchewan employees. Other provinces are in the process of developing AI policies, including Manitoba, which is considering a taskforce report, and Alberta, where the provincial Privacy Commissioner recommended that Alberta create its own AI framework in a report published in August 2025.[13] Notably, these provincial initiatives are also looking to examples from other jurisdictions, with the Alberta report suggesting that the province’s AI framework align with the EU AI Act.[14]

These initiatives indicate that Canada could develop a patchwork of provincial AI regulations prior to the federal laws being implemented. Canadian businesses should keep their eye on AI regulation at both the provincial and federal levels.

What should Canadian organizations be doing now?

Canada will not remain without AI regulation affecting Canadian businesses for long. Organizations engaged in AI use and/or development should take action now in anticipation of that future regulation.

Practical steps that businesses should consider now to prepare for a more regulated AI environment include:

  • mapping current and planned uses of AI, paying particular attention to the type of data inputs that each requires and processes (e.g., “sensitive data” as addressed under PIPEDA), and whether the AI is a public-facing or private system; and
  • monitoring emerging provincial and federal AI regulation and guidelines, and aligning internal policies with common themes championed by those initiatives (e.g., responsible use of AI, accountability, etc.).

As AI policy continues to evolve internationally and within Canada, organizations that move early on governance and compliance will be better positioned than those waiting for the final text of a federal law. Miller Thomson’s Technology, Intellectual Property and Privacy lawyers are closely monitoring the work of Canada’s AI Strategy Task Force, provincial developments, and the international AI regulatory environment. If you are using or developing AI tools, procuring AI systems, or adapting to new public‑sector AI requirements, our team can help you strategize and design a governance framework aligned with emerging Canadian and international standards applicable to your business.


[1] The White House, Ensuring a National Policy Framework for Artificial Intelligence (Dec. 11, 2025), available at https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/ (hereinafter, the “U.S. Order”).

[2] Id.

[3] Id.

[4] Id.

[5] Id.

[6] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (hereinafter, the “EU AI Act”), preamble ¶ (3).

[7] Id. at preamble ¶ (26). 

[8] Id. at Articles 5–15.

[9] B. Moens, Financial Times, EU set to water down landmark AI act after Big Tech pressure (Nov. 7, 2025), available at https://www.ft.com/content/af6c6dbe-ce63-47cc-8923-8bce4007f6e1.

[10] See European Commission, Proposal for a Regulation of the European Parliament and of the Council Amending Regulations (EU) 2024/1689 and (EU) 2018/1139 as regards the simplification of the implementation of harmonised rules on artificial intelligence (Digital Omnibus on AI), available at https://digital-strategy.ec.europa.eu/en/library/digital-omnibus-ai-regulation-proposal.

[11] Id. at Article 1 ¶ (31). 

[12] Government of Canada launches AI Strategy Task Force and public engagement on the development of the next AI strategy (Sept. 26, 2025) available at https://www.canada.ca/en/innovation-science-economic-development/news/2025/09/government-of-canada-launches-ai-strategy-task-force-and-public-engagement-on-the-development-of-the-next-ai-strategy.html.

[13] Comments from the Office of the Information and Privacy Commissioner Regarding Responsible AI Governance in Alberta (July 15, 2025), available at https://oipc.ab.ca/office-of-the-information-and-privacy-commissioner-publishes-report-to-provincial-government-on-how-to-develop-a-framework-governing-use-of-artificial-intelligence-ai-in-alberta/.

[14] See, id. at 8.