The recent launch of Project Glasswing on April 7, 2026 signals a broader shift in the cybersecurity landscape: AI is rapidly increasing both the speed and sophistication of cyber threats.

Specifically, Project Glasswing was initiated to give a select group of organizations pre‑launch access to Anthropic’s Mythos tool, so they can help secure critical software before its vulnerabilities can be exploited.

The project stemmed from Anthropic’s discovery of just how effective Mythos was at penetrating existing systems and identifying previously unknown software vulnerabilities. Rather than releasing Mythos publicly, Anthropic limited access to vetted organizations to reduce the likelihood of misuse by malicious actors.

This article discusses the potential impact of the development of Mythos and similar tools on Canadian organizations, and the steps they might consider taking to prepare for this new world of AI‑powered cybersecurity risks.

Data protection and cybersecurity laws in Canada

Canadian organizations are subject to a host of data protection, privacy, and cybersecurity laws, depending on their business activities, location, and sector. Commonly, these include:

  • the federal Personal Information Protection and Electronic Documents Act (PIPEDA), which applies to private‑sector organizations engaged in commercial activities;
  • substantially similar provincial private‑sector privacy laws in Quebec, Alberta, and British Columbia, including Quebec’s recently modernized regime under Law 25;
  • health‑sector privacy laws, such as Ontario’s Personal Health Information Protection Act (PHIPA); and
  • sector‑specific requirements, such as cybersecurity guidance applicable to federally regulated financial institutions.

Beyond regulatory compliance, organizations must also be in a position to protect their operations and services for customers and other stakeholders. Cybersecurity concerns pose risks not only to data, but also to business continuity and operations. Data breaches and cyber incidents can cause reputational, emotional, financial, and even physical harms to those affected. Needless to say, they represent enterprise‑level risks.

While these risks have been present for many years, they are now evolving as bad actors leverage AI to accelerate, scale, and execute attacks in ways that were previously not possible.

Just as organizations are increasingly using AI to create and analyze content, criminals are using AI to generate malicious code, automate attacks, craft more convincing phishing and spear‑phishing campaigns, and—through tools like Mythos—discover and exploit existing and previously unknown software vulnerabilities. The concern is that these vulnerabilities may be identified and exploited before patches or updates can be developed and deployed.

What should organizations be thinking about in the AI‑powered cybersecurity threat landscape?

From a legal perspective, organizations are required to use reasonable technical, organizational, and physical safeguards to protect personal information.

From an operational and financial perspective, they must safeguard their business and their “crown jewels” of data, including confidential information, intellectual property, and customer data. These underlying obligations have not changed. What is changing is what it takes to meet them effectively.

The following is not a comprehensive or exhaustive list, but highlights key considerations for organizations seeking to protect their operations and data in an AI‑enabled threat environment:

  • Awareness. Be aware of emerging AI‑driven threats and assess how they may impact your organization’s security posture.
  • Patching. Software patching routines may need to be reconsidered given the speed at which vulnerabilities can now be discovered and exploited. Greater emphasis may be required on continuous or ongoing patching rather than traditional cyclical approaches.
  • Training. AI enables more sophisticated social engineering, phishing, and spear‑phishing attacks. Training should not be a “tick‑the‑box” exercise, but should meaningfully increase staff awareness and their ability to detect and escalate concerns.
  • Detection. Organizations should assume that system penetration is possible, even with strong preventative controls in place. Implementing and actively monitoring effective detection tools is increasingly essential.
  • Incident response. Building organizational “muscle memory” for responding to incidents is more important than ever. This includes clearly assigned roles and responsibilities, familiarity with external advisors, and appropriate insurance coverage. Table‑top exercises remain an important tool in developing these capabilities.
  • Data protection and segregation. Organizations should know where their most sensitive data resides, limit collection and retention to what is necessary, segregate critical data sets, and implement strong access controls to reduce the impact of a potential breach.
  • Service providers. AI‑enabled attacks increasingly target vulnerabilities in vendors and other service providers. Organizations should assess third‑party safeguards and ensure contracts clearly address cybersecurity controls, breach notification obligations, and response expectations in light of these new risks.
  • Use of AI. Organizations should understand how AI tools are being used internally. Clear governance, policies, and controls are needed to prevent unintended data exposure and the creation of new cybersecurity or privacy risks.

If you have any questions or would like to discuss data protection and cybersecurity strategies in more detail, please reach out to our Canadian Privacy & Cybersecurity lawyers.