Artificial intelligence (“AI”) is transforming the fintech sector, and AI agents are at the forefront of this shift. AI agents are capable of executing real-time transactions based on pre-programmed objectives. When paired with stablecoins – cryptocurrencies designed to maintain a stable value over time in relation to a reference asset – AI agents can facilitate low-latency and low-cost digital payments. The convergence of AI and blockchain technologies paves the way for autonomous financial systems capable of operating continuously with minimal oversight, presenting both new opportunities and complex regulatory challenges.
This article is the second in a series by Miller Thomson LLP exploring the intersection of stablecoins and AI agents, and their emerging significance within fintech.
Global regulatory awareness and the GENIUS Act
European Union
The European Union AI Act (the “AI Act”) is the world’s first comprehensive AI law and classifies AI systems based on risk:
- Unacceptable Risk Systems: AI systems used for social scoring, manipulation or exploitation are strictly prohibited because they pose an unacceptable threat to people’s rights and safety.
- High-Risk Systems: AI systems deployed in critical sectors and infrastructure must meet stringent requirements and assessments, provide clear information to users, and may require human oversight.
- Limited-Risk Systems: AI systems not classified as unacceptable or high-risk, but subject to specific transparency obligations, such as disclosing the system’s intended purpose and operational characteristics.
- Minimal-Risk Systems: AI systems not regulated by the AI Act because of their minimal impact on users.
The AI Act may provide regulatory insight into the integration of AI with stablecoins. Because financial services is a high-impact sector, the integration may initially be classified as high-risk until such digital commercial transactions mature.
Canada
Canada does not yet have a regulatory framework specific to AI. However, the federal government has proposed the Artificial Intelligence and Data Act (“AIDA”) as part of the Digital Charter Implementation Act, 2022. Similar to the AI Act, if enacted, AIDA would impose requirements on the design, development, and deployment of AI systems, with a focus on transparency, accountability, and risk management. Under AIDA, businesses would be required to:
- identify and address the risks related to harm and bias in their AI systems;
- assess the intended use and limitations of their AI system and ensure users are clearly informed of them; and
- implement appropriate risk mitigation strategies and ensure their systems are continually monitored.
United States
As the integration of AI and stablecoins becomes increasingly feasible, lawmakers are beginning to recognize the need for regulatory frameworks that can address the legal, financial and ethical implications of autonomous digital transactions. In the United States, the recently passed Guiding and Establishing National Innovation for U.S. Stablecoins Act (the “GENIUS Act”) establishes a comprehensive federal framework for stablecoin issuance and oversight. While the GENIUS Act does not directly regulate AI, its enactment signals growing legislative awareness of emerging technologies, particularly in the cryptocurrency space.
From concept to capability: The rise of autonomous agents
The development and deployment of AI agents are no longer limited to those with deep technical expertise. For example, OpenAI recently introduced “Agent,” an autonomous AI system capable of performing multi-step tasks by interacting with the web using its own browser. Agent expands upon OpenAI’s “Operator” by combining its web interaction capabilities with ChatGPT’s Deep Research tool. This integration demonstrates an AI agent’s ability to plan, browse, and act on a user’s behalf. While the full scope of its commercial use is still evolving, Agent signals a shift toward greater accessibility of AI-driven automation.
Why stablecoins are an ideal fit for AI agents
The integration of AI agents with stablecoins represents a potentially transformative development in the fintech space. Stablecoins offer a predictable medium of exchange that mitigates the volatility associated with traditional cryptocurrencies. This makes stablecoins particularly well-suited for use by AI agents. With successful interoperability, AI agents could facilitate continuous, low-cost digital commerce, operating with minimal human oversight and within a stable valuation framework. As AI agents begin to interact with stablecoins, legislators will be under pressure to ensure that laws and regulations surrounding agency, liability, and contractual enforcement keep up with technological innovation.
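To illustrate how such an integration could look in practice, the sketch below shows an AI agent settling a payment in an ERC-20 stablecoin using the web3.py library. It is a minimal, hypothetical example only: the RPC endpoint, token address, agent key, and six-decimal assumption are placeholders, and a real deployment would add the spending limits, human-in-the-loop approvals, and compliance controls discussed above.

```python
# Illustrative sketch only: an AI agent settling a purchase in an ERC-20
# stablecoin via web3.py (v7+). All addresses, keys, and endpoints are
# hypothetical placeholders, not real infrastructure.
from web3 import Web3

RPC_URL = "https://example-rpc.invalid"        # hypothetical blockchain RPC endpoint
STABLECOIN = "0x0000000000000000000000000000000000000000"  # hypothetical stablecoin contract
AGENT_KEY = "0x..."                            # hypothetical agent wallet key (never hard-code in practice)

# Minimal ABI: the agent only needs the ERC-20 transfer() function.
ERC20_ABI = [{
    "name": "transfer", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "to", "type": "address"},
               {"name": "value", "type": "uint256"}],
    "outputs": [{"name": "", "type": "bool"}],
}]

def pay_merchant(merchant: str, amount_tokens: float) -> str:
    """Sign and broadcast a stablecoin transfer on the agent's behalf; return the tx hash."""
    w3 = Web3(Web3.HTTPProvider(RPC_URL))
    account = w3.eth.account.from_key(AGENT_KEY)
    token = w3.eth.contract(address=Web3.to_checksum_address(STABLECOIN), abi=ERC20_ABI)

    amount = int(amount_tokens * 10**6)        # assumes a six-decimal stablecoin
    tx = token.functions.transfer(
        Web3.to_checksum_address(merchant), amount
    ).build_transaction({
        "from": account.address,
        "nonce": w3.eth.get_transaction_count(account.address),
    })
    signed = account.sign_transaction(tx)
    return w3.eth.send_raw_transaction(signed.raw_transaction).hex()
```

Even in this simplified form, the sketch highlights the legal questions raised above: the agent, not a human, signs and broadcasts the transaction, which is precisely why rules on agency, liability, and contractual enforcement will need to keep pace.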
Conclusion and key takeaways
Entrepreneurs and businesses in Canada should prepare for the growing role of AI agents in digital commerce and their potential integration with stablecoins, both of which are subject to rapidly evolving legal and regulatory developments. By proactively adopting responsible AI practices, businesses can align with emerging global standards and position themselves at the forefront of innovation in the fintech space.
The following takeaways highlight the key implications for businesses navigating this evolving landscape:
- Autonomous Transactions: The integration of AI agents with stablecoins will enable instantaneous and autonomous financial transactions with minimal human oversight, offering businesses greater speed, efficiency, and scalability.
- Stable Digital Commerce: The integration of AI agents with stablecoins will facilitate stable financial transactions, allowing AI agents to avoid the volatility associated with traditional cryptocurrencies and enabling more reliable financial automation.
- International Legislation: The AI Act signals a path towards autonomous electronic financial systems that use a stable monetary medium. Although human oversight may be required for high-risk systems, entrepreneurs building AI agents integrated with stablecoins can look to the AI Act’s risk tiers to classify their activities and adhere to the corresponding compliance measures.
- Canadian Regulation: AIDA is intended to regulate the responsible design, development, and deployment of AI systems. Although Canadian law does not currently require a licence to deploy AI agents, businesses should begin aligning their AI development and deployment practices with AIDA’s risk mitigation and transparency requirements. Proactively adopting internal controls, governance frameworks, and responsible AI practices can reduce legal exposure and position businesses competitively as regulation catches up to innovation.
Miller Thomson LLP’s Technology, IP and Privacy Group lawyers focus on securities, intellectual property, corporate and tax law in the context of cryptocurrency, blockchain and AI. Please reach out to a member of the team if you have any questions.