With the continued absence of AI-specific regulations in Canada, provincial regulators and agencies are increasingly attempting to provide guidance on the use of AI to protect privacy and other human rights in Canada. Organizations, in turn, may be well advised to consider this guidance to ensure compliance and to meet stakeholder expectations regarding the responsible use of AI.
The Information and Privacy Commissioner of Ontario (the “IPC”) and the Ontario Human Rights Commission (the “OHRC”) have jointly released a new set of principles governing AI use in the province, titled Principles for the responsible use of artificial intelligence (the “Joint Principles”). The Joint Principles complement a growing list of Canadian AI frameworks for both government and private organizations, including the IPC and OHRC’s 2023 joint statement on the use of AI technologies, Ontario’s Responsible Use of Artificial Intelligence Directive, and the federal government’s 2024 guide on the use of generative AI and Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. They also align with international frameworks, such as the European Union’s AI Act, the ASEAN Guide on AI Governance and Ethics, and the Organisation for Economic Co-operation and Development’s AI Principles.
The Joint Principles are intended to assist organizations in protecting privacy, human rights, human dignity, and public trust when implementing AI tools. While not mandatory, the IPC and OHRC strongly recommend that organizations adopt the principles to ensure compliance with Ontario’s privacy and human rights laws. To implement these principles, Canadian organizations may wish to review their existing policies and procedures, employee training programs, and contracts or other agreements with vendors.
The Joint Principles consist of the following key points, each of equal importance:
- Valid and Reliable: AI systems must demonstrably achieve their intended purpose and perform consistently across varied circumstances to ensure accuracy. Organizations should conduct regular validity and reliability assessments of each AI system.
- Safe: AI systems should not cause harm or infringe upon human rights, including the rights to privacy and non-discrimination. To maintain safety, organizations should implement strong cybersecurity measures, conduct regular audits, and swiftly decommission unsafe AI systems.
- Privacy-Protective: AI system designers should implement a “privacy by design” approach, which builds privacy protections directly into the system to safeguard personal data. Designers must also comply with federal and provincial privacy legislation.
- Human Rights-Affirming: AI system engineers and designers must proactively prevent human rights violations through measures such as monitoring and adjusting training data to prevent inherent biases from being incorporated into AI systems. Organizations should avoid applying AI uniformly across diverse groups where doing so may result in adverse effect discrimination.
- Transparent: Organizations using AI systems should disclose their use in an understandable manner. In particular, the use of AI systems should be:
  - Visible: organizations should publicly disclose their use of AI;
  - Understandable: organizations should create clear documentation explaining how the AI system works and why errors may occur;
  - Explainable: organizations should be able to justify how and why each AI system produces specific outputs; and
  - Traceable: organizations should understand how the AI system operates through data training and management, performance metrics, and periodic evaluations.
- Accountable: Organizations should maintain a “human in the loop” (i.e., human review of AI outputs) to ensure accountability. They should conduct risk assessments, assign oversight responsibility, document decision-making processes, and establish whistleblowing mechanisms for reporting concerns. An independent body should oversee each organization’s use of AI systems and have the authority to implement corrective measures if an AI system malfunctions or fails.
The Joint Principles are the latest example of how governments continue to navigate the multidisciplinary issues posed by AI implementation. While not binding, organizations should consider incorporating the Joint Principles into their AI policies and procedures as a means of prioritizing legal compliance, demonstrating due diligence, reducing legal exposure, and building public trust before binding legislation governing the use of AI arrives.
If you would like further information or to discuss these issues, please reach out to a member of our Technology, Intellectual Property, and Privacy Group.