Are Your HR Artificial Intelligence Systems Legally Compliant? - Glenn Commandments

While artificial intelligence (AI) is seeping into just about every business function, it seems to be having its greatest impact on HR. According to a new report, 81% of Canadian HR professionals say they use AI tools at work, more than in finance, marketing, or any other sector. HR directors should take pride in this, but being in the vanguard of AI adoption also carries certain risks. For all its potential, AI remains a work in progress. A common vulnerability of AI solutions is noncompliance with legal requirements. All too often, companies don't discover these compliance gaps until after they've invested significant resources in putting the system into operation. 

AI Privacy & Discrimination Risks  

When using AI for HR functions, the two biggest compliance risks are potential violations of privacy and human rights laws. A perfect example of the former is a union grievance challenging a national bus company's decision to replace its conventional in-vehicle cameras with an elaborate new AI-based Samsara remote monitoring system that gathered more extensive personal data about drivers. Finding that the resulting harm to drivers' privacy outweighed the relatively marginal safety improvements, the federal arbitrator ordered the company to dismantle parts of the Samsara system and pay each affected driver $100 in compensation for the privacy damage [STT de Coach Canada – CSN v Newcan Coach Company ULC (Coach Canada), 2025 CanLII 96672 (CA SA)]. 

Human rights laws come into play to the extent that AI systems embed the subtle biases of the human beings who program them. When used for recruitment, hiring, and other HR functions, these discriminatory algorithms lead to employment decisions based on race, sex, national origin, and other personal characteristics protected by human rights laws. Example: In 2018, Amazon stopped using an AI recruiting program after discovering it was skewed against women because it had been trained on resume data from a sample in which 80% of the resumes came from men. Although I'm not aware of any reported algorithm discrimination cases in Canada, such cases have happened in the U.S., including one in which iTutorGroup Inc. shelled out $365,000 to settle age discrimination claims for using software programmed to reject female job applicants over age 55 and male applicants over 60.   

New Ontario Privacy & Human Rights Commissions' AI Principles 

Don't make the same mistake! If your company uses automated decision-making systems, generative AI, large language models (LLMs), or other AI technologies for HR (or any other operations), verify their compliance with privacy, human rights, and other laws before you deploy them. The good news is that the vetting process may have just gotten easier. On January 21, the Ontario Privacy and Human Rights Commissions published new guidance for responsible deployment of AI systems. While intended for government and public sector organizations, the guidance also works for companies in the private sector. Specifically, the guidance lists six principles, or hallmarks of responsible AI, to look for when selecting and using AI systems. 

  1. Valid & Reliable

"AI systems must exhibit valid, reliable, and accurate outputs for the purpose(s) for which they are designed, used, or implemented." Valid, the guidelines explain, means meeting independent testing standards and requirements for their particular uses or applications. Reliability is measured by proof of consistent performance in the environments in which the system is deployed over a specified duration.  

  2. Safe

Safe means designed and used in a way that doesn't cause harm or unintended harmful outcomes to people, economic security, or the environment. AI systems should have embedded human rights and privacy safeguards and robust cybersecurity protection. They should also undergo monitoring and evaluation "throughout their life span to confirm that they can withstand unexpected events or deliberate efforts that cause harm." 

  3. Privacy Protective

Look for AI systems that have embedded privacy protections that limit collection, use, and disclosure of personal data to the minimum amount necessary to accomplish the system's function. Products should also have strong security safeguards to protect the confidentiality of the personal data they use against unauthorized access, use, and disclosure.    

  4. Human Rights Affirming

Companies must take active measures to ensure that the AI systems they deploy don't "infringe substantive equality rights" or perpetuate systemic discrimination. Such measures include ongoing monitoring of the system and mitigation of discriminatory impacts detected, the way Amazon did when it discovered that its recruitment system was favoring men over women. The guidance also specifically warns government agencies to beware of AI systems that "unduly target participants in public or social movements, or subject marginalized communities to excessive surveillance that impedes their ability to freely associate with one another." 

  5. Transparent

It's important that AI systems used by public sector institutions be transparent, which the guidelines describe as having four characteristics. 

  • Visible: Institutions should be able to "provide a public account" of the system's operation, its intended purpose, and how its outputs are used, along with notice when interactions with or information provided to individuals come from AI. 
  • Understandable: Institutions must be able to explain how the technology operates and why errors may occur. 
  • Explainable: Institutions must be able to describe how the process works and why it generates the outputs that it does. 
  • Traceable: Institutions must be able to go back after the fact and collect a thorough account of how the system operates for purposes of monitoring, training, validation, and evaluation. 
  6. Accountable

Select AI systems that are designed to allow continual human oversight and real-time intervention, so that problems can be corrected and adjustments made as needed.  

Check out these resources to ensure your workplace is using AI in a compliant manner.