We’re seeing increasing legal activity around algorithmic bias and discrimination. So, we’re going to dig a little deeper into this topic to help you understand the risks certain AI tools can pose to your business:
What Is Algorithmic Bias?
Algorithmic bias is a form of (usually unintended) systematic discrimination that occurs when the algorithms used in recruitment or selection processes either favor or disadvantage certain candidates based on personal attributes that are unrelated to job performance. This can be problematic just from a business perspective when good candidates are excluded for irrelevant reasons, but it becomes a legal issue when that exclusion is based on protected attributes (such as race, gender, and age).
This bias is typically unintentional and may occur because of flawed datasets, poor coding driving the algorithm, lack of transparency into AI decision making, poor auditing processes, or a combination of these factors.
Increasing Lawmaker & Regulator Attention
Algorithmic bias in employment decisions was an Equal Employment Opportunity Commission priority in 2024 and will remain one through 2028 (after the EEOC secured its first-of-its-kind settlement over algorithmic bias in 2023). The White House's Blueprint for an AI Bill of Rights includes algorithmic discrimination protections, and the FTC has also stepped into the enforcement arena – banning Rite Aid from using facial recognition technologies that falsely tagged women and people of color as shoplifters.
Throughout 2024, we’ve also seen increasing activity from state lawmakers. Here’s a very brief overview of some of the laws that have passed:
Colorado’s Artificial Intelligence Act (CAIA)
The CAIA comes into effect in February 2026 and requires (among other things) employers using high-risk AI tools for hiring, promotions, or terminations to maintain human oversight of those tools and to allow candidates to appeal decisions made by automated systems.
Illinois Human Rights Act Amendment
Lawmakers in Illinois have amended the Human Rights Act to expand the number of attributes that are protected from discrimination under state law. The updated law will also require employers to be transparent about the use of AI in hiring and employment decisions.
Protected attributes include race, color, ancestry, national origin, disability, religion, sex, sexual orientation, pregnancy, military status, military discharge, age (over 40), order of protection status, marital status, citizenship, work authorization status, language, conviction record, arrest record, family responsibilities (from Jan 1 2025), and reproductive health decisions (from Jan 1 2025).
Other Lawmaking Activity
There has also been a host of other lawmaking activity, such as Utah’s Artificial Intelligence Policy Act, and a range of California bills that were not successful, including:
- The vetoed SB1047 which aimed to increase safety mechanisms around high-risk AI systems, and
- The now-dead AB 2930 which would have forbidden employers from using AI systems if an impact assessment identified a reasonable risk of discrimination.
Action Items For Business Leaders
So far, regulation and enforcement of discriminatory algorithms have been relatively piecemeal. But this is likely to increase in the future – alongside employee expectations about transparency and fairness. As a result, business leaders may wish to implement the following action items to better align with compliance trends and employee demands:
- Draft clear notices whenever automated systems are used, including information about the technology, risks, and steps taken to prevent discrimination.
- Implement an appeals process for candidates and/or employees who feel that an employment decision was biased or unfair.
- Regularly audit any automated systems used in the hiring process, including updated AI-impact assessments. The findings should be documented and stored for review over time.
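To illustrate what one piece of such an audit can look like, the sketch below applies a widely used adverse-impact screen: the "four-fifths rule" from the EEOC's Uniform Guidelines, under which a group's selection rate below 80% of the highest group's rate may indicate adverse impact. This is a minimal, hypothetical example (the group names and numbers are invented, and passing this check alone does not establish legal compliance):

```python
# Hypothetical sketch of a four-fifths-rule screen for an AI hiring tool.
# Not legal advice; a real audit would cover far more than this one metric.

def selection_rates(outcomes):
    """outcomes maps each group to (number selected, total applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return, per group, whether its selection rate is at least
    `threshold` (80%) of the highest group's selection rate."""
    rates = selection_rates(outcomes)
    top_rate = max(rates.values())
    return {group: (rate / top_rate) >= threshold for group, rate in rates.items()}

# Invented screening results from a hypothetical automated resume filter:
results = {"group_a": (48, 100), "group_b": (30, 100)}
print(four_fifths_check(results))
# group_b's rate (0.30) is 62.5% of group_a's (0.48), below the 80% line.
```

A flagged group does not by itself prove unlawful discrimination, but documenting checks like this over time, alongside the impact assessments mentioned above, helps demonstrate the kind of ongoing auditing regulators increasingly expect.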
If you’re concerned about the legal and compliance risk of algorithmic bias in any tools you use, reach out. Our team can help you develop your AI-impact assessment and review your existing systems.
Disclaimer
The materials available at this website are for informational purposes only and not for the purpose of providing legal advice. You should contact your attorney to obtain advice with respect to any particular issue or problem. Use of and access to this website or any of the e-mail links contained within the site do not create an attorney-client relationship between CGL and the user or browser. The opinions expressed at or through this site are the opinions of the individual author and may not reflect the opinions of the firm or any individual attorney.