Managing The Risk of Algorithmic Bias

March 24, 2024

We recently read an updated list of “AI Gone Wrong” incidents. The list highlights failures of artificial intelligence (AI) that have had real-world consequences, such as:  

  • A news site adding a ‘Guess the cause of death’ poll to a news article about a woman’s death in Australia. 
  • A meal-planning platform suggesting chlorine gas recipes for its users.  
  • A Texas professor failing an entire class after using a faulty plagiarism detection method.  
  • Air Canada’s chatbot misstating the airline’s policies (which landed the airline before a Canadian tribunal – a case it ultimately lost after arguing that the chatbot should be liable instead). 
  • A medical advice platform suggesting a patient should commit suicide (luckily, this was caught in testing).  

We’ve also seen growing enforcement in the AI and algorithmic bias sphere. Last year brought the EEOC’s first penalty against a company for bias in its AI-driven hiring, and algorithmic bias in hiring remains an EEOC priority in 2024. There was also the 2022 Facebook settlement over discrimination in its housing advertising platform, which allowed advertisers to target ads based on attributes protected under the Fair Housing Act (FHA), like age, sex, familial status, and race. The DOJ has since filed Statements of Interest under the FHA in two other cases: one relating to discriminatory home appraisals and the other to discriminatory tenant screening practices. 

Given the growing list of failures and the increasing scrutiny of regulators, we wanted to encourage you to consider implementing an AI risk review and audit process at your company.  

Reducing the Risk of Algorithm-Enhanced Platforms and AI 

Here are some general good practices to consider if you are already using AI or are considering introducing it:  

[Infographic: good practices to reduce the risk of algorithmic bias]

Train your team 

Any team members involved in using AI- or algorithm-enhanced technologies should be trained on the risks, including the risk of bias. Company leaders and management should also be aware of the risks.  

Develop a risk framework 

Companies should develop a practice of analyzing the potential risks associated with any AI or algorithm-enhanced system and determining whether those risks are acceptable. Companies need to consider: 

  • Technical risks, such as errors, mistakes, or biases. 
  • Legal risks, including privacy risk and copyright risk.  
  • Ethical risks.  

Companies should weigh the risks above against the potential financial and reputational consequences, including the potential loss of trust. 

Implement AI risk audits throughout the entire lifecycle 

AI risk assessment and management is not a ‘one-and-done’ task. It needs to be managed continuously. 

Companies should consider implementing processes that ensure AI risk is assessed and managed throughout the entire lifecycle of the product, including at the following stages:  

  • Design Stage: Integrate risk assessment early on. Consider potential biases, privacy implications, and regulatory requirements before deploying any AI model. 
  • Development and Testing: Continuously evaluate models for accuracy, bias, and compliance before deploying to the public. Ensure rigorous testing, simulating real-world conditions.  
  • Deployment and Monitoring: Regularly monitor deployed systems for unintended consequences or changing circumstances. Establish mechanisms for feedback and updates. 

Implementing processes to ensure trained humans are working alongside the AI at every stage is also key to avoiding mistakes and reducing risk in this sphere.  

For more information, see our earlier article outlining further specific recommendations for employers using AI in their hiring processes. 

If you need help managing your AI risk, reach out. Our attorneys would love to work with you.   


The materials available at this website are for informational purposes only and not for the purpose of providing legal advice. You should contact your attorney to obtain advice with respect to any particular issue or problem. Use of and access to this website or any of the e-mail links contained within the site do not create an attorney-client relationship between CGL and the user or browser. The opinions expressed at or through this site are the opinions of the individual author and may not reflect the opinions of the firm or any individual attorney.
