3 Essential AI and Privacy Safeguards For 2024

May 7, 2024

As more businesses adopt artificial intelligence (AI), they also take on increased risks relating to privacy and data misuse. It’s no longer just about cybercriminals or accidental disclosure; consumers and privacy regulators are increasingly concerned with how companies gather and use data with AI.

Three Key Privacy & Security Issues We’re Seeing in Companies Adopting AI 

We’re going to share three key privacy and security concerns we’ve seen our clients face recently: 

AI Privacy Challenge #1: Transparency in Privacy Notices 

Companies using AI must disclose that use in their privacy notices if the AI is being trained on personal data the company collected. In other words, your company must disclose its use of AI in the privacy notice if:  

  • You are providing data sets that include personal information to third-party AI providers to help train the AI models; or  
  • You are using datasets to “fine-tune” the AI models (even if this information is only being used to improve your own version of the tool and isn’t being sent to a third-party AI provider, e.g., OpenAI). 

Here are a few examples of what this disclosure of personal information might look like in practice:  

  • Your company wants to improve its customer service with a chatbot. It gathers transcripts of past customer service interactions, including names, email addresses, and order details, and forwards this data to a company specializing in natural language processing for chatbots so the chatbot can be trained on your processes and brand voice. This use must be outlined in your privacy notice. Ideally, since it’s potentially not a use your customers would expect, you would also allow them to opt in or out, depending on their preferences.  
  • Your company wants to create its own AI-enhanced knowledge bank product. To train the model, your company’s IT team gathers employee and customer records, internal processes and policies, and all internal documents, including communications with clients that contain financial and health information. This use would need to be disclosed, and consent obtained. 

Finally, it’s important to note that it is not sufficient to update your privacy notice and apply the change retroactively. The FTC has made it clear that companies risk engaging in unfair or deceptive behavior if they adopt more permissive data-sharing practices and only inform consumers of the change through quiet, retroactive amendments: 

“The FTC will continue to bring actions against companies that engage in unfair or deceptive practices—including those that try to switch up the ‘rules of the game’ on consumers by surreptitiously re-writing their privacy policies or terms of service to allow themselves free rein to use consumer data for product development. Ultimately, there’s nothing intelligent about obtaining artificial consent.” 

AI Privacy Challenge #2: Issues with User-Generated Content 

If your users generate content for your company, allowing third parties to train their AI models on that content is fraught with legal risk.  

For context, user-generated content (UGC) is any form of content created by consumers or users: social media posts, photos, videos, product reviews, comments, blog posts, forum discussions, testimonials, and creative responses to challenges or contests that your company runs.  

If your company lets users create UGC, it is wise to inform users of your intended use of that UGC, including whether you plan to monetize it by selling the rights to train AI models on that content.  

AI Privacy Challenge #3: Sensitive Data Is Off Limits 

Finally, FTC Chair Lina Khan has made clear that sensitive data should not be used to train AI models and that this will be an enforcement focus for the FTC: 

“Sensitive personal data related to health, location or web browsing history should be ‘off limits’ for training artificial intelligence models, US Federal Trade Commission Chair Lina Khan said….” 

In the interim, we suggest that you:  

  • Complete an audit of how you are currently using AI as part of your product, software, or services, and maintain an internal record of all AI implementations going forward (a minimal sketch of such a record appears after this list). This will prepare you for compliance with future regulations and ensure all stakeholders know what AI is being used and how. 
  • Avoid sharing sensitive data with AI models for training purposes; this will help you steer clear of regulatory scrutiny. If you do elect to use sensitive data to train an AI model, ensure you have clear, affirmative consent from your users.  
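To make the audit suggestion concrete, here is a minimal sketch of what an internal AI-use record might look like. This is not a prescribed or legally mandated format; the `AIUseRecord` structure and every field name are illustrative assumptions that you should adapt, with counsel, to your own systems and obligations.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative only: neither the FTC nor any statute prescribes this format.
@dataclass
class AIUseRecord:
    """One entry in a hypothetical internal register of AI use."""
    system_name: str                   # e.g., "customer-service chatbot"
    vendor: str                        # third-party AI provider, or "internal"
    purpose: str                       # why the AI is used / what it is trained on
    data_categories: list[str]         # kinds of personal information involved
    contains_sensitive_data: bool      # health, location, browsing history, etc.
    disclosed_in_privacy_notice: bool  # is this use described in your notice?
    consent_mechanism: str             # "opt-in", "opt-out", or "none"
    last_reviewed: date = field(default_factory=date.today)

# Example entry mirroring the chatbot scenario described earlier
register = [
    AIUseRecord(
        system_name="customer-service chatbot",
        vendor="third-party NLP provider",
        purpose="train the chatbot on past support transcripts",
        data_categories=["names", "email addresses", "order details"],
        contains_sensitive_data=False,
        disclosed_in_privacy_notice=True,
        consent_mechanism="opt-out",
    ),
]

# Flag entries that deserve a second look before a regulator takes one
for rec in register:
    if not rec.disclosed_in_privacy_notice:
        print(f"Review: {rec.system_name} is not disclosed in the privacy notice.")
    if rec.contains_sensitive_data and rec.consent_mechanism != "opt-in":
        print(f"Review: {rec.system_name} trains on sensitive data without opt-in consent.")
```

Whatever form your register takes, the goal is the same: one place that answers, for each AI system, what data it touches, whether that use is disclosed, and what consent backs it.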

If you need to change your privacy policies to reflect your use of AI, reach out. Our attorneys would love to help. 


