OpenAI Sued Over Claims ChatGPT User Data was Shared with Google and Meta

Kinyua Njeri (Sam Kin)  - Tech Expert
Last updated: May 15, 2026
  • A California class action lawsuit accuses OpenAI of sharing ChatGPT user data, including chat query topics, email addresses, and user IDs with Google and Meta through embedded Facebook Pixel and Google Analytics tracking codes.
  • The complaint alleges that tracking scripts transmitted user data in real time to advertising systems on Facebook, Instagram, and Google for targeted marketing purposes, in violation of federal and state privacy laws.
  • Plaintiffs seek class action status for affected US users, as OpenAI faces additional privacy investigations from Canadian watchdogs and a separate lawsuit over chatbot advice in a mass shooting.

OpenAI faces a new class action suit claiming that the company disclosed ChatGPT user data, such as email addresses and unique identifiers, to both Meta and Google without users’ consent. The suit alleges that every time someone uses ChatGPT, embedded tracking code automatically transmits their data to both Google and Meta.

Filed in the Northern District of California, this is one of the first federal class action lawsuits against an AI chatbot company over its handling of users’ private information. A lead plaintiff in the case, California resident Saje Lim, has asked the court to grant the suit class status representing thousands of other affected users.

With privacy concerns about artificial intelligence mounting globally, it will be interesting to see how this suit develops and whether new legal precedent emerges from the case.

Facebook Pixel allegedly aided data tracking

According to the lawsuit, OpenAI embedded Facebook’s Pixel and Google Analytics tracking codes on the ChatGPT website. As a result, whenever someone uses ChatGPT, those scripts automatically transmit the user’s data to Meta and Google.

In addition, the Facebook Pixel allegedly sent real-time HTTP requests containing both the content of the user’s query and related cookies to Meta’s servers. According to the complaint, Meta feeds this telemetry into its Core Audiences and Custom Audiences systems to build highly targeted advertising for Facebook and Instagram.
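To make the alleged mechanism concrete, here is a minimal sketch of how a tracking beacon of this kind works in general. The endpoint and parameter names below are hypothetical, not Meta’s actual wire format; the point is that page context (including anything in the URL, such as a query) and an identifying cookie travel together in a single real-time HTTP request.

```python
from urllib.parse import urlencode

def build_pixel_request(pixel_id, event_name, page_url, cookies):
    """Build the kind of beacon URL a tracking pixel script requests.

    Illustrative only: endpoint and parameter names are invented.
    """
    params = {
        "id": pixel_id,                  # which advertiser account receives the event
        "ev": event_name,                # e.g. "PageView"
        "dl": page_url,                  # full URL of the page the user is on
        "fbp": cookies.get("_fbp", ""),  # browser-identifying cookie value
    }
    return "https://tracking.example.com/tr?" + urlencode(params)

# The beacon carries the page URL (query string and all) plus the cookie:
url = build_pixel_request(
    "12345",
    "PageView",
    "https://chat.example.com/?q=medical+question",
    {"_fbp": "fb.1.1700000000.999"},
)
print(url)
```

Because the full page URL is one of the transmitted parameters, any user input reflected in that URL rides along to the tracking server, which is the crux of the complaint’s real-time interception theory.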

Google Analytics allegedly enabled data sharing

Google faces accusations of violating privacy laws through Google Analytics and related Ads tags, which allegedly allowed the company to collect hashed email addresses when users signed into ChatGPT.

Additionally, the suit accuses Google of collecting device and browser identifiers, as well as cookies that track user behavior and could be linked to users’ Google accounts. Google allegedly combined this information with other data to build cross-device performance and demographic profiles for remarketing purposes.
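Hashed email addresses matter because hashing does not prevent identity matching. Ad platforms commonly normalize an email (trim whitespace, lowercase) and hash it with SHA-256; a platform holding the same hashed address on its side can then link the visit to a known account. A brief sketch of that standard pattern:

```python
import hashlib

def hash_email(email: str) -> str:
    """Normalize and SHA-256-hash an email address.

    This mirrors the common ad-platform identity-matching pattern:
    the raw address is never sent, but two parties that hash the
    same address the same way get the same value and can join on it.
    """
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Different surface forms of the same address produce identical hashes,
# so matching still works:
print(hash_email("  User@Example.com "))
print(hash_email("user@example.com"))
```

In other words, a hashed email functions as a stable pseudonymous identifier, which is why the complaint treats its collection as personal-data sharing rather than anonymization.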

The complaint alleges that these integrations amounted to intentional eavesdropping that helped third parties intercept user communications. The plaintiffs argue that every ChatGPT interaction constitutes an electronic communication that OpenAI had no legal right to disclose without the user’s consent under state and federal privacy laws.

The class action seeks to include all United States residents whose personally identifiable information (PII) and ChatGPT communications were transmitted in bulk to third parties.

OpenAI faces rising privacy scrutiny

This lawsuit adds to the privacy and legal challenges accumulating against OpenAI. Canadian federal and provincial privacy watchdogs have stated that OpenAI failed to comply with Canadian privacy laws when training its ChatGPT models.

Europe has also taken action. Italy banned ChatGPT over privacy concerns in a landmark move that sent shockwaves through the AI industry, highlighting how governments worldwide are scrutinizing OpenAI’s data practices.

The Canadian authorities carried out a joint investigation into OpenAI’s practices and issued a ruling in May. They concluded that OpenAI collects more information than necessary to train its ChatGPT models, including sensitive personal data, such as health status and political views, gathered through chat sessions.

Additionally, in January this year, a federal judge ordered OpenAI to turn over 20 million de-identified ChatGPT chat logs as evidence in copyright litigation brought by news organizations. The ruling rejected OpenAI’s privacy objections; the court concluded that reduced sample size, de-identification, and an existing protective order sufficiently addressed user privacy concerns.

The company also faces a separate lawsuit in Florida alleging that ChatGPT provided advice that enabled a mass shooter to plan an attack that killed two people and wounded six others at Florida State University. Authorities disclosed that ChatGPT supplied information about what time and location would maximize casualties, and even indicated the type of gun and ammunition to use.

If the court grants class action status in this data-sharing case and rules against OpenAI, the decision could force major changes across the AI industry. Businesses might have to give consumers clearer ways to consent to sharing personal data, establish tighter controls over how consumer data is handled, and be more transparent about what personal information they share with advertising partners.

Share this article

About the Author

Kinyua Njeri is a journalist, blogger, and freelance writer. He’s a technology geek but mainly an internet privacy and freedom advocate. He has an unquenchable nose for news and loves sharing useful information with his readers. When not writing, Kinyua plays and coaches handball. He loves his pets!
