A major data leak has exposed hundreds of thousands of private user conversations with Elon Musk’s AI chatbot, Grok, in public search results.
The leak, traced to the platform’s “share” feature, has left sensitive user information openly accessible online, apparently without the knowledge or explicit consent of the users involved.
The leak came to light when it emerged that Grok’s share button does more than generate a link for an intended recipient: it creates a publicly accessible, indexable URL for the conversation transcript.
As a result, search engines such as Google crawled and indexed these pages, making private conversations searchable by anyone. A Google search conducted on Thursday confirmed the scale of the issue, surfacing nearly 300,000 indexed Grok interactions; some technology publications put the total even higher, at more than 370,000.
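Search engines will index any publicly reachable page unless it carries an explicit opt-out, such as a robots meta tag or an X-Robots-Tag HTTP header; the shared Grok pages evidently carried no such directive. A minimal, hypothetical Python sketch of checking a page for these signals (the function and its logic are illustrative, not taken from any reporting on the incident):

```python
import re

def allows_indexing(html: str, x_robots_tag: str = "") -> bool:
    """Return False if the page opts out of search indexing via the
    robots meta tag or the X-Robots-Tag HTTP response header."""
    # Header-level directive takes effect regardless of page content.
    if "noindex" in x_robots_tag.lower():
        return False
    # Look for <meta name="robots" content="..."> in the HTML.
    meta = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
        html,
        re.IGNORECASE,
    )
    return not (meta and "noindex" in meta.group(1).lower())

# A shared page with no directive at all remains eligible for indexing:
print(allows_indexing("<html><head></head></html>"))              # True
print(allows_indexing('<meta name="robots" content="noindex">'))  # False
```

A page like the leaked share links, lacking any such directive, is fair game for crawlers by default.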
An examination of the exposed chats underscores the severity of the privacy breach. Transcripts reviewed by the BBC and other outlets show users asking Grok for deeply personal or sensitive information, ranging from requests to generate secure passwords to detailed medical questions and weight-loss meal plans.
Using Google dork queries, the CybersecurityNews team identified multiple indexed pages matching the query site:https://x.com/i/grok?conversation=.
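The query above restricts results to Grok conversation URLs on x.com. A small, illustrative Python sketch of turning that dork string into a Google search link (the URL construction is an assumption for demonstration, not part of the team’s reported method):

```python
from urllib.parse import quote_plus

# The dork query string reported by the CybersecurityNews team.
DORK = "site:https://x.com/i/grok?conversation="

def google_search_url(query: str) -> str:
    """Build a Google search URL with the query percent-encoded."""
    return "https://www.google.com/search?q=" + quote_plus(query)

print(google_search_url(DORK))
```

Percent-encoding the reserved characters (`:`, `/`, `?`, `=`) keeps the dork intact as a single query parameter.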

The data also revealed users probing the chatbot’s ethical limits: one indexed interaction contained detailed instructions for manufacturing a Class A drug. While user account information may be anonymized, the content of the queries themselves can readily include personally identifiable or highly sensitive data.

This is not an isolated incident in the rapidly evolving AI landscape. OpenAI, the developer of ChatGPT, recently rolled back a feature that likewise caused shared conversations to surface in search results.
Similarly, Meta drew criticism earlier this year after shared conversations with its Meta AI chatbot were compiled into a public “discover” feed. These recurring incidents point to a troubling pattern of prioritizing feature launches over user privacy.
Experts are raising concerns, labeling the situation as a significant lapse in data security. “AI chatbots are a privacy catastrophe in the making,” Professor Luc Rocher of the Oxford Internet Institute informed the BBC, cautioning that leaked discussions containing sensitive health, financial, or personal details will remain online indefinitely.
The root of the problem is a lack of transparency. Dr. Carissa Véliz, an associate professor at Oxford’s Institute for Ethics in AI, stressed that users were not adequately informed that sharing a chat would make it public. “Our technology doesn’t even divulge what it’s doing with our data, and that’s an issue,” she said.
As of this report, X, Grok’s parent company, has not issued a public statement on the matter.