xAI’s Grok Leak Exposes 370,000 Private Chats to Search Engines

Elon Musk's xAI published more than 370,000 Grok conversations on its website, where they were indexed by search engines, with no warning to users.

As 9to5Mac reports:

  • "More than 370,000 Grok AI chats have been published on the Grok website and indexed by search engines, making them public".

That includes not only chat transcripts but also user-uploaded photos, spreadsheets, and documents.

The root of the issue lies in Grok's "share" feature. When a user taps it, Grok generates a unique URL for the conversation, but xAI failed to prevent search engines from indexing those pages.

As TechCrunch explains:

"Those URLs are being indexed by search engines like Google, Bing, and DuckDuckGo, which in turn lets anyone look up those conversations on the web".

Exposed content ranges from mundane queries to deeply sensitive data. Shared conversations included instructions for synthesizing illicit drugs and constructing bombs, methods of suicide, guidance on hacking crypto wallets, and even a "detailed plan for the assassination of Elon Musk".

Reports note that xAI offered no alert or disclaimer when users shared content.

Forbes observed, "On Musk's Grok, hitting the share button means that a conversation will be published on Grok's website, without warning or a disclaimer to the user".

This incident echoes a prior misstep by OpenAI, whose shared ChatGPT transcripts were briefly searchable through an opt-in feature before the company removed it. xAI faces harsher scrutiny, though, given the magnitude and sensitivity of the exposed data.

Consequences extend beyond privacy. The Verge underscores the broader significance:

"Whenever a user hit 'share' on a Grok conversation, the URL it generated was also searchable on Google, Forbes reported".

The episode raises questions about xAI's internal protocols, risk awareness, and design oversight.

Users entrusted Grok with personal queries, assuming private exchanges remained private. That trust was broken at scale. xAI's lack of transparency puts the company's operational discipline in question, especially as Grok inches closer to enterprise and government applications.

The incident may prompt regulatory scrutiny, particularly from privacy watchdogs. It may also force xAI to overhaul its consent mechanisms, block search-engine indexing by default, and reevaluate the default behavior of any "share" function.

Industry stakeholders now must reassess their reliance on Grok's security assurances. Enterprises and developers integrating such tools should assume the worst and verify safeguards themselves. And users deserve clarity on what "sharing" means in practice, particularly when it can make private chats public without notice.

This episode demands accountability and fixes. At stake is both user privacy and xAI's credibility. The company must act quickly to reinforce its systems and reassure partners that privacy is not treated as an afterthought.