ChatGPT private chats leaked on Google Search: Here’s how to protect your privacy
ChatGPT has accidentally exposed some private conversations on Google Search. Here’s how you can protect your data.
If you recently used ChatGPT’s browsing mode to look something up online, your private conversations may have been unintentionally exposed through Google Search results. The issue, which surfaced earlier this month, once again raises questions about how securely AI systems handle user data when browsing the web.
How the Leak Was Discovered
Developers first noticed the problem while reviewing their Google Search Console dashboards, TechCrunch reported. Instead of the usual short keyword queries, the dashboards showed full, conversational sentences that looked like the prompts users type into ChatGPT, suggesting that parts of ChatGPT interactions had been indexed by Google.
Analytics researcher Jason Packer and consultant Slobodan Manić investigated further and traced the leak to ChatGPT’s web browsing mode. In certain cases, a small number of conversations were routed through URLs carrying a “hints=search” tag, which caused fragments of user prompts to be appended to those URLs. Because Google automatically crawls and indexes such URLs, these private fragments became visible to unrelated website owners.
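The mechanism can be illustrated with a short sketch. This is only a hypothetical reconstruction: the parameter names `hints` and `q` and the destination URL are assumptions for illustration; the reports only mention a “hints=search” tag. The point is that any text placed in a URL’s query string is visible to the destination site and to anything that crawls that URL.

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical example: a user prompt appended to a visited URL as a
# query parameter. Parameter names and the URL are illustrative only.
prompt = "what is the best treatment for my back pain"
url = "https://example.com/page?" + urlencode({"hints": "search", "q": prompt})

# The destination site's server logs (and, once crawled, Google's index and
# the site owner's Search Console) see the full URL, prompt text included.
params = parse_qs(urlparse(url).query)
print(params["q"][0])  # the prompt is recoverable verbatim from the URL
```

Because URL-encoding is fully reversible, anyone who can see the URL can read the prompt, which is why fragments of conversations surfaced in unrelated site owners’ dashboards.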
What OpenAI Said
According to a report by Ars Technica, OpenAI confirmed the issue and clarified that it affected only a limited number of searches. The company stated that the glitch has been fixed, but did not specify how long it existed or how many users were affected. Although the incident didn’t expose passwords or personal data, it highlights how closely AI tools interact with public web systems.
This is not the first time ChatGPT’s data visibility has raised concerns. Earlier in the year, users discovered that shared chat links were being indexed by Google. At that time, the issue was traced to public sharing settings. However, the latest glitch occurred without user involvement, which makes it more concerning from a privacy standpoint.
How Users Can Protect Their Data
While OpenAI has fixed the leak, users can take several precautions to protect their privacy when using AI tools with web access:
- Avoid entering personal or sensitive information into prompts.
- Use private or incognito browser modes when testing AI features that access the web.
- Disable browsing mode when it’s not needed.
- Regularly clear chat history to reduce data exposure.
As AI tools become more tightly integrated with online systems, even small technical glitches can lead to unexpected data exposure. The best way to maintain privacy when using web-connected AI assistants is to stay cautious and aware.