People increasingly turn to ChatGPT and generative AI for deeply personal matters, from venting emotions to seeking therapy-like advice. OpenAI’s own data confirms this intimate usage, but sharing sensitive details carries privacy risks, including the use of that data in public reports like OpenAI’s Signals.
This article explores these trends and cautions against oversharing personally identifiable information.
Key takeaways
- OpenAI Signals data shows users frequently use ChatGPT for “expressing” thoughts and feelings, beyond work tasks.
- Younger users (18-34) lead in personal engagement, treating AI as a space to think aloud.
- While convenient, disclosing personal information risks its use in model training or aggregated reporting, per OpenAI’s policies.
How people use AI for personal issues
OpenAI analyzed millions of consumer messages from July 2024 to the end of 2025, categorizing them as “asking,” “doing,” or “expressing.” Expressive interactions, in which users share opinions, feelings, or thoughts without expecting the model to produce anything, account for a meaningful share of overall use, suggesting that many people treat AI as a personal confidant.
AI is commonly used to simulate a therapist, externalize thoughts, and disclose secrets. One study found that people use ChatGPT for mental health management, self-discovery, companionship, and literacy, often re-enacting distressing events or rehearsing how to respond to them. Reddit users report chatting about “every little thing,” from decisions to jokes, and some describe forming emotional attachments to, and dependencies on, AI chatbots at the expense of human ties.
Risks of sharing with chatbots
ChatGPT’s default settings allow OpenAI to use prompts for model training, potentially incorporating personal stories into future responses. Even for users who opt out, aggregated data still appears in Signals reports; the information is anonymized, but it still reveals trends such as expressive use.
OpenAI collects user content (prompts, uploaded files) to provide its services, analyze usage, and improve its models, and it grants limited access to that content for safety and legal purposes. Coding bugs have exposed payment data in the past, a reminder that sharing sensitive information with chatbots carries real risk. Experts also warn that chatbot memory features build detailed user profiles across sessions.
Why Signals reporting adds concern
OpenAI’s Signals page publicly shares insights drawn from user data, such as non-work usage rates broken down by plan type or age group, excluding enterprise accounts. This aggregated analysis of “millions of consumer messages” surfaces general trends derived from potentially sensitive chats.
While anonymized, the Signals report underscores how personal inputs fuel public metrics. OpenAI’s privacy policy confirms that usage data from every user is collected and analyzed for reliability and safety.
Conclusion
ChatGPT offers an accessible outlet for personal expression, but risks like data training and Signals reporting make caution essential. Although ChatGPT can be helpful for basic queries, users should turn to human professionals for therapy and medical advice; doing so avoids potentially harmful AI hallucinations and better protects their privacy.
Finally, users should review their account settings, opt out of model training, and limit the sensitive information they share with chatbots.