In 2021, the FTC charged Flo, one of the most popular period-tracking apps, with secretly sharing menstrual cycle data, pregnancy status, and health information with Facebook and Google for marketing. The app had promised users privacy. Tens of millions of women believed it.
In December 2024, the FTC took action against Mobilewalla for collecting 500 million unique consumer advertising IDs, building audience segments labeled "pregnant women" by tracking visitors to pregnancy centers, and selling that data to government clients.
The data broker industry that makes this possible is worth $278 billion. A $9 billion sub-industry specifically trades doctor and patient health data to pharmaceutical companies.
This is the world your AI conversations enter when you type a prompt.
What the numbers say
Pew Research surveyed Americans in 2023 about how they feel about their data. The results were not subtle.
Eighty-one percent of people feel powerless. Sixty percent would pay more for privacy. And fifty-six percent agree to terms they never read. That gap -- between caring deeply and doing nothing -- is where the AI privacy problem lives.
What ChatGPT actually does with your data
The answer depends on which ChatGPT you use. OpenAI runs two very different systems under the same name: the consumer product (Free and Plus), which trains on your conversations by default, and the business tier (Team, Enterprise, and the API), which doesn't.
The pattern is clear: on the consumer side, you are the training data. The free tier is not free -- you're paying with your conversations, your writing style, your business ideas, your private thoughts. OpenAI just doesn't send you an invoice.
What Claude and Gemini do
ChatGPT isn't alone. Every major AI provider follows the same playbook.
Anthropic (Claude): The API never trains on your data -- flat policy, no exceptions. But consumer Claude (Free, Pro, Max) started training on conversations in late 2025 unless you opt out in Privacy Settings. API data is retained for 7 days, then deleted.
Google (Gemini): Consumer Gemini conversations are used for training. Google's privacy policy states that human reviewers read conversations to improve the service. Gemini for Workspace has different terms, but the consumer version is open season.
The distinction is always the same: the consumer product trains on you. The business product doesn't. Privacy is a feature they sell, not a right they grant.
The questions nobody asks
Training gets the headlines. But it's not the only way your data leaves your control.
How long is it stored?
OpenAI retains abuse-monitoring logs for 30 days. Anthropic retains API data for 7 days. But each company defines "retention" and "deletion" on its own terms -- and you're trusting their word.
Who else sees it?
Your prompts often pass through intermediary services before reaching the model. Each intermediary has its own retention policy. Most users don't know the chain exists.
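The chain is easier to see in code. The sketch below is a hypothetical illustration, not any real provider's pipeline: each hop (the names and retention periods are invented for the example) receives the full prompt and independently decides what to keep.

```python
# Hypothetical illustration: every hop in the chain sees the full prompt,
# and each applies its own retention policy independently.

class Hop:
    def __init__(self, name, retention_days):
        self.name = name
        self.retention_days = retention_days
        self.logged = []          # what this hop still holds after the request ends

    def forward(self, prompt):
        if self.retention_days > 0:
            self.logged.append(prompt)
        return prompt             # passed on, unmodified and fully readable

# An invented example chain: app backend -> API gateway -> model provider
chain = [Hop("app backend", 90), Hop("api gateway", 30), Hop("model provider", 0)]

prompt = "Is this chest pain something I should worry about?"
for hop in chain:
    prompt = hop.forward(prompt)

for hop in chain:
    print(f"{hop.name}: retains {len(hop.logged)} prompt(s)")
```

Even with a zero-retention model provider at the end, two copies of the prompt survived the request -- in places the user never agreed to, under policies the user never read.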
What can be derived?
A prompt about tax deductions reveals your income bracket. A medical question reveals a health condition. The prompt itself is the sensitive data, not just what the model says back.
This is where AI privacy gets uncomfortable. You might trust OpenAI not to train on your API data. But do you trust the data broker who buys anonymized usage patterns? The insurance company that discovers you asked an AI about chest pain symptoms? The employer who learns you used company time to ask about disability leave?
These aren't hypothetical scenarios. They're happening now:
- A Texas data broker sold marketing lists built around people battling Alzheimer's disease. California fined the company $45,000.
- The Gravy Analytics breach in January 2025 exposed device locations at the White House, the Kremlin, Vatican City, and military bases worldwide.
- The New York Attorney General secured $14.2 million from car insurance companies that exploited consumer data purchased from data brokers.
What "private AI" actually means
Privacy in AI isn't one thing. It's a stack of decisions, each of which either protects you or doesn't.
Local execution means your data is processed on your device. Your files, your emails, your documents -- they don't leave your machine. The AI runs where you are, not on someone else's server.
Zero-data-retention means that when your device does send a request to an AI model, the provider processes it and forgets it. No logs. No storage. No 30-day retention window. The prompt goes in, the response comes out, and nothing persists.
No training means your conversations never become part of a future model. Your writing style, your business strategy, your medical questions -- none of it improves a product that someone else profits from.
Most AI products offer one of these. Very few offer all three.
How the major platforms compare
- ChatGPT: Free and Plus train on your conversations unless you opt out; Team, Enterprise, and the API do not.
- Claude: Free, Pro, and Max train unless you opt out; the API never trains, and API data is deleted after 7 days.
- Gemini: the consumer version trains on your conversations, with human review; Workspace runs under different terms.
- Rush: local execution, zero-data-retention, and no training -- all by default.
What Rush does differently
Rush is an agent operating system that runs locally on your Mac. Your conversations, session history, and files stay on your device. When agents need AI models to think, every request enforces zero-data-retention -- no provider in the chain stores or trains on your prompts.
Agents themselves are packaged in encrypted containers (AES-256-GCM) with signed binaries. Each agent operates under iOS-like permissions -- it only accesses what you explicitly grant. We log usage metrics to operate the service. We never log what you or your agents say.
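To make the permission model concrete, here is a minimal sketch of iOS-style access gating -- an agent can touch only paths it was explicitly granted. This is an illustration of the general technique, not Rush's actual API; the `Agent` class and its methods are invented for the example.

```python
# Hypothetical sketch of an iOS-style permission model for agents:
# nothing is readable unless the user explicitly granted it.
from pathlib import Path

class Agent:
    def __init__(self, name, granted_paths):
        self.name = name
        self.granted = {Path(p).resolve() for p in granted_paths}

    def can_read(self, path):
        path = Path(path).resolve()
        # Allowed only if the path is a granted path or sits under one
        return any(path == g or g in path.parents for g in self.granted)

    def read(self, path):
        if not self.can_read(path):
            raise PermissionError(f"{self.name} was never granted {path}")
        return Path(path).read_text()

agent = Agent("invoice-bot", granted_paths=["/tmp/invoices"])
print(agent.can_read("/tmp/invoices/march.pdf"))   # True
print(agent.can_read("/home/me/.ssh/id_rsa"))      # False
```

The design choice that matters is the default: the grant set starts empty, so an agent with no declared permissions can read nothing at all.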
Privacy by default, not by toggle.
Does ChatGPT use my conversations to train its models?
Yes, by default. If you use ChatGPT Free or Plus, your conversations are used to train future models unless you manually opt out in Settings > Data Controls. ChatGPT Team, Enterprise, and API users are opted out by default. OpenAI's API has not trained on user data since March 2023.
Is ChatGPT safe for sensitive business data?
On the free and Plus tiers, no -- your data may be used for training unless you opt out, and conversations are logged on OpenAI's servers. ChatGPT Enterprise and API access offer stronger protections, including no training and optional zero-data-retention. For truly sensitive work, consider tools that run locally and never send raw conversations to external servers.
What is zero-data-retention in AI?
Zero-data-retention (ZDR) means the AI provider processes your request and immediately discards it. No prompts or responses are stored on the provider's servers after the response is returned. This is different from "no training" -- a provider might not train on your data but still retain it for weeks for abuse monitoring. ZDR eliminates even that retention window.
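The gap between "no training" and ZDR is easiest to see side by side. The sketch below is a toy model (the `Provider` class and field names are invented for illustration), but the logic mirrors the distinction above: a no-training provider can still hold your prompt for weeks; a ZDR provider holds nothing.

```python
# Hypothetical sketch of the difference between "no training" and
# zero-data-retention. Toy model, not any real provider's implementation.
from dataclasses import dataclass, field

@dataclass
class Provider:
    trains_on_data: bool
    retention_days: int                  # 0 means zero-data-retention
    stored_prompts: list = field(default_factory=list)
    training_corpus: list = field(default_factory=list)

    def complete(self, prompt):
        if self.trains_on_data:
            self.training_corpus.append(prompt)
        if self.retention_days > 0:
            self.stored_prompts.append(prompt)   # kept for abuse monitoring
        return "response"

no_training = Provider(trains_on_data=False, retention_days=30)
zdr = Provider(trains_on_data=False, retention_days=0)

no_training.complete("my medical question")
zdr.complete("my medical question")

print(len(no_training.stored_prompts))  # 1 -- still sits on a server for 30 days
print(len(zdr.stored_prompts))          # 0 -- nothing persists
```

Both providers refuse to train. Only one of them has nothing left to leak, subpoena, or breach once the response is returned.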
How do I stop AI from training on my data?
For ChatGPT: go to Settings > Data Controls > toggle off "Improve the model for everyone." For Claude: go to Privacy Settings and opt out. For Gemini: turn off Gemini Apps Activity in your Google account. Alternatively, use AI tools that enforce no-training by architecture rather than requiring you to find and flip a setting -- the safest toggle is the one you never need to touch.


