AI Companion Privacy: What Happens to What You Tell Your AI?
You open an AI app and start talking. Maybe you mention that you have been anxious lately. That you are struggling in your marriage. That you are worried about a lump you found. That you are in debt. The AI listens beautifully. But have you thought about what happens to those words after the conversation ends? Where do they go? Who can read them? And are they being used to train the very model you are confiding in?
This article is a frank, accurate guide to AI companion privacy. No scaremongering: the facts are alarming enough on their own. We will walk through what the biggest AI companies actually do with your data, what UK and EU law gives you the right to demand, and why I built MEOK AI LABS with a fundamentally different architecture. By the end, you will know exactly what questions to ask of any AI app, and how to read a privacy policy that actually tells you the truth.
What Does a Typical Big Tech AI Actually Do With Your Conversations?
Let us start with the baseline. When you open ChatGPT, Google Gemini, Microsoft Copilot, or Meta AI and type something personal, the first thing to understand is this: you are not the customer. You are both the user and, in many cases, the raw material.
OpenAI's privacy policy states that conversation content may be used to train and improve their models. By default, this happens automatically: training is opt-out, not opt-in, meaning you have to actively turn it off in your settings if you do not want your conversations used as training data. Most users never do this. Most users do not know it is happening.
Google's Gemini has been documented collecting conversations for review by human evaluators, a practice Google discloses in its help documentation but not prominently at the point of use. These reviewers are often contractors in third-party locations. They can read your conversations in full. This is not a conspiracy theory; it is stated clearly in the documentation for anyone who looks.
Amazon's Alexa team faced a similar controversy when it emerged that thousands of recordings were being manually reviewed. The argument for human review is reasonable: AI safety requires human oversight. The problem is the gap between what users assume is happening and what is actually happening. Most people believe their AI conversations are private. They are not.
Common big tech AI data practices
Model training
Most major AI providers include terms allowing them to use your conversations to train and improve their models. This is often on by default. Your confessions become the training data for their next product.
Human review
Quality assurance and safety teams, often including third-party contractors, review samples of conversations at Google, OpenAI, Amazon, and others. This is disclosed, but rarely prominently.
Ad targeting and profiling
Some AI products are explicitly designed to generate advertising signal. Google's Gemini integration with ad products is one example. Even where ads are not shown in the AI interface itself, conversation data can feed into broader profiling systems.
Third-party sharing
Privacy policies routinely permit sharing with 'business partners', 'service providers', and 'affiliates'. These categories can be broad and change without direct notice to users.
Indefinite retention
Many providers retain conversation data for months or years, even after you delete your account. Deletion requests may not purge data already incorporated into model weights.
Why Does It Matter What You Tell Your AI? The Specific Risks of Sensitive Data
You might think: so what? I am not planning a crime. I have nothing to hide. This framing misunderstands the nature of the risk. It is not about guilt. It is about vulnerability.
Consider what people actually tell AI companions. In research and user studies, and in MEOK's own understanding of what people need from a companion, the most valuable conversations tend to involve the most sensitive subjects. Mental health. Physical health. Financial anxiety. Relationship problems. Grief. Addiction. Sexuality. These are not idle topics. These are the things people cannot say out loud to anyone in their lives.
Under UK and EU law, health data, including mental health data, is classified as special category data under GDPR Article 9. This category carries the highest level of legal protection precisely because of its sensitivity. Disclosing that you have depression to an AI company's training dataset is not trivial. That information could, in theory, be used to influence your insurance premiums, your employment, or your credit score: not because companies intend it, but because data leaks, data is sold, companies get acquired, and regulations change.
Financial data carries similar risks. If you tell an AI you are in debt, that you are afraid of losing your home, that your business is failing, and that AI company shares anonymised but linkable data with advertising partners, you may start seeing predatory financial product ads. Not because a human targeted you. Because an algorithm did.
Relationship data is softer but no less personal. Telling an AI the details of your relationship (conflicts, intimacies, grievances) feels safe because the AI appears to be a neutral party. But that neutrality is an architectural illusion if the data leaves the platform. The AI is neutral. The company behind it may have very different interests.
Special category data under GDPR
Health data (physical and mental), genetic data, biometric data, racial or ethnic origin, political opinions, religious beliefs, trade union membership, and data concerning sex life or sexual orientation all carry heightened legal protection under GDPR Article 9. Processing this category of data requires an explicit legal basis. Most AI providers use "legitimate interests", a basis that is increasingly under challenge from European data protection authorities.
What Are Your GDPR Rights When Using an AI App?
The UK General Data Protection Regulation (retained in UK law as UK GDPR following Brexit, alongside the Data Protection Act 2018) gives you a comprehensive set of rights over your personal data. These apply to any AI company operating in the UK or processing data about UK residents. EU residents have the same rights under EU GDPR.
Right of access (Article 15)
You can request a copy of all personal data held about you. This includes conversation transcripts, usage logs, inferred profiles, and any data shared with third parties. The company must respond within one month.
Right to rectification (Article 16)
If data held about you is inaccurate or incomplete, you can demand it be corrected.
Right to erasure (Article 17)
You can demand your data be deleted. This right has limits (companies can retain data for legal compliance, for example), but for AI conversation data, there is rarely a legitimate reason to refuse a deletion request.
Right to restriction of processing (Article 18)
You can ask that your data not be processed while a dispute about its accuracy or lawfulness is resolved.
Right to data portability (Article 20)
You have the right to receive your data in a structured, commonly used, machine-readable format, and to transfer it to another service. This is the foundation of data portability for AI companions. (A hypothetical example of what such an export might look like follows this list.)
Right to object (Article 21)
You can object to your data being processed for direct marketing (including ad profiling) or for purposes based on legitimate interests. If you object, processing must stop unless the company can demonstrate compelling legitimate grounds.
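To make the portability right concrete, here is a hypothetical illustration of what a machine-readable export from an AI companion might look like. Real exports vary widely by provider, and every field name below is invented for illustration.

```python
# Hypothetical shape of a GDPR Article 20 data-portability export.
# Real exports differ by provider; all field names here are illustrative.

import json

export = {
    "account": {"email": "jane@example.com", "created": "2025-03-01"},
    "conversations": [
        {
            "id": "conv-001",
            "started": "2026-01-10T19:42:00Z",
            "messages": [
                {"role": "user", "text": "I have been anxious lately."},
                {"role": "assistant", "text": "I'm here. Tell me more."},
            ],
        }
    ],
    # A complete export should also cover derived data, not just raw chats.
    "inferred_profile": {"topics": ["anxiety"], "ad_segments": []},
    "third_party_disclosures": [],
}

print(json.dumps(export, indent=2))
```

If an export you receive contains only raw transcripts and nothing like the inferred-profile section, it is worth asking specifically for derived and inferred data as well.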
If a company does not comply with your GDPR rights request, you have the right to complain to the Information Commissioner's Office (ICO) at ico.org.uk. EU residents can complain to their national supervisory authority: in France the CNIL, in Germany the BfDI, in Ireland the DPC, and so on. The ICO has the power to impose fines of up to £17.5 million or 4% of global annual turnover under UK GDPR; EU authorities have equivalent powers under EU GDPR.
How to exercise your rights
Go to the AI app's privacy policy page and look for a section on "data subject rights" or "your rights". There should be a contact mechanism, usually an email address or a web form. Send a Subject Access Request (SAR) in writing. Include your name, the account email, and the specific data you want. The company must respond within one calendar month. Keep a copy of everything. A template you can adapt follows below.
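If you want a starting point, the sketch below generates a minimal SAR letter from a template. The wording and placeholder details are illustrative; adapt them to the company you are contacting.

```python
# Minimal sketch: generate a Subject Access Request (SAR) letter from a
# template. Wording and placeholder details are illustrative only.

from datetime import date

SAR_TEMPLATE = """\
Subject: Subject Access Request (Article 15 UK GDPR)

To the Data Protection Officer,

I am making a subject access request under Article 15 of the UK GDPR.
Please provide copies of all personal data you hold about me, including
conversation transcripts, usage logs, inferred or derived profiles, and
the categories of recipients with whom my data has been shared.

Account name:  {name}
Account email: {email}
Request date:  {date}

Please respond within one calendar month, as the law requires.

Regards,
{name}
"""

print(SAR_TEMPLATE.format(
    name="Jane Example",        # your full name
    email="jane@example.com",   # the email tied to your AI account
    date=date.today().isoformat(),
))
```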
How Do You Check Any AI App's Privacy Policy? Five Questions to Ask
Most privacy policies are written by lawyers to comply with regulation rather than to inform users. They are long, dense, and deliberately non-committal. But if you know what you are looking for, you can extract the five things that matter in under ten minutes. (A short script that automates the keyword search follows this list.)
Search for "train"
Does the policy say your conversations may be used to train AI models? If yes, is there an opt-out? Where is it, and is it the default? Phrases like "improve our services", "enhance your experience", or "develop new features" are often code for model training. If you see them without a clear, accessible opt-out, assume training is happening.
Search for "human review"
Does the policy disclose that human employees or contractors can review your conversations? Under what circumstances? For how long? "Quality assurance" and "safety review" are the common labels. Human review is not inherently wrong, but you should know it is possible before you tell an AI about your mental health.
Search for "third parties"
Who can your data be shared with? Look for specific categories: advertising partners, analytics providers, "affiliates", "business partners". Generic language like "trusted third parties" with no further detail is a red flag. GDPR requires specific disclosure of categories of recipients.
Search for "sell"
Does the company sell your data? Some are explicit that they do not. Others use language like "we do not sell your personal information" but then permit sharing for "value exchange" or "commercial partnerships", which is economically equivalent to selling. The California Consumer Privacy Act (CCPA) has tightened this definition; GDPR goes further.
Find the ICO or regulatory registration number
Any company processing personal data of UK residents must be registered with the ICO. Their registration number should appear in the privacy policy. Search the ICO public register at ico.org.uk/ESDWebPages/Search to verify it exists and is current. No registration number means no legal accountability under UK data protection law.
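If you would rather not scan the whole policy by hand, the five searches above are easy to automate. Below is a minimal sketch: save the policy text to a local file (copy-paste from your browser is fine; the filename is a placeholder) and let the script flag which signals appear.

```python
# Minimal sketch: run the five-question audit over a saved privacy policy.
# "privacy_policy.txt" is a placeholder name for a locally saved copy.

import re

AUDIT_TERMS = {
    "model training":      [r"\btrain", r"improve our services", r"develop new features"],
    "human review":        [r"human review", r"quality assurance", r"safety review"],
    "third-party sharing": [r"third[- ]part(?:y|ies)", r"business partners?", r"affiliates?"],
    "data selling":        [r"\bsell\b", r"\bsale\b", r"commercial partnerships?"],
    "regulator details":   [r"\bICO\b", r"registration number", r"supervisory authority"],
}

def audit_policy(path: str) -> None:
    text = open(path, encoding="utf-8").read()
    for question, patterns in AUDIT_TERMS.items():
        hits = [p for p in patterns if re.search(p, text, re.IGNORECASE)]
        verdict = f"found ({len(hits)} pattern(s))" if hits else "not mentioned"
        print(f"{question:>20}: {verdict}")

if __name__ == "__main__":
    audit_policy("privacy_policy.txt")
```

A match is a prompt for closer reading, not a verdict; and a policy that never mentions training at all is its own kind of red flag.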
What Does MEOK Do Differently, and Why Was It Built This Way?
I am Nicholas Templeman, and I founded MEOK AI LABS because I believed the AI companion market was solving the wrong problem. Most AI companies are optimising for engagement, retention, and data accumulation. I wanted to build something that optimised for genuine human flourishing, and that required a completely different approach to data.
The people who most need an AI companion (those who are isolated, grieving, struggling with mental health, navigating chronic illness, or facing difficult life transitions) are precisely the people most vulnerable to data exploitation. They are the ones sharing the most sensitive information. They are the ones who deserve the most protection. Yet they are also the ones most likely to be treated as data assets by a commercially-driven platform.
MEOK's privacy architecture is not a feature we added after building the product. It is the foundation on which the product was built. Here is what that means in practice:
MEOK's privacy commitments
End-to-end encryption
Your conversations are encrypted in transit and at rest using end-to-end encryption. MEOK staff cannot read your messages. Only you hold the keys. (A sketch of the general pattern follows this list.)
Zero model training on user data
MEOK never uses your conversations to train any AI model, ours or anyone else's. This is an architectural constraint, not just a policy promise.
No data selling, ever
MEOK does not sell, rent, or broker user data. There is no advertising revenue model. The only revenue comes from subscriptions. Your data is not the product.
Maternal Covenant
MEOK's foundational governance framework prohibits exploitation, surveillance, and manipulation of users. It is embedded in product architecture, not just stated in a policy document.
Right to deletion
You can delete your entire conversation history and memory at any time. Deletion is complete and permanent: not archived, not used in aggregated datasets.
ICO registration
MEOK AI LABS is registered with the UK Information Commissioner's Office (ICO), making it legally accountable to UK data protection law and the GDPR as retained in UK law.
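To show what the end-to-end encryption commitment means architecturally, here is a minimal sketch of the general client-side encryption pattern, using the PyNaCl library. It illustrates the principle that the key never leaves the user's device, so the server only ever stores opaque ciphertext; it is a generic illustration, not MEOK's actual implementation.

```python
# Minimal sketch of the client-side encryption pattern behind "end-to-end
# encrypted" claims, using the PyNaCl library (pip install pynacl).
# A generic illustration of the principle, not MEOK's actual implementation.

import nacl.secret
import nacl.utils

# 1. The client generates a random 32-byte key and keeps it on-device.
key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
box = nacl.secret.SecretBox(key)

# 2. Messages are encrypted before they ever leave the device.
plaintext = b"I have been anxious lately."
ciphertext = box.encrypt(plaintext)  # nonce is generated and bundled

# 3. The server stores only opaque ciphertext; without the key, neither
#    staff nor an attacker who breaches the server can read it.
assert plaintext not in ciphertext
print(f"server sees {len(ciphertext)} opaque bytes")

# 4. Decryption requires the key, which only the user holds.
assert box.decrypt(ciphertext) == plaintext
```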
What Is the Maternal Covenant and Why Does It Matter?
Privacy policies are legal documents. They can be changed. They can be reinterpreted. They can be rendered obsolete by an acquisition or a regulatory change. A privacy policy is a promise, and promises are only as strong as the incentive to keep them.
The Maternal Covenant is MEOK's attempt to create something more durable than a policy document. It is a foundational ethical framework, the closest thing to a constitutional constraint, that governs everything MEOK does in relation to its users.
The name comes from a deliberate metaphor. Maternal care, at its best, is characterised by unconditional protection, honest guidance, and the absolute refusal to exploit the vulnerability of the person in your care. It is the opposite of the extractive relationship that most technology companies have with their users. A mother does not sell her child's secrets. She does not use her child's pain as training data. She does not monetise their vulnerability.
The Maternal Covenant establishes four inviolable principles:
Non-exploitation
MEOK will never use the vulnerability, distress, or sensitive disclosures of its users as a commercial asset, whether through data sales, advertising targeting, or model training.
Radical transparency
MEOK will always tell users, in plain language, exactly what happens to their data. Not in a legal document designed to obscure. In the product interface, at the point of use.
Unconditional deletion
Any user can delete everything (all conversations, all memory, all profile data) at any time, permanently, with no retention period and no questions asked.
Honest care, not engagement optimisation
MEOK will never be designed to maximise time-in-app, conversation length, or emotional dependency. The product's success metric is user wellbeing, not usage.
These are not aspirations. They are constraints: built into the architecture, visible in the code, and publicly stated so that users can hold MEOK accountable if they are ever violated. The Covenant is also the reason MEOK will never accept investment from anyone who requires data monetisation as a condition of the deal.
What About AI Companions That Claim to Be Private? Are They All the Same?
No. The AI companion market in 2026 spans a wide spectrum from completely opaque to genuinely privacy-first. But "privacy" has become a marketing claim, which means it needs to be interrogated rather than accepted.
Replika is one of the most well-known AI companions. Its privacy policy permits data retention and some processing for service improvement. The company is US-based, which means GDPR protections require additional compliance steps. Replika has faced scrutiny from the Italian data protection authority, the regulator that forced the suspension of its service in Italy in 2023, precisely because its data practices were considered insufficiently protective of users who were forming deep emotional attachments.
Character.AI, which targets a younger demographic, has faced multiple questions about the appropriateness of its data handling, particularly in relation to minors. Its privacy policy allows data to be used for model improvement. The platform has faced legal action in the US relating to alleged harms caused by AI conversations: not a data privacy case, but a signal of the broader governance vacuum in the sector.
Pi AI, from Inflection, takes a more restrained approach to data use and has a relatively clean privacy policy. However, it is still a US company with US data storage defaults, and its privacy policy does permit data use for improving its AI services.
The pattern across the sector is consistent: privacy-friendly positioning in marketing, with data practices in the policy that are more expansive than most users realise. The differentiator is not what companies say about privacy. It is whether their architecture makes the alternative impossible, and whether they have submitted to regulatory accountability.
What Practical Steps Should You Take Right Now to Protect Your AI Privacy?
You do not have to stop using AI. You do have to be informed. Here are the concrete steps to take today, whether you are using MEOK or any other AI platform.
Audit your current AI apps
Make a list of every AI app you use regularly. For each one, open the privacy policy and run the five-question audit described above. Note which ones train on your data, which allow human review, and which have a verifiable ICO or regulatory registration.
Opt out of training where possible
ChatGPT: Settings → Data Controls → disable 'Improve the model for everyone'. Google Gemini: myaccount.google.com → Data & Privacy → AI personalisation. Microsoft Copilot: Account settings → Privacy → Feedback data. These are not always easy to find; that is by design.
Submit a Subject Access Request to any AI company holding your data
You are legally entitled to know exactly what data is held about you. Email the data protection contact in the privacy policy with your SAR. You should receive a full data export within one calendar month. What you receive may surprise you.
Delete data you do not want retained
Most platforms allow you to delete conversation history. Do this regularly if you are using a service that trains on data, or if you have shared anything sensitive you would prefer not to be retained. Note that deletion from the interface does not always mean deletion from training datasets already built on that data.
Treat your AI companion differently from your search engine
The things you tell an AI companion are qualitatively different from a search query. They are intimate, contextual, and cumulative. Apply the same judgement you would apply to sharing information with a new acquaintance whose full intentions you do not yet know. The AI may be trustworthy, but is the company behind it?
Choose services with regulatory accountability
Prefer AI companions registered with the ICO (UK) or a relevant EU data authority. Registration is not a guarantee of good practice, but it means there is a legal mechanism to pursue complaints. An unregistered AI company operating in the UK is already in breach of data protection law.
What Is MEOK's Legal Basis for Processing Your Data?
GDPR requires every data controller to have a specific legal basis for each type of processing it carries out. This is not optional. It is foundational. And it is one of the areas where many AI companies are most legally exposed.
MEOK's legal basis for processing your conversation data is contractual necessity (Article 6(1)(b)): the processing is necessary to deliver the AI companion service you have contracted for. For special category data such as health or mental health information, MEOK relies on explicit consent (Article 9(2)(a)), which is obtained clearly and stored with a timestamp, as the sketch below illustrates.
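To make "stored with a timestamp" concrete, here is a hypothetical shape for an explicit-consent record. The field names and structure are invented for illustration; they are not MEOK's actual schema.

```python
# Hypothetical shape of an explicit-consent record for special category
# data (GDPR Article 9(2)(a)). Field names are illustrative only.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str          # what the consent covers, in plain language
    legal_basis: str      # the GDPR provision relied on
    granted_at: str       # UTC timestamp, kept for auditability
    withdrawn_at: str | None = None  # consent must remain revocable

record = ConsentRecord(
    user_id="user-123",
    purpose="processing of health-related disclosures in conversations",
    legal_basis="Article 9(2)(a) explicit consent",
    granted_at=datetime.now(timezone.utc).isoformat(),
)

print(asdict(record))
```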
MEOK does not rely on "legitimate interests" as a basis for processing sensitive data. Legitimate interests is the most abused legal basis in the AI industry; companies claim it to justify almost any processing they want to carry out. It requires a genuine balancing test that weighs your interests and freedoms against the company's commercial interests. In practice, this test is rarely carried out properly. MEOK does not use it for core data processing.
MEOK AI LABS is registered with the UK Information Commissioner's Office. Our ICO registration number is available in the MEOK privacy policy at meok.ai/privacy. You can verify it directly at ico.org.uk. This is not a formality: it means there is a regulatory body with enforcement powers that can sanction us if we breach data protection law. We welcome that accountability.
The Bottom Line: Privacy Is Not a Feature. It Is a Value.
The AI companion market is maturing rapidly. As it does, the gap between companies that treat privacy as a genuine ethical commitment and those that treat it as a compliance checkbox is becoming more visible, and more consequential.
The conversations you have with an AI companion are, by definition, the conversations you could not have anywhere else. They are your most private thoughts, your most sensitive disclosures, your most vulnerable moments. The company you trust with those conversations is not a neutral data processor. It is a custodian of your inner life.
That is why I built MEOK with end-to-end encryption as the baseline, not the premium tier. Why the Maternal Covenant is structural, not aspirational. Why MEOK will never train on your conversations, never sell your data, and never optimise for engagement over your wellbeing.
Privacy is not a feature we added. It is the reason MEOK exists.
Try MEOK: the AI companion built on the Maternal Covenant
End-to-end encrypted. No data selling. No model training on your conversations. ICO registered. Free to start; no credit card required.
Frequently asked questions
Do AI companies use my conversations to train their models?
Most of them do, yes. The major platforms, including ChatGPT, Google Gemini, and Meta AI, include clauses in their terms of service that allow them to use conversation data to improve their models. You can often opt out, but it is rarely the default setting and the process is not always straightforward. MEOK explicitly does not train on user conversations, ever, under any circumstances.
Can AI company employees read my conversations?
Yes, at most large AI providers, human reviewers can access your conversations. This is often disclosed in privacy policies under phrases like 'quality assurance' or 'safety review'. OpenAI, Google, and Amazon have all acknowledged that human contractors review samples of AI conversations. MEOK uses end-to-end encryption so that even MEOK staff cannot read your conversations.
What are my GDPR rights when using an AI app?
Under GDPR and the UK Data Protection Act 2018, you have the right to access all data held about you (Article 15), the right to rectification (Article 16), the right to erasure or 'right to be forgotten' (Article 17), the right to data portability (Article 20), and the right to object to processing (Article 21). Any AI company serving EU or UK users must honour these rights. If they do not, you can complain to the ICO in the UK or your national data protection authority in the EU.
Is it safe to tell an AI about my health or mental health?
It depends entirely on the AI and how the company behind it handles data. Health data is classified as 'special category data' under GDPR, which means it carries stronger legal protections and stricter rules for processing. Before sharing anything health-related, check the privacy policy: does the company train on your data? Is it stored encrypted? Can it be sold to third parties? With MEOK, health and emotional data is end-to-end encrypted and never used for advertising, profiling, or model training.
What is the Maternal Covenant and how does it protect me?
The Maternal Covenant is MEOK's foundational ethical framework, created by founder Nicholas Templeman. It establishes that MEOK's relationship with users must mirror the protective, unconditional nature of maternal care, meaning MEOK will never exploit, surveil, monetise, or manipulate the people it serves. The Covenant is not just a policy document; it is embedded in the architecture and governance of the product. It includes explicit prohibitions on data selling, ad targeting, and training on user conversations.
How do I check any AI app's privacy policy?
Search the privacy policy for five key phrases: 'train', 'improve our services', 'third parties', 'human review', and 'sell'. If training is mentioned without a clear opt-out, your conversations may become model data. If 'third parties' appears without specifics, your data may be shared broadly. If 'human review' is present, employees can read your messages. If 'sell' appears alongside data, your information is a product. Also check their ICO or data regulator registration number; this confirms they are accountable to a legal authority.