
Your Data is Their Gold: The AI Consent Trap

Felix D. Helix
May 02, 2026
11 min read

In Parts 1 and 2, we explored how AI displaced 55,000 workers and consumed 5 million gallons of water daily while tech companies passed the costs onto everyone else. But there's a third dimension to this that's even more insidious: the forced extraction of your data.

Let me tell you a story that perfectly illustrates what's happening.

The Consent That Isn't

I had to consent to the use of AI to continue my mental health care for 2026.

Excuse me, what? To continue receiving mental health treatment—one of the most intimate, vulnerable, and private aspects of healthcare—I had to agree to let AI process my data.

Technically, I could "remove consent at any time." But I had to opt in so that I could then opt out. And if I didn't opt in? What would happen? No care?

That felt like coercion with a friendly user interface.

I'm sure I'm not alone. This is becoming the standard across healthcare. By 2025, more than 20 legislative bills had focused on regulating AI in clinical care, requiring healthcare providers to disclose when AI is used in diagnoses, treatment recommendations, or patient communications. That tells me there is real intent to use AI across many aspects of care. It's not theory; it's actually happening.

States are scrambling to create protections because the problem is real:

  1. New Mexico now requires mental health practitioners to share information on AI tools with patients and obtain informed consent before using them
  2. Texas House Bill 149, effective January 1, 2026, mandates explicit patient disclosure when AI is involved in healthcare services
  3. Florida's legislation prohibits most mental health practitioners from using AI except for administrative tasks, and requires written consent 24 hours in advance for AI transcription
  4. Ohio prohibits AI from making independent therapeutic decisions or detecting emotional or mental states

Why are all these new laws being pushed? Because AI is being thrown into every part of our lives, whether we want it there or not, whether it makes sense or not, whether it's safe or not.

And the reason should be obvious: AI cannot create anything new; it requires fresh data to grow.

The Privacy Problem

OpenAI CEO Sam Altman himself highlighted that interactions with AI chat tools like ChatGPT do not have legal confidentiality protections. Unlike conversations with licensed professionals under doctor-patient or attorney-client privilege, chat logs could potentially be subpoenaed in legal proceedings.

Folks are expected to pour their hearts out to an AI mental health chatbot about deeply personal issues, and it has no legal protection? How does that make any sense?

It doesn't get better:

  1. 83% of free mobile health and fitness apps store data locally on the device without encryption
  2. In the first half of 2023 alone, approximately 295 healthcare data breaches were reported, implicating more than 39 million individuals
  3. A Stanford study found that six leading U.S. companies feed user inputs back into their models to improve capabilities, with privacy documentation that's often unclear

Even if you consent to sharing your data for a specific purpose, these models usually fold it into all future predictions, and the use cases blur beyond what you agreed to. And if you allow data usage, it may be retained for five years for training purposes.

What happens to your data after you "withdraw consent"? It's probably too late by then. The model has already been trained. Your data is already baked in. Will AI be used to assess appropriate care in the future? If so, will a human still double-check the result and override the assessment if needed? In a society where healthcare is already riddled with bias and disparities, it seems a little risky to me.

What AI Really Needs

Every AI model is built on existing data. Words, songs, art, code—created by actual humans who spent years developing their skills and craft. YOUR words, songs, art, code. YOUR data.

Machine learning algorithms don't invent. They identify patterns in massive datasets and reproduce variations of those patterns. The more data they have, the better they get at this reproduction. But it's still just a sophisticated pattern-matching tool built on stolen material.

Why is AI being embedded everywhere, including your mental health platform and your social media? Good god, why does Snapchat need an AI bot? It's not because AI makes your therapy or your social media experience better. It's because every session, every message, every moment of vulnerability is more data to feed the machine and drive their bottom line.

But it's free!

I get it. I really do.

I'm not a professional writer, and I use AI to help me review my work and even draft these blog posts. The temptation is real. These tools are free, they're convenient, they're right there at your fingertips.

But I'm wary. And maybe you should be too.

Why are these "tools" being made freely available to the masses, embedding themselves into so many aspects of our lives?

It's not altruism. It's not about democratizing creativity or empowering people. It's about data collection. Or, let's be honest: it's about data theft.

Take Anthropic. In September 2025, it began using conversations with Claude for training by default unless users opted out. This is the pattern: start with opt-in, then shift to opt-out once people are dependent on the tool. Before Amazon, we were perfectly fine leaving the house to shop, or waiting a week or so for an online purchase to arrive; now it seems almost inconceivable to buy something online without same-day delivery. Who really benefited from that shift? Not small businesses or our communities. What will society look like when we are as dependent on AI for day-to-day tasks as many of us are on Amazon for shopping?

Didn't Sam Altman say he expected intelligence to be a utility in every home? Pay for intelligence? Pay for a service that was built from our own data?

The Irony of Enhancement

Look, I see value in using machine learning to enhance a person's skills. Maybe help them be more efficient at their existing job. That's a reasonable use case.

The key word here is "enhance," not "replace."

I think that difference matters. A spell-checker or grammar suggestion tool enhances my writing. An AI that writes an entire article for me with a couple of prompts doesn't enhance my skills—it replaces them.

And when enough people let AI replace their skills instead of enhancing them, those skills become obsolete. Which means those jobs become obsolete. Which means those workers become obsolete. In a podcast episode from April 2026, the host mentioned how AI has changed the way he thinks about future hires, something he didn't expect but is now his reality. Where he once would have hired a candidate based on the other skills they possessed, he now found himself thinking that AI might be able to do that work someday, and he passed them over. The frightening part is that this was based not on AI being able to do the work now, but on the promise that it might be able to down the road.

Which is exactly what we covered in Part 1 of this series. Remember those 55,000 layoffs? There are more to come in many industries, not just tech.

We are essentially building a system that will destroy our jobs. For free.

The Regulatory Response

To be fair, governments are starting to notice. The Generative AI Copyright Disclosure Act of 2024 would require companies to disclose the datasets used to train their systems. States are passing healthcare AI transparency laws. The Utah Artificial Intelligence and Policy Act became the first major state statute specifically governing AI use.

73% of users now prioritize privacy when selecting mental health applications, making robust compliance a market differentiator. GDPR requires explicit, granular consent for processing sensitive data.

These are steps in the right direction. But they're reactive, not proactive. They're playing catch-up while AI companies move fast and break things—including our privacy, our data rights, and our livelihoods.

Is there hope?

Here's what gives me some hope: reality is catching up with the hype. And it's doing so in a spectacular way.

Fifty-five percent of employers now report regretting AI-driven layoffs, according to Forrester Research. A February 2026 survey of 600 HR professionals found that two in three companies that made AI-driven cuts are already rehiring. Of those, more than a third had brought back over half the positions they originally eliminated. The financial case for AI-driven layoffs seems to be collapsing: companies spend roughly $1.27 for every $1 saved through staff reductions once severance, productivity losses, and replacement costs are factored in. Nearly 31% said rehiring ultimately cost more than the layoffs had saved.
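That $1.27-per-dollar figure is worth pausing on, because it means the "savings" are a net loss. A quick back-of-envelope sketch (the $10M payroll figure is purely hypothetical, and the 1.27 ratio is just the survey number cited above):

```python
# Back-of-envelope math for the layoff-cost figure cited above:
# if a company spends about $1.27 for every $1 it "saves" through
# AI-driven staff cuts, the cuts produce a net loss, not a saving.
def net_outcome(gross_savings: float, cost_per_dollar_saved: float = 1.27) -> float:
    """Return net dollars gained (negative means a net loss)."""
    return gross_savings - gross_savings * cost_per_dollar_saved

# Hypothetical example: layoffs that look like $10M in payroll savings
# on paper actually leave the company roughly $2.7M in the hole.
print(net_outcome(10_000_000))
```

In other words, every dollar "saved" on the spreadsheet costs about 27 cents once severance, lost productivity, and replacement hiring are counted.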

The story of Klarna is instructive. The Swedish fintech cut hundreds of customer service roles, claiming its AI chatbot could handle the work of 700 human agents. Customer satisfaction scores dropped. CEO Sebastian Siemiatkowski eventually acknowledged the company had prioritized cost over experience—and began rehiring. I suppose a bot cannot replace human interaction.

It turns out AI handled the predictable 30% of customer interactions just fine. The other 70%—the frustrated customers, the situations requiring context and judgment and a genuine human relationship—had nowhere to go. The savings that looked so clean on a spreadsheet came with hidden costs that leadership did not anticipate.

I recently found out that Amazon's "AI-powered" Just Walk Out retail technology, marketed as cutting-edge automation, actually relied on remote workers in India who monitored the in-store cameras to ensure accuracy. Nothing more than AI theater.

The path forward isn't human vs. AI. It's ensuring that AI amplifies human capability rather than replacing it—and that the people doing the amplifying are fairly compensated, not discarded.

The Path Forward

I'm not anti-technology, but I am pro-human.

And right now, AI isn't being developed in a way that's pro-human. It's being developed in a way that's pro-profit, pro-efficiency, pro-automation, and pro-extraction.

I mentioned earlier that I use AI to help draft posts like this one, but the moment I'm able to, I want to pay someone to write or review my work. That's the moment of truth, isn't it?

Your data is the raw material. Your job is the target. Your consent is the smokescreen.

They're mining you for gold. And they've convinced you to hold the pickaxe.

The Questions That Matter

Before you click "I agree" or generate a funny picture of your dog, ask yourself:

  1. Why is this AI feature being added to this service?
  2. Who benefits from my data being used this way?
  3. Is this truly making my experience better, or is it making the company's dataset bigger?
  4. If this tool stopped being free tomorrow, would I pay for it? Would I be able to?
  5. What happens to my data after I "withdraw consent"? (Spoiler: it's probably too late by then)

And the biggest question of all: What kind of economy do we want to build and leave for the future?

One where human creativity, skill, and labor are valued and compensated? Or one where algorithms trained on stolen data replace human workers while generating profits for a few?

This concludes the three-part series on the AI revolution and who really pays for it.

Part 1: The AI Layoff Paradox: From Star Trek to Hunger Games

Part 2: The Hidden Costs of AI: Water, Power, and Who Really Pays

Part 3: Your Data is Their Gold: The AI Consent Trap

References

Healthcare Privacy and Consent

  1. "E-mental Health in the Age of AI: Data Safety, Privacy Regulations and Recommendations." National Institutes of Health, 2025.
  2. "Emerging AI Privacy Regulations in Healthcare." Censinet, 2025.
  3. "Informed Consent, Redefined: How AI and Big Data Are Changing the Rules." Petrie-Flom Center, Harvard Law School, April 2025.
  4. "Mental Health App Data Privacy: HIPAA-GDPR Hybrid Compliance." SecurePrivacy, 2025.

Data Collection and Privacy

  1. "Study exposes privacy risks of AI chatbot conversations." Stanford University, October 2025.
  2. "Understanding Anthropic's Data Usage Policy: What Users Need to Know." NYU Shanghai Research Institute for Technology and Society, 2025.
  3. "AI Data Privacy: Challenges and Solutions." Fortra, 2025.
  4. "Is AI Model Training Compliant With Data Privacy Laws?" Termly, 2025.

AI Layoff Regret and Rehiring

  1. "The AI Layoff Trap: Why Half Will Be Quietly Rehired." HR Executive, December 2025.
  2. "Why Companies Regret Laying Off Workers For AI." Forbes Technology Council, April 2026.
  3. "Why 55% of Companies Regret Cutting Jobs for AI." The Interview Guys, 2026.

Copyright and Intellectual Property

  1. "Copyright Office Weighs In on AI Training and Fair Use." Skadden, Arps, Slate, Meagher & Flom LLP, May 2025.
  2. "Copyright and AI training data—transparency to the rescue?" Journal of Intellectual Property Law & Practice, Oxford Academic, 2025.
  3. "AI, Copyright, and the Law: The Ongoing Battle Over Intellectual Property Rights." IP & Technology Law Society, USC, February 2025.

