
The Great Data Heist: What Google and Meta Are Hiding From You

Your Digital Footprint, The Dark Secrets Big Tech Won’t Share: Is AI reading your private data? It’s the biggest question of our times. Artificial intelligence has become the new normal. We talk to ChatGPT every day. We ask chatbots all sorts of questions. We feed them all sorts of personal information.

But what do they do with this information, with this data? What happens to it? Do AI companies use it to train further models? And beyond that, do these companies use your private data on the internet to train their models—say, your emails or your Facebook messages with an old friend? All of it is out there. Is it being used too? That’s the question of the moment. Let’s try to break it down.

Is Google Reading Your Emails?

First question: is Google reading your emails? Is it using your Gmail data to train its models? The answer is yes and no. I know that’s frustrating, but I’ll explain why.

You see, Google’s AI model is called Gemini. It is connected to other Google products, including Gmail, Google Drive, and Google Chat. But does it get data from them? Well, not unless users give permission. And where does Gemini get its data? From a host of things: the searches you make in the app, any photo or video you upload to it, or your interactions with other apps through Gemini, such as your messages or phone call apps, if you have granted access. So it all depends on the permissions you’ve given the app, which is why I also said no.

The Sneaky “Opt-Out” Shift

But there’s also the yes part of the answer, and I say this because Google is also sneaky. Say you’re using Gmail and you activate smart features, like suggestions that help compose emails or list calendar events. Then Google can access your data. And I know what you may be thinking: “I had no idea about that.” But here’s where it gets even sneakier. Earlier, if you wanted to share your data with Gemini, you had to actively opt in. You had to press that button. You had to give that permission. You knew you were allowing it. Now, it’s the opposite: if you don’t want Gemini to use your data, you have to go digging through the settings and manually switch it off. Google has quietly moved the burden from the company to the consumer. And we only learned about it because a California lawsuit revealed it.

Meta’s Approach: Safe DMs, Public Data Grab

And that’s just the Google story. There’s also Meta. Its AI model is called Llama. You may not have heard much about it, but Meta is really pushing the accelerator on its AI program. Again, does that mean it’s using your data? After all, Meta owns a host of social media platforms: Facebook, Instagram, and WhatsApp.

Say you talk to Llama about diving. Meta’s apps may start showing you diving gear to buy, or reels about the best places to dive. But is Meta sharing your private messages? The answer is no. So your private conversations are safe for the moment.

However, everything else is being used. Your public Instagram posts, your Facebook comments, that reel you made in 2021 with some random song—all of it can be used to train Meta’s AI. If someone posts a picture of you publicly, Meta’s systems may learn from it even if you don’t have a Meta account. And you don’t even get an opt-out button. So your DMs may be safe for now, but your public profile, not so much.

The Danger of Oversharing with Chatbots

Because here’s the problem: if you upload personal photos, voice notes, or sensitive context in a prompt to a chatbot, that data may be stored, analyzed, or used for training unless you disable it. And many people simply don’t know how to do that, or whether they even have the option. Even if you avoid all of this yourself, data about you still reaches these AI models one way or another. So there’s no easy way out.

How to Protect Yourself

If you want to limit the damage, check the permissions you have given these apps. Clicking yes may feel like routine bureaucracy, a way to avoid reading the documents, but go through them carefully. Know what you’re sharing.

  • No Personal Media: You should not upload personal photos to chatbots.
  • No Financials: You should not upload bank details.
  • No Health Data: You should not upload private medical information.
  • No Therapy Sessions: You should not use chatbots for therapy. OpenAI CEO Sam Altman has said it himself: chatbots are not therapists.

You should not upload anything you would not want a stranger or a server to store because your data could be stored. Your data could be used for model improvement. Your data could leak someday. Your data could get scraped or misinterpreted.

AI companies are building the future, but they’re doing it with the messiness of the present. And the present has one universal law:

If you don’t pay for the product, you are most likely the product.
