The new Meta AI app gets very personal very quickly.
Geoffrey Fowler
Mark Zuckerberg has a new way to invade your privacy: a creepier version of ChatGPT.
Last week, Facebook’s co-founder launched the Meta AI app, a dedicated home for his company’s artificial intelligence chatbot. The app, which climbed to No. 2 on the iPhone free download charts, promises users a more “personalized” AI with tailored answers and advice. And it includes a new social network for people to share their AI conversations and images.
But Meta AI also brings something else to chatbots: surveillance. The first time I opened the app, I asked it to “describe me in three emojis.” It could, by drawing on years of personal information tracked by its sister apps Facebook and Instagram.
As a test, I also chatted with Meta AI about some intentionally sensitive topics. Afterward, I found it had built a so-called Memory file about me that said my interests include natural fertility techniques, divorce, child custody, payday loans and the laws about tax evasion.
In fact, Meta AI by default kept a copy of everything I said to it - to tailor its responses to me, to train a future version of Meta’s AI and, eventually, to target me with ads. I was later able to delete its memory and my conversations, with some effort. But from my tests, Meta AI pushes the limits on privacy much further than its rivals, OpenAI’s ChatGPT and Google’s Gemini.
Whatever you chat about with Meta AI, just picture Zuckerberg watching. My conversations were tests, but more and more people are turning to AI for incredibly personal stuff, from medical problems to virtual boyfriends.
Meta says its AI gives people what they want. “We’ve provided valuable personalization for people on our platforms for decades, making it easier for people to accomplish what they come to our apps to do - the Meta AI app is no different,” said Meta spokesman Thomas Richards. “We provide transparency and control throughout, so people can manage their experience and make sure it’s right for them.”
But for most users, Meta AI’s personalization brings a whole new set of privacy risks they might not yet appreciate, much less know how to control.
“The disclosures and consumer choices around privacy settings are laughably bad,” says Ben Winters, the director of AI and data privacy at the Consumer Federation of America. “I would only use it for surface-level, fun prompts or things that don’t have anything to do with your personal information, desires, concerns, fears, or anything you wouldn’t broadcast to the internet.”
A virtual assistant that can anticipate your needs - or even be your therapist - is the holy grail of AI. And Meta is not the only company pushing deeper into personal data in the hunt for it.
Google recently added an option to Gemini that lets you choose to use your Google search history to personalize its responses. ChatGPT has added the ability to remember details across conversations.
But the era of personalized AI also brings new privacy complications: How do we manage what we want the bots to know, and how they use our information? “Just because these tools act like your friend doesn’t mean that they are,” says Miranda Bogen, a director at the Center for Democracy & Technology who has researched the privacy trade-offs of AI personalization.
During one conversation with Meta AI, Bogen mentioned baby bottles. After that, she says, the chatbot decided she must be a parent. AI systems, already notorious for stereotypes, might lead to new kinds of bias when they’re personalized. “It could start steering people in directions that maybe aren’t what they intend and are based on information that they’re not comfortable with that advice being based on,” she says.
And just imagine the real-life “Black Mirror” episode when AI starts inserting personalized ads into its answers and advice. Is it recommending a restaurant because it thinks I’ll like it, or because it’s being paid to? Meta AI has no ads today, but during an earnings call last week, Zuckerberg signaled that he sees “a large opportunity to show product recommendations or ads” in it.
OpenAI says it does not share ChatGPT content with third parties for marketing purposes. (The Washington Post has a content partnership with OpenAI.) Google says that Gemini conversations are not currently being used to show ads and that the company would “clearly communicate” a change to users.
“The idea of an agent is that it’s working on my behalf - not on trying to manipulate me on others’ behalf,” says Justin Brookman, the director of technology policy for Consumer Reports. Personalized advertising powered by AI “is inherently adversarial,” he says.
There are three things that make Meta AI feel like a big shift for our privacy.
First, Meta AI is hooked into Facebook and Instagram. If you set up a profile for the app using your existing social media account, the AI will have access to a mountain of information collected by that social app. All of that will shape the responses the Meta AI bot gives you, even if some of Facebook’s or Instagram’s ideas about you are way off-base.
If you don’t want all that data getting combined, you’ll have to set up a separate Meta AI account.
Second, Meta AI remembers everything by default. It keeps a transcript or voice recording of every conversation you have with it, along with a list of what it deems to be key facts about you - such as topics you mention you’re “interested in” - in a separate Memory file. You can view its memories of you in the app’s Settings.
But what if there’s something you don’t want the AI to remember? That’s understandable: There are plenty of things - like my test involving fertility techniques and tax evasion - people know to only Google in private.
Meta says it tries not to add sensitive topics to your Memory file, but my tests found it recorded plenty - like fertility and payday loans. What’s worse, the Meta AI app does not give you the ability to prevent it from saving chats or memories. (The closest you can get is to use the Meta AI website without logging in.) Nor does Meta AI offer a “temporary” mode, as ChatGPT does, that keeps a conversation out of your history.
The Meta AI app does allow you to delete chats and the contents of your Memory file after the fact. But when I tried to delete my memories, the app warned that they weren’t totally gone: The original chat containing the information remained in my history and could still be used. So I also had to dig into my chat history, find the original chat and delete it, too. (The nuclear option was to tap a setting instructing it to just delete everything.)
And there’s one more subtle privacy concern: The contents of your chats - your words, photos and even voice - will end up being fed back into Meta’s AI training systems. ChatGPT lets you opt out of training by switching off a setting labeled “improve the model for everyone.” Meta AI doesn’t offer an opt-out.
Why might you not want to contribute to Meta’s training data? Many artists and writers have taken issue with their work being used to train AI without compensation or acknowledgment.
Some privacy advocates say we should be concerned, too, that AI systems can “leak” training data in future chats. That’s one reason many corporations restrict their employees to AI systems that don’t use their data for training.
Meta says it trains and tunes its AI models to limit the possibility of private information appearing in responses to other people.
Still, Meta has a warning in its terms of service: “do not share information that you don’t want the AIs to use and retain.”
I’d take them at their word. - The Washington Post