Digital Immortality or Just Creepy? Big Tech’s Plan for Your Afterlife

Imagine you get a Facebook friend request or an Instagram follow request. You accept it, only to find that it’s not from a living person. It’s from someone who has already died. And how are they sending you this request? Through an AI bot.
The very thought can be scary and unsettling. It sounds like the script of a dystopian movie, except it’s happening in our world.
The Patent to Resurrect the Dead
The story begins in 2023, when Meta was granted a patent for an AI system that could simulate a user’s personality online. Basically, a system that could pretend to be you, one that would keep posting for you even after you die.
They described it as a large language model, an LLM trained on your posts, your messages, your voice clips, and even your behavioral patterns. This AI could theoretically post updates, reply to comments, send DMs, even simulate voice or video calls. In other words, it would be a digital version of you.
The company later said it has no plans to actually build this, calling it “just a concept.” But here’s the thing: you do not patent ideas you have never seriously considered. And this one is dystopian because it messes with the idea of death itself. It messes with the concept of grief.
The Psychological Cost of “Deadbots”
Let me tell you what experts say. One of the most important psychological steps after losing someone is accepting that they’re gone. It takes time and emotional strength to let the dead be dead. But if an AI keeps talking like a person who has passed away, it could trap loved ones in emotional limbo: never fully grieving, never fully recovering, never moving on.
But for companies, it could be a gold mine. You see, social media runs on engagement. Your posts, your activity—all of it generates money for them. Dormant accounts generate nothing.
What Happens to Your Account Right Now?
As of today, social media platforms do not resurrect you. They freeze your account when you die. On Meta-owned platforms like Facebook and Instagram, families can request memorialization, meaning the profile stays but no new posts appear, or they can ask for deletion. Other platforms have similar rules: X and Snapchat allow only deactivation or deletion.
No platform currently allows someone, human or AI, to post as the deceased. That’s a line they have not crossed, at least not yet. But could they cross it soon?
The Business of the Digital Afterlife
Let me show you some figures about Facebook. More than 30 million accounts are estimated to belong to deceased people, and roughly 8,000 to 10,000 more are added every day. By 2070, the dead could outnumber the living on the platform. Estimates suggest that 1.4 billion Facebook members will die before the year 2100. That is 1.4 billion accounts going dormant.
But if you get a bot that pretends to be you after your death, it means more engagement. More engagement means more ad revenue for the platform. Simply put, even dead users can be monetized.
And companies have tried this before. In 2021, Microsoft patented a chatbot that could mimic deceased people, though executives later scrapped it, calling the concept disturbing. But many startups have pressed ahead. There are services that experiment with so-called “deadbots”: AI trained on a person’s memories, voice, and messages. The digital afterlife industry is very real and very dangerous, and it shows how far tech giants are willing to push AI.
Where Do We Draw the Line?
Meta CEO Mark Zuckerberg himself spoke about this on a podcast. He said interacting with an AI version of a loved one might help grieving people, although he did admit that it could also become unhealthy.
Mark Zuckerberg (On Lex Fridman Podcast): “There’s probably some balance where if someone has lost a loved one and is grieving, there may be ways in which being able to interact or relive certain memories could be helpful. But then there’s also probably an extent to which it could become unhealthy.”
And that’s the core problem here: social media companies do not know where comfort ends and harm begins. They say they have red lines. They say they have guardrails. But evidence suggests otherwise. They act only when they’re forced to, when a problem gets out of hand.
Now Meta says it won’t build this chatbot. Maybe it won’t. But the patent exists, the technology exists, and the incentives exist. The only question is who does it first.
Because in the AI era, even death cannot log you out.