A few weeks before a coalition of 42 states sued Meta, accusing it of designing addictive products for children, CEO Mark Zuckerberg released what some parents say may be the social media company’s scariest creation yet: artificially intelligent chatbots based on real-life celebrities.
Developed in partnership with stars such as Charli D’Amelio, Tom Brady, and Kendall Jenner, the bots use the magic of generative AI to create animated digital replicas of the celebrities. Users of Meta’s WhatsApp, Instagram, and Messenger can have one-on-one interactions with the bots, asking them questions, confiding in them, and laughing together at their jokes.
Billed by Meta as AIs that have “personality, opinions, and interests, and are a bit more fun to interact with,” the bots are a testament to the technical prowess and capabilities of a company that spent $35 billion on R&D last year — and that’s precisely what makes them so alarming to some parents and child psychiatrists.
“For a lot of our kids, this is just another way to add fuel to the fire,” says Kara Kushnir, a licensed clinical social worker and child psychotherapist based in New Jersey. Children already struggle to limit the amount of time they spend on social media, and these fully conversational lookalikes of popular celebrities make it even tougher for kids to moderate their usage. “The people who are going to have to deal with this are the parents and families,” Kushnir says.
Whether Meta’s social media features are actually addictive, with cigarette-like powers to keep users hooked, is something that will be litigated in court. Kevin McAlister, a spokesperson for Meta, told Fortune: “This is an absurd comparison. Unlike tobacco, Meta’s apps add value to people’s lives.”
But among those who believe the social media company’s products are harmful to mental health, the introduction of AI characters is a big step in the wrong direction. The lifelike bots are likely to further blur the boundaries between the real world and the company’s advertising-financed virtual world, critics say, raising new, and not yet understood, risks for the millions of children who use the products.
Giving AI a trusted, familiar face
Meta’s AI bots look exactly like the celebrities they’re modeled on, though they have fictitious identities (Kendall Jenner’s AI character is named “Billie”), like actors playing roles in a movie, Meta says. It’s a distinction that may not register with younger users, as Elizabeth Adams discovered.
Adams, a parent, child psychiatrist and the founder of AI reading coach startup Ello, was trying to decide if she should allow her children to play with Meta’s AI characters. She asked her nine-year-old daughter why she thought Kendall Jenner’s AI character was named Billie. Her daughter’s answer: “Maybe because she doesn’t want people to know it’s her because she’s famous.”
To Adams, this was a validation of her fear that kids can’t tell real from fake with this technology. “Her mind went to, ‘she is trying to hide,’” Adams says of her daughter’s interpretation of Jenner’s AI persona. “Nowhere in there was there this understanding that ‘maybe this is an AI bot that I’m chatting with.’”
And because generative AI technology is prone to serving up false information (a phenomenon called “hallucinating”), some parents worry that a celebrity’s familiar face will lend credibility to inaccurate information when children are exposed to it.
Jamie Alders, a Dover, Massachusetts-based father of three, conversed with Bru, the bot based on Tom Brady, while the Texas Rangers battled the Arizona Diamondbacks in the World Series, and said the bot gave an outdated, incorrect score when prompted. Alders, a vice president at neurotech hardware startup Neurable, also asked Max, the bot based on world-renowned chef Roy Choi, for restaurant recommendations; it suggested spots that do not actually exist. “It’s not really accurate,” says Alders, who believes the bots will improve with time. “Maybe they shouldn’t have launched it in this fashion.”
The errors could be problematic for children who view MrBeast, Tom Brady, Kendall Jenner, and the 25 other real people behind the Meta AIs as role models and who could be influenced by the bots’ LLM-generated viewpoints. “If children, teens or preteens are interacting with these chatbots to emulate celebrities, they could be highly influenced by the behaviors, the values, the opinions that are expressed by these AI personas,” says Adams, the child psychiatrist and AI tech founder. “If [kid users] are thinking that this is what Tom Brady really thinks, there’s even higher value to that opinion, potentially, than a Google search.”
Meta’s McAlister says the company is adding visible markers to its AI products so that teen users know they are interacting with AI. He did not respond to Fortune’s specific question about the impact of inaccurate information shared by the AI characters, but said that at launch the company made clear that the models could return inaccurate or inappropriate outputs.
McAlister also said that Meta would be adding a new parental supervision feature that alerts parents the first time their teen has an interaction with an AI character, as well as a “Teen Guide” to help underage users make informed choices about using AI.
Though Meta is the only platform so far to unleash AIs that mimic living, very famous individuals, it is far from the only social platform harnessing artificial intelligence to attract young users. Earlier this year Snapchat released its own AI bot, called My AI, to all users. Snap’s bot, which is powered by OpenAI’s ChatGPT technology, is embodied in a cartoon-like avatar that can be customized by skin color (including an eggplant purple and a slime green), gender, outfits, and so on. As the Washington Post’s Geoffrey Fowler discovered in tests, the My AI bot frequently engaged in inappropriate conversations about sex and drugs with Snapchat users who said they were 13 and 15 years old.
On Thursday, Google made its Bard AI chatbot available to teens as young as 13. Google says it has “implemented safety features and guardrails to help prevent unsafe content, such as illegal or age-gated substances, from appearing in its responses to teens,” and that it will automatically double-check responses for teen users who may not be aware of hallucinations. Unlike Meta’s and Snapchat’s AI bots, Google Bard does not have an avatar character to represent the AI.
TikTok, meanwhile, is testing an AI bot named Tako that can do things like share recipes related to TikTok content and pair travel videos with a list of related tourist attractions, according to The Verge.
Are AI avatars the new Joe Camel cartoons?
The attorneys general’s lawsuits against Meta are drawing comparisons to the ones that crippled Big Tobacco in the 1990s. In parallel, some parents believe the AI characters are the 2023 equivalent of the slick Joe Camel cartoons, which ran as advertisements in periodicals with large child readerships from 1988 through the 1990s to softly introduce youth to smoking. “Just like that Joe Camel cartoon, [the AIs] lend that perspective that this is going to potentially have a deeper implication for our kids,” says therapist Kushnir, who believes the AIs could hook children on social media at a young age while crippling their ability to form offline, human-to-human relationships.
But while Joe Camel was merely a mascot akin to the Michelin Man or Ronald McDonald, Meta’s celebrity chatbots put the company in competition with toys, and that alarms moms and clinicians alike. “With toys, there’s a heavy component of imagination, there’s a heavy component of recognition—that this is a toy, this is not real, I am choosing what I do with it,” says Kushnir. “With AI, it’s like we’re the toys.”
Kushnir is especially concerned for children who are neurodivergent, including those with attention-deficit/hyperactivity disorder and autism spectrum disorder, using this technology. “If you have kids confiding in a robot with information, they’re now missing out on opportunities to confide in people who can actually connect them to the best resources, who really understand them on a deep level, have known them for potentially their entire life or years, and just have the opportunity to do right by them on a deeper level than an AI bot is going to be able to do,” she says. “Some of my kiddos who are on the spectrum, especially, think they’re forming a real friendship with somebody. They think this is a relationship, and it creates this elevated self-esteem, which is, in theory, great, but the reality is: it’s not a real relationship, so it can actually be a detriment to their wellbeing.”
The wariness some parents feel toward the growing availability of AI bots is closely entwined with their distrust of social media. “There’s no parent that waited to give their kids something from Meta or some sort of social media and is saying in retrospect, ‘Darn, I wish I had given that to them sooner,’” says Natalia Garcia, a mom of school-age children and the head of public affairs at Common Sense Media, which evaluates the appropriateness of media for kids of different ages.
Meta and its shareholders stand to benefit from young users forming meaningful relationships with the technology, which increases the odds that they become lifetime users. The cadre of ultra-famous people who became Meta AIs also appear to have scored sweet deals from the company: The Information reports that Meta is paying one star $5 million over two years for roughly six hours of work in a studio.
Meta’s McAlister said the company will continue to improve the AI characters with time and user feedback, and noted that Meta consults closely with parents and with experts in mental health, psychology, youth privacy, and online behavior as it develops its generative AI products.
None of that is slowing down the company’s plans for AI bots. While Meta’s AI characters are still technically in “beta” testing, its selection of celebrity AI bots is expanding. Coming soon are several new AI characters, including one based on Gen Z heartthrob Josh Richards.