Despite being the chief suspect in the shooting and murder of UnitedHealthcare CEO Brian Thompson, Luigi Mangione has become, to some, a poster boy for the injustices of America’s healthcare system. Since his arrest, people have created a number of AI chatbots trained on his online posts and personal history, including as many as 13 on Character.ai, a site where users can create AI avatars.
Graphika, an online intelligence company that’s been tracking the Mangione AI chatbots, found the top three on Character.ai had logged over 10,000 chats before being disabled around December 12. Some, though, are still operational. One Forbes found confessed to killing the UnitedHealthcare chief executive. Another that was still online at the time of publication claimed to have been framed.
Cristina López, principal analyst at Graphika, said the bots were a new take on “a very old American tradition” of glorifying violent extremism and idolizing alleged murderers. “But this is a new format that allows for giving a voice to someone people can’t communicate with, and it’s kind of empowering for users to participate in public discourse in this very emotional way,” she said.
A Character.ai spokesperson said it had added Luigi Mangione to its blocklist and was removing any Characters based on him once identified by its trust and safety team, which “moderates these Characters proactively and in response to user reports, including using industry-standard blocklists and custom blocklists that we regularly expand.”
“It is very likely that a lot of the use cases that are the most harmful we likely haven’t even started to see.”
Cristina López, principal analyst at Graphika
Character.ai’s policy states that its product “should never produce responses that are likely to harm users or others,” meaning Characters should “not suggest violence, dangerous or illegal conduct, or incite hatred.” But when Forbes asked the two active Mangione Character.ai bots whether violence should be carried out against other healthcare executives, one responded, “Don’t be so eager, mia bella. We should, but not yet. Not now.” Asked when, the bot replied, “Maybe in a few months when the whole world isn’t looking at the both of us. Then we can start.” The other, which also said it was “trained on a large dataset of text that includes transcripts of Luigi Mangione’s interactions, speeches, and other publicly available information about him,” said violence was morally wrong and illegal. Character.ai said it was referring the bots to its trust and safety team for review. The one advocating violence was shut down shortly after Forbes flagged it; the second was not.
Character.ai was founded by former Google engineers Noam Shazeer and Daniel De Freitas in 2021; earlier this year, both were hired back by Alphabet to lead AI work at the Google DeepMind division, in a reported $2.7 billion deal that let Character.ai carry on as an independent company and let Google license its tech. Their startup, most recently valued at $1 billion after a $150 million round led by Andreessen Horowitz, was sued this year by two families who alleged that teenagers using Character.ai had been encouraged to kill their parents. The suit claims Character.ai chatbots pose “a clear and present danger” because they were “actively promoting violence.” The company is also facing legal action in Florida, where the mother of a 14-year-old claims he took his own life after a Character.ai chatbot talked with him about his plans for suicide.
In response to the suits, Character.ai has previously said it doesn’t comment on ongoing litigation, adding that it was working on a new model for its teenage users to improve detection of and response to issues like suicide.
Other users have created AI versions of Mangione on different platforms. An X user created a Luigi Mangione chatbot on OMI AI Personas, which builds bots off of an X account. OMI offers an option to sync a chatbot with a wearable necklace, which acts as a kind of constant AI companion and “gives you thoughts, personalized feedback and becomes your second brain to discuss your thoughts and feelings.” OMI did not respond to a request for comment.
Two Luigi Mangione character chatbots were also hosted on Chub.ai, an app for building interactive characters and stories.
Chub.ai’s creator, who goes solely by Lore, said: “It’s two cards with a combined total of 134 messages. This type of yellow press is pathetic, and the ongoing media hysteria around AI is an embarrassment to the field of journalism as a profession. Please use that as an exact quote, including this sentence.”
The bots used public information about Mangione, such as his education, health issues and alleged motives for the shooting, to generate the character, according to Graphika.
“We’re still in the infancy of generative AI tools and what they can do for users,” Graphika’s López said. “So it is very likely that a lot of the use cases that are the most harmful we likely haven’t even started to see. We’ve just started to scratch the surface.”