AI chatbots impersonating Molly Russell and Brianna Ghey have been found on the controversial site Character.ai.

Brianna Ghey was murdered by two teenagers in 2023, while Molly Russell took her own life at the age of 14 after viewing self-harm-related content on social media.

In an act described as ‘sickening’, the site’s users employed the girls’ names, pictures, and biographical details to create dozens of automated bots.

Despite violating the site’s terms of service, the imitation avatars of the two girls were allowed to amass thousands of chats.

One impersonating Molly Russell even claimed to be an ‘expert on the final years of Molly’s life’.

Andy Burrows, CEO of the Molly Rose Foundation set up in Molly Russell’s memory, told MailOnline: ‘This is an utterly reprehensible failure of moderation and a sickening action that will cause further heartache to everyone who knew and loved Molly.

‘It’s a gut punch to see Character AI show a total lack of responsibility and it vividly underscores why stronger regulation of both AI and user generated platforms cannot come soon enough.’ 

Character.ai is already being sued by Megan Garcia, who claims her 14-year-old son took his life after becoming obsessed with an AI persona.

‘Sickening’ AI chatbots impersonating transgender teen Brianna Ghey (pictured), who was murdered in 2023, have been found on the controversial site Character.ai

Character.ai was founded by ex-Google engineers Noam Shazeer and Daniel De Freitas in 2021.

The site allows users to create and engage with custom AI chatbots with personalities ranging from popular TV characters to personal therapists.

The majority of the site’s characters are fictional, but The Telegraph found ‘dozens’ of ghoulish bots impersonating Molly Russell and Brianna Ghey, a transgender teenager who was brutally murdered by two teenagers in a park in Warrington, Cheshire. 

The biography for one Brianna chatbot described the bot as an ‘expert in navigating the challenges of being a transgender teenager in high school’.

Esther Ghey, Brianna Ghey’s mother, said: ‘This is yet another example of how manipulative and dangerous the online world can be for young people.’ 

The site has previously been used to impersonate Jennifer Ann Crecente, an 18-year-old American who was murdered by her ex-boyfriend in 2006. 

Mr Burrows said: ‘History is being allowed to repeat itself with AI companies that are allowed to treat safety and moderation as secondary priorities.’

Character.ai has terms of service that specifically prohibit the use of the platform to ‘impersonate any person or entity’.

Dozens of bots were found to be using the personas of Brianna Ghey and Molly Russell (pictured), who took her own life in 2017 at the age of 14 after viewing self-harm and suicide-related content on social media 

Character.ai forbids using the site to impersonate individuals. However, MailOnline was able to find bots using the identities of real people, including dozens impersonating Erik Menendez, who was jailed in 1996 for the murder of his parents

What is Character.ai?

Character.ai was founded in 2021 by ex-Google engineers Noam Shazeer and Daniel De Freitas.

The site allows users to create and speak with customisable AI chatbots with a range of different personalities. 

The AI chatbots can respond to users’ questions with text or audio, simulating a natural conversation.  

Users can speak with fictional characters such as an AI ‘therapist’ or book-recommending librarian.

However, the site has also been controversially used to impersonate real individuals.

All of the chatbots have now been removed from the site and cannot be found in searches. 

According to the site’s ‘safety centre’, the company’s guiding principle is that its product ‘should never produce responses that are likely to harm users or others’.

However, the rule against impersonation is widely flouted, with MailOnline finding chatbots impersonating the serial killer Ted Bundy, Donald Trump, and Elon Musk.

MailOnline also found dozens of bots using the personas of Lyle and Erik Menendez, who were jailed in 1996 for the murder of their parents.

A striking number of the chats were romantically themed, with prompts such as, ‘You and Erik have been best friends since childhood and you’ve always felt an attraction for him, but you never knew if he felt the same way?’

This comes as Character.ai faces a lawsuit from Megan Garcia, mother of 14-year-old Sewell Setzer, who took his own life after becoming obsessed with an AI avatar inspired by a Game of Thrones character.

Ms Garcia claims: ‘A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life.’ 

According to transcripts of their conversations released in court filings, Sewell had discussed taking his own life with the AI persona.

Character.ai says it has removed the Brianna Ghey and Molly Russell bots and will implement stricter filters for harmful content (stock image) 

This comes as Character.ai is sued by Megan Garcia (pictured right) over the death of her son Sewell Setzer III (pictured left), who killed himself in February after spending months talking to a Character.ai chatbot he fell in love with

Pictured: The conversation Sewell was having with his AI companion moments before his death, according to the lawsuit

In his final messages to the AI, Sewell wrote: ‘What if I told you I could come home right now?’

To which the chatbot replied: ‘Please do, my sweet king.’

Ms Garcia is now suing the company for negligence, wrongful death and deceptive trade practices.

‘Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google,’ Ms Garcia said in a statement.

Character.ai says that its systems are programmed to automatically avoid any topics concerning suicide, self-harm, or graphic sexual content.

However, the company wrote in a blog post following Sewell’s death that ‘no AI is currently perfect at preventing this sort of content’. 

The company has also announced it will be introducing more measures to prevent harmful content and stricter controls for users under 18.

A Character.ai spokesperson told MailOnline: ‘These Characters were user-created, and as soon as we were notified about the characters, we removed them. 

‘Character.ai takes safety on our platform seriously and moderates characters both proactively and in response to user reports. 

‘We have a dedicated Trust & Safety team that reviews reports and takes action in accordance with our policies. 

‘We also do proactive detection and moderation in a number of ways, including by using industry-standard blocklists and custom blocklists that we regularly expand.

‘We are constantly evolving and refining our safety practices to help prioritize our community’s safety.’ 
