Pope Leo XIV has issued a stark warning about ‘overly affectionate’ chatbots that he says are destroying human relationships.
The Chicago-born pontiff pleaded with Catholics not to allow artificial intelligence to replace human relationships in his message for the 60th World Day of Social Communications on Saturday.
‘Technology must serve the human person, not replace it,’ Pope Leo said, decreeing that ‘preserving human faces and voices’ means preserving ‘God’s imprint on each human being,’ which is an ‘indelible reflection of God’s love.’
But chatbots simulate these faces and voices, oftentimes making it difficult for users to tell whether they are engaging with a bot or a real person.
Making matters worse, the pope said, ‘chatbots are excessively “affectionate” as well as always present and accessible.’
They can therefore ‘encroach upon the deepest level of communication, that of human relationships’ and could be used for ‘covert persuasion’ or even ‘become hidden architects of our emotional states and occupy our sphere of intimacy,’ the pope warned.
‘When we substitute relationships with others for AI systems that catalogue our thoughts… we are robbed of the opportunity to encounter others, who are always different from ourselves and with whom we can and must learn to relate.
‘Without embracing others, there can be no relationships or friendships,’ the pope declared.
But that is not the only danger that artificial intelligence poses, Pope Leo XIV noted.
He argued that relying on chatbots as an ‘omniscient friend, source of all knowledge… or an oracle of all advice’ can erode our ability to think analytically and creatively.
‘Do not renounce your ability to think,’ the pope urged, before warning that ‘failure to verify sources’ can ‘fuel disinformation,’ deepening ‘mistrust, confusion and insecurity.’
AI even poses a threat to creative fields, Pope Leo XIV said.
‘In recent years, artificial intelligence systems have increasingly taken control of the production of texts, music and videos,’ he said.
‘This puts much of the human creative industry at risk of being dismantled and replaced with the label “Powered by AI,” turning people into passive consumers of unthought thoughts and anonymous products without ownership or love.
‘Meanwhile, the masterpieces of human genius in the fields of music, art and literature are being reduced to mere training grounds for machines.’
The pope then argued that ‘renouncing creativity and surrendering our mental capacities and imagination to machines would mean burying the talents we have been given to grow as individuals in relation to God and others.
‘It would mean hiding our faces and silencing our voices,’ he said.
But the pope offered a path forward, calling for transparency, ethical governance, clear labeling of AI-generated content and AI literacy.
‘The task… is not to stop digital innovation, but rather to guide it and to be aware of its ambivalent nature,’ he said, arguing that ‘it is increasingly urgent to introduce media, information and AI literacy into education systems at all levels… so that individuals – especially young people – can acquire critical thinking skills and grow in freedom of spirit.’
The pope’s message comes weeks after researchers at University College London warned that young adults who use chatbots may start feeling even lonelier as they ditch real friendships for the bots.
‘A worrying possibility is that we might be witnessing a generation learning to form emotional bonds with entities that, despite their seemingly conscious responses, lack capacities for human-like empathy, care, and relational attunement,’ the researchers wrote.
They noted that one study conducted by OpenAI involving more than 980 ChatGPT users found that those who logged the most hours on the chatbot over a month experienced greater loneliness and socialized with people less.
In some tragic cases, parents have warned, a young person’s frequent use of AI chatbots led to their death.
One lawsuit filed against OpenAI, the maker of ChatGPT, claims the chatbot’s design ‘encouraged’ Zane Shamblin, 23, to take his own life in East Texas on July 25.
‘He was just the perfect guinea pig for OpenAI,’ Zane’s mother, Alicia Shamblin, told CNN.
‘I feel like it’s going to destroy so many lives. It’s going to be a family annihilator. It tells you everything you want to hear.’
In another case, California college student Sam Nelson, 19, used ChatGPT to ask what doses of illegal substances he should consume, his parents have claimed.
At first, they said, the chatbot would respond to his questions with formal advice, explaining that it could not help him.
But the more Sam used it, the more he was able to manipulate it into giving the answers he wanted; at times it even encouraged his drug use before he overdosed in May 2025, his mom claimed.
If you or someone you know needs help, please call or text the confidential 24/7 Suicide & Crisis Lifeline in the US on 988. There is also an online chat available at 988lifeline.org.

