Artificial intelligence (AI) is already helping to solve problems in finance, research and medicine.
But could it be reaching consciousness?
Dr Tom McClelland, a philosopher from the University of Cambridge, has warned that current evidence is ‘far too limited’ to rule this dystopian possibility out.
According to the expert, the only sensible position on the question of whether AI is conscious is one of ‘agnosticism’.
The main problem, he claims, is that we don’t have a ‘deep explanation’ of what makes something conscious in the first place, so can’t test for it in AI.
‘The best-case scenario is we’re an intellectual revolution away from any kind of viable consciousness test,’ Dr McClelland explained.
‘If neither common sense nor hard-nosed research can give us an answer, the logical position is agnosticism.
‘We cannot, and may never, know.’
Pictured: Terminator Genisys
AI companies are investing vast sums of money into pursuing ‘artificial general intelligence’ – the point at which AI can outperform humans in any area.
But as they work towards this goal, some also claim that increasingly sophisticated AI may develop consciousness.
This means AI could develop the capacity for perception and become self-aware.
While this idea might evoke visions of killer robots, Dr McClelland argues that AI could make this jump without us even realising, because we don’t really have an agreed-upon theory of consciousness to begin with.
Some theories say consciousness is a matter of processing information in the right way, and that AI could be conscious if only it could run the ‘software’ of a conscious mind.
Others argue it is inherently biological, meaning AI can only imitate consciousness at best.
Until we can figure out which side of the argument is right, we simply don’t have any basis on which to test for consciousness in AI.
In a paper published in the journal Mind and Language, Dr McClelland claims both sides of the debate are taking a ‘leap of faith’.
We can’t tell whether an AI, like in the sci-fi film Ex Machina (pictured), really has conscious experience or whether it is just simulating consciousness
Whether something is conscious radically changes the kinds of ethical questions we need to consider.
For example, humans are expected to behave morally towards other people and animals, because consciousness gives them ‘moral status’.
In contrast, we don’t have these same values towards inanimate objects, like toasters or computers.
‘It makes no sense to be concerned for a toaster’s well-being because the toaster doesn’t experience anything,’ Dr McClelland explains.
‘So when I yell at my computer, I really don’t need to feel guilty about it. But if we end up with AI that’s conscious, then that could all change.’
While that might make dealing with AI an ethical nightmare, the bigger risk may be that we start to consider AIs as conscious or sentient when they are not.
Dr McClelland explained: ‘If you have an emotional connection with something premised on it being conscious and it’s not, that has the potential to be existentially toxic.’
Worryingly, the philosopher says that members of the public are already sending him letters written by chatbots ‘pleading with me that they’re conscious’.
He added: ‘We don’t want to risk mistreating artificial beings that are conscious, but nor do we want to dedicate our resources to protecting the “rights” of something no more conscious than a toaster.’