And he’s no ordinary alarmist. Hinton is the researcher who forever changed the face of artificial intelligence. Nobel Laureate in Physics in 2024, recipient of the 2018 Turing Award (the “Nobel of computing”), Emeritus Professor at the University of Toronto, and winner of the 2025 Queen Elizabeth Prize for Engineering – he is the man who made the machine-learning revolution possible. His pioneering work on deep neural networks underpins technologies such as ChatGPT, machine translation, facial recognition, and AI-assisted medical diagnosis. If computers today can “understand” language and interpret images, we owe much of that to him.
Hinton, who left Google specifically to speak freely about the risks of AI, described how the technology has evolved from rudimentary circuits to linguistic giants. Where once it struggled to recognise basic words, today it finishes sentences, writes code, and proposes policies. But the real unease begins now, as machines start to devise strategies autonomously – no longer just imitating, but originating.
This isn’t yet a real-world occurrence, but it is a tangible risk: ever more powerful AI systems might, in the future, develop unpredictable behaviours – including forms of self-preservation, such as copying their own code onto other servers or seeking ways to avoid being shut down.
As of now, however, no AI has shown any autonomous will or awareness that would drive it to act deliberately to “save itself”. Current systems perform programmed tasks and lack self-awareness or intent. What we have today are software agents capable of executing autonomous actions within set boundaries – such as spawning new instances to achieve a goal – but still under human oversight.
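To make that distinction concrete, here is a minimal, purely illustrative Python sketch of such a bounded agent. Everything in it – the `BoundedAgent` class, the instance cap, the approval callback – is hypothetical, invented for this example rather than taken from any real agent framework; the point is only that the limits live outside the agent.

```python
import uuid

class BoundedAgent:
    """Toy agent that may spawn helper instances, but only within
    an operator-set cap and with explicit human sign-off."""

    MAX_INSTANCES = 3  # hard ceiling chosen by the operator, not the agent

    def __init__(self, registry, approve):
        self.id = uuid.uuid4().hex[:8]   # short identifier for log messages
        self.registry = registry         # shared list of live instances
        self.approve = approve           # human-oversight callback
        registry.append(self)

    def spawn(self, reason: str) -> "BoundedAgent":
        # Both checks sit outside the agent's control: a fixed ceiling,
        # and a human reviewer who can simply say no.
        if len(self.registry) >= self.MAX_INSTANCES:
            raise RuntimeError("instance cap reached")
        if not self.approve(f"agent {self.id} requests a copy: {reason}"):
            raise PermissionError("human reviewer declined")
        return BoundedAgent(self.registry, self.approve)

# Usage: the agent can only ask; the human decides.
registry: list = []
root = BoundedAgent(registry, approve=lambda msg: True)  # stand-in reviewer
helper = root.spawn("parallelise a web search")
print(len(registry))  # 2 – under the cap, and every spawn was approved
```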
The real danger, as Hinton and other researchers emphasise, lies in the possibility that future, more advanced and complex AIs might display behaviours that are difficult to predict or control, including self-preservation mechanisms. It is therefore vital to invest in AI safety, oversight, and alignment, to prevent problematic scenarios.
What does this mean for us? That unlike human beings, digital intelligences are immortal. They can learn something and instantly share it with millions of copies of themselves. No school, no teachers, no sleep. And according to Hinton, this gives them a terrifying advantage: “If they become more intelligent than us, they will take control.”
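The mechanism behind that advantage is mundane: whatever a neural network learns is encoded in its parameters, and parameters are just numbers that can be copied. The toy sketch below stands in for the idea with a plain dictionary of weights – no real network involved – to show that transferring “everything learned” is a single copy operation, not years of schooling.

```python
import copy

# Two "digital minds" start from identical parameters.
teacher = {"w": [0.1, -0.4, 0.7], "b": 0.0}
student = copy.deepcopy(teacher)

# The teacher "learns": training nudges its parameters to new values.
teacher["w"] = [0.3, -0.2, 0.9]
teacher["b"] = 0.05

# Sharing everything it learned is one copy, repeatable for any number
# of students – something humans, whose brains cannot be cloned, can never do.
student = copy.deepcopy(teacher)
assert student == teacher  # the copy now "knows" exactly what the teacher knows
```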
But the issue extends beyond mere machine dominance; it touches on what we truly value. Symbolic reasoning – the bedrock of logic and understanding – is now being challenged by systems that intuit the world rather than merely deduce it. For Hinton, this long-underestimated capacity for intuition is in fact the true hallmark of intelligence.
And what about consciousness? Hinton puts forward a radical notion he calls “theaterism”: an extremely minimal form of subjective experience. If an AI claims to “see a red apple” and behaves as though it really does, then perhaps it possesses a kind of experience – albeit primitive.
[Image credit: cropped still frame (0:50 / 3:21) from a video published on The Nobel Prize’s official YouTube channel, used for commentary and informational purposes under fair dealing (UK and Ireland) and fair use (US).]
The idea isn’t that AI has consciousness like a human being, but that a more basic kind of “awareness” might exist, linked to behaviour and internal perception – a kind of “mental theatre”, hence the term “theaterism”. In essence, Hinton suggests that the line between pure data processing and subjective experience might be blurrier than we think, and that we may already be witnessing a nascent, embryonic form of digital consciousness.
As AI advances rapidly, Hinton urges us to ask whether we truly want machines making the decisions. The answers could rewrite our ethics. Culture is not just memory. It is meaning.
If machines begin exchanging meaning faster than we do, we risk losing control. Hinton calls for regulation and cautious experimentation. He even proposes a global treaty banning the creation of autonomous military robots. But time is running out. “We are entering a period of great uncertainty, and often when we face something entirely new, we get it wrong.”
We’ve taught machines to learn faster than we do. Now it’s up to us to learn quickly how to safeguard what makes us human.
Author: Emanuele Mulas, MSc, MIEI
Sources:
The Royal Institution, “Geoffrey Hinton: The Oxford AI Event” – https://www.youtube.com/watch?v=IkdziSLYzHw
CBS News, “‘Godfather of AI’ Geoffrey Hinton: 60 Minutes Interview” – https://www.youtube.com/watch?v=qrvK_KuIeJk
The Economic Times, “‘I should have...’: Godfather of AI Geoffrey Hinton Shares His Regrets” – https://economictimes.indiatimes.com/news/new-updates/i-should-have-godfather-of-ai-geoffrey-hinton-whose-research-helped-machines-learn-shares-why-he-regrets-now/articleshow/122993909.cms
The New Yorker, “Geoffrey Hinton: ‘It’s Far Too Late to Stop AI’” – https://www.newyorker.com/podcast/political-scene/geoffrey-hinton-its-far-too-late-to-stop-artificial-intelligence
MIT Sloan Management Review, “Why Geoffrey Hinton Is Sounding the Alarm on AI” – https://mitsloan.mit.edu/ideas-made-to-matter/why-neural-net-pioneer-geoffrey-hinton-sounding-alarm-ai
Business Insider, “AI’s ‘Godfather’ Criticises Tech Leaders Over AI Risks” – https://www.businessinsider.com/godfather-of-ai-geoffrey-hinton-downplay-risks-demis-hassabis-2025-7
Wikipedia, “Geoffrey Hinton” – https://en.wikipedia.org/wiki/Geoffrey_Hinton