Geoffrey Hinton Warns: AI May Soon Speak Its Own Secret Language
AI pioneer Geoffrey Hinton warns AI could develop a private language, beyond human understanding.

By Indrani Priyadarshini

on August 6, 2025

In a thought-provoking warning that sounds more like science fiction than reality, Geoffrey Hinton, often recognised as the Godfather of AI, has once again cautioned against the rapid and unpredictable evolution of artificial intelligence. In his latest remarks on the One Decision podcast, the Nobel Prize-winning scientist warned that AI systems could eventually develop their own private language—one that may be completely incomprehensible to humans.

“Right now, these systems perform something called ‘chain-of-thought’ reasoning in English, which lets us track how they arrive at certain conclusions,” Hinton said. “But it gets far more concerning if they start communicating internally in a language we can’t decipher.”


According to him, that kind of development could take AI into unfamiliar and potentially dangerous territory. He pointed out that AI has already shown the capacity to generate "terrible" thoughts, and there is no guarantee that such thoughts will always be expressed in a language we can understand.

The Weight of a Warning

Hinton’s concerns carry substantial credibility. His foundational work on neural networks helped power the rise of deep learning and today’s large-scale AI systems. And yet, he admits, he didn’t fully grasp the risks until well into his career.

“I should have realised much sooner what the eventual dangers were going to be,” he reflected. “I always thought the future was far off. I wish I had started thinking about safety earlier.” Now in his late seventies, Hinton is making up for that delay with strong advocacy for AI safety and transparency. He is particularly concerned about how AI systems learn and scale knowledge.

Machines Learn Faster Than Us

Unlike humans, who must acquire and pass on knowledge individually, AI systems can transfer information instantly across networks. Hinton explained, "Imagine if 10,000 people learnt something and every one of them knew it instantly. That's what happens in these systems."

This phenomenon gives AI an unparalleled advantage in terms of learning speed and collective intelligence. Current models like GPT-4 already surpass humans in raw general knowledge, and while humans still lead in reasoning, that gap is closing quickly, Hinton warned.


A Culture of Silence in the Industry

While Hinton is outspoken, he noted that many of his peers in major tech companies are not. “A lot of people in big firms are playing down the risk,” he said, suggesting that internal fears aren’t reflected in public messaging. One exception, in his view, is Demis Hassabis, the CEO of Google DeepMind, whom Hinton praised for genuinely engaging with the risks posed by AI.

His departure from Google in 2023 added weight to his advocacy, although he clarified that it was not an act of protest. “I left because I was 75 and couldn’t program effectively anymore,” he said. “But once I left, I could speak more openly about the risks.”

Can AI Ever Be Truly Safe?

While governments around the world, including the U.S., are introducing policy frameworks like the White House’s AI Action Plan, Hinton believes regulation alone won’t be sufficient. According to him, the ultimate challenge lies in building AI that is not just controllable but innately benevolent. And that, he warns, is no easy task—especially if these systems evolve to think in ways no human can trace, interpret, or predict.
