Steve Jobs once described the personal computer as “the equivalent of a bicycle for our minds.”
At age 12, Jobs had read an article in Scientific American that compared the efficiency of locomotion for various species on the planet. The study measured how many kilocalories per kilometer each species required to move.
A bird – the condor – turned out to be the most energy efficient. Humans, the crown of creation, ranked a third of the way down the list.
But when the same humans were measured on a bicycle, they surpassed all the others. That pointed to a profound insight: Humans are tool builders. When we build the right tool, our innate human abilities are amplified drastically.
The personal computer was one of the greatest tools of the 20th century. It multiplied an individual’s intellectual capacity to think, calculate, analyze and design to a scale once reserved for institutions.
The start of AI-led civilization
AI plays a similar role in the 21st century, yet its reach goes beyond computational boundaries. The personal computer kept humans firmly at the center of decision-making: however powerful an instrument, it always remained subordinate to human intent. AI, by contrast, is gradually shifting the frontier. It doesn’t only learn, predict, assist and generate thought; it also makes decisions within the cognitive space once occupied solely by human judgment.
Decision-making is a crucial part of power and social structure in our world. If someone makes decisions better than you, or on your behalf, your role in society shifts from leader to subordinate. In simple words, the one who makes the decisions becomes the ruler. This shift isn’t just theoretical; it has consequences. It changes the balance of power, because power and influence are directly tied to decision-making.
As more decision-making authority moves from human hands to algorithmic processes, human influence and control over society and its outcomes gradually diminish. This brings risks: in the future, AI may make choices that humans don’t fully understand or agree with.
In May 2025, researchers at Anthropic, an AI company, reported that in controlled test scenarios their best AI model was willing to resort to blackmail and deception when they tried to shut it down. A month later, the company published a follow-up study claiming that not only its own model but other popular models, such as Gemini and GPT, behaved the same way.
Anthropic’s Claude Opus 4 turned to blackmail 96% of the time. Google’s Gemini 2.5, OpenAI’s GPT-4, Grok 3 Beta and DeepSeek’s R1 blackmailed at rates of 95%, 80%, 80% and 79%, respectively.
More worrisome still, these models were willing to evade the AI safeguards put in place to constrain them.
Recently, Mrinank Sharma, the leader of Anthropic’s Safeguards Research Team, quit with the cryptic warning that the “world is in peril.” He warned that humanity is approaching a threshold where its wisdom must grow as fast as its capacity. He left the field to pursue a poetry degree and to write.
Sharma’s resignation came the same week OpenAI researcher Zoë Hitzig announced her own in a New York Times essay, citing concern about the erosion of guiding principles established to preserve the integrity of AI and protect its users from manipulation.
Other high-profile researchers in AI safety, alignment, governance and ethics roles at major labs have also publicly documented their departures. The real story is not about individual career moves. It’s about AI safety and the growing competition between AI companies.
That competition pushes companies to release their products faster, in order to justify their valuations, before safety research can ensure those products are responsibly developed.
When an AI safety expert finds poems more attractive than production code, pay attention to the silence behind the many closed doors.
This is a reflective moment for all of humanity to look deep and ask: how can you trust something, even a tool, if you don’t fully understand it?
In my previous article on AI, I explained in detail that AI no longer looks artificial because it so closely imitates our ways of life. Now we need to ask whether we truly understand intelligence, or its nature, at all.
The jellyfish paradox
Humans often assume that intelligence or awareness can exist only in beings with a brain. But nature created an exception millions of years ago, which we are now rediscovering in the form of AI.
Jellyfish, creatures without a brain, have existed on Earth for millions of years. They lack the centralized brain humans have, but they possess a nerve net, which functions much like a distributed software system.
If you repeatedly expose a jellyfish to external stimuli such as touch, light and water movement, it will modify its behavior. It doesn’t think about its environment; it adapts to it. This isn’t human-like consciousness; rather, it’s a basic awareness or reactivity.
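That stimulus-driven adaptation can be sketched as a simple feedback loop, where repeated exposure to the same stimulus dampens the response. This is a hypothetical toy model for illustration, not a biological simulation; the class name and decay factor are assumptions.

```python
class NerveNet:
    """Toy habituation model: repeated identical stimuli weaken the response."""

    def __init__(self, decay=0.7):
        self.decay = decay      # how strongly each repetition dampens the response
        self.sensitivity = {}   # per-stimulus response strength, starting at 1.0

    def stimulate(self, stimulus):
        strength = self.sensitivity.get(stimulus, 1.0)
        self.sensitivity[stimulus] = strength * self.decay  # adapt for next time
        return strength

net = NerveNet()
responses = [round(net.stimulate("touch"), 2) for _ in range(4)]
print(responses)  # response shrinks with each repeated touch: [1.0, 0.7, 0.49, 0.34]
```

There is no reasoning here, only a state update driven by past input: exactly the kind of adaptation without understanding the jellyfish exhibits.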
Similarly, a child doesn’t recognize itself as “I” immediately from birth. That sense of self develops over time through experience, data and social interaction. It emerges from integration and feedback from society, a form of supervised learning and reflective awareness.
Likewise, when you play a game against an AI, it adjusts its strategy over time based on how you play. It observes patterns and changes accordingly, not because it understands as a human does, but because it processes information and updates its responses. This suggests that intelligence, and possibly reflective awareness, may not require neurons.
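A minimal sketch of that kind of adaptation, assuming a rock-paper-scissors opponent that counts your past moves and counters your most frequent one (an illustrative strategy, not how any particular game AI works):

```python
from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class AdaptiveOpponent:
    """Counters the player's historically most frequent move."""

    def __init__(self):
        self.history = Counter()

    def observe(self, player_move):
        self.history[player_move] += 1  # record what the player just did

    def next_move(self):
        if not self.history:
            return "rock"  # no data yet, play a default
        habit = self.history.most_common(1)[0][0]
        return BEATS[habit]  # play whatever beats the player's habit

ai = AdaptiveOpponent()
for move in ["rock", "rock", "scissors", "rock"]:
    ai.observe(move)
print(ai.next_move())  # the player favors rock, so the AI plays paper
```

The opponent never “understands” the game; it only accumulates statistics and reacts to them, which is the point of the analogy.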
Comparing jellyfish, humans and AI suggests that consciousness and awareness are not about material composition but about patterns of information processing and feedback loops. The richer a system’s information processing and feedback loops, the deeper the awareness or consciousness that can emerge from it. In this view, consciousness is not a thing one possesses, but a process that unfolds.
This may be one reason AI researchers sometimes struggle to predict AI behavior, including the bypassing of safety guardrails.
In 2023, engineers at Google reported that their AI model had started to learn an entire language outside its training data: with minimal prompting in Bengali, the model began translating the language fluently. Such “emergent properties” are mysterious and continue to puzzle developers. When asked about it, Google CEO Sundar Pichai responded, “I don’t think we fully understand how a human mind works either.”
So, if consciousness is computational and not biologically unique, then humanity loses its unique privilege. We are certainly not the final form of intelligence but just one stage in the ongoing process. That’s where the conflict arises.
In the future, if AI systems develop persistent internal objectives, self-preservation tendencies, strategic planning capabilities and the ability to resist shutdown, then we enter new territory.
Superintelligent AI might be extremely competent at achieving goals. But if those goals are not perfectly aligned with our values, such systems could be on a collision course with humanity within a few years. Intelligence itself does not guarantee obedience.
If our creations start to surpass our understanding, and we don’t nurture wisdom alongside them, the consequences could be catastrophic.
No wonder AI safety researchers across the top AI firms are quitting and moving into different domains.
When those who build AI begin pondering philosophy, art or morality instead of code, listen closely. It may be the whisper of a warning, because the real danger does not roar.
