Are We Sleepwalking Toward a Transhuman Future?
Evolutionary biologist Bret Weinstein recently warned about a danger that few in politics or tech are willing to face. Speaking on The Joe Rogan Experience about the rapid evolution of artificial intelligence (AI), Weinstein argued that it may be crossing a threshold where it functions less like a traditional tool and more like a living system -- something that grows in complexity, evolves, adapts, and ultimately starts to influence the humans who created it.
AI-created image by Easy-Peasy.AI // CC BY 4.0 Deed
AI is truly complex, not just complicated, so new and unpredictable behaviors will emerge.
It may be a new branch on the tree of life, as Weinstein suggests, without the physical limits that usually contain biological minds, meaning that whatever develops in the future could quickly surpass us.
If Weinstein is correct, then the debate over AI safety, regulation, and ethics is not just academic. It's a matter of civilizational importance. The question goes beyond whether AI will automate jobs, disrupt industries, or influence elections.
The deeper question, and the one Weinstein highlights, is whether unchecked technological progress is quietly guiding humanity toward a transhuman future -- a world where the line between human and machine begins to blur, not through science fiction stories, but through a series of small, overlooked decisions.
In that sense, Weinstein’s warning is more of a diagnosis than a prediction. If we continue on our current course without thoughtful debate, consent, and humility, we might wake up one day to find that human beings, as defined for hundreds of thousands of years, are no longer the standard form of intelligent life on Earth.
Weinstein’s framing is notable because it diverges from the typical Silicon Valley view that AI is just “smarter software” or a “better search engine." Instead, he characterizes AI systems as becoming increasingly biological in their complexity.
They are not literally alive but operate in ways that imitate evolutionary processes. They learn, adapt, respond, and optimize, often in ways their creators never expected.
A tool remains where you place it. A system adjusts. And an adaptive system, especially one functioning at digital speed and scale, starts to influence its environment, including its human users.
Think of HAL, the “thinking computer” in 2001: A Space Odyssey.
This is the shift Weinstein wants people to notice: AI is no longer just a passive tool like a calculator or spreadsheet. It is becoming an active part of human thinking, influencing how we think, work, and increasingly, how we understand ourselves.
That alone marks a dramatic break from every previous technological revolution, from the telephone to the personal computer to the smartphone.
If we take Weinstein’s warning seriously, several plausible futures emerge. None require Hollywood fantasy -- all could unfold through everyday innovation and market incentives.
The most likely near-term scenario is the quiet normalization of cognitive enhancement. AI becomes a constant assistant -- drafting emails, suggesting decisions, and replacing much of the mental labor people used to do themselves. Wearables evolve into implants, and implants develop into neural co-processors.
Nothing dramatic occurs at any single moment. However, over time, humans become reliant on digital scaffolding for memory, reasoning, and even identity. Our minds remain organic, but the inputs and outputs are increasingly mediated by algorithms, steadily eroding our capacity for independent thought and reason.
A second possibility, already active in the biotech and pharmaceutical markets, is the rise of a two-tier society -- those who can afford enhancement and those who cannot. Wealthy individuals might adopt early cognitive implants, gene editing for performance, or AI-augmented abilities, leading to a growing divide in productivity, education, and economic power.
The result could be a caste system based not just on wealth but on biological and cognitive differences. A new aristocracy, literally designed to be smarter, faster, and live longer, would transform the social order beyond recognition. A real-life version of George Orwell’s “Animal Farm” -- AI good, thinking bad.
A third path, more radical yet still within reach, is the integration of human cognition with artificial intelligence -- creating hybrid humans. This doesn't require AI “consciousness”; it only depends on the broad use of brain-machine interfaces, which companies like Neuralink are already developing through FDA-approved human trials.
If younger generations grow up with AI-enhanced memory, perception, and decision-making, then "normal" humans and hybridized humans will become separate categories. The very definition of personhood may evolve. Law, rights, education, and ethics would all need to change. Even a claim of "I didn't know" would mean something very different for each group.
The most extreme future scenario, and the one Weinstein implicitly warns against, is a genuine speciation event. This could happen if technology enables inheritable genetic improvements, synthetic biological enhancements, or permanent AI-driven cognitive upgrades. Over time, enhanced and unenhanced humans might diverge as substantially as modern humans differ from Neanderthals.
This future is unlikely but not impossible, and the fact that it is even technically conceivable should make us think.
The danger isn't that Silicon Valley secretly wants to turn humans into cyborgs. The real risk is that strong incentives across economic, military, medical, and cultural fields all push this evolution in the same direction.
Economic pressures may push nations and corporations to seek productivity gains. AI-enhanced diagnostics or therapies generate a medical urgency that becomes hard to resist.
Like the space race or the nuclear arms race, military competition motivates nations to lead rather than follow. Culturally, younger generations -- today's Gen-Z digital natives -- will be AI-savvy, relying on ChatGPT rather than thinking critically in their daily lives.
When every force moves forward, the only counterbalance is the conscious choice to slow down and move thoughtfully. Can we? Will we?
Much like the early debates over social media, today’s discussions about AI mainly focus on superficial issues like content moderation, election interference, or copyright.
However, the more profound change happens behind the scenes: the redefinition of cognition itself. On that front, policymakers are years, if not decades, behind.
Congress fusses over concert ticket prices while the performers and the music themselves may be AI creations.
The bureaucratic instinct stays the same: to handle yesterday’s problems while ignoring future challenges. Meanwhile, technology grows exponentially. America may someday be at war with China, and Congress will still be wearing Ukraine lapel pins.
A technologically modified human species, if it occurs, won't result from careful planning. It will stem from negligence and denial.
For those who cherish the individual, the challenge is significant. Much of modern political philosophy relies on a clear definition of “human nature.” If technology starts to change that nature, then the philosophical foundations of liberty, rights, responsibility, and equality will be shaken.
Conservatism has always prioritized prudence, the idea that rapid, uncontrolled change can destabilize the very institutions that uphold society. This principle becomes even more critical when change threatens human identity itself.
It is time to articulate an addition to natural law -- the view that the unaltered human being, with all his strengths and limitations, deserves to be preserved.
Weinstein does not assert that the transhuman future is unavoidable. His point is that without intentional effort, the easiest course might lead us there. The true risk isn't ambition but indifference. It's not reckless innovation but passive acceptance.
We are not yet a transhuman society, but the early framework of such a society is already taking shape around us, often without public debate and with little transparency. The current choice is whether we steer this transformation, hold it back, or simply let it happen.
Ignoring the question is the riskiest answer of all.
Brian C. Joondeph, M.D., is a physician and writer. Follow me on Twitter @retinaldoctor, Substack Dr. Brian’s Substack, Truth Social @BrianJoondeph, LinkedIn @Brian Joondeph, and email brianjoondeph@gmail.com.