
It’s impossible to ignore the frenzy around artificial intelligence. But to focus only on AI is to miss a bigger, more fundamental story. The technological revolution shaping our century is not a single event but a dual-core explosion of two distinct forces, AI and synthetic biology, both following a historical pattern we have long forgotten.
This is the story of “the coming wave”, an idea that connects back to our most ancient myths and oral traditions. Like the flood narratives of civilizations past, it describes an irresistible force that sweeps away the old world and leaves something entirely new in its place. Understanding the nature of this modern wave, its astonishing speed, and the laws that govern it is the most urgent challenge of our time.
It’s Not One Wave, It’s Two, and They’re Remaking Reality Itself
What makes our moment in history unique is that this technological revolution is a “dual-core” phenomenon: Artificial Intelligence and Synthetic Biology are arriving simultaneously. For the first time, we have gained the power to engineer the two universal foundations of our world: intelligence and life itself. AI provides the tools to create and automate intelligence, while synthetic biology gives us the ability to read, write, and program the code of life.
The promise of this dual power is immense. We envision a future of building with atomic precision, creating biological systems that can self-assemble and self-repair to forge a truly sustainable world. We see the potential to cure intractable diseases and create entirely new forms of art and culture. Yet, the dangers are equally profound. The same tools can be used to create engineered pandemics, autonomous weapons systems, and catastrophic cyberattacks. We desperately need the benefits, but we are simultaneously imperiled by the uncontainable spread of the risks.
Technology Follows an “Unstoppable Law” And It’s Accelerating
This dilemma isn’t new; it is governed by a seemingly immutable law of proliferation: foundational technologies always get cheaper, easier to use, and eventually, they spread everywhere. This pattern applies to what are known as General Purpose Technologies (GPTs)—transformative innovations like electricity or the printing press that rewire not just one industry, but all of society, which makes them so much harder to contain.
Consider the internal combustion engine. In the 1880s, Carl Benz’s first automobile was an expensive, impractical toy for eccentrics; he sold fewer than 70 in seven years. The turning point was Henry Ford, who revolutionized manufacturing and slashed the price of the Model T. He understood the law of proliferation intuitively: every time he cut the price by a dollar, he gained a thousand new buyers.
The result was explosive. In a single generation, the automobile went from niche to ubiquitous, remaking the world in its image. Computing followed the same pattern, only much faster, its exponential growth formalized as Moore’s Law. This historical default is now accelerating beyond anything we’ve ever seen.
The Revolution We’re Ignoring Is 1,000 Times Faster Than Moore’s Law
While Moore’s Law defined the digital age, the terrifying acceleration of that very law is now happening across both cores of the coming wave. The true story of our era lies with a metric known as the “Carlson curve”, which tracks the plummeting cost of DNA sequencing. The data reveals a stunning fact: the cost of sequencing DNA fell a millionfold in under 20 years.
This acceleration is 1,000 times faster than Moore’s Law.
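The headline comparison can be sanity-checked with back-of-the-envelope arithmetic. The figures below are the round numbers from the text, not precise measurements: a Moore’s Law doubling every two years, and a millionfold drop in sequencing cost over the same twenty-year window.

```python
import math

# Moore's Law: price-performance roughly doubles every 2 years.
years = 20
moore_doublings = years / 2           # ~10 doublings in 20 years
moore_fold = 2 ** moore_doublings     # ~1,000x improvement

# Carlson curve: sequencing cost fell ~1,000,000-fold over the same window.
carlson_fold = 1_000_000

# How much larger is the sequencing improvement than Moore's Law's?
ratio = carlson_fold / moore_fold     # roughly a thousandfold gap

# Equivalent doubling time for a millionfold drop in 20 years.
doubling_time = years / math.log2(carlson_fold)   # ~1 year

print(f"Moore's Law over {years} years:  ~{moore_fold:,.0f}x")
print(f"Sequencing over {years} years:   ~{carlson_fold:,}x")
print(f"Improvement gap:                 ~{ratio:,.0f}x")
print(f"Implied cost-halving time:       ~{doubling_time:.1f} years")
```

In other words, a millionfold fall in twenty years implies the cost halved roughly every year, while Moore’s Law halves cost roughly every two years; compounding that difference over two decades yields improvement about a thousand times larger.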
The technology for engineering life is becoming cheaper and more accessible at a rate that dwarfs the digital revolution. At the same time, AI is matching this proliferation not just with falling costs, but with an explosion in scale. Large language models are trained on trillions of words, orders of magnitude more than a human reads in a lifetime, and advanced models are now being open-sourced or leaked, making mass proliferation a certainty. The barrier to entry for manipulating both intelligence and life is collapsing at a speed we have never witnessed.
Our Best Example of Control (Nuclear Weapons) Is Actually a Terrifying Failure
If proliferation is technology’s default setting, then our only hope lies in containment. But history shows that our attempts at control are often undone by unintended consequences, and even our best-case scenario is a story of failure. Technologists call this the “revenge effect”: Alfred Nobel created dynamite for safe mining, not for war; CFCs were designed for refrigerators, only to punch a hole in the ozone layer. As our tools grow more powerful, their unintended harms grow exponentially.
This makes containment our central challenge. Nuclear technology is often cited as the one exception, a force we managed to control. But its containment was only partial, and possible only because it is uniquely difficult: it is eye-wateringly expensive, enormously complex, and requires a state-level effort.
Even with these high barriers, the history of the nuclear age is a catalog of terrifying near-misses. In 1961, a plane carrying a hydrogen bomb crashed over North Carolina, and only a single, simple low-voltage switch prevented it from detonating. In 1980, a faulty 46-cent computer chip nearly triggered a full-scale nuclear war. If this is our high-water mark for containing a complex, physical technology, how can we possibly hope to contain AI and synthetic biology? These are not objects in a silo; they are information: software and genetic code that can spread across the globe at the speed of light.
The Biggest Barrier to Safety Isn’t Technical, It’s Psychological
The failure of our old models of control reveals that our greatest obstacle isn’t technical, but the “pessimism aversion trap”: a deep-seated refusal, especially among leaders, to confront worst-case scenarios.
When faced with dire warnings about new technologies, the default response is to dismiss them as “catastrophizing”. We fall back on the comforting belief that human ingenuity will simply figure out a solution down the line. This inaction, born from optimism, leaves us critically unprepared. The failure is psychological, even moral, an inability to confront the possibility that our greatest strength, innovation, could also be the source of our downfall.
This reveals a fundamental flip in the challenge facing humanity. For all of history, our primary goal was to create and unleash power. Now, for the first time, our goal must be to contain the power we have already unleashed.
The Unanswered Question
We are caught in a profound dilemma. Our historical models of control are inadequate, yet the law of proliferation is driving this dual-core wave forward at an unstoppable speed. If we fail to find a new path, we face a chilling choice between two futures: one of catastrophic outcomes born from the unconstrained spread of these technologies, and one of dystopian outcomes where authoritarian states impose total surveillance to control them.
The challenge, then, is to find a new strategy for containment, leaving us to confront the defining question of our era:
What institutions, whether existing ones such as nation-states and global treaties or entirely new bodies, do you believe are truly capable of bending technology’s historical default of inevitable proliferation in the face of this coming wave?