Our Evolutionary Past Can Teach Us about AI’s Future

Evolutionary biology offers warnings, and tips, for surviving the advent of artificial intelligence

Illustration: humanity's evolutionary history, from a prehistoric man carrying a crude torch to a present-day person holding a smartphone.

As artificial intelligence advances, experts have warned about its potential to cause human extinction. Exactly how this might come about is a matter of speculation—but it’s not hard to see that intelligent robots could build more of themselves, improve on their own designs and pursue their own interests. And that could be a threat to humanity.

Last week, an AI Safety Summit was held at Bletchley Park in the U.K. It sought to address some of the threats associated with the most advanced AI technologies, among them “loss of control” risks—the possibility that such systems might become independent.

It’s worth asking what we can predict about such scenarios based on things we already know. Machines able to act independently and upgrade their own designs would be subject to the same evolutionary laws as bacteria, animals and plants. Thus evolution has a lot to teach us about how AI might develop—and how to ensure humans survive its rise.


A first lesson is that, in the long run, there are no free lunches. Unfortunately, that means we can’t expect AI to produce a hedonistic paradise where every human need is met by robot servants. Most organisms live close to the edge of survival, eking out an existence as best they can. Many humans today do live more comfortable and prosperous lives, but evolutionary history suggests that AI could disrupt this. The fundamental reason is competition.

This is an argument that traces back to Darwin, and applies more widely than just to AI. However, it’s easily illustrated using an AI-based scenario. Imagine we have two future AI-run nation-states where humans no longer make significant economic contributions. One slavishly devotes itself to meeting every hedonistic need of its human population. The other puts less energy into its humans and focuses more on acquiring resources and improving its technology. The latter would become more powerful over time. It might take over the first one. And eventually, it might decide to dispense with its humans altogether. The entities in this example need not be nation-states for the argument to work; the key thing is the competition. One takeaway from such scenarios is that humans should try to keep their economic relevance. In the long run, the only way to ensure our survival is to actively work toward it ourselves.

Another insight is that evolution is incremental. We can see this in major past innovations such as the evolution of multicellularity. For most of Earth’s history, life consisted mainly of single-celled organisms. Environmental conditions were unsuitable for large multicellular organisms due to low oxygen levels. However, even when the environment became more friendly, the world was not suddenly filled with redwoods and whales and humans. Building a complex structure like a tree or a mammal requires many capabilities, including elaborate gene regulatory networks and cellular mechanisms for adhesion and communication. These arose bit by bit over time.

AI is also likely to advance incrementally. Rather than a pure robot civilization springing up de novo, it’s more likely that AI will integrate itself into things that already exist in our world. The resulting hybrid entities could take many forms; imagine, for example, a company with a human owner but machine-based operations and research. Among other things, arrangements like this would lead to extreme inequality among humans, as owners would profit from their control of AI, while those without such control would become unemployed and impoverished.

Such hybrids are also likely to be where the immediate threat to humanity lies. Some have argued that the “robots take over the world” scenario is overblown because AI will not intrinsically have a desire to dominate. That may be true. However, humans certainly do—and this could be a big part of what they would contribute to a collaboration with machines. With all this in mind, perhaps another principle for us to adopt is that AI should not be allowed to exacerbate inequality in our society.

Contemplating all this may leave one wondering if humans have any long-term prospects at all. Another observation from the history of life on Earth is that major innovations allow life to occupy new niches. Multicellularity evolved in the oceans and enabled novel ways of making a living there. For animals, these included burrowing through sediments and new kinds of predation. This opened up new food options and allowed animals to diversify, eventually leading to the riot of shapes and lifestyles that exist today. Crucially, the creation of new niches does not mean all the old ones go away. After animals and plants evolved, bacteria and other single-celled organisms persisted. Today, some of them do similar things to what they did before (and indeed are central to the functioning of the biosphere). Others have profited from new opportunities such as living in the guts of animals.

Hopefully some possible futures include an ecological niche for humans. After all, some things that humans need (such as oxygen and organic food), machines do not. Maybe we can convince them to go out into the solar system to mine the outer planets and harvest the sun’s energy. And leave the Earth to us.

But we may need to act quickly. A final lesson from the history of biological innovations is that what happens in the beginning matters. The evolution of multicellularity led to the Cambrian explosion, a period more than 500 million years ago when large multicellular animals appeared in great diversity. Many of these early animals went extinct without descendants. Because the ones that survived went on to found major groupings of animals, what happened in this era determined much about the biological world of today. It has been argued that many paths were possible in the Cambrian, and that the world we ended up with was not foreordained. If the development of AI is like that, then now is the time when we have maximum leverage to steer events.

Steering events, however, requires specifics. It is all well and good to have general principles like “humans should maintain an economic role” and “AI should not exacerbate inequality.” The challenge is to turn those into specific regulations governing the development and use of AI. We’ll need to do that even though computer scientists themselves don’t know how AI will progress over the next 10 years, much less over the long term. And we’ll also need to apply the regulations we come up with relatively consistently across the world. All of this will require us to act with more coherence and foresight than we’ve demonstrated when dealing with other existential problems such as climate change.

It seems like a tall order. But then again, four or five million years ago, no one would have suspected that our small-brained, relatively apelike ancestors would evolve into something that can sequence genomes and send probes to the edge of the solar system. With luck, maybe we’ll rise to the occasion again.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.