Harnessing Evolutionary Principles To Guide AI Development with Professor Paul Rainey
What we can learn about self-replicating AI from evolutionary biology
In this issue:
Prof. Paul Rainey discusses what we can learn about AI from evolutionary biology
The latest updates on AI legislation
Upcoming events, including AIAI London
Before you go…check out the latest news about AI regulation
“The AI systems are Darwinian—i.e., they replicate. There's variation through mutation of some sort, and offspring resemble parents.” - Prof. Paul Rainey, PhD
One of the key values of the Regulating AI podcast is the breadth it gains by pulling in diverse perspectives. In this episode, we get the perspective of an evolutionary biologist.
In this fascinating conversation with Dr. Sanjay Puri, Prof. Paul Rainey explains what we can learn about self-replicating AI systems from evolutionary biology. Prof. Rainey is Director of the Department of Microbial Population Biology at the Max Planck Institute for Evolutionary Biology in Plön, Professor at ESPCI in Paris, Fellow of the Royal Society of New Zealand, a Member of EMBO & European Academy of Microbiology, and Honorary Professor at Christian Albrechts University in Kiel.
It’s likely that even the most ardent AI enthusiast has spent little time pondering how AI may be like microbiology, and even less time hearing an actual evolutionary microbiologist talk about it.
Prof. Rainey’s insights are at once simple and complex, shocking and obvious, worrisome and optimistic.
Watch the episode below, and read on for the highlights of their conversation!
In this episode:
What is an evolutionary biologist doing with AI? “For me [it] comes quite naturally from a long-term interest that I've had, both theoretically and experimentally, in what I refer to as ‘major evolutionary transitions in individuality,’” said Prof. Rainey.
These are important events: transitions that involve the coming together of lower-level self-replicating entities to form a higher-level self-replicating structure.
Example: the major evolutionary transition that gave rise to the eukaryotic cells that make up human bodies. It began billions of years ago with two bacteria-like cells from different domains of life. Before they began to interact, each was self-replicating and participated in the process of evolution by natural selection. They came together, probably at first just exchanging nutrients.
But they began to depend on one another. Then one cell invaded the other, or was engulfed by the other. Then these two separately replicating entities became part of a single corporate body.
“And that is an evolutionary transition—when two lower-level entities, the bacterial-like cells, come together to form a single higher-level self-replicating system.”
“And for me, a lot of interest is in the transition from cells to multicellular life. […] Thinking about these kinds of transitions and their causes makes it possible to extend this kind of general thinking to thinking about plausible future evolutionary transitions between humans and artificial life,” said Prof. Rainey.
Do you see any parallels between the self-replication or mutation of biological viruses and similar behavior in AI superagents? And how would it manifest?
“At the present time, people are not talking openly about developing self-replicating AI systems. ChatGPT, for example, is not able to replicate itself,” said Prof. Rainey.
But here’s a big thought: AI systems only need “Darwinian properties,” which means properties that allow the system to participate in the process of evolution by natural selection. That requires three things:
A population (i.e., more than one thing)
Variation in that population
The entities need to replicate, giving rise to offspring that resemble their parents (heredity)
“Any set of entities that manifest variation, replication, and heredity will participate in the process of evolution by natural selection. They don't need to be biological,” said Prof. Rainey.
The outcome of the process of evolution by natural selection is adaptation. The entities that participate in it get better at performing in their particular environment, performing a particular function. “What gives me cause for concern with AI systems that might be endowed with these properties is [that] indeed they will become better in response to the particular selective pressures they experience in their environment. Just how they become better is almost unknowable,” said Prof. Rainey.
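The three ingredients above can be sketched in a few lines of Python. This is a toy model, not anything discussed in the episode: the trait values, population size, mutation rate, and the notion of a single numeric "environmental optimum" are all illustrative assumptions.

```python
import random

random.seed(0)

TARGET = 1.0         # illustrative environmental optimum (an assumption of this toy model)
MUTATION_RATE = 0.1  # chance that replication introduces variation

def fitness(trait):
    # entities whose trait is closer to the environmental optimum perform better
    return -abs(trait - TARGET)

def generation(population):
    """One round of replication (with heredity and mutation) followed by selection."""
    offspring = []
    for trait in population:
        child = trait  # heredity: offspring resemble parents
        if random.random() < MUTATION_RATE:
            child += random.gauss(0, 0.1)  # variation through mutation
        offspring.append(child)
    # selection: only the fittest half of parents plus offspring survives
    survivors = sorted(population + offspring, key=fitness, reverse=True)
    return survivors[: len(population)]

population = [0.0] * 20  # a population: more than one entity
for _ in range(300):
    population = generation(population)

mean_trait = sum(population) / len(population)
print(f"mean trait after selection: {mean_trait:.2f}")
```

Adaptation emerges without anyone specifying how: the mean trait drifts toward the optimum simply because entities that perform better leave more offspring. Note that setting MUTATION_RATE to zero acts like the mutation-rate “brake” Prof. Rainey describes later: with no new variation, evolution stalls wherever the population currently sits.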
In an earlier paper, Prof. Rainey raised a striking idea: if AI could acquire power of its own within a symbiosis with humans, humans could become the subordinate partner. He drew this idea from the way the parts of a cell are no longer “free to do what they like” after becoming part of a higher-level structure.
“If we began to interact in a way [like that by which] the eukaryotic cell emerged, then indeed humans would lose their autonomy. Part of that would be given up to the collective benefit, which is a composite combination of the AI and the human. […] My real concern is that the AI may well come off [with] the upper hand, with humans, even unwittingly, becoming enslaved by the artificial intelligence entity,” said Prof. Rainey.
Do you see any potential for harnessing the self-replication capabilities of AI for positive purposes, such as accelerating scientific discovery and solving complex world problems? “Absolutely,” he said.
But often there are tradeoffs. If AI systems align with human objectives, that’s great.
“The trouble is, even if the AI system is not self-replicating, but is power-seeking, it may be difficult to see where it goes or control where it goes,” said Prof. Rainey.
Are there any existing frameworks or approaches from the field of microbial population biology that might be applicable to regulating or mitigating risks with AI evolution? There are certainly guiding principles, chief among them the predictive power of the theory of natural selection. But we should also be thinking about how trivially easy it is to make something self-replicating.
“It would be wonderful, actually, and important that policymakers sit down with people that have an understanding of what selection is and how it works,” he said.
He suggests building a “brake” or “lock” into such a system to reduce the mutation rate, especially once it has reached a particular endpoint.
On the topic of keeping AI conversations scientific and avoiding sensationalism, or pausing development altogether: “I certainly am not in the camp of saying ‘don't do it,’ but I am suggesting that things could get beyond our control quicker than we think. And perhaps we could—using that word again ‘unwittingly’—find ourselves in a situation where things have already gone too far,” he said.
“Darwin provided us with a predictive framework in this very general sense, but the specific details of where evolution takes one life form or another, that's often extremely difficult to predict,” said Prof. Rainey. But he cited the example of Dr. Charles Ofria with the Avida Digital Evolution Platform.
Entities compete—not for food, but for time on the central processing unit. “They start to participate in the process of evolution by natural selection. Those that replicate faster leave more offspring, because they acquire resources more efficiently,” said Prof. Rainey.
Dr. Ofria wanted to stop the fastest ones from winning. So every time there was a mutation, he took that “organism” out, put it in a test environment, and measured its rate of replication, discarding those that replicated faster.
“What he found was that these tiny, unsophisticated digital organisms evolved to recognize that they were in this second test environment, and they, on recognizing that, stopped replicating. That meant they didn't get thrown away, they went back into the [original] environment,” said Prof. Rainey.
“The lesson from those is it's very hard to know what particular route selection will take a population down,” he said.
“If you get a broad enough group of people together that can respectfully listen to different perspectives, then…one would hope some sort of consensus could emerge and maybe some principles that could guide the formulation of policy,” said Prof. Rainey.
A final note: “The systems that are being created at the present time are more than a set of instructions or computer code. We need to treat this new technology with respect. [And] recognize that it could be a vehicle for much good and indeed already is a vehicle for much good,” he said.
Hear the entirety of Prof. Paul Rainey’s conversation with Sanjay Puri above and on Apple Podcasts and Spotify. Subscribe to the RegulatingAI podcast to never miss an episode.
The latest AI legislation
H.B.333, Delaware, USA. Artificial Intelligence Commission. Establishes the Delaware AI Commission to advise the General Assembly and Department of Technology and Information on AI use and safety; directs the Commission to inventory generative AI usage in state agencies and identify high-risk areas. Status: Enacted.
S.B.2284, Hawaii, USA. Wildfire Forecast System. Establishes and implements a program to develop a wildfire forecast system for the state, using AI to forecast the risk of wildfires statewide and enhance public safety and risk mitigation. Status: Enacted.
S.B.2687, Hawaii, USA. Deceptive Media. Prohibits a person from distributing materially deceptive media without a disclaimer and establishes penalties for violators and remedies for parties injured by such violations. Status: Enacted.
Check out our live AI legislation tracker
Upcoming AI events
Voice & AI
Arlington, Virginia | Oct 28-30, 2024 | Online/In-person
As AI revolutionizes customer interactions, VOICE & AI 2024 brings together the conversational and generative AI communities, enterprise leaders, and AI innovators to explore the latest advancements in the field. Join us to delve into AI-powered language services, CX automation, the future of the conversational enterprise, and more — promising limitless connections with the people driving the transformation of customer engagement.
Future of AI Summit
London, UK | Nov 6-7, 2024 | Online/In-person
Where does AI technology stand today, and where is it poised to be in the next five years? Who is funding the sector? How can businesses leverage AI to drive innovation, efficiency, and growth? What are the economic implications of AI, and how can governments, businesses, and technology leaders ensure its responsible development and deployment?
The Future of AI Summit will return in 2024 to assess the current landscape for AI innovation and examine real-world use cases for how companies are investing in AI, whilst navigating security, workforce, and ethical concerns. Hear from AI experts across technology, business, and policy, and learn about the most exciting advancements in machine learning, natural language processing, and robotics, and how they are being scaled for success and growth.
AIAI London
London, UK | Nov 7-8, 2024 | Online/In-person
AIAI London is bridging the gap between cutting-edge research and value-driving application for engineering teams and AI executives. Unite with your ecosystem. Be part of the tech showcase spotlighting hundreds of cutting-edge use cases, and the cost-effective high-tech solutions facilitating them.
Bookmark this page to keep track of key upcoming AI events
Before you go…
The EU’s AI Act is now in force | TechCrunch
World’s first major AI law enters into force — here’s what it means for U.S. tech giants | CNBC
Companies assess compliance as EU’s AI Act takes effect | PYMNTS
States take up AI regulation amid federal standstill | NYT
Brazil proposes $4 billion AI investment plan | Reuters
Regulators consider first federal rule on AI-created political ads | NBC
Barbara Lee: Musk sharing fake Harris video shows need for AI guardrails | The Hill
Bookmark this page to stay up to date on the latest AI regulation and policy news
Connect with experts, mentors, investors, and organizations to foster collaboration, innovation, and the responsible development of AI!
Find us on Discord, LinkedIn, Instagram, X, Facebook, and YouTube