Hold on to your tinfoil hats, folks: Robots are taking over the world after climate change forces global mass extinction. Or, rather, with enough conjecture, coincidence, and frivolous shoehorning, you too could wring that argument out of a new paper by researchers at the IT University of Copenhagen and the University of Texas at Austin. If that’s what you wanted to do with your afternoon.

But you don’t need to — that’s what The Washington Post is for:


We’ve already heard of all the nasty consequences that could occur if the pace of global climate change doesn’t abate by the year 2050 — we could see wars over water, massive food scarcity, and the extinction of once populous species. Now add to the mix a potentially new wrinkle on the abrupt and irreversible changes – superintelligent robots would be just about ready to take over from humanity in the event of any mass extinction event impacting the planet.

In fact, according to a mind-blowing research paper published in mid-August by computer science researchers Joel Lehman and Risto Miikkulainen, robots would quickly evolve in the event of any mass extinction (defined as the loss of at least 75 percent of the species on the planet), something that’s already happened five times before in the past.


In a survival of the fittest contest in which humans and robots start at zero (which is what we’re really talking about with a mass extinction event), robots would win every time. That’s because humans evolve linearly, while superintelligent robots would evolve exponentially. Simple math.

Woahhh, boy. Easy. Have a sugar cube.

Climate change is bad, bad, bad news bears. But it’s probably not going to wipe out all the people. Don’t get me wrong: Rising sea levels, security threat multipliers, Peabody is the devil, keep it in the ground, carbon fee and dividend, etc., etc. But implying mass human extinction due to a warming climate is counter-productive in a country in which half the political populace suggests the threat is overblown.

Side note: Even if we did “start at zero,” presumably that would imply actually starting at zero. As in, no humans and no robots. In which case you can exponentiate zero until the robocows come home, and you’ll still be left with an arithmetic donut. Mother Earth wins every time — the deserts, the oceans, the bacteria — not the ‘bots. Don’t take my word for it; here’s the donut in runnable form, a toy sketch that takes “start at zero” literally (the growth factor is made up, but any factor gives the same answer):
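```python
robots = 0                   # "start at zero," taken literally
GROWTH_FACTOR = 2.0          # hypothetical: the 'bots double every generation

for generation in range(1000):
    robots *= GROWTH_FACTOR  # "exponential" robo-growth

print(robots)  # 0.0: zero doubled a thousand times over is still zero
```

End side note.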


There’s a lot going on in Lehman and Miikkulainen’s paper, but none of it is about climate change. (Dominic Basulto, the author of the WaPo piece, acknowledges as much.) The study itself is a relatively straightforward piece of computer science: Dump some biologically inspired learning algorithms into a population of simulated robots, tack on an evolution-mimicking step, kill off a bunch of digital bots (that’s the mass extinction), and see what happens. The researchers demonstrate that after unplugging a good chunk of the robots, “evolvability” goes up; that is, the surviving digibots fill abandoned niches more fully and more quickly. It’s an interesting result, and one that warrants discussion in the context of biological evolution.
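For the curious, the whole thing reduces to something like the toy sketch below. To be clear: this is not Lehman and Miikkulainen’s actual code (the real study evolves simulated robot controllers, and every number and name here is invented for illustration); it just captures the evolve-then-cull loop.

```python
import random

POP_SIZE = 100             # all numbers here are illustrative, not the paper's
EXTINCTION_INTERVAL = 50   # generations between simulated mass extinctions
SURVIVAL_FRACTION = 0.10   # "loss of at least 75 percent," and then some

def fitness(genome):
    # Stand-in objective; the actual paper evolves robot controllers.
    return -sum((gene - 0.5) ** 2 for gene in genome)

def mutate(genome):
    # The evolution-mimicking step: copy a genome with small random tweaks.
    return [gene + random.gauss(0, 0.05) for gene in genome]

population = [[random.random() for _ in range(10)] for _ in range(POP_SIZE)]

for generation in range(1, 501):
    # Selection: keep the fitter half, refill by mutating survivors.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    population = parents + [mutate(random.choice(parents))
                            for _ in range(POP_SIZE - len(parents))]

    # Mass extinction: unplug most of the bots at random, then let the
    # lucky few repopulate the empty niches.
    if generation % EXTINCTION_INTERVAL == 0:
        population = random.sample(population, int(POP_SIZE * SURVIVAL_FRACTION))
        while len(population) < POP_SIZE:
            population.append(mutate(random.choice(population)))
```

No climate models, no robot overlords: just lists of numbers getting culled on a schedule, and a measurement of how nimbly the survivors’ descendants bounce back.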

But back to climate change, mass extinction, and the coming robopocalypse. Let’s assume “start at zero” means “start at roughly equal numbers of robots and humans, who are at roughly equal levels of intelligence, and who are more or less randomly geographically distributed, and then inexplicably normalize this undoubtedly high-dimensional description of the scenario to ‘zero’.” (Forget the difficulties of meaningfully quantifying intelligence and consciousness.) There are a handful of robots and a handful of humans. It’s basically Burning Man out there. Then, the argument goes, the robots really take off:

Think about it — robots don’t need water and they don’t need food — all they need is a power source and a way to constantly refine the algorithms they use to make sense of the world around them. If they figure out how to stay powered up after severe and irreversible climate change impacts – perhaps by powering up with solar power as they did in the Hollywood film “Transcendence” — robots could quickly prove to be “fitter” than humans in responding to any mass extinction event.

They also might just sit around and rust in the sun. What’s motivating them to survive? Part of the problem here is that we don’t really know what “evolvability” is. A 2008 review in Nature Reviews Genetics by Massimo Pigliucci states that, traditionally, “the term has been used to refer to different, if partly overlapping, phenomena.” It’s also unclear whether an organism can actually evolve evolvability.

Setting those difficulties aside, in assessing the Robo-Climate Wars scenario, we’re still left poking at the unparalleled uncertainty that hovers around the Singularity — the hypothetical moment at which runaway artificial intelligence surpasses human intelligence. Some researchers argue the moment could come within the next ten years. Others say it won’t arrive for at least a hundred (if ever). Basulto’s argument in The Washington Post rests on the temporal convergence of a climate-change-induced mass extinction and the dawn of superintelligence. Which seems a tad unlikely. And even if you buy, say, 2050 both as an extinction date and as the hockey-stick uptick for the Singularity, we’re still left with the assumption that human evolution and robot evolution will somehow be qualitatively and quantitatively different from one another. Which should give us pause.

While Basulto writes that “humans evolve linearly,” that idea is not at all settled in the evolutionary biology community. (Nor is “linear evolution” particularly well-defined in the first place. Linear? By what metric?) Some futurists argue — and it’s mostly futurists having these conversations — that as soon as we’ve developed robots capable of superintelligence (which ostensibly implies nonlinear evolution), we’ll have cracked the code of our own brains. In which case the same argument that applies to the robots in Lehman and Miikkulainen’s paper will apply to us, too, and we should spend more time worrying about climate change and less time worrying about the rise of our robot overlords.

Besides, by 2045, we’re supposed to be able to upload our brains to achieve immortality. As long as we’re cherry-picking futurist tidbits, we might as well cling to that one.