Marine biologist Raphael Sagarin has eclectic interests. During the course of his career, he’s scoured an Alaskan gambling record for clues to climate change, retraced John Steinbeck’s and Ed Ricketts’ survey of the Sea of Cortez, and even studied how Easy Cheese escaped early chlorofluorocarbon regulations. In 2002, as a science fellow on Capitol Hill, he turned his biologist’s eye to post-9/11 Washington, D.C., with its proliferating Jersey barriers and security checkpoints.
“I started thinking about the fact that somehow, organisms keep themselves safe in a world that’s every bit as unpredictable as our world,” he says.
Curious about the overlap between national security and natural selection, Sagarin gathered a group of biologists and security experts in 2005 for a series of meetings and conversations. The discussions inspired the new essay collection Natural Security: A Darwinian Approach to a Dangerous World, which he co-edited.
Sagarin, now associate director for ocean and coastal policy at Duke University’s Nicholas Institute, recently talked with Grist about airport security, nervous marmots, and what Darwin might say about Code Orange.
So you got a group of biologists talking to security experts. I imagine they were speaking different languages, at least at first. What was that like?
It was a really interesting dynamic at every level. The very first morning [of our meetings], Gary Vermeij was there with his wife — Vermeij is this brilliant paleontologist who happens to have been blind since he was a young boy — and his wife was explaining the scene to him. In came [security expert] Terry Taylor, my co-editor on the book, who’s this tall, dashing, older British gentleman who was dressed in an absolutely beautiful tailored suit. Vermeij’s wife whispered to him, “There’s a man coming in with a really nice suit on,” and Vermeij smiled and chuckled, because at any meeting of biologists, there’s hardly a tie to be found, let alone a suit.
So there were cultural differences, but there were also differences in language, and we just insisted that people explain any terms that might be confusing. We talked about everything from specific organisms and what they do to more general strategies, and we also talked about different situations our policy people had experienced. Terry Taylor worked quite a bit on the conflict in Northern Ireland, and we had people with corporate backgrounds who were able to share how they worked in an unpredictable environment.
What kinds of things did biologists bring to the discussion that the security people hadn’t really considered before?
Every [security] document written since 9/11 pays lip service to the need to be adaptable, but very few security people understand what it actually means to be adaptable, how selection works on things. There are also countless examples of adaptations, of ways organisms respond to situations, that the security people weren’t really aware of.
One of the essays in the book talks about the marmot groups nicknamed Nervous Nellies and Cool-Hand Lucys. Can you tell me more about them?
Dan Blumstein from UCLA, who studies marmots and other mammals, has probably gone to more dangerous countries than many of the security people [in our group]. He’d say, “When I was in Kazakhstan studying these guys,” or, “When I was in Afghanistan,” and people would say, “What were you doing there?”
In groups of marmots, there are some he calls Nervous Nellies, which always make alarm calls whether there’s a real threat or not, and others he calls Cool-Hand Lucys, which only call if there’s a real threat. The conventional wisdom says, “Well, Nervous Nellie is like the little boy who cried wolf, and the other marmots will eventually learn to ignore her.” But what actually happens is that all the other marmots spend a lot more time listening to Nervous Nellie, because they can’t tell if the signal is honest, if it’s a real threat or not.
[In the U.S.], we do a lot of Nervous Nellie signaling. What animal behavior generally shows us is that you want to reduce uncertainty for yourself, and increase uncertainty for your enemy — all these sorts of behaviors, camouflage or flocking or sitting and waiting in ambush, are about this equation. And a lot of our security operations do exactly the opposite. They reduce uncertainty for our enemies — now everyone is well aware that you don’t take a liquid explosive on an airplane — and they increase uncertainty for us. Every time we walk into an airport, Nervous Nellie is screaming, “Code Orange!” and we have no idea what that means. The information we’re getting is not useful at all.
OK, so the airport liquids ban, the take-off-your-shoes requirement — biologists would say those aren’t smart strategies, given what we know about the natural world. But does the natural world suggest better strategies?
There are a lot of better strategies, and some of them link to the fact that as humans, we understand the behavior of other humans very well. Understanding each other’s behaviors and intentions is something we learned through our development in small groups, when we had to figure out — as a matter of life and death — what another person was thinking and experiencing. From what I’ve heard, behavioral screening, where you’re looking at things like tics and the way people are walking, is much more effective than the blanket screening we do now.
What it gets at is resource efficiency. All organisms live with risk, but they figure out, through the process of selection, how to allocate resources to security, mating, eating, all of those things. We need to actively do that, and this is one area where we could massively ramp down the blanket screening we do on everyone and their sister, and ramp up a much more effective type of screening.
One of the ideas in your book is that the natural world knows it can never eliminate risk — that all populations can do is reduce risk. But that doesn’t work very well as a political strategy — what works for a politician is to get up and say, “We’re going to make you safe.”
It’s very politically expedient to say we’re going to eliminate risk. And we use these examples, we say, “Look, we got rid of smallpox, we got rid of fascism in Europe.” That’s all well and good — the world is undoubtedly better for it — but those were very specific and well-understood threats. Things like drug addiction and terrorism are not monolithic threats — they’re complex and ubiquitous, and you can’t apply the same logic of elimination to those kinds of threats, because there’s not a single answer. No organism, for example, tries to eliminate the general threat of predation — organisms certainly take all sorts of precautions to try to avoid the threat, but they don’t spend all their resources trying to make a shark not be a predator.
But Darwinian security does have its limitations — we’re more complicated than marmots, right?
We are complex, and we make decisions based on a whole bunch of factors. Scientists in general get frustrated with policy making, because it’s not a simple translation of a good, scientifically defensible idea into policy — all sorts of economic, social, and religious considerations go into decision-making. I think that’s a good thing, because it’s a natural sort of safety valve — after all, nowhere do we suggest that if you see a good idea in nature, you should automatically turn it into a policy.
For instance, you might look at the immune system and say, “Well, there it is — total information awareness. We need to go out and screen everything, report back, and figure out what we need to eliminate.” But that strategy raises all sorts of ethical concerns, concerns about civil liberties, that people need to debate. We’re not saying we have to turn our society into some sort of evolutionary utopia that works just like evolution. What we’re saying is that within the constructs of our society, we can use some of these great ideas from nature to improve our own security.