This is a post that I’m virtually certain will be misinterpreted. But it’s an important enough issue that I’m going to bet that my writing skills are sufficient to provide clarity to a rather muddy issue.
First off, though, a disclaimer: Science is good. Policy informed by science is good. Leadership informed by science is good. The alternative to all of the above is bad. Nothing I am about to say is to be taken as support for creationism, global warming denial, diminution of White House science advisers or the re-excommunication of Galileo.
However, there is a conflict that lies between the fuzziness that is innate to scientific inquiry and the precision that is required for policy — and more broadly, leadership. We see this conflict whenever global warming deniers trot out scientists who disagree with mainstream theories and we are forced to explain to the deniers that while the nature of scientific inquiry invites debate, the presence of a debate per se does not imply anything about the preponderance of evidence. As Joe Romm has pointed out, Einstein’s revisions to the laws of motion did not prove that Isaac Newton was an insufferable quack. It just meant that science is innately fallible and subject to revision. Or as Keynes famously said, “When the facts change, I change my mind. What do you do?”
So far, I don’t think I’ve said anything novel or controversial. But here’s the catch: The same logic that compels us to acknowledge that science is fallible and evolves must also compel us to acknowledge that policy based on science might be wrong. This is not to suggest that a 1-percent doubt ought to stand in the way of policy based on 99 percent certainty, but rather to recognize that good policy must retain sufficient flexibility to “change its mind.”
The source of the conflict
Science, at core, is nothing more than a way to search for the truth. Karl Popper distinguished science from philosophy by noting that science is falsifiable. Religion, astrology, and Heidegger may or may not lead us to the truth. But per Popper’s definition, they are not scientific, since they cannot be falsified.
This makes for a situation where the most honest scientists are those who most critically examine their most deeply held beliefs. If one believes the theory of anthropogenic global warming to be true, try to prove it false. Try to prove that the globe is not warming, or that this warming is not caused by CO2 concentrations, or that human-made releases of CO2 do not account for the preponderance of the recent variation in CO2 concentrations. Try to disprove the theory not because you want to prove the theory false, but rather because the most robust way to test the theory is to try to knock it down. Astrologers can look at any personality type and conclude that it is a “typical Virgo.” Scientists by contrast ask for a robust, testable definition of one’s Virgo-ness and then look for exceptions to disprove the theory.
But now look at how this gets transformed into policy. Or indeed, into leadership of any type, since in a democracy, policy only proceeds by virtue of individuals demonstrating sufficient leadership to sway opinion. While it may be scientifically valid to admit the fallibility of one’s theories, it’s a pretty lousy way to get people to follow you. “We hold these truths to be self-evident” inspires. “We believe the following is consistent with the current views on human nature but we reserve the right to change course, subject to the latest behavioral research” does not.
This is no less true in the halls of Congress than it is in corporate offices, on baseball diamonds, or in war zones. In all cases, effective leadership demands the appearance of certainty. Indeed, one of the hardest things about being a CEO in my experience is the internal conflict that comes from the necessity of this lie. If I know that our ability to make payroll next month depends upon us all rallying to close a deal this week, it behooves me to rally the team — and not to share my private concerns about the likelihood of that deal closing. More positively, leadership requires the ability to make decisions and be bound by the consequences of those decisions, even in the absence of sufficient information to make a fully informed decision. And again, there is little to be gained by making those decisions with caveats about one’s inability to predict the future.
Any leaders who are truly honest with themselves face the same challenge. As such, scientifically informed leaders must be nimble enough to shift gears when their fallibility is inevitably revealed — and honest enough to know that it will be revealed more often than we’d like.
Where policy and business deviate
In the chaos of a market, this fallibility is revealed on a daily basis in aggregate, even while it is denied specifically. Sony may have internally convinced themselves that Betamax was better than VHS. Bear Stearns may have internally concluded that their sub-prime exposure was limited. Calpine may have internally convinced themselves that the price of natural gas could never reach $13/MMBtu. In all cases, they were wrong — and the market revealed their fallibility, causing them to lose billions of dollars in waves of Schumpeterian “creative destruction.” And markets moved on.
Government policy, by contrast, has no such built-in correction. We made a bet in the 1980s that it was a good idea to give weapons to Iraq, and we’re still cleaning up the resulting mess. We made a bet in the 1950s that nuclear power would be “too cheap to meter” and still throw billions of dollars a year in subsidies at the industry to try to make it so. We decided in 1791 that we needed a well-regulated militia, and more than 200 years later we use this to allow private citizens to own armor-piercing bullets.
This isn’t because our political leaders are bad people, but simply because — absent the check of a market economy — there is no easy way to undo those decisions that proved to be less-than-omniscient. But decisions are made by mortals. And mortals are fallible. And any decision we make today based on today’s understanding of The Truth will therefore eventually turn out to be less than perfect.
This is why Lieberman-Warner is so problematic. Not because it seeks to reduce greenhouse-gas emissions (based on the current scientific consensus with respect to AGW), but because it presumes knowledge of the optimal way to address greenhouse-gas emissions over the next 42 years. It’s also why the Clean Air Act is bad for the environment. Not because it wasn’t important to lower criteria pollutants, but because it’s so damned hard to fix it so that it stops mandating increased CO2 emissions.
And those of us with an environmental policy bent are well advised to keep our own hubris in check as we think about the optimal approaches to address the environmental problems of our time. Be scientifically informed. Don’t wait for 100 percent scientific consensus to make a decision, because it never comes. But acknowledge that our best decision today may not be the best decision in the long term. Be flexible. Don’t write policies that are so heavily prescriptive that they are unable to adapt as The Truth is more fully revealed.