Jonathan Clough, environmental modeler
Thursday, 25 Jul 2002
WARREN, Vt.
“Everybody loves an environmental model until it tells them to do something they don’t want to do.”
One of the models I have been working with for many years is currently undergoing peer review; the quotation above captures the dominant response of one of the reviewers. She is right, of course. In the abstract, environmental models strike many people as desirable because they integrate a wide array of scientific research into dynamic models of complex systems. Plus, they usually come with plenty of nifty graphics and colorful animated maps. As modeling techniques become ever more sophisticated, it feels as though we are making genuine progress in the realm of scientific inquiry and discovery.
However, as soon as an environmental model makes a prediction whose consequences are costly to industry or property owners, the model becomes quite unpopular indeed. In an effort to ward off those consequences, every assumption underlying the model becomes subject to deep and occasionally unfair scrutiny.
For example: When I was working for a consulting firm near Washington, D.C., I was sent to the U.S. EPA offices downtown to help respond to industry criticism of a particular modeling analysis. The analysis had led to an emissions regulation that would prove costly to certain industries, and those industries had responded by filling six file drawers with reports and comments on why the EPA’s analysis was bogus and why the regulation should not be implemented.
Looking through those file drawers, I was amazed at how every portion of the analysis was attacked, in many cases simply to create additional pages of comments. Being new to the business at the time, I limited my responses to the comments that were completely off the mark: those based on a misunderstanding of the analysis or simply on errant science. Many comments could be dismissed on this basis. Still, the sheer quantity of comments, and the agency’s obligation to respond to each one, certainly managed to gum up the works and delay the passage of the regulation.
The model under attack in that case happened to be a fairly simple one. The problem becomes even more acute with a more complex model. As one adds parameters, more data and analyses are required to back those parameters up. While the added complexity may improve the model’s results, every additional level of sophistication provides more fodder for attack.
Of course, models can be attacked for being too simple as well. Look, for example, at the models of global warming and the attention that the “cloud cover” portion of these models has received over the past few years. The dominant models of global warming originally made a fairly simple assumption about cloud cover: an assumption that may or may not have been valid, and one to which the models’ results may not have been particularly sensitive. Nevertheless, industry groups that opposed any regulation of greenhouse gas emissions seized on this portion of the models as their Achilles’ heel.
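For the technically inclined: the way to find out whether a result actually hinges on such an assumption is a sensitivity test. Nudge the questionable parameter across its plausible range and watch how much the answer moves. Here is a toy sketch in Python; the stand-in “climate model” and every number in it are inventions for illustration, not pieces of any real model.

```python
# One-at-a-time sensitivity test: perturb a single uncertain parameter
# and see how much the model's output moves. The "model" and all the
# numbers below are hypothetical stand-ins, not any real climate model.

def toy_climate_model(cloud_albedo):
    """Invented example: projected warming (deg C) as a function of one
    uncertain cloud parameter, with everything else held fixed."""
    return 3.0 - 2.5 * (cloud_albedo - 0.30)

BASELINE = 0.30  # the assumed value of the parameter under attack

for delta in (-0.05, 0.0, +0.05):
    albedo = BASELINE + delta
    warming = toy_climate_model(albedo)
    print(f"cloud_albedo = {albedo:.2f}  ->  projected warming = {warming:.2f} C")

# If sweeping the parameter across its plausible range barely moves the
# output, the assumption, however loudly attacked, is not the weak point
# the critics claim it to be.
```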
Meanwhile, glaciers continue to recede in mountains worldwide, global temperatures continue to rise unabated, and widespread drought has become commonplace. All over the planet, few people will tell you that their weather patterns have not changed for the worse in the last several years. Because people are starting to feel the effects for themselves, they now believe that global warming is real. The computer models were predicting such effects at least a decade ago, but because their prescriptions were deemed harmful to the industrial economy, the models were widely ignored or attacked as invalid. (Of course, our current presidential administration continues to ignore these realities.)
This, then, is one of the central philosophical dilemmas facing a computer modeler: When do you add complexity to a model, and when do you rely on simplifying assumptions? Simpler models are easier to control, often provide more intuitive results, and, in many cases, are easier to defend. Complex models capture more of the dynamics of the system and in some cases will produce better results. However, they require more time and money to create and apply.
The general rule is that you add complexity to a computer model only when you cannot get the model to work without it. Model “calibration” is the procedure of testing a computer model against existing empirical data. If calibration is possible with a simple model, then adding more complexity is generally undesirable. However, what constitutes an appropriate calibration is itself subject to debate. More on that tomorrow.
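For readers who have never calibrated anything, here is a toy sketch of what the procedure boils down to. Everything in it (the model, the observations, the parameter) is an invention for illustration, not anything drawn from my actual work.

```python
# Toy calibration: fit the single parameter of a simple model to
# observed data, then judge the quality of the fit. Every name and
# number below is hypothetical, invented purely for illustration.

def simple_model(temperature, k):
    """Invented one-parameter model: predicted growth rate as a
    linear function of temperature, with slope k."""
    return k * temperature

# Hypothetical field observations: (temperature, observed growth rate)
observations = [(10.0, 4.8), (15.0, 7.6), (20.0, 9.9), (25.0, 12.4)]

def rmse(k):
    """Root-mean-square error of the model against the observations."""
    squared = [(simple_model(t, k) - obs) ** 2 for t, obs in observations]
    return (sum(squared) / len(squared)) ** 0.5

# Crude calibration: sweep the parameter over a plausible range and
# keep whichever value best reproduces the data.
candidates = [i / 100.0 for i in range(1, 101)]  # k from 0.01 to 1.00
best_k = min(candidates, key=rmse)

print(f"calibrated k = {best_k:.2f}, RMSE = {rmse(best_k):.3f}")

# The rule of thumb in action: if the one-parameter model calibrates
# acceptably, stop there; a second parameter would just be more fodder.
```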
Today my tasks primarily involve gathering empirical data for just such a calibration effort. An arctic high-pressure system has made the day crisp and clear, and, like the weather, my spirits seem to be escaping from the doldrums. It’s time to get down to work.
