Historic Flooding Colorado September 2013

http://climatestate.com/2013/09/13/historic-flooding-colorado-september-2013/

Colorado flooding forces thousands more evacuations http://www.cbsnews.com/8301-201_162-57602779/colorado-flooding-forces-thousands-more-evacuations/

“Biblical” rains trigger flooding that kills 3 in Colo.

Live coverage of Colorado flooding from KUSA-TV

Residents of an apartment building work to divert floodwater from their homes in Boulder, Colo. Flash flooding in Colorado has cut off access to towns, closed the University of Colorado in Boulder and left at least three people dead. Photo: Ed Andrieski, AP

Thousands more to evacuate Boulder, Colorado – Rainfall record smashed – Flooding worsened by burn scars from 2012’s giant Waldo Canyon fire

http://www.desdemonadespair.net/2013/09/thousands-more-to-evacuate-boulder.html

Flood water shoots out of a sewer on Canon Avenue on Thursday, 12 September 2013, in Manitou Springs, Colorado. Flash flooding in Colorado has cut off access to towns, closed the University of Colorado in Boulder and left at least three people dead. Photo: Michael Ciaglo / The Colorado Springs Gazette

By P. Solomon Banda, with additional reporting by Colleen Slevin, Steven K. Paulson, and Thomas Peipert in Denver and Mead Gruver in Longmont
13 September 2013

LYONS, Colorado (Associated Press) – With rain still falling and the flood threat still real, authorities called on thousands more people in the inundated city of Boulder and nearby towns to evacuate as rivers and creeks rose to dangerous levels.

The late-night reports from Boulder and the village of Eldorado Springs came as rescuers struggled to reach dozens of people cut off by flooding in Colorado mountain communities. Residents in the Denver area and other downstream communities were warned to stay off flooded streets.

The towns of Lyons, Jamestown and others in the Rocky Mountain foothills have been isolated by flooding and without power or telephone since rain hanging over the region all week intensified late Wednesday and early Thursday.

At least three people were killed and another was missing, and numerous people were forced to seek shelter up and down Colorado’s populated Front Range.

Boulder County spokesman James Burrus said 17 people were unaccounted for Friday, and officials planned to publicly release the names later.

“Unaccounted for doesn’t mean missing. It means we haven’t heard back from them,” he said. […]

Late Thursday, warning sirens blared in Boulder and city officials sent notice to about 4,000 people living along Boulder Creek around the mouth of Boulder Canyon to head for higher ground, according to Boulder’s Daily Camera newspaper.

The alert was prompted by rapidly rising creek levels caused by water backing up at the mouth of the canyon because of debris and mud coming off the mountainsides, the city Office of Emergency Management said.

The creek began to recede after midnight, but the conditions remained dangerous and a surprising amount of water was still flowing into the city’s streets, Police Chief Mark Beckner told the Daily Camera after touring the damage.

The entire hamlet of Eldorado Springs, about 500 people, was urged to evacuate because of a flash flood and mudslide threat along South Boulder Creek, Burrus said.

NWS Boulder @NWSBoulder tweet on 13 September 2013: Our Boulder station as of yesterday has received 12.30 inches of rain. This smashes the old record of 5.50 inches in an entire month. Graphic: NWS

Northwest of Boulder, the overflowing St. Vrain Creek cut the city of Longmont in half. Evacuation requests were issued for some neighborhoods, all major roads were closed, and several thousand homes and businesses were without power, he said.

Interstate 25 east of Loveland was closed in both directions Friday, state transportation officials said.

In Fort Collins, neighborhoods along the Cache La Poudre River were evacuated overnight, with the river expected to rise to nearly 2 feet above flood stage Friday morning, according to the National Weather Service.

City officials in Fort Collins closed bridges after water began topping Seaman Reservoir in the Poudre Canyon, The Denver Post reported. The city warned residents to stay clear of the river.

South of the historic Red Rocks Amphitheater, Jefferson County deputy sheriffs went door-to-door in Morrison and Kittredge, asking hundreds of residents to leave their homes as Bear Creek neared flood stage. The amphitheater was in no immediate danger.

In Lyons, residents took shelter on higher ground, including some at an elementary school. Although everyone was believed to be safe, the deluge was expected to continue into Friday.

“There’s no way out of town. There’s no way into town. So, basically, now we’re just on an island,” said Jason Stillman, 37, who was forced with his fiancee to evacuate their home in Lyons after a nearby river began to overflow into the street.

The Colorado National Guard began trucking people out of Lyons on Thursday evening.

To the north, residents along the Big Thompson Canyon in Larimer County, scene of the deadliest flash flood in state history, were also evacuated. The Big Thompson River flooded in 1976 after about a foot of rain fell in just four hours, killing 144 people.

Early Friday, the National Weather Service warned of more flash flooding in Loveland, according to the Post. NOAA reported that the Big Thompson River at Drake was more than 4 feet above its flood stage of 6 feet.

President Barack Obama signed an emergency declaration Thursday night, freeing federal aid and allowing the Federal Emergency Management Agency to coordinate disaster relief efforts.

Some of the flooding was exacerbated by wildfire “burn scars” that have spawned flash floods all summer in the mountains. That was particularly true in an area scarred by fire in 2010 near the tiny community of Jamestown and another near Colorado Springs’ Waldo Canyon that was hit in 2012.

The University of Colorado canceled classes at least through Friday after a quarter of its buildings were flooded. Students in family housing near Boulder Creek were also forced to leave. […]

At least one earthen dam gave way southeast of Estes Park, the gateway to Rocky Mountain National Park. Water levels could rise downstream as authorities release more water to ease pressure on dams. With debris piling up near bridges, downstream farming areas including Fort Lupton, Dacono and Platteville were also at risk. [more]

Boulder calls for thousands more to evacuate

Summer 2013 weather extremes tied to extraordinarily unusual polar jet stream

http://climatechangepsychology.blogspot.com/2013/09/summer-2013-weather-extremes-tied-to.html

by Steve Tracton, The Washington Post, September 11, 2013

For at least the past one or two decades, the adjective "extreme" has been used more and more to describe unusual weather. It's virtually impossible now to escape news of extreme drought, excessive rainfall and floods, record-breaking heat waves, cool spells, severe weather outbreaks, and the like, which seem to recur year after year around the Northern Hemisphere. This summer was no different, except that the behavior and configuration of the polar jet stream, the river of high-altitude winds marking the divide between warm and cool air, were rare and mind-boggling.

Instead of meandering as a single stream, as it normally does, the jet transformed into a "dual" jet stream configuration, sometimes transitioning from this dual setup back into a single, more coherent stream, and back again.
The rarity of dual polar jets was highlighted by Professor John Nielsen-Gammon (Texas A&M University) in an article in Popular Mechanics. He pointed out that they are something one might see once per decade. From my own independent assessment, it appears that there are no polar jet examples comparable to this summer's at least as far back as 2000 (the furthest back I've looked).
Mostly, the perplexing behavior of the polar jet has been described in befuddling terminology such as weird, mangled, and wobbly. Some have described the jet as being in a state of disarray, not playing by the so-called rules. Jeff Masters said that in his 30 years doing meteorology, the jet stream has been doing things he's never seen before.
What follows is a rather technical discussion of how this jet stream pattern evolved and some of the weather characteristics associated with it. Although some terms may not be familiar, the included parenthetical notes and illustrations should help guide you along.
As a general overview, I've subjectively identified three periods I call Regimes 1, 2, and 3. To illustrate the associated weather characteristics, I present 500-mb zonal wind anomalies (representative of the upper-level jet stream) and 850-mb temperature (low-level temperature) regimes over the June–July–August (JJA) meteorological summer. In these time vs. latitude (30–90 N) charts, color-coded values are daily means longitudinally averaged (0–360 degrees) at each latitude. Jet streams coincide with the green-to-red colorization. The three regimes are separated by notably shorter periods of transition from one regime to the next.
Regime 1 (R1) appeared following a regime change at the end of May (not shown) to a dual polar jet which persisted through most of June. Around the beginning of July, R1 transitioned to a single jet mode which characterized Regime 2 (R2). During the third week in July, there was a rapid change to another dual jet configuration in Regime 3 (R3), which subsequently transitioned to a single jet during the middle of August.
[Figure: Time evolution of daily means vs. latitude of longitudinally averaged (0–360 longitude) 500-mb zonal wind anomalies (left) and 850-mb temperature (right). Green to red correspond to jets; light yellow to red correspond to anomalous warmth.]
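A chart of this kind takes only a few lines to reproduce. Below is a minimal sketch using xarray, assuming daily 500-mb zonal wind in a NetCDF file with time, lat, and lon dimensions; the file and variable names are hypothetical placeholders, not a reference to the author's actual data.

```python
# Minimal sketch: a time-vs-latitude section of zonally averaged
# 500-mb zonal wind anomalies, as in the chart above. Assumes a
# NetCDF file with (time, lat, lon) dimensions; file and variable
# names are hypothetical.
import matplotlib.pyplot as plt
import xarray as xr

ds = xr.open_dataset("u500_daily.nc")       # hypothetical input file
u = ds["u"]                                  # 500-mb zonal wind

u_zonal = u.mean(dim="lon")                  # average over 0-360 longitude
clim = u_zonal.groupby("time.dayofyear").mean("time")
anomaly = u_zonal.groupby("time.dayofyear") - clim   # daily anomalies

# Restrict to 30-90 N; jets show up as bands of positive anomalies.
anomaly.sel(lat=slice(30, 90)).plot(x="time", y="lat")
plt.show()
```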

It’s important to add that changes in the zonal wind at any given latitude conform directly (via basic meteorological principles, “thermal wind”) to the largest north-south and south-north differences (gradient) in the lower level temperature field (winds adjust to temperature changes, not vice versa, except in the Tropics).
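For reference, the simplest zonal-mean statement of that thermal wind balance (a standard textbook form, not quoted from the article) is

$$
\frac{\partial u_g}{\partial z} = -\frac{g}{f\,T}\,\frac{\partial T}{\partial y},
$$

where $u_g$ is the geostrophic zonal wind, $f$ the Coriolis parameter, $g$ gravity, and $T$ temperature: the jet strengthens with height precisely over the latitudes where temperature falls off most sharply toward the pole.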
Most significantly, each regime reflects notably different background fields in the three-dimensional wind and temperature structure of the atmosphere from the mid-latitudes to the North Pole (NP). Although there is considerable variability within a given regime, each appears to have predominant signatures in observed weather events that differ from those characterizing the other regimes. Some examples appear deeper down.
To further describe aspects of regime transition, I’ll focus on that from R2 to R3. See first the time/lat chart for the period July 1 to August 9.
[Figure: time vs. latitude chart of 500-mb zonal wind anomalies and 850-mb temperature, July 1 – August 9]
The major difference between R2 and R3 zonal wind anomalies is obvious. Specifically, R2 is characterized by a single maximum in zonal wind speed (single polar jet) centered between 55 and 65 N. Following a relatively short period of transition, two maxima (dual jets) are evident (in R3), the strongest immediately surrounding the NP (80–90 N), while the second is seen initially far to the south but migrating slowly towards mid-latitudes.
The zonal wind profiles are directly tied to evolution of the lower level temperature field. R2 is characterized by very warm weather immediately surrounding the NP (80–90 N), cool in the 60–75 N latitudinal band, and warm centered between 45 and 55 N.
After the relatively short transition period, R3 is virtually a mirror image, with very cold conditions around the NP, warmth between 65 and 75 N, and cool conditions further south. Close inspection, if you are so inclined (presuming you have very good eyesight), will reveal that zonal wind speed maxima occur precisely where temperature decreases most rapidly from south to north, while minima are found where temperature increases most rapidly from south to north.
So what does all this have to do with extreme weather events?
Almost invariably, extreme summer weather of late is discussed in the context of anomalies (differences from average) in the polar jet. The anomalies are commonly attributed, directly or indirectly, to global warming (aka climate change) as manifest in warming occurring faster in the Arctic than at latitudes further south (Arctic amplification). Temperatures, therefore, decrease poleward less rapidly than the climatological norm, and the zonal component of the winds at jet levels adjusts by weakening relative to "normal." The response, generally speaking, is for atmospheric waves to amplify in their meridional (N–S) extent, leading to more frequent occurrences of unusually high-amplitude ridges (and/or blocking highs) and troughs (and/or cut-off lows), along with the respective weather associated with these systems. In combination with the slowing progression of weather systems, this translates to enhanced prospects for persistent spells of extreme heat and extended periods of unusually cool and/or wet conditions.
As illustrated in the figures above, the N–S differential heating adjustments in the zonal wind component are considerably more complex with regard to details in the spatial and temporal variability within as well as between regimes. In particular, note variability in details over time and latitude in the blue areas where zonal winds are least strong and thus favorable for high amplitude circulations and possible extreme weather.
Nevertheless, as mentioned earlier, it is possible to discern the principal unique expression of each regime. By way of example, the figure below displays those for R2 and R3.
[Figure: predominant 500-mb height anomaly signatures of R2 and R3]
The distinct differences between the two regimes are abundantly clear (the 500-mb height anomalies are closely related to the low level temperatures). Note especially the dramatic transition from relatively cool conditions to extreme warmth over Alaska (influence of high amplitude ridge), the cooling trough in R3 over the Northeast U.S., and dominantly warm (R2) to dominantly cool (R3) over extreme northern Europe.
The figures below exemplify regional differences corresponding to heavy rainfall events (precipitable water – total atmospheric water content above location – is used as an approximation for relative differences in precipitation).
[Figure: precipitable water differences corresponding to heavy rainfall events in R2 and R3]
The transition from R2 to R3 brought flooding rains to Western Europe.
Especially interesting for the U.S. are alternating regions of dominantly dry and dominantly wet conditions in the sequence of regime transitions over the course of the entire summer, shown below.
[Figure: alternating dry and wet regions over the U.S. across the summer's regime transitions]
Finally, there is no basis at this time (if ever) to determine whether the transitions to and from regimes with dual polar jets made this summer any more or less unusual in occurrences of extreme weather events than over the past 10–15 years, which have been presumed to be less complicated by dual jets.
Scientists tend to believe the increase in extreme weather is tied somehow to the diminishing Arctic ice cover and perhaps more rapid melting of snow cover over Siberia. The "somehow," especially when coupled to interactions with other plausible and not-yet-identified factors, remains an open question. No individual or set of observational studies to date, and no existing models or modeling strategies, are adequate for garnering more than limited insight when dealing with details in regional domains. This is especially true when dual polar jets are added to the mix of complexities. As far as I know, there has not been even a single investigation of the whys and wherefores of this aspect of the problem (or even whether it has been given much thought).
http://www.washingtonpost.com/blogs/capital-weather-gang/wp/2013/09/11/summer-2013-weather-extremes-tied-to-extraordinarily-unusual-polar-jet-stream/
Steve Tracton
Steve Tracton retired from U.S. Government employment after 34 years of service. His career began immediately after receiving a Ph.D. in Meteorology from MIT as an Assistant Professor at the Naval Postgraduate School (1972-1975). Thereafter, Steve was a research scientist for 31 years at the National Centers for Environmental Prediction (NCEP). A basic theme of his career at NCEP was assessment of data, analysis, and forecast systems with emphasis on physical insight, applications to forecast problems, and realistic appreciation of capabilities and limitations. Perhaps most notably Steve has been recognized nationally and internationally as a principal agent and advocate in development, application, and use of operational ensemble prediction systems and strategies for dealing with forecast uncertainty. From 2002-2006, Steve was a Program Officer for Marine Meteorology at the Office of Naval Research (ONR). He’s currently the chairman of the D.C. Chapter of the American Meteorological Society.

Temperature Rise

http://arctic-news.blogspot.com/2013/09/temperature-rise.html

Surface Temperature Rise

How much have temperatures risen over the past 100 years or so? In the image below, Peter Carter points at the aerosols from volcanic eruptions and fossil fuel combustion that temporarily delay the full impact of global warming.

[Image: post-2010 warming, annotated by Peter Carter]

Temperature Rise hits Arctic most strongly

In the image above, temperature anomalies are compared to a three-decade base period from 1951 to 1980. To highlight the full wrath of global warming, it is more informative to compare anomalies with an earlier base period. Furthermore, a short running mean better shows how high peaks can reach.

[Image: global mean temperature anomalies relative to an 1883–1912 base period]

NASA typically compares temperature change relative to 1951-1980, because the U.S. National Weather Service uses a three-decade period to define "normal" or average temperature. The NASA GISS analysis effort began around 1980, so the most recent 30 years at the time was 1951-1980 [1].

But as noted, it is more informative to use a 30-year base period that starts earlier. To show Global & Arctic Temperature Change, James Hansen and Makiko Sato used a 1951-1980 base period alongside an 1880-1920 base period. For this post, an 1883-1912 base period was selected to create the image above, and the same base period was used for the image below.
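The base-period arithmetic is simple enough to sketch. The following assumes an annual-mean temperature series in a CSV file; the file and column names are hypothetical placeholders.

```python
# Sketch: recompute temperature anomalies against an earlier base
# period (1883-1912) and apply a short running mean, as the post
# suggests. The input file and column names are hypothetical.
import pandas as pd

t = pd.read_csv("global_temps.csv", index_col="year")["temp"]  # annual means

base = t.loc[1883:1912].mean()        # mean over the earlier 30-year base period
anomaly = t - base                    # anomalies relative to 1883-1912

smoothed = anomaly.rolling(window=5, center=True).mean()  # short running mean
print(smoothed.tail())
```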

[Image: temperature anomalies by latitude, 1883–1912 base period]

The image above shows that the Arctic is hit most strongly by the temperature rise. Note that the anomalies are plotted by latitude but averaged over all longitudes, masking even higher anomalies that can be experienced at specific longitudes. At times, some areas in the Arctic already experience anomalies of over 20°C, as shown in the animation below, based on NOAA data for the period December 7, 2011 – January 21, 2012.

Sea Surface Temperature Rise in the Arctic

[Animation: Arctic surface temperature anomalies, December 7, 2011 – January 21, 2012 (a 3 MB file that may take some time to fully load)]

The animation above was created by Sam Carana for the page Warming in the Arctic, which adds that the anomaly can be even more striking for individual days and locations. On January 6, 2011, the minimum temperature in Coral Harbour, located at the northwest corner of Hudson Bay in the province of Nunavut, Canada, was –3.7°C (25.3°F), i.e. 30°C (54°F) above average [2].

The danger is that extreme weather events will cause waters in the Arctic Ocean to warm up, in turn causing heat to penetrate deep into the seabed and triggering destabilization of methane held in the sediment in the form of hydrates or free gas. Ways for this to eventuate were also recently discussed in the post Arctic Ocean is turning red [3].

Feedbacks

Feedbacks have the potential to dramatically speed up the temperature rise.

[Image: average Arctic sea ice thickness, PIOMAS volume divided by Cryosphere Today area, by Neven of the Arctic Sea Ice blog]

Albedo change, due to the decline of snow and ice in the Arctic, exerts a strong additional warming feedback. As illustrated by the image above by Neven, from the Arctic Sea Ice blog, average Arctic sea ice thickness (crudely calculated by dividing PIOMAS volume numbers by Cryosphere Today sea ice area numbers) is the lowest on record in the satellite era.
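That crude thickness estimate is just one published time series divided by another. A sketch follows, with hypothetical file names and the unit conversion noted in comments.

```python
# Sketch of the crude average-thickness estimate described above:
# PIOMAS ice volume divided by Cryosphere Today ice area. File names
# and formats are hypothetical placeholders.
import pandas as pd

volume = pd.read_csv("piomas_volume.csv", index_col="date")["km3"]  # volume, km^3
area = pd.read_csv("ct_area.csv", index_col="date")["km2"]          # area, km^2

# Pandas aligns the two series on date; km^3 / km^2 = km, so multiply
# by 1000 to get average thickness in meters.
thickness_m = (volume / area) * 1000.0
print(thickness_m.idxmin(), thickness_m.min())  # date and value of the record low
```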

Another feedback is methane release. On August 25, 2013, mean global methane levels were recorded as high as 1828 ppb. On September 4, 2013, a peak methane level of 2481 ppb was recorded, showing how quickly methane levels can rise locally.

[Image: global mean methane levels]

Runaway Global Warming

The danger is that, as sea ice retreats further and as methane traps more heat, there will be areas in the Arctic Ocean where cyclones cause shallow waters to warm up all the way down to the seabed, to the point where heat penetrates the seabed and triggers destabilization of methane held in the sediment in the form of hydrates and/or free gas. Recently, sea surface temperatures of about 20°C (68°F) were recorded in some spots in the Arctic Ocean, as also described in the post Arctic Ocean is turning red [3].

For more on the threat of runaway global warming, see the Methane Hydrates blog [4]. This situation calls for an effective and comprehensive climate plan, such as the one described at the ClimatePlan blog [5].

Related

1. Four Hiroshima bombs a second: how we imagine climate change
http://arctic-news.blogspot.com/2013/08/four-hiroshima-bombs-second-how-we-imagine-climate-change.html

2. Warming in the Arctic
http://arctic-news.blogspot.com/p/warming-in-arctic.html

3. Arctic Ocean is turning red
http://arctic-news.blogspot.com/2013/08/arctic-ocean-is-turning-red.html

4. Methane hydrates
http://methane-hydrates.blogspot.com/2013/04/methane-hydrates.html

5. Climate Plan
http://climateplan.blogspot.com/2013/01/an-effective-and-comprehensive-climate-plan.html

Why trust climate models? It’s a matter of simple science

http://climatechangepsychology.blogspot.com/2013/09/why-trust-climate-models-its-matter-of.html

How climate scientists test, test again, and use their simulation tools.

by Scott K. Johnson, Ars Technica, September 5, 2013

[Figure: Model simulation showing average ocean current velocities and sea surface temperatures near Japan. Image: IPCC]
Talk to someone who rejects the conclusions of climate science and you’ll likely hear some variation of the following: “That’s all based on models, and you can make a model say anything you want.” Often, they’ll suggest the models don’t even have a solid foundation of data to work with—garbage in, garbage out, as the old programming adage goes. But how many of us (anywhere on the opinion spectrum) really know enough about what goes into a climate model to judge what comes out?

Climate models are used to generate projections showing the consequences of various courses of action, so they are relevant to discussions about public policy. Of course, being relevant to public policy also makes a thing vulnerable to the indiscriminate cannons on the foul battlefield of politics.

Skepticism is certainly not an unreasonable response when first exposed to the concept of a climate model. But skepticism means examining the evidence before making up one’s mind. If anyone has scrutinized the workings of climate models, it’s climate scientists—and they are confident that, just as in other fields, their models are useful scientific tools.

It’s a model, just not the fierce kind

Climate models are, at heart, giant bundles of equations—mathematical representations of everything we’ve learned about the climate system. Equations for the physics of absorbing energy from the Sun’s radiation. Equations for atmospheric and oceanic circulation. Equations for chemical cycles. Equations for the growth of vegetation. Some of these equations are simple physical laws, but some are empirical approximations of processes that occur at a scale too small to be simulated directly.

Cloud droplets, for example, might be a couple hundredths of a millimeter in diameter, while the smallest grid cells that are considered in a model may be more like a couple hundred kilometers across. Instead of trying to model individual droplets, scientists instead approximate their bulk behavior within each grid cell. These approximations are called “parameterizations.”

Connect all those equations together and the model operates like a virtual, rudimentary Earth. So long as the models behave realistically, they allow scientists to test hypotheses as well as make predictions testable by new observations.
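To make "giant bundles of equations" slightly more concrete, here is a deliberately tiny illustration: a zero-dimensional energy balance model. It is nothing like a real GCM, but it shows the same basic pattern of stepping physical equations forward in time; all parameter values are round, illustrative numbers.

```python
# Toy zero-dimensional energy balance model: absorbed solar radiation
# versus outgoing thermal radiation, stepped forward in time. An
# illustration of "a bundle of equations," not any real GCM component.
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0         # solar constant, W m^-2
ALBEDO = 0.30       # planetary albedo
EPSILON = 0.61      # effective emissivity (crude stand-in for the greenhouse effect)
HEAT_CAP = 4.0e8    # effective heat capacity, J m^-2 K^-1 (ocean mixed layer)

def step(T, dt):
    absorbed = S0 * (1 - ALBEDO) / 4.0   # global-mean absorbed sunlight
    emitted = EPSILON * SIGMA * T**4     # outgoing longwave radiation
    return T + dt * (absorbed - emitted) / HEAT_CAP

T = 255.0                                # initial temperature, K
dt = 86400.0                             # one day, in seconds
for _ in range(365 * 50):                # run 50 years, well past equilibrium
    T = step(T, dt)
print(f"Equilibrium temperature: {T:.1f} K")   # roughly 288 K (about 15 C)
```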

Some components of the climate system are connected in a fairly direct manner, but some processes are too complicated to think through intuitively, and climate models can help us explore the complexity. So it’s possible that shrinking sea ice in the Arctic could increase snowfall over Siberia, pushing the jet stream southward, creating summer high pressures in Europe that allow India’s monsoon rains to linger, and on it goes… It’s hard to examine those connections in the real world, but it’s much easier to see how things play out in a climate model. Twiddle some knobs, run the model. Twiddle again, see what changes. You get to design your own experiment—a rare luxury in some of the Earth sciences.
[Figure: Diagram of software architecture for the Community Earth System Model. Coupled models use interacting components simulating different parts of the climate system. Bubble size represents the number of lines of code in each component of this particular model. Image: Kaitlin Alexander, Steve Easterbrook]
In order to gain useful insights, we need climate models that behave realistically. Climate modelers are always working to develop an ever more faithful representation of the planet’s climate system. At every step along the way, the models are compared to as much real-world data as possible. They’re never perfect, but these comparisons give us a sense for what the model can do well and where it veers off track. That knowledge guides the use of the model, in that it tells us which results are robust and which are too uncertain to be relied upon.

Andrew Weaver, a researcher at the University of Victoria, uses climate models to study many aspects of the climate system and anthropogenic climate change. Weaver described the model evaluation process as including three general phases. First, you see how the model simulates a stable climate with characteristics like the modern day. “You basically take a very long run, a so-called ‘control run,’” Weaver told Ars. “You just do perpetual present-day type conditions. And you look at the statistics of the system and say, ‘Does this model give me a good representation of El Niño? Does it give me a good representation of Arctic Oscillation? Do I see seasonal cycles in here? Do trees grow where they should grow? Is the carbon cycle balanced?’ ”

Next, the model is run in changing conditions, simulating the last couple centuries using our best estimates of the climate “forcings” (or drivers of change) at work over that time period. Those forcings include solar activity, volcanic eruptions, changing greenhouse gas concentrations, and human modifications of the landscape. “What has happened, of course, is that people have cut down trees and created pasture, so you actually have to artificially come in and cut down trees and turn it into pasture, and you have to account for this human effect on the climate system,” Weaver said.

The results are compared to observations of things like changing global temperatures, local temperatures, and precipitation patterns. Did the model capture the big picture? How about the fine details? Which fine details did it simulate poorly—and why might that be?
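Big-picture and fine-detail comparisons of this kind often start from simple area-weighted statistics. A minimal sketch follows; the fields here are random stand-ins on a hypothetical one-degree grid.

```python
# Sketch of a basic model-vs-observation comparison: area-weighted
# bias and RMSE over a global grid. The fields are random stand-ins.
import numpy as np

def compare(model, obs, weights):
    """Return area-weighted bias and root-mean-square error."""
    w = weights / weights.sum()
    bias = np.sum(w * (model - obs))
    rmse = np.sqrt(np.sum(w * (model - obs) ** 2))
    return bias, rmse

lat = np.linspace(-89.5, 89.5, 180)
# Grid cells shrink toward the poles, so weight by cos(latitude).
weights = np.cos(np.radians(lat))[:, None] * np.ones((180, 360))

model_precip = np.random.rand(180, 360)   # stand-ins for real fields
obs_precip = np.random.rand(180, 360)
print(compare(model_precip, obs_precip, weights))
```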
[Figure: Comparison of observed (top) and simulated (bottom) average annual precipitation between 1980 and 1999. Image: IPCC]
At this point, the model is set loose on interesting climatic periods in the past. Here, the observations are fuzzier. Proxy records of climate, like those derived from ice cores and ocean sediment cores, track the big-picture changes well but can’t provide the same level of local detail we have for the past century. Still, you can see if the model captures the unique characteristics of that period and whatever regional patterns we’ve been able to identify.

This is what models go through before researchers start using them to investigate questions or provide estimates for summary reports like those produced for the Intergovernmental Panel on Climate Change (IPCC).

Coding the climate

Some voices in the public debate over climate science have been critical of the fact that there is no standardized, independent testing protocol for climate models like those used for commercial and engineering applications. Climate scientists have responded that climate models are different enough from commercial and engineering software that such an "independent verification and validation" process would be ill-suited to them.

Steve Easterbrook, a professor of computer science at the University of Toronto, has been studying climate models for several years. “I’d done a lot of research in the past studying the development of commercial and open source software systems, including four years with NASA studying the verification and validation processes used on their spacecraft flight control software,” he told Ars.

When Easterbrook started looking into the processes followed by climate modeling groups, he was surprised by what he found.

“I expected to see a messy process, dominated by quick fixes and muddling through, as that’s the typical practice in much small-scale scientific software. What I found instead was a community that takes very seriously the importance of rigorous testing, and which is already using most of the tools a modern software development company would use (version control, automated testing, bug tracking systems, a planned release cycle, etc.).”

“I was blown away by the testing process that every proposed change to the model has to go through,” Easterbrook wrote.

“Basically, each change is set up like a scientific experiment, with a hypothesis describing the expected improvement in the simulation results. The old and new versions of the code are then treated as the two experimental conditions. They are run on the same simulations, and the results are compared in detail to see if the hypothesis was correct. Only after convincing each other that the change really does offer an improvement is it accepted into the model baseline.”
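In code terms, the workflow Easterbrook describes might look something like the following sketch, with all names hypothetical: the old and new versions are run on the same simulation, and the change is accepted only if it actually improves the fit to observations.

```python
# Sketch of change-as-experiment testing: treat the old and new code
# versions as two experimental conditions, run the same simulation,
# and check the hypothesis of improvement. All names are hypothetical.
import numpy as np

def rmse(field, reference):
    """Root-mean-square error of a simulated field against a reference."""
    return float(np.sqrt(np.mean((field - reference) ** 2)))

def evaluate_change(run_old, run_new, observations, tolerance=0.0):
    """Return True if the new version improves the fit to observations."""
    err_old = rmse(run_old, observations)
    err_new = rmse(run_new, observations)
    print(f"old RMSE={err_old:.3f}  new RMSE={err_new:.3f}")
    return err_new < err_old - tolerance

# Usage: load the two runs plus an observational dataset, then decide
# whether the change is accepted into the model baseline.
```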

Easterbrook spent two months at the UK Met Office Hadley Centre, observing and describing the operations of the climate modeling group (which is about 200 scientists strong). He looked at everything from code efficiency to debugging to the development process. He couldn’t find much to critique, concluding that “it is hard to identify potential for radical improvements in the efficiency of what is a ‘grand challenge’ science and software engineering problem.”

Easterbrook has argued against the idea that an independent verification and validation protocol could usefully be applied to climate models. One problem he sees is that climate models are living scientific tools that are constantly evolving rather than pieces of software built to achieve a certain goal. There is, for the most part, no final product to ship out the door. There’s no absolute standard to compare it against either.

To give one example, adding more realistic physics or chemistry to some component of a model sometimes makes simulations fit some observations less well. Whether you add it or not then depends on what you’re trying to achieve. Is the primary test of the model to match certain observations or to provide the most realistic possible representation of the processes that drive the climate system? And which observations are the most important to match? Patterns of cloud cover? Sea surface temperature?

As more features have been added, current models have become much more sophisticated than models were 20 years ago, so the standards by which they’re judged have tightened. It’s entirely possible that earlier models would have failed testing that today’s models would pass. But that doesn’t mean that the older models were useless; they may have just gotten fewer physical processes right or had a much lower resolution.

If, as Easterbrook argues, the models are essentially manifestations of the scientific community’s best available knowledge, there’s already a process in place to evaluate them—science. Experiments are replicated by other groups using their own models. Individual peer-reviewed studies are considered in the context of the accumulated knowledge of climate science. Climate models are not so different from other methods of inquiry in that a new scientific method must be invented especially for them.

Firing up the wayback machine

The individual researchers who are part of these modeling efforts work on very different aspects of the model, and each requires a slightly different way of doing things. Bette Otto-Bliesner works on the Community Earth System Model at the National Center for Atmospheric Research (which recently opened a new supercomputing center). Her research focuses on using climate models to understand past climate, working out the mechanisms that drove the events recorded in things like ocean sediment cores. “My research goal is to understand the uncertainties in the climate and Earth system responses to forcings using past time periods to provide more confidence in our projections of future change,” Otto-Bliesner told Ars.

Proxy records of climate from cores of ice or ocean sediments are limited to providing information about the geographic area from which they were collected, so climate models can help fill in the rest of the global picture. A model simulation of actual events—say, an immense ice-dammed lake draining into the North Atlantic and disrupting ocean circulation—can be compared to a network of proxy records to see if the simulated climate impact is consistent with what the proxies show. If the match is poor, then perhaps the observed change in climate was caused by something else.

Otto-Bliesner’s group is working to take this comparison one step further by having the model simulate the processes that create the proxy records as well. Instead of comparing the model to the interpretation of the proxy record data (such as temperature changes inferred from shifting isotope ratios), that data could be compared directly to a virtual version of the isotopes themselves, one produced by the model.
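A minimal sketch of that direct comparison, assuming the model has produced a virtual isotope series at a core site; the series here are random stand-ins, and the function name is hypothetical.

```python
# Sketch: compare forward-modeled proxy output with a measured core
# record via simple correlation. The inputs are stand-ins for
# simulated and measured d18O at one core site.
import numpy as np

def proxy_agreement(simulated_d18o, measured_d18o):
    """Pearson correlation between simulated and measured isotope series."""
    return float(np.corrcoef(simulated_d18o, measured_d18o)[0, 1])

rng = np.random.default_rng(0)
sim = rng.normal(size=200)
obs = 0.7 * sim + 0.3 * rng.normal(size=200)   # partly correlated stand-in
print(f"r = {proxy_agreement(sim, obs):.2f}")
# A poor correlation suggests the simulated event (say, a meltwater
# pulse) is not what the proxy actually recorded.
```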

These paleoclimate simulations can serve to evaluate a model as well. The model can be run for interesting time periods, like the end of the last ice age, to see how well it simulates changes in temperature and ocean circulation. “We want to keep our paleo-simulations [separate] as an independent test of our models to changed forcings, so they are not included in the development process,” Otto-Bliesner told Ars. Since the climate was very different at times in the past, these tests help illuminate a model’s strengths and weaknesses.
[Figure: Snapshot from an experiment simulating the last 22,000 years. In the graph at the bottom, the dark line represents simulated surface temperature over Greenland and the lighter line shows data from a Greenland ice core. Image: National Center for Atmospheric Research / University Corporation for Atmospheric Research]

Setting the bar

Gavin Schmidt, a climate researcher at the NASA Goddard Institute for Space Studies, is more involved in the development itself. “I explore issues like how one evaluates [climate] models, how comparisons between models and observations should be done, and how one builds credibility in predictions,” he told Ars.

Improving the model means better simulating physical processes, Schmidt says, which doesn’t necessarily improve the large-scale match with every set of observations. “There are always observational datasets that show a mismatch to the model—either regionally or in time,” Schmidt explained. “Some of these mismatches are persistent (i.e., we haven’t found any way to alleviate them); some are related to issues/parameters that we have more of a handle on, and so they can be reduced in the next iteration. One problem is that in fixing one problem one often makes something else worse. Therefore, it is a balancing act that each model center does a little differently.”

One surprisingly common misconception about climate models is that they're just exercises in curve-fitting: the global average temperature record is fed into the model, which matches that trend and spits out a simulation just like it. In this (mistaken) view, a model that compares well with reality is a foregone conclusion of the fitting process, and so says nothing about whether climate models can be trusted to usefully project future trends. But this line of thinking is wrong for several reasons.

There’s obviously more to a climate model than a graph of global average temperature. Some parameterizations—those stand-ins for processes that occur at scales finer than a grid cell—are tuned to match observations. After all, they are attempts to describe a process in terms of its large-scale results. But successful parameterizations aren’t used as a gauge of how well the model is reproducing reality. “Obviously, since these factors are tuned for, they don’t count as a model success. However, the model evaluations span a much wider and deeper set of observations, and when you do historical or paleoclimate simulations, none of the data you are interested in has been tuned for,” Schmidt told Ars.

[Figure: Example output showing average annual surface temperature from the NASA GISS ModelE. Image: NASA GISS]

Why so cirrus?

Many of the most important parameterizations involve the complex behavior of clouds. Representing these processes effectively in a climate model is a key challenge, not just because they happen at scales far smaller than grid cells but because clouds play such a big role in the climate system. Storm patterns affect regional climate in many ways, and the way clouds respond to a warming climate could either enhance or partially offset the temperature change.

Tony Del Genio, another researcher at the NASA Goddard Institute for Space Studies, works on improving the way models simulate clouds. “The real world is more complicated than any model of it,” Del Genio told Ars. “Given the limited computing and human resources, we have to prioritize. We try to anticipate which processes that are missing from the model might be most important to include in the next-generation version (not everything that happens in the atmosphere is important to climate).”

“Once we identify a physical process we want to add or improve, we start with whatever fundamental understanding of the process that we have, and then we try to develop a way to approximately represent it in terms of the variables in the model (temperature, humidity, etc.) and write computer code to represent that,” Del Genio said. “We then run the model with the new process in it and we look for two things: whether the process as we have portrayed it behaves the way it does in the real world and whether or not it makes some aspect of the model’s climate more realistic. We do this by comparison to observations, either field experiment, satellite, or surface remote sensing observations, or by comparing to fine-scale models that simulate individual cloud systems.”

Del Genio says that while modelers used to focus more on whether the model simulations looked like the average conditions for an area, they’ve learned that other types of behavior—like large-scale weather patterns— are better indicators of the usefulness of a model for projecting into the future. “A good example of that is something called the Madden-Julian Oscillation (MJO for short), which most people in the US have probably never heard of,” Del Genio said. “The MJO causes alternating periods of very rainy and then mostly clear weather over periods of a month or so over the Indian Ocean and in southeast Asia and is very important to people in that part of the world. It also affects winter rainfall in the western US. It turns out that whether a model simulates the MJO or not depends strongly on how one represents the clouds that develop into thunderstorms in the model, so we observe it closely and try hard to get it right.”

Del Genio also gets to apply his knowledge and skills to other planets. Using the extremely limited information we have about the atmospheres of other planets, models can help work out how they behave. “For other planets, we are still asking basic questions about how a given planet’s atmosphere works—how fast do its winds blow and why, does it have storms like those on Earth, are those storms made of water clouds like on Earth, and why one planet differs from another,” Del Genio said.

Ice, on the rocks

While Tony Del Genio has his head in the clouds and outward into the Solar System beyond, Penn State glaciologist Richard Alley stands on ice sheets miles thick, thinking about what’s going on beneath his feet. Instead of trying to model the whole climate system, he’s focused on the behavior of valley glaciers and ice sheets. “An ice sheet is a two-mile-thick, one-continent-wide pile of old snow squeezed to ice under the weight of more snow and spreading under its own weight,” Alley told Ars. “The impetus for flow is essentially the excess pressure inside the ice compared to outside, and it’s usually quantified as being the product of the ice density, gravitational acceleration, thickness of ice above the point you’re talking about, and surface slope.”
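Written out as a formula (the standard glaciological relation Alley is quantifying):

$$
\tau_d = \rho_i \, g \, H \sin\alpha,
$$

with $\rho_i$ the ice density, $g$ gravitational acceleration, $H$ the ice thickness above the point in question, and $\alpha$ the surface slope.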

Ice sheet models use the equations that describe that flow of ice to simulate how the ice sheet changes over time in response to outside factors. The size of an ice sheet, like a bank account, is determined by the balance of gains and losses. Increase the amount of melting going on at the edges of the ice sheet and it will shrink. Increase the amount of snowfall over the cold, central region of the ice sheet and it will grow. Lubricate the base of the ice sheet with liquid water, and it may flow faster to the sea, causing an overall loss of ice.
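The bank-account analogy translates directly into a toy balance equation. In the sketch below, all numbers are illustrative placeholders, not measurements.

```python
# Toy "bank account" mass balance for an ice sheet, per the analogy
# above: volume change = accumulation - melt - discharge. The numbers
# are illustrative placeholders, not measurements.
def step_volume(volume_km3, accumulation, melt, discharge, dt_years=1.0):
    """Advance ice-sheet volume by one time step (fluxes in km^3/yr)."""
    return volume_km3 + dt_years * (accumulation - melt - discharge)

volume = 2.9e6   # rough order of Greenland's ice volume, km^3
for year in range(100):
    volume = step_volume(volume, accumulation=700.0, melt=500.0, discharge=450.0)
print(f"Volume after 100 years: {volume:.3e} km^3")  # net loss of 250 km^3/yr
```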

These models are complex and detailed enough that they’re usually run on their own rather than within a climate model that is already busy trying to handle the rest of the planet. Depending on the experiment being run with the model, climate conditions simulated by another model might be imported or a simpler, pre-determined scenario might suffice.

Like global climate models, ice sheet models can also be evaluated against what we know about the past. “Does the model put ice in places that ice was known to have been and not in places where ice was absent?” Alley said. “Are the fluctuations of ice in response to orbital forcing in the past configuration consistent with the reconstructed changes in sea level based on coastal indicators or isotopic composition of the ocean as inferred from ratios in particular shells in sediment cores?”

All this work eventually contributes to our understanding of how the ice sheet is likely to behave in the future. “For these projections to be reliable, we want to see similar behavior in a range of models, from simple to complex, run by different groups, and to understand physically why the models are producing the results they do; we’re especially confident if the paleoclimatic record shows a similar response to similar forcings in the past, and if we see the projected behavior emerging now in response to the recent human and natural forcings,” Alley said. “With all four—physical understanding, agreement in a range of models, observed in paleo and emerging now—we’re pretty confident; with fewer, less so.”

Along with providing better estimates of how ice sheets will contribute to sea level rise, ice sheet models also help generate research questions. By revealing the biggest sources of uncertainty, models can point to the types of measurements and research that will yield the greatest bang for the buck.

[Figure: Simulation of ice sheet elevation at the peak of the last ice age, using the Parallel Ice Sheet Model and the ECHAM5 climate model. Image: Florian Ziemen, Christian Rodehacke, Uwe Mikolajewicz (Max Planck Institute for Meteorology)]

Community service

There’s another way in which these climate models are probed—by comparing them with each other. Since there are so many groups of researchers independently building their own models to approximate the climate system, the similarities and differences of their simulations can be illuminating.

Observational data is necessarily limited, but every single thing in a model can be examined. That makes model-to-model comparison more of an apples-to-apples affair when they’re run using the same inputs (like greenhouse gas emissions scenarios). The cause of a poor match between some portion of a model and reality isn’t always obvious, whereas it could jump out when the results are compared to those produced by another model.

There are many such “model intercomparison projects,” including ones focused on atmospheric models, paleoclimate simulations, or geoengineering research. The largest is the Coupled Model Intercomparison Project (CMIP), which has become an important resource for the Intergovernmental Panel on Climate Change reports. What started in 1995 as a simple project blossomed into an enormously useful organizing force for an abundance of research.

Each phase of the project includes a set of experiments chosen by the modeling community. In the latest round, for example, the models have been investigating short-term, decadal predictions, the way clouds change in a warming climate, and a new technique for making comparisons between model results and atmospheric data from satellites.

Apart from helping research groups improve their models, CMIP also makes climate simulations from all the models involved accessible to other researchers. Interested in the future behavior of Himalayan glaciers? Or the economic impact of changes in precipitation over the US? Simulations from a variety of models for a range of emissions scenarios are conveniently available in one place and in standardized formats. In a way, that coordination also increases the value of the studies that use this data. If three different studies on species migration caused by climate change each used arbitrarily different scenarios for the future, comparing their results could be more difficult.

The most visible product of CMIP has probably been its contribution to the IPCC reports. When the reports show model ensembles (many simulations averaged together), they’re pulling from the CMIP collection. Rather than choosing a preferred model, the IPCC essentially works from the average of all of them, while the range of their results is used as an indicator of uncertainty. In this way, the work of independent modeling groups around the world is aggregated to help inform policy makers.
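The ensemble treatment itself is straightforward; a sketch follows, with a random stand-in for the stack of per-model global-temperature time series on a common time axis.

```python
# Sketch of the multi-model ensemble treatment described above: the
# ensemble mean as the central estimate, the across-model range as an
# uncertainty indicator. The runs array is a random stand-in.
import numpy as np

# 58 hypothetical model runs, 150 years each, drifting around 14 C.
runs = np.random.randn(58, 150).cumsum(axis=1) * 0.01 + 14.0

ensemble_mean = runs.mean(axis=0)   # central estimate per year
spread_lo = runs.min(axis=0)        # lower bound of the model range
spread_hi = runs.max(axis=0)        # upper bound of the model range
print(ensemble_mean[-1], spread_lo[-1], spread_hi[-1])
```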

[Figure: Average (red line) of 58 model simulations (yellow lines) of global average temperature compared to observations (black line). Image: IPCC]

No crystal ball—but no magic 8 ball, either

If you only tune in to public arguments about climate change or read about the latest study that uses climate models, it’s easy to lose sight of the truly extraordinary achievement those models represent. As Andrew Weaver told Ars, “What is so remarkable about these climate models is that it really shows how much we know about the physics and chemistry of the atmosphere, because they’re ultimately driven by one thing—that is, the Sun. So you start with these equations, and you start these equations with a world that has no moisture in the atmosphere that just has seeds on land but has no trees anywhere, that has an ocean that has a constant temperature and a constant amount of salt in it, and it has no sea ice, and all you do is turn it on. [Flick on] the Sun, and you see this model predict a system that looks so much like the real world. It predicts storm tracks where they should be, it predicts ocean circulation where it should be, it grows trees where it should, it grows a carbon cycle—it really is remarkable.”

But climate scientists know models are just scientific tools—nothing more. In studying the practices of climate modeling groups, Steve Easterbrook saw this firsthand. “One of the most common uses of the models is to look for surprises—places where the model does something unexpected, primarily as a way of probing the boundaries of what we know and what we can simulate,” he said. “The models are perfectly suited for this. They get the basic physical processes right but often throw up surprises in the complex interactions between different parts of the Earth system. It is in these areas where the scientific knowledge is weakest. So the models help guide the scientific process.”

“So I have tremendous respect for what the models are able to do (actually, I’d say it’s mind-blowing), but that’s a long way from saying that any one model can give accurate forecasts of climate change in the future on any timescale,” Easterbrook continued. “I’m particularly impressed by how much this problem is actively acknowledged and discussed in the climate modeling community and how cautious the modelers are in working to avoid any possible over-interpretation of model results.”

“One of the biggest sources of confidence in the models is that they give results that are broadly consistent with one another (despite some very different scientific choices in different models), and they give results that are consistent with the available data and current theory,” Easterbrook said. And while they’re being developed, the rest of the broad field of climate science is hard at work gathering more data and developing our theoretical understanding of the climate system—information that will inform the next generation of models.

The guiding principle in modeling of any kind was summarized by George E.P. Box when he wrote that “all models are wrong, but some are useful.” Climate scientists work hard to ensure that their models are useful, whether to understand what happened in the past or what could happen in the future.

Every projection showing multiple scenarios for future greenhouse gas emissions illustrates the present moment as a constantly shifting crossroads—the point where all future paths diverge, with their course determined using climate models. Armed with that map, we get to decide which of the possible paths we are going to make reality. The more we understand about the climate system and the more realistically climate models behave, the more detailed that map becomes. There’s always more to work out, but we’ve already advanced well past the stage where we need to ask for directions.

http://arstechnica.com/science/2013/09/why-trust-climate-models-its-a-matter-of-simple-science/