AODA Blog

Dark Age America: The Population Implosion

Wed, 2014-08-27 17:31
The three environmental shifts discussed in earlier posts in this sequence—the ecological impacts of a sharply warmer and drier climate, the flooding of coastal regions due to rising sea levels, and the long-term consequences of industrial America’s frankly brainless dumping of persistent radiological and chemical poisons—all involve changes to the North American continent that will endure straight through the deindustrial dark age ahead, and will help shape the history of the successor cultures that will rise amid our ruins. For millennia to come, the peoples of North America will have to contend with drastically expanded deserts, coastlines that in some regions will be many miles further inland than they are today, and the presence of dead zones where nuclear or chemical wastes in the soil and water make human settlement impossible.
All these factors mean, among other things, that deindustrial North America will support many fewer people than it did in 1880 or so, before new agricultural technologies dependent on fossil fuels launched the population boom that is peaking in our time. Now of course this also implies that deindustrial North America will support many, many fewer people than it does today. For obvious reasons, it’s worth talking about the processes by which today’s seriously overpopulated North America will become the sparsely populated continent of the coming dark age—but that’s going to involve a confrontation with a certain kind of petrified irrelevancy all too common in our time.
Every few weeks, the comments page of this blog fields something insisting that I’m ignoring the role of overpopulation in the crisis of our time, and demanding that I say or do something about that. In point of fact, I’ve said quite a bit about overpopulation on this blog over the years, dating back to this post from 2007. What I’ve said about it, though, doesn’t follow either one of the two officially sanctioned scripts into which discussions of overpopulation are inevitably shoehorned in today’s industrial world; the comments I get are thus basically objecting to the fact that I’m not toeing the party line.
Like most cultural phenomena in today’s industrial world, the scripts just mentioned hew closely to the faux-liberal and faux-conservative narratives that dominate so much of contemporary thought. (I insist on the prefix, as what passes for political thought these days has essentially nothing to do with either liberalism or conservatism as these were understood as little as a few decades ago.) The scripts differ along the usual lines: that is to say, the faux-liberal script is well-meaning and ineffectual, while the faux-conservative script is practicable and evil.
Thus the faux-liberal script insists that overpopulation is a terrible problem, and we ought to do something about it, and the things we should do about it are all things that don’t work, won’t work, and have been tried over and over again for decades without having the slightest effect on the situation. The faux-conservative script insists that overpopulation is a terrible problem, but only because it’s people of, ahem, the wrong skin color who are overpopulating, ahem, our country: that is, overpopulation means immigration, and immigration means let’s throw buckets of gasoline onto the flames of ethnic conflict, so it can play its standard role in ripping apart a dying civilization with even more verve than usual.
Overpopulation and immigration policy are not the same thing; neither are depopulation and the mass migrations of whole peoples for which German historians of the post-Roman dark ages coined the neat term völkerwanderung, which are the corresponding phenomena in eras of decline and fall. For that reason, the faux-conservative side of the debate, along with the usually unmentioned realities of immigration policy in today’s America and the far greater and more troubling realities of mass migration and ethnogenesis that will follow in due time, will be left for next week’s post. For now I want to talk about overpopulation as such, and therefore about the faux-liberal side of the debate and the stark realities of depopulation that are waiting in the future.
All this needs to be put in its proper context. In 1962, the year I was born, there were a little over three billion human beings on this planet. Today, there are more than seven billion of us. That staggering increase in human numbers has played an immense and disastrous role in backing today’s industrial world into the corner where it now finds itself. Among all the forces driving us toward an ugly future, the raw pressure of human overpopulation, with the huge and rising resource requirements it entails, is among the most important.
That much is clear. What to do about it is something else again. You’ll still hear people insisting that campaigns to convince people to limit their reproduction voluntarily ought to do the trick, but such campaigns have been ongoing since well before I was born, and human numbers more than doubled anyway. It bears repeating that if a strategy has failed every time it’s been tried, insisting that we ought to do it again isn’t a useful suggestion. That applies not only to the campaigns just noted, but to all the other proposals to slow or stop population growth that have been tried repeatedly and failed just as repeatedly over the decades just past.
These days, a great deal of the hopeful talk around the subject of limits to overpopulation has refocused on what’s called the demographic transition: the process, visible in the population history of most of today’s industrial nations, whereby people start voluntarily reducing their reproduction when their income and access to resources rise above a certain level. It’s a real effect, though its causes are far from clear. The problem here is simply that the resource base that would allow enough of the world’s population to reach the income and access to resources necessary to trigger a worldwide demographic transition doesn’t exist.
As fossil fuels and a galaxy of other nonrenewable resources slide down the slope of depletion at varying rates, for that matter, it’s becoming increasingly hard for people in the industrial nations to maintain their familiar standards of living. It may be worth noting that this hasn’t caused a sudden upward spike in population growth in those countries where downward mobility has become most visible. The demographic transition, in other words, doesn’t work in reverse, and this points to a crucial fact that hasn’t necessarily been given the weight it deserves in conversations about overpopulation.
The vast surge in human numbers that dominates the demographic history of modern times is wholly a phenomenon of the industrial age. Other historical periods have seen modest population increases, but nothing on the same scale, and those have reversed themselves promptly when ecological limits came into play. Whatever the specific factors and forces that drove the population boom, then, it’s a pretty safe bet that the underlying cause was the one factor present in industrial civilization that hasn’t played a significant role in any other human society: the exploitation of vast quantities of extrasomatic energy—that is, energy that doesn’t come into play by means of human or animal muscle. Place the curve of increasing energy per capita worldwide next to the curve of human population worldwide, and the two move very nearly in lockstep: thus it’s fair to say that human beings, like yeast, respond to increased access to energy with increased reproduction.
Does that mean that we’re going to have to deal with soaring population worldwide for the foreseeable future? No, and hard planetary limits to resource extraction are the reasons why. Without the huge energy subsidy to agriculture contributed by fossil fuels, producing enough food to support seven billion people won’t be possible. We saw a preview of the consequences in 2008 and 2009, when the spike in petroleum prices caused a corresponding spike in food prices and a great many people around the world found themselves scrambling to get enough to eat on any terms at all. The riots and revolutions that followed grabbed the headlines, but another shift that happened around the same time deserves more attention: birth rates in many Third World countries decreased noticeably, and have continued to trend downward since then.
The same phenomenon can be seen elsewhere. Since the collapse of the Soviet Union, most of the formerly Soviet republics have seen steep declines in rates of live birth, life expectancy, and most other measures of public health, while death rates have climbed well above birth rates and stayed there. For that matter, since 2008, birth rates in the United States have dropped even further below the rate of replacement than they were before that time; immigration is the only reason the population of the United States doesn’t register declines year after year.
This is the wave of the future.  As fossil fuel and other resources continue to deplete, and economies dependent on those resources become less and less able to provide people with the necessities of life, the population boom will turn into a population bust. The base scenario in 1972’s The Limits to Growth, still the most accurate (and thus inevitably the most vilified) model of the future into which we’re stumbling blindly just now, put the peak of global population somewhere around 2030: that is, sixteen years from now. Recent declines in birth rates in areas that were once hotbeds of population growth, such as Latin America and the Middle East, can be seen as the leveling off that always occurs in a population curve before decline sets in.
That decline is likely to go very far indeed. That’s partly a matter of straightforward logic: since global population has been artificially inflated by pouring extrasomatic energy into boosting the food supply and providing other necessary resources to human beings, the exhaustion of economically extractable reserves of the fossil fuels that made that process possible will knock the props out from under global population figures. Still, historical parallels also have quite a bit to offer here: extreme depopulation is a common feature of the decline and fall of civilizations, with up to 95% population loss over the one to three centuries that the fall of a civilization usually takes.
Suggest that to people nowadays and, once you get past the usual reactions of denial and disbelief, the standard assumption is that population declines so severe could only happen if there were catastrophes on a truly gargantuan scale. That’s an easy assumption to make, but it doesn’t happen to be true. Just as it didn’t take vast public orgies of copulation and childbirth to double the planet’s population over the last half-century, it wouldn’t take equivalent exercises in mass death to halve the planet’s population over the same time frame. The ordinary processes of demography can do the trick all by themselves.
Let’s explore that by way of a thought experiment. Between family, friends, coworkers, and the others that you meet in the course of your daily activities, you probably know something close to a hundred people. Every so often, in the ordinary course of events, one of them dies—depending on the age and social status of the people you know, that might happen once a year, once every two years, or what have you. Take a moment to recall the most recent death in your social circle, and the one before that, to help put the rest of the thought experiment in context.
Now imagine that from this day onward, among the hundred people you know, one additional person—one person more than you would otherwise expect to die—dies every year, while the rate of birth remains the same as it is now. Imagine that modest increase in the death rate affecting the people you know. One year, an elderly relative of yours doesn’t wake up one morning; the next, a barista at the place where you get coffee on the way to work dies of cancer; the year after that, a coworker’s child comes down with an infection the doctors can’t treat, and so on.  A noticeable shift? Granted, but it’s not Armageddon; you attend a few more funerals than you’re used to, make friends with the new barista, and go about your life until one of those additional deaths is yours.
Now take that process and extrapolate it out. (Those of my readers who have the necessary math skills should take the time to crunch the numbers themselves.) Over the course of three centuries, an increase in the crude death rate of one per cent per annum, given an unchanged birth rate, is sufficient to reduce a population to five per cent of its original level. Vast catastrophes need not apply; of the traditional four horsemen, War, Famine, and Pestilence can sit around drinking beer and playing poker. The fourth horseman, in the shape of a modest change in crude death rates, can do the job all by himself.
Now imagine the same scenario, except that there are two additional deaths each year in your social circle, rather than one. That would be considerably more noticeable, but it still doesn’t look like the end of the world—at least until you do the math. An increase in the crude death rate of two per cent per annum, given an unchanged birth rate, is enough to reduce a population to roughly an eighth of its original level within a single century, and to five per cent of its original level in about a century and a half. In global terms, if world population peaks around 8 billion in 2030, a decline on that scale would leave roughly a billion people on the planet by 2130, and something like four hundred million by around 2180.
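For readers who would rather let a machine do the crunching, here is a minimal Python sketch of the compound-decline arithmetic behind the thought experiment; the function names are my own, and the only assumption built in is the one stated above—a steady excess death rate, with births otherwise balancing deaths.

```python
import math

# Minimal sketch of the arithmetic in the thought experiment above.
# Assumption: births otherwise balance deaths, so an excess death rate of r
# per year shrinks the population by a factor of (1 - r) annually.

def remaining_fraction(excess_death_rate, years):
    """Fraction of the original population left after the given number of years."""
    return (1 - excess_death_rate) ** years

def years_to_fraction(excess_death_rate, target_fraction):
    """Years needed for the population to fall to the target fraction."""
    return math.log(target_fraction) / math.log(1 - excess_death_rate)

print(remaining_fraction(0.01, 300))  # ~0.049: about five per cent left after three centuries
print(remaining_fraction(0.02, 100))  # ~0.133: roughly an eighth left after a single century
print(years_to_fraction(0.02, 0.05))  # ~148: years to reach five per cent at two per cent per annum
```

Run it and the figures quoted above fall out directly: one extra death per hundred people per year takes three centuries to cut a population to a twentieth of its starting size, while two extra deaths per hundred do roughly the same work in about a century and a half.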
In the real world, of course, things are not as simple or smooth as they are in the thought experiment just offered. Birth rates are subject to complex pressures and vary up and down depending on the specific pressures a population faces, and even small increases in infant and child mortality have a disproportionate effect by removing potential breeding pairs from the population before they can reproduce. Meanwhile, population declines are rarely anything like so even as  the thought experiment suggests; those other three horsemen, in particular, tend to get bored of their poker game at intervals and go riding out to give the guy with the scythe some help with the harvest. War, famine, and pestilence are common events in the decline and fall of a civilization, and the twilight of the industrial world is likely to get its fair share of them.
Thus it probably won’t be a matter of two more deaths a year, every year. Instead, one year, war breaks out, most of the young men in town get drafted, and half of them come back in body bags.  Another year, after a string of bad harvests, the flu comes through, and a lot of people who would have shaken it off under better conditions are just that little bit too malnourished to survive.  Yet another year, a virus shaken out of its tropical home by climate change and ecosystem disruption goes through town, and fifteen per cent of the population dies in eight ghastly months. That’s the way population declines happen in history.
In the twilight years of the Roman world, for example, a steady demographic contraction was overlaid by civil wars, barbarian invasions, economic crises, famines, and epidemics. The total population decline varied significantly from one region to another, but even the relatively stable parts of the Eastern Empire seem to have lost around half their population, while some areas of the Western Empire suffered far more drastic losses. Britain in particular was transformed from a rich, populous, and largely urbanized province to a land of silent urban ruins and small, scattered villages of subsistence farmers, where even so simple a technology as wheel-thrown pottery became a lost art.
The classic lowland Maya are another good example along the same lines.  Hammered by climate change and topsoil loss, the Maya heartland went through a rolling collapse a century and a half in length that ended with population levels maybe five per cent of what they’d been at the start of the Terminal Classic period, and most of the great Maya cities became empty ruins rapidly covered by the encroaching jungle. Those of my readers who have seen pictures of tropical foliage burying the pyramids of Tikal and Copan might want to imagine scenes of the same kind in the ruins of Atlanta and Austin a few centuries from now. That’s the kind of thing that happens when an urbanized society suffers severe population loss during the decline and fall of a civilization.
That, in turn, is what has to be factored into any realistic forecast of dark age America: there will be many, many fewer people inhabiting North America a few centuries from now than there are today.  Between the depletion of the fossil fuel resources necessary to maintain today’s hugely inflated numbers and the degradation of North America’s human carrying capacity by climate change, sea level rise, and persistent radiological and chemical pollution, the continent simply won’t be able to support that many people. The current total is about 470 million—35 million in Canada, 314 million in the US, and 121 million in Mexico, according to the latest figures I was able to find—and something close to five per cent of that—say, 20 to 25 million—might be a reasonable midrange estimate for the human population of the North American continent when the population implosion finally bottoms out a few centuries from now.
Now of course those 20 to 25 million people won’t be scattered evenly across the continent. There will be very large regions—for example, the nearly lifeless, sun-blasted wastelands that climate change will make of the southern Great Plains and the Sonoran desert—where human settlement will be as sparse as it is today in the bleakest parts of the Sahara or the Rub’ al-Khali of southern Arabia. There will be other areas—for example, the Great Lakes region and the southern half of Mexico’s great central valley—where population will be relatively dense by Dark Age standards, and towns of modest size may even thrive if they happen to be in defensible locations.
The nomadic herding folk of the midwestern prairies, the villages of the Gulf Coast jungles, and the other human ecologies that will spring up in the varying ecosystems of deindustrial North America will all gradually settle into a more or less stable population level, at which births and deaths balance each other and the consumption of resources stays at or below sustainable levels of production. That’s what happens in human societies that don’t have the dubious advantage of a torrent of nonrenewable energy reserves to distract them temporarily from the hard necessities of survival.
It’s getting to that level that’s going to be a bear. The mechanisms of population contraction are simple enough, and as suggested above, they can have a dramatic impact on historical time scales without cataclysmic impact on the scale of individual lives. No, the difficult part of population contraction is its impact on economic patterns geared to continuous population growth. That’s part of a more general pattern, of course—the brutal impact of the end of growth on an economy that depends on growth to function at all—which has been discussed on this blog several times already, and will require close study in the present sequence of posts.
That examination will begin after we’ve considered the second half of the demography of dark age America: the role of mass migration and ethnogenesis in the birth of the cultures that will emerge on this continent when industrial civilization is a fading memory. That very challenging discussion will occupy next week’s post.

Heading Toward The Sidewalk

Wed, 2014-08-20 18:25
Talking about historical change is one thing when the changes under discussion are at some convenient remove in the past or the future. It’s quite another when the changes are already taking place. That’s one of the things that adds complexity to the project of this blog, because the decline and fall of modern industrial civilization isn’t something that might take place someday, if X or Y or Z happens or doesn’t happen; it’s under way now, all around us, and a good many of the tumults of our time are being driven by the unmentionable but inescapable fact that the process of decline is beginning to pick up speed.

Those tumults are at least as relevant to this blog’s project as the comparable events in the latter years of dead civilizations, and so it’s going to be necessary now and then to pause the current sequence of posts, set aside considerations of the far future for a bit, and take a look at what’s happening here and now. This is going to be one of those weeks, because a signal I’ve been expecting for a couple of years now has finally shown up, and its appearance means that real trouble may be imminent.

This has admittedly happened in a week when the sky is black with birds coming home to roost. I suspect that most of my readers have been paying at least some attention to the Ebola epidemic now spreading across West Africa. Over the last week, the World Health Organization has revealed that official statistics on the epidemic’s toll are significantly understated, the main nongovernmental organization fighting Ebola has admitted that the situation is out of anyone’s control, and a series of events neatly poised between absurdity and horror—a riot in one of Monrovia’s poorest slums directed at an emergency quarantine facility, in which looters made off with linens and bedding contaminated with the Ebola virus, and quarantined patients vanished into the crowd—may shortly plunge Liberia into scenes of a kind not witnessed since the heyday of the Black Death. The possibility that this outbreak may become a global pandemic, while still small, can no longer be dismissed out of hand.

Meanwhile, closer to home, what has become a routine event in today’s America—the casual killing of an unarmed African-American man by the police—has blown up in a decidedly nonroutine fashion, with imagery reminiscent of Cairo’s Tahrir Square being enacted night after night in the St. Louis suburb of Ferguson, Missouri. The culture of militarization and unaccountability that’s entrenched in urban police forces in the United States has been displayed in a highly unflattering light, as police officers dressed for all the world like storm troopers on the set of a bad science fiction movie did their best to act the part, tear-gassing and beating protesters, reporters, and random passersby in an orgy of jackbooted enthusiasm blatant enough that Tea Party Republicans have started to make worried speeches about just how closely this resembles the behavior of a police state.

If the police keep it up, the Arab Spring of a few years back may just be paralleled by an American Autumn. Even if some lingering spark of common sense on the part of state and local authorities heads off that possibility, the next time a white police officer guns down an African-American man for no particular reason—and there will be a next time; such events, as noted above, are routine in the United States these days—the explosion that follows will be even more severe, and the risk that such an explosion may end up driving the emergence of a domestic insurgency is not small. I noted in a post a couple of years back that the American way of war pretty much guarantees that any country conquered by our military will pup an insurgency in short order thereafter; there’s a great deal of irony in the thought that the importation of the same model of warfare into police practice in the US may have exactly the same effect here.

It may come as a surprise to some of my readers that the signal I have in mind is neither of these things. No, it’s not the big volcano in Iceland that’s showing worrying signs of blowing its top, either. It’s an absurdly little thing—a minor book review in an otherwise undistinguished financial-advice blog—and it matters only because it’s a harbinger of something considerably more important.

A glance at the past may be useful here. On September 9, 1929, no less a financial periodical than Barron’s took time off from its usual cheerleading of the stock market’s grand upward movement to denounce an investment analyst named Roger Babson in heated terms. Babson’s crime? Suggesting that the grand upward movement just mentioned was part of a classic speculative bubble, and the bubble’s inevitable bust would cause an economic depression. Babson had been saying this sort of thing all through the stock market boom of the late 1920s, and until that summer, the mainstream financial media simply ignored him, as they ignored everyone else whose sense of economic reality hadn’t gone out to lunch and forgotten to come back.

For those who followed the media, in fact, the summer and fall of 1929 were notable mostly for the fact that a set of beliefs that most people took for granted—above all else, the claim that the stock market could keep on rising indefinitely—suddenly were being loudly defended all over the place, even though next to nobody was attacking them. The June issue of The American Magazine featured an interview with financier Bernard Baruch, insisting that “the economic condition of the world seems on the verge of a great forward movement.” In the July 8 issue of Barron’s, similarly, an article insisted that people who worried about how much debt was propping up the market didn’t understand the role of broker’s loans as a major new investment outlet for corporate money.

As late as October 15, when the great crash was only days away, Professor Irving Fisher of Yale’s economics department made his famous announcement to the media: “Stock prices have reached what looks like a permanently high plateau.” That sort of puffery was business as usual, then as now. Assaulting the critics of the bubble in print, by name, was not. It was only when the market was sliding toward the abyss of the 1929 crash that financial columnists publicly trained their rhetorical guns on the handful of people who had been saying all along that the boom would inevitably bust.

That’s a remarkably common feature of speculative bubbles, and could be traced in any number of historical examples, starting with the tulip bubble in the 17th century Netherlands and going on from there. Some of my readers may well have experienced the same thing for themselves in the not too distant past, during the last stages of the gargantuan real estate bubble that popped so messily in 2008. I certainly did, and a glance back at that experience will help clarify the implications of the signal I noticed in the week just past.

Back when the real estate bubble was soaring to vertiginous and hopelessly unsustainable heights, I used to track its progress on a couple of news aggregator sites, especially Keith Brand’s lively HousingPanic blog. Now and then, as the bubble peaked and began losing air, I would sit down with a glass of scotch, a series of links to the latest absurd comments by real estate promoters, and my copy of John Kenneth Galbraith’s The Great Crash 1929—the source, by the way, of the anecdotes cited above—and enjoy watching the rhetoric used to insist that the 2008 bubble wasn’t a bubble duplicate, in some cases word for word, the rhetoric used for the same purpose in 1929.

All the anti-bubble blogs fielded a steady stream of hostile comments from real estate investors who apparently couldn’t handle the thought that anyone might question their guaranteed ticket to unearned wealth, and Brand’s in particular saw no shortage of bare-knuckle verbal brawls. It was only in the last few months before the bubble burst, though, that pro-bubble blogs started posting personal attacks on Brand and his fellow critics, denouncing them by name in heated and usually inaccurate terms. At the time, I noted the parallel with the Barron’s attack on Roger Babson, and wondered if it meant the same thing; the events that followed showed pretty clearly that it did.

That same point may just have arrived in the fracking bubble—unsurprisingly, since that has followed the standard trajectory of speculative booms in all other respects so far. For some time now, the media has been full of proclamations about America’s allegedly limitless petroleum supply, which resemble nothing so much as the airy claims about stocks made by Bernard Baruch and Irving Fisher back in 1929. Week after week, bloggers and commentators have belabored the concept of peak oil, finding new and ingenious ways to insist that it must somehow be possible to extract infinite amounts of oil from a finite planet; oddly enough, though it’s rare for anyone to speak up for peak oil on these forums, the arguments leveled against it have been getting louder and more shrill as time passes. Until recently, though, I hadn’t encountered the personal attacks that announce the imminence of the bust.

That was before this week. On August 11th, a financial-advice website hosted a fine example of the species, and rather to my surprise—I’m hardly the most influential or widely read critic of the fracking bubble, after all—it was directed at me.

Mind you, I have no objection to hostile reviews of my writing. A number of books by other people have come in for various kinds of rough treatment on this blog, and turnabout here as elsewhere is fair play. I do prefer reviewers, hostile or otherwise, to take the time to read a book of mine before they review it, but that’s not something any writer can count on; reviewers who clearly haven’t so much as opened the cover of the book on which they pass judgment have been the target of barbed remarks in literary circles since at least the 18th century. Still, a review of a book the reviewer hasn’t read is one thing, and a review of a book the author hasn’t written and the publisher hasn’t published is something else again.

That’s basically the case here. The reviewer, a stock market blogger named Andrew McKillop, set out to critique a newly re-edited version of my 2008 book The Long Descent. That came as quite a surprise to me, as well as to New Society Publishers, the publisher of the earlier book, since no such reissue exists. The Long Descent remains in print in its original edition, and my six other books on peak oil and the future of industrial society are, ahem, different books.

My best guess is that McKillop spotted my new title Decline and Fall: The End of Empire and the Future of Democracy in 21st Century America in a bookshop window, and simply jumped to the conclusion that it must be a new release of the earlier book. I’m still not sure whether the result counts as a brilliant bit of surrealist performance art or a new low in what we still jokingly call journalistic ethics; in either case, it’s definitely broken new ground. Still, I hope that McKillop does better research for the people who count on him for stock advice.

Given that starting point, the rest of the review is about what you would expect. I gather that McKillop read a couple of online reviews of The Long Descent and a couple more of Decline and Fall, skimmed over a few randomly chosen posts on this blog, tossed the results together all anyhow, and jumped to the conclusion that the resulting mess was what the book was about. The result is quite a lively little bricolage of misunderstandings, non sequiturs, and straightforward fabrications—I invite anyone who cares to make the attempt to point out the place in my writings, for example, where I contrast catabolic collapse with “anabolic collapse,” whatever on earth that latter might be.

There’s a certain wry amusement to be had from going through the review and trying to figure out exactly how McKillop might have gotten this or that bit of misinformation wedged into his brain, but I’ll leave that as a party game for my readers. The point I’d like to make here is that the appearance of this attempted counterblast in a mainstream financial blog is a warning sign. It suggests that the fracking boom, like previous bubbles when they reached the shoot-the-messenger stage, may well be teetering on the brink of a really spectacular crash—and it’s not the only such sign, either.

The same questions about debt that were asked about the stock market in 1929 and the housing market in 2008 are being asked now, with increasing urgency, about the immense volume of junk bonds that are currently propping up the shale boom. Meanwhile, gas and oil companies are having to drill ever more frantically and invest ever more money to keep production rates from dropping like a rock. Get past the vacuous handwaving about “Saudi America,” and it’s embarrassingly clear that the fracking boom is simply one more debt-fueled speculative orgy destined for one more messy bust. It’s disguised as an energy revolution in exactly the same way that the real estate bubble was disguised as a housing revolution, the tech-stock bubble as a technological revolution, and so on back through the annals of financial delusion as far as you care to go.

Sooner or later—and much more likely sooner than later—the fracking bubble is going to pop. Just how and when that will happen is impossible to know in advance. Even making an intelligent guess at this point would require a detailed knowledge of which banks and investment firms have gotten furthest over their heads in shale leases and the like, which petroleum and natural gas firms have gone out furthest on a financial limb, and so on. That’s the kind of information that the companies in question like to hide from one another, not to mention the general public; it’s thus effectively inaccessible to archdruids, which means that we’ll just have to wait for the bankruptcies, the panic selling, and the wet thud of financiers hitting Wall Street sidewalks to find out which firms won the fiscal irresponsibility sweepstakes this time around.

One way or another, the collapse of the fracking boom bids fair to deliver a body blow to the US economy, at a time when most sectors of that economy have yet to recover from the bruising they received at the hands of the real estate bubble and bust. Depending on how heavily and cluelessly foreign banks and investors have been sucked into the boom—again, hard to say without inside access to closely guarded financial information—the popping of the bubble could sucker-punch national economies elsewhere in the world as well. Either way, it’s going to be messy, and the consequences will likely include a second helping of the same unsavory stew of bailouts for the rich, austerity for the poor, bullying of weaker countries by their stronger neighbors, and the like, that was dished up with such reckless abandon in the aftermath of the 2008 real estate bust. Nor is any of this going to make it easier to deal with potential pandemics, simmering proto-insurgencies in the American heartland, or any of the other entertaining consequences of our headfirst collision with the sidewalks of reality.

The consequences may go further than this. The one detail that sets the fracking bubble apart from the real estate bubble, the tech stock bubble, and their kin further back in economic history is that fracking wasn’t just sold to investors as a way to get rich quick; it was also sold to them, and to the wider public as well, as a way to evade the otherwise inexorable reality of peak oil. 2008, it bears remembering, was not just the year that the real estate bubble crashed, and dragged much of the global economy down with it; it was also the year when all those prophets of perpetual business as usual who insisted that petroleum would never break $60 a barrel or so got to eat crow, deep-fried in light sweet crude, when prices spiked upwards of $140 a barrel. All of a sudden, all those warnings about peak oil that experts had been issuing since the 1950s became a great deal harder to dismiss out of hand.

The fracking bubble thus had mixed parentage; its father may have been the same merciless passion for fleecing the innocent that always sets the cold sick heart of Wall Street aflutter, but its mother was the uneasy dawn of recognition that by ignoring decades of warnings and recklessly burning through the Earth’s finite reserves of fossil fuels just as fast as they could be extracted, the industrial world has backed itself into a corner from which the only way out leads straight down. White’s Law, one of the core concepts of human ecology, points out that economic development is directly correlated with energy per capita; as depletion overtakes production and energy per capita begins to decline, the inevitable result is a long era of economic contraction, in which a galaxy of economic and cultural institutions predicated on continued growth will stop working, and those whose wealth and influence depend on those institutions will be left with few choices short of jumping out a Wall Street window.

The last few years of meretricious handwaving about fracking as the salvation of our fossil-fueled society may thus mark something rather more significant than another round of the pervasive financial fraud that’s become the lifeblood of the US economy in these latter days. It’s one of the latest—and maybe, just maybe, one of the last—of the mental evasions that people in the industrial world have used in the futile but fateful attempt to pretend that pursuing limitless economic growth on a finite and fragile planet is anything other than a guaranteed recipe for disaster. When the fracking bubble goes to its inevitable fate, and most of a decade of babbling about limitless shale oil takes its proper place in the annals of human idiocy, it’s just possible that some significant number of people will realize that the universe is under no obligation to provide us with all the energy and other resources we want, just because we happen to want them. I wouldn’t bet the farm on that, but I think the possibility is there.

One swallow does not a summer make, mind you, and one fumbled attempt at a hostile book review on one website doesn’t prove that the same stage in the speculative bubble cycle that saw frantic denunciations flung at Roger Babson and Keith Brand—the stage that comes immediately before the crash—has arrived this time around. I would encourage my readers to watch for similar denunciations aimed at more influential and respectable fracking-bubble critics such as Richard Heinberg or Kurt Cobb. Once those start showing up, hang onto your hat; it’s going to be a wild ride.

Dark Age America: A Bitter Legacy

Wed, 2014-08-13 16:45
Civilizations normally leave a damaged environment behind them when they fall, and ours shows every sign of following that wearily familiar pattern. The nature and severity of the ecological damage a civilization leaves behind, though, depend on two factors, one obvious, the other less so. The obvious factor derives from the nature of the technologies the civilization deployed in its heyday; the less obvious one depends on how many times those same technologies had been through the same cycle of rise and fall before the civilization under discussion got to them.

There’s an important lesson in this latter factor. Human technologies almost always start off their trajectory through time as environmental disasters looking for a spot marked X, which they inevitably find, and then have the rough edges knocked off them by centuries or millennia of bitter experience. When our species first developed the technologies that enabled hunting bands to take down big game animals, the result was mass slaughter and the extinction of entire species of megafauna, followed by famine and misery; rinse and repeat, and you get the exquisite ecological balance that most hunter-gatherer societies maintained in historic times. In much the same way, early field agriculture yielded bumper crops of topsoil loss and subsistence failure to go along with its less reliable yields of edible grain, and the hard lessons from that experience have driven the rise of more sustainable agricultural systems—a process completed in our time with the emergence of organic agricultural methods that build soil rather than depleting it.

Any brand new mode of human subsistence is thus normally cruising for a bruising, and will get it in due time at the hands of the biosphere. That’s not precisely good news for modern industrial civilization, because ours is a brand new mode of human subsistence; it’s the first human society ever to depend almost entirely on extrasomatic energy—energy, that is, that doesn’t come from human or animal muscles fueled by food crops. In my book The Ecotechnic Future, I’ve suggested that industrial civilization is simply the first and most wasteful example of a new mode of human society, the technic society. Eventually, I proposed, technic societies will achieve the same precise accommodation to ecological reality that hunter-gatherer societies worked out long ago, and agricultural societies have spent the last eight thousand years or so pursuing. Unfortunately, that doesn’t help us much just now.

Modern industrial civilization, in point of fact, has been stunningly clueless in its relationship with the planetary cycles that keep us all alive. Like those early bands of roving hunters who slaughtered every mammoth they could find and then looked around blankly for something to eat, we’ve drawn down the finite stocks of fossil fuels on this planet without the least concern about what the future would bring—well, other than the occasional pious utterance of thoughtstopping mantras of the “Oh, I’m sure they’ll think of something” variety. That’s not the only thing we’ve drawn down recklessly, of course, and the impact of our idiotically short-term thinking on our long-term prospects will be among the most important forces shaping the next five centuries of North America’s future.

Let’s start with one of the most obvious: topsoil, the biologically active layer of soil that can support food crops. On average, as a result of today’s standard agricultural methods, North America’s arable land loses almost three tons of topsoil from each cultivated acre every single year. Most of the topsoil that made North America the breadbasket of the 20th century world is already gone, and at the current rate of loss, all of it will be gone by 2075. That would be bad enough if we could rely on artificial fertilizer to make up for the losses, but by 2075 that won’t be an option: the entire range of chemical fertilizers is made from nonrenewable resources—natural gas is the main feedstock for nitrate fertilizers, rock phosphate for phosphate fertilizers, and so on—and all of these are depleting fast.
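As a rough check on those numbers—a back-of-envelope sketch only, and the figure of about 150 tons of soil per acre-inch is my own assumption rather than something drawn from the sources above—the cited loss rate works out to roughly a fiftieth of an inch of topsoil per cultivated acre per year, which in turn implies that only an inch or so of topsoil would remain today if the whole stock is to be gone by 2075.

```python
# Back-of-envelope check on the topsoil figures above.
# The loss rate is the one cited in the text; the soil weight per acre-inch is my own rough assumption.

TONS_PER_ACRE_INCH = 150            # assumed weight of an acre-inch of topsoil
LOSS_TONS_PER_ACRE_PER_YEAR = 3     # approximate loss rate cited above

inches_lost_per_year = LOSS_TONS_PER_ACRE_PER_YEAR / TONS_PER_ACRE_INCH
print(inches_lost_per_year)                   # 0.02: about a fiftieth of an inch per year

# If the remaining topsoil is exhausted by 2075, the depth implied to be left in 2014:
print((2075 - 2014) * inches_lost_per_year)   # ~1.2 inches
```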

Topsoil loss driven by bad agricultural practices is actually quite a common factor in the collapse of civilizations. Sea-floor cores in the waters around Greece, for example, show a spike in sediment deposition from rapidly eroding topsoil right around the end of the Mycenaean civilization, and another from the latter years of the Roman Empire. If archeologists thousands of years from now try the same test, they’ll find yet another eroded topsoil layer at the bottom of the Gulf of Mexico, the legacy of an agricultural system that put quarterly profits ahead of the relatively modest changes that might have preserved the soil for future generations.

The methods of organic agriculture mentioned earlier could help very significantly with this problem, since those include techniques for preserving existing topsoil, and rebuilding depleted soil at a rate considerably faster than nature’s pace. To make any kind of difference, though, those methods would have to be deployed on a very broad scale, and then passed down through the difficult years ahead. Lacking that, even where desertification driven by climate change doesn’t make farming impossible, a very large part of today’s North American farm belt will likely be unable to support crops for centuries or millennia to come. Eventually, the same slow processes that replenished the soil on land scraped bare by the ice age glaciers will do the same thing to land stripped of topsoil by industrial farming, but “eventually” will not come quickly enough to spare our descendants many hungry days.

The same tune in a different key is currently being played across the world’s oceans, and as a result my readers can look forward, in the not too distant future, to tasting the last piece of seafood they will ever eat. Conservatively managed, the world’s fish stocks could have produced large yields indefinitely, but they were not conservatively managed; where regulation was attempted, political and economic pressure consistently drove catch limits above sustainable levels, and of course cheating was pervasive and the penalties for being caught were merely another cost of doing business. Fishery after fishery has accordingly collapsed, and the increasingly frantic struggle to feed seven billion hungry mouths is unlikely to leave any of those that remain intact for long.

Worse, all of this is happening in oceans that are being hammered by other aspects of our collective ecological stupidity. Global climate change, by boosting the carbon dioxide content of the atmosphere, is acidifying the oceans and causing sweeping shifts in oceanic food chains. Those shifts involve winners as well as losers; where calcium-shelled plankton and corals are suffering population declines, seaweeds and algae, which are not so sensitive to changes in the acid-alkaline balance, are thriving on the increased CO2 in the water—but the fish that feed on seaweeds and algae are not the same as those that feed on shelled plankton and corals, and the resulting changes are whipsawing ocean ecologies.

Close to shore, toxic effluents from human industry and agriculture are also adding to the trouble. The deep oceans, all things considered, offer sparse pickings for most saltwater creatures; the vast majority of ocean life thrives within a few hundred miles of land, where rivers, upwelling zones, and the like provide nutrients in relative abundance. We’re already seeing serious problems with toxic substances concentrating up through oceanic food chains; unless communities close to the water’s edge respond to rising sea levels with consummate care, hauling every source of toxic chemicals out of reach of the waters, that problem is only going to grow worse. Different species react differently to this or that toxin; some kind of aquatic ecosystem will emerge and thrive even in the most toxic estuaries of deindustrial North America, but it’s unlikely that those ecosystems will produce anything fit for human beings to eat, and making the attempt may not be particularly good for one’s health.

Over the long run, that, too, will right itself. Bioaccumulated toxins will end up entombed in the muck on the ocean’s floor, providing yet another interesting data point for the archeologists of the far future; food chains and ecosystems will reorganize, quite possibly in very different forms from the ones they have now. Changes in water temperature, and potentially in the patterns of ocean currents, will bring unfamiliar species into contact with one another, and living things that survive the deindustrial years in isolated refugia will expand into their former range. These are normal stages in the adaptation of ecosystems to large-scale shocks. Still, those processes of renewal take time, and the deindustrial dark ages ahead of us will be long gone before the seas are restored to biological abundance.

Barren lands and empty seas aren’t the only bitter legacies we’re leaving our descendants, of course. One of the others has received quite a bit of attention on the apocalyptic end of the peak oil blogosphere for several years now—since March 11, 2011, to be precise, when the Fukushima Daiichi nuclear disaster got under way. Nuclear power exerts a curious magnetism on the modern mind, drawing it toward extremes in one direction or the other; the wildly unrealistic claims about its limitless potential to power the future made by its supporters on the one hand are neatly balanced by the equally unrealistic claims about its limitless potential as a source of human extinction made by its detractors on the other. Negotiating a path between those extremes is not always an easy matter.

In both cases, though, it’s easy enough to clear away at least some of the confusion by turning to documented facts. It so happens, for instance, that no nation on Earth has ever been able to launch or maintain a nuclear power program without huge and continuing subsidies. Nuclear power never pays for itself; absent a steady stream of government handouts, it doesn’t make enough economic sense to attract enough private investment to cover its costs, much less meet the huge and so far unmet expenses of nuclear waste storage; and in the great majority of cases, the motive behind the program, and the subsidies, is pretty clearly the desire of the local government to arm itself with nuclear weapons at any cost. Thus the tired fantasy of cheap, abundant nuclear power needs to be buried alongside the Eisenhower-era propagandists who dreamed it up in the first place.

It also happens, of course, that there have been quite a few catastrophic nuclear accidents since the dawn of the atomic age just over seventy years ago, especially but not only in the former Soviet Union. Thus it’s no secret what the consequences are when a reactor melts down, or when mismanaged nuclear waste storage facilities catch fire and spew radioactive smoke across the countryside. What results is an unusually dangerous industrial accident, on a par with the sudden collapse of a hydroelectric dam or a chemical plant explosion that sends toxic gases drifting into a populated area; it differs from these mostly in that the contamination left behind by certain nuclear accidents remains dangerous for many years after it comes drifting down from the sky.

There are currently 69 operational nuclear power plants scattered unevenly across the face of North America, with 127 reactors among them; there are also 48 research reactors, most of them much smaller and less vulnerable to meltdown than the power plant reactors. Most North American nuclear power plants store spent fuel rods in pools of cooling water onsite, since the spent rods continue to give off heat and radiation and there’s no long term storage for high-level nuclear waste. Neither a reactor nor a fuel rod storage pool can be left untended for long without serious trouble, and a great many things—including natural disasters and human stupidity—can push them over into meltdown, in the case of reactors, or conflagration, in the case of spent fuel rods. In either case, or both, you’ll get a plume of toxic, highly radioactive smoke drifting in the wind, and a great many people immediately downwind will die quickly or slowly, depending on the details and the dose.

It’s entirely reasonable to predict that this is going to happen to some of those 175 reactors. In a world racked by climate change, resource depletion, economic disintegration, political and social chaos, mass movements of populations, and the other normal features of the decline and fall of a civilization and the coming of a dark age, the short straw is going to be drawn sooner or later, and serious nuclear disasters are going to happen. That doesn’t justify the claim that every one of those reactors is going to melt down catastrophically, every one of the spent-fuel storage facilities is going to catch fire, and so on—though of course that claim does make for more colorful rhetoric.

In the real world, for reasons I’ll be discussing further in this series of posts, we don’t face the kind of sudden collapse that could make all the lights go out at once. Some nations, regions, and local areas within regions will slide faster than others, or be deliberately sacrificed so that resources of one kind or another can be used somewhere else. As long as governments retain any kind of power at all, keeping nuclear facilities from adding to the ongoing list of disasters will be high on their agendas; shutting down reactors that are no longer safe to operate is one step they can certainly take, and so is hauling spent fuel rods out of the pools and putting them somewhere less immediately vulnerable.

It’s probably a safe bet that the further we go along the arc of decline and fall, the further these decommissioning exercises will stray from the optimum. I can all too easily imagine fuel rods being hauled out of their pools by condemned criminals or political prisoners, loaded on flatbed rail cars, taken to some desolate corner of the expanding western deserts, and tipped one at a time into trenches dug in the desert soil, then covered over with a few meters of dirt and left to the elements. Sooner or later the radionuclides will leak out, and that desolate place will become even more desolate, a place of rumors and legends where those who go don’t come back.

Meanwhile, the reactors and spent-fuel pools that don’t get shut down even in so cavalier a fashion will become the focal points of dead zones of a slightly different kind. The facilities themselves will be off limits for some thousands of years, and the invisible footprints left behind by the plumes of smoke and dust will be dangerous for centuries. The vagaries of deposition and erosion are impossible to predict; in areas downwind from Chernobyl or some of the less famous Soviet nuclear accidents, one piece of overgrown former farmland may be relatively safe while another a quarter hour’s walk away may still set a Geiger counter clicking at way-beyond-safe rates. Here I imagine cow skulls on poles, or some such traditional marker, warning the unwary that they stand on the edge of accursed ground.

It’s important to keep in mind that not all the accursed ground in deindustrial North America will be the result of nuclear accidents. There are already areas on the continent so heavily contaminated with toxic pollutants of less glow-in-the-dark varieties that anyone who attempts to grow food or drink the water there can count on a short life and a wretched death. As the industrial system spirals toward its end, and those environmental protections that haven’t been gutted already get flung aside in the frantic quest to keep the system going just a little bit longer, spills and other industrial accidents are very likely to become a good deal more common than they are already.

There are methods of soil and ecosystem bioremediation that can be done with very simple technologies—for example, plants that concentrate toxic metals in their tissues so that the metals can be hauled away to a less dangerous site, and fungi that break down organic toxins—but if they’re to do any good at all, these will have to be preserved and deployed in the teeth of massive social changes and equally massive hardships. Lacking that, and it’s a considerable gamble at this point, the North America of the future will be spotted with areas where birth defects are a common cause of infant mortality and it will be rare to see anyone over the age of forty or so without the telltale signs of cancer.

There’s a bitter irony in the fact that cancer, a relatively rare disease a century and a half ago—most childhood cancers in particular were so rare that individual cases were written up in medical journals—has become the signature disease of industrial society, expanding its occurrence and death toll in lockstep with our mindless dumping of chemical toxins and radioactive waste into the environment. What, after all, is cancer? A disease of uncontrolled growth.

I sometimes wonder if our descendants in the deindustrial world will appreciate that irony. One way or another, I have no doubt that they’ll have their own opinions about the bitter legacy we’re leaving them. Late at night, when sleep is far away, I sometimes remember Ernest Thompson Seton’s heartrending 1927 prose poem “A Lament,” in which he recalled the beauty of the wild West he had known and the desolation of barbed wire and bleached bones he had seen it become. He projected the same curve of devastation forward until it rebounded on its perpetrators—yes, that would be us—and imagined the voyagers of some other nation landing centuries from now at the ruins of Manhattan, and slowly piecing together the story of a vanished people:

Their chiefs and wiser ones shall know
That here was a wastrel race, cruel and sordid,
Weighed and found wanting,
Once mighty but forgotten now.
And on our last remembrance stone,
These wiser ones will write of us:
They desolated their heritage,
They wrote their own doom.

I suspect, though, that our descendants will put things in language a good deal sharper than this. As they think back on the people of the 20th and early 21st centuries who gave them the barren soil and ravaged fisheries, the chaotic weather and rising oceans, the poisoned land and water, the birth defects and cancers that embitter their lives, how will they remember us? I think I know. I think we will be the orcs and Nazgûl of their legends, the collective Satan of their mythology, the ancient race who ravaged the earth and everything on it so they could enjoy lives of wretched excess at the future’s expense. They will remember us as evil incarnate—and from their perspective, it’s by no means easy to dispute that judgment.

Dark Age America: The Rising Oceans

Wed, 2014-08-06 17:02
The vagaries of global climate set in motion by our species’ frankly brainless maltreatment of the only atmosphere we’ve got, the subject of last week’s post here, have another dimension that bears close watching. History, as I suggested last week, can be seen as human ecology in its transformations over time, and every ecosystem depends in the final analysis on the available habitat. For human beings, the habitat that matters is dry land with adequate rainfall and moderate temperatures; we’ve talked about the way that anthropogenic climate change is interfering with the latter two, but it promises to have significant impacts on the first of those requirements as well.
It’s helpful to put all this in the context of deep time. For most of the last billion years or so, the Earth has been a swampy jungle planet where ice and snow were theoretical possibilities only. Four times in that vast span, though, something—scientists are still arguing about what—turned the planet’s thermostat down sharply, resulting in ice ages millions of years in length. The most recent of these downturns began cooling the planet maybe ten million years ago, in the Miocene epoch; a little less than two million years ago, at the beginning of the Pleistocene epoch, the first of the great continental ice sheets began to spread across the Northern Hemisphere, and the ice age was on.
We’re still in it. During an ice age, a complex interplay of the Earth’s rotational and orbital wobbles drives the Milankovitch cycle, a cyclical warming and cooling of the planet that takes around 100,000 years to complete, with long glaciations broken by much shorter interglacials. We’re approaching the end of the current interglacial, and it’s estimated that the current ice age has maybe another ten million years to go; one consequence is that at some point a few millennia in the future, we can pretty much count on the arrival of a new glaciation. In the meantime, we’ve still got continental ice sheets covering Antarctica and Greenland, and a significant amount of year-round ice in mountains in various corners of the world. That’s normal for an interglacial, though not for most of the planet’s history.
The back-and-forth flipflop between glaciations and interglacials has a galaxy of impacts on the climate and ecology of the planet, but one of the most obvious comes from the simple fact that all the frozen water needed to form a continental ice sheet has to come from somewhere, and the only available “somewhere” on this planet is the oceans. As glaciers spread, sea level drops accordingly; 18,000 years ago, when the most recent glaciation hit its final peak, sea level was more than 400 feet lower than today, and roaming tribal hunters could walk all the way from Holland to Ireland and keep going, following reindeer herds a good distance into what’s now the northeast Atlantic.
What followed has plenty of lessons on offer for our future. It used to be part of the received wisdom that ice ages began and ended with, ahem, glacial slowness, and there still seems to be good reason to think that the beginnings are fairly gradual, but the ending of the most recent ice age involved periods of very sudden change. 18,000 years ago, as already mentioned, the ice sheets were at their peak; about 16,000 years ago, the planetary climate began to warm, pushing the ice into a slow retreat. Around 14,700 years ago, the warm Bölling phase arrived, and the ice sheets retreated hundreds of miles; according to several studies, the West Antarctic ice sheet collapsed completely at this time.
The Bölling gave way after around 600 years to the Older Dryas cold period, putting the retreat of the ice on hold. After another six centuries or so, the Older Dryas gave way to a new warm period, the Alleröd, which sent the ice sheets reeling back and raised sea levels hundreds of feet worldwide. Then came a new cold phase, the frigid Younger Dryas, which brought temperatures back for a few centuries to their ice age lows, cold enough to allow the West Antarctic ice sheet to reestablish itself and to restore tundra conditions over large sections of the Northern Hemisphere. Ice core measurements suggest that the temperature drop hit fast, in a few decades or less—a useful reminder that rapid climate change can come from natural sources as well as from our smokestacks and tailpipes.
Just over a millennium later, right around 9600 BCE, the Boreal phase arrived, and brought even more spectacular change. According to oxygen isotope measurements from Greenland ice cores—I get challenged on this point fairly often, so I’ll mention that the figure I’m citing is from Steven Mithen’s After The Ice, a widely respected 2003 survey of human prehistory—global temperatures spiked 7°C in less than a decade, pushing the remaining ice sheets into rapid collapse and sending sea levels soaring. Over the next few thousand years, the planet’s ice cover shrank to a little less than its current level, and sea level rose a bit above what it is today; a gradual cooling trend beginning around 6000 BCE brought both to the status they had at the beginning of the industrial era.
Scientists still aren’t sure what caused the stunning temperature spike at the beginning of the Boreal phase, but one widely held theory is that it was driven by large-scale methane releases from the warming oceans and thawing permafrost. The ocean floor contains huge amounts of methane trapped in unstable methane hydrates; permafrost contains equally huge amounts of dead vegetation that’s kept from rotting by subfreezing temperatures, and when the permafrost thaws, that vegetation rots and releases more methane. Methane is a far more powerful greenhouse gas than carbon dioxide, but it’s also much more transient—once released into the atmosphere, methane breaks down into carbon dioxide and water relatively quickly, with an estimated average lifespan of ten years or so—and so it’s quite a plausible driver for the sort of sudden shock that can be traced in the Greenland ice cores.
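For readers who like to see the arithmetic, here’s a minimal sketch of the point about methane’s transience, assuming a one-time pulse that decays exponentially with the ten-year average lifespan cited above; the numbers are purely illustrative, not a model of the actual atmosphere.
```python
import math

LIFETIME_YEARS = 10.0  # average atmospheric lifetime of methane cited above

def remaining_fraction(years: float, lifetime: float = LIFETIME_YEARS) -> float:
    """Fraction of a one-time methane pulse still airborne after `years`,
    assuming simple exponential decay (a deliberate simplification)."""
    return math.exp(-years / lifetime)

if __name__ == "__main__":
    for years in (0, 10, 20, 50, 100):
        print(f"after {years:3d} years: {remaining_fraction(years):.1%} of the pulse remains")
```
Roughly a third of such a pulse is still airborne after a decade, and almost none after fifty years, which is why methane makes for sharp, short-lived shocks rather than the long slow squeeze of carbon dioxide.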
If that’s what did it, of course, we’re arguably well on our way there. I discussed in a previous post here credible reports that large sections of the Arctic ocean are fizzing with methane, and I suspect many of my readers have heard of the recently discovered craters in Siberia that appear to have been caused by methane blowouts from thawing permafrost. On top of the current carbon dioxide spike, a methane spike would do a fine job of producing the kind of climate chaos I discussed in last week’s post. That doesn’t equal the kind of runaway feedback loop beloved of a certain sect of contemporary apocalypse-mongers, because there are massive sources of negative feedback that such claims always ignore, but it seems quite likely that the decades ahead of us will be enlivened by a period of extreme climate turbulence driven by significant methane releases.
Meanwhile, two of the world’s three remaining ice sheets—the West Antarctic and Greenland sheets—have already been destabilized by rising temperatures. Between them, these two ice sheets contain enough water to raise sea level around 50 feet globally, and the estimate I’m using for anthropogenic carbon dioxide emissions over the next century provides enough warming to cause the collapse and total melting of both of them. All that water isn’t going to hit the world’s oceans overnight, of course, and a great deal depends on just how fast the melting happens.
The predictions for sea level rise included in the last few IPCC reports assume a slow, linear process of glacial melting. That’s appropriate as a baseline, but the evidence from paleoclimatology shows that ice sheets collapse in relatively sudden bursts of melting, producing what are termed “global meltwater pulses” that can be tracked worldwide by a variety of proxy measurements. Mind you, “relatively sudden” in geological terms is slow by the standards of a human lifetime; the complete collapse of a midsized ice sheet like Greenland’s or West Antarctica’s can take five or six centuries, and that in turn involves periods of relatively fast melting and sea level rise, interspersed with slack periods when sea level creeps up much more slowly.
So far, at least, the vast East Antarctic ice sheet has shown only very modest changes, and most current estimates suggest that it would take something far more drastic than the carbon output of our remaining economically accessible fossil fuel reserves to tip it over into instability; this is a good thing, as East Antarctica’s ice fields contain enough water to drive sea level up 250 feet or so.  Thus a reasonable estimate for sea level change over the next five hundred years involves the collapse of the Greenland and West Antarctic sheets and some modest melting on the edges of the East Antarctic sheet, raising sea level by something over 50 feet, delivered in a series of unpredictable bursts divided by long periods of relative stability or slow change.
The result will be what paleogeographers call “marine transgression”—the invasion of dry land and fresh water by the sea. Fifty feet of sea level change adds up to quite a bit of marine transgression in some areas, much less in others, depending always on local topography. Where the ground is low and flat, the rising seas can penetrate a very long way; in California, for example, the state capital at Sacramento is many miles from the ocean, but since it’s only 30 feet above sea level and connected to the sea by a river, its skyscrapers will be rising out of a brackish estuary long before Greenland and West Antarctica are bare of ice. The port cities of the Gulf coast are also on the front lines—New Orleans is actually below sea level, and will likely be an early casualty, but every other Gulf port from Brownsville, Texas (elevation 43 feet) to Tampa, Florida (elevation 15 feet) faces the same fate, and most East and West Coast ports face substantial flooding of economically important districts.
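To make the point about topography concrete, here’s a back-of-the-envelope sketch using the rough elevations quoted above; the New Orleans figure is a placeholder I’ve invented simply to stand for “below sea level,” and the comparison ignores levees, subsidence, and storm surge entirely.
```python
# Elevations (feet above current sea level) are the rough figures quoted above;
# New Orleans gets a nominal -6 ft purely as a placeholder for "below sea level."
CITIES = {
    "New Orleans": -6,   # placeholder value, not a measured elevation
    "Tampa": 15,
    "Sacramento": 30,
    "Brownsville": 43,
}

def submerged(rise_ft: float) -> list[str]:
    """Cities whose quoted elevation falls below a given total sea level rise.
    Ignores flood defenses and storm surge, so it's a floor, not a forecast."""
    return [name for name, elev in sorted(CITIES.items(), key=lambda kv: kv[1])
            if elev < rise_ft]

for rise in (10, 30, 50):
    flooded = ", ".join(submerged(rise)) or "none of the listed cities"
    print(f"{rise} ft of rise: {flooded}")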
The flooding of Sacramento isn’t the end of the world, and there may even be some among my readers who would consider it to be a good thing. What I’d like to point out, though, is the economic impact of the rising waters. Faced with an unpredictable but continuing rise in sea level, communities and societies face one of two extremely expensive choices. They can abandon billions of dollars of infrastructure to the sea and rebuild further inland, or they can invest billions of dollars in flood control. Because the rate of sea level change can’t be anticipated, furthermore, there’s no way to know in advance how far to relocate or how high to build the barriers at any given time, and there are often hard limits to how much change can be done in advance:  port cities, for example, can’t just move away from the sea and still maintain a functioning economy.
This is a pattern we’ll be seeing over and over again in this series of posts. Societies descending into dark ages reliably get caught on the horns of a brutal dilemma. For any of a galaxy of reasons, crucial elements of infrastructure no longer do the job they once did, but reworking or replacing them runs up against two critical difficulties that are hardwired into the process of decline itself. The first is that, as time passes, the resources needed to do the necessary work become increasingly scarce; the second is that, as time passes, the uncertainties about what needs to be done become increasingly large.
The result can be tracked in the decline of every civilization. At first, failing systems are replaced with some success, but the economic impact of the replacement process becomes an ever-increasing burden, and the new systems never do quite manage to work as well as the older ones did in their heyday. As the process continues, the costs keep mounting and the benefits become less reliable; more and more often, scarce resources end up being wasted or put to counterproductive uses because the situation is too uncertain to allow for their optimum allocation. With each passing year, decision makers have to figure out how much of the dwindling stock of resources can be put to productive uses and how much has to be set aside for crisis management, and the raw uncertainty of the times guarantees that these decisions will very often turn out wrong. Eventually, the declining curve in available resources and the rising curve of uncertainty intersect to produce a crisis that spins out of control, and what’s left of a community, an economic sector, or a whole civilization goes to pieces under the impact.
It’s not too hard to anticipate how that will play out in the century or so immediately ahead of us. If, as I’ve suggested, we can expect the onset of a global meltwater pulse from the breakup of the Greenland and West Antarctic ice sheets at some point in the years ahead, the first upward jolt in sea level will doubtless be met with grand plans for flood-control measures in some areas, and relocation of housing and economic activities in others. Some of those plans may even be carried out, though the raw economic impact of worldwide coastal flooding on a global economy already under severe strain from a chaotic climate and a variety of other factors won’t make that easy. Some coastal cities will hunker down behind hurriedly built or enlarged levees, others will abandon low-lying districts and try to rebuild further upslope, still others will simply founder and be partly or wholly abandoned—and all these choices impose costs on society as a whole.
Thereafter, in years and decades when sea level rises only slowly, the costs of maintaining flood control measures and replacing vulnerable infrastructure with new facilities on higher ground will become an unpopular burden, and the same logic that drives climate change denialism today will doubtless find plenty of hearers then as well. In years and decades when sea level surges upwards, the flood control measures and relocation projects will face increasingly severe tests, which some of them will inevitably fail. The twin spirals of rising costs and rising uncertainty will have their usual effect, shredding the ability of a failing society to cope with the challenges that beset it.
It’s even possible in one specific case to make an educated guess as to the nature of the pressures that will finally push the situation over the edge into collapse and abandonment. It so happens that three different processes that follow in the wake of rapid glacial melting all have the same disastrous consequence for the eastern shores of North America.
The first of these is isostatic rebound. When you pile billions of tons of ice on a piece of land, the land sinks, pressing down hundreds or thousands of feet into the Earth’s mantle; melt the ice, and the land rises again. If the melting happens over a brief time, geologically speaking, the rebound is generally fast enough to place severe stress on geological faults all through the region, and thus sharply increases the occurrence of earthquakes. The Greenland ice sheet is by no means exempt from this process, and many of the earthquakes in the area around a rising Greenland will inevitably happen offshore. The likely result? Tsunamis.
The second process is the destabilization of undersea sediments that build up around an ice sheet that ends in the ocean. As the ice goes away, torrents of meltwater pour into the surrounding seas and isostatic rebound changes the slope of the underlying rock; masses of sediment break free and plunge down the continental slope into the deep ocean. Some of the sediment slides that followed the end of the last ice age were of impressive scale—the Storegga Slide off the coast of Norway around 6220 BCE, which was caused by exactly this process, sent 840 cubic miles of sediment careening down the continental slope. The likely result? More tsunamis.
The third process, which is somewhat more speculative than the first two, is the sudden blowout of large volumes of undersea methane hydrates. Several oceanographers and paleoclimatologists have argued that the traces of very large underwater slides in the Atlantic, dating from the waning days of the last ice age, may well be the traces of such blowouts. As the climate warmed, they suggest, methane hydrates on the continental shelves were destabilized by rising temperatures, and a sudden shock—perhaps delivered by an earthquake, perhaps by something else—triggered the explosive release of thousands or millions of tons of methane all at once. The likely result? Still more tsunamis.
It’s crucial to realize the role that uncertainty plays here, as in so many dimensions of our predicament. No one knows whether tsunamis driven by glacial melting will hammer the shores of the northern Atlantic basin some time in the next week, or some time in the next millennium. Even if tsunamis driven by the collapse of the Greenland ice sheet become statistically inevitable, there’s no way for anyone to know in advance the timing, scale, and direction of any such event. Efficient allocation of resources to East Coast ports becomes a nightmarish challenge when you literally have no way of knowing how soon any given investment might suddenly end up on the bottom of the Atlantic.
If human beings behave as they usually do, what will most likely happen is that the port cities of the US East Coast will keep on trying to maintain business as usual until well after that stops making any kind of economic sense. The faster the seas rise and the sooner the first tsunamis show up, the sooner that response will tip over into its opposite, and people will begin to flee in large numbers from the coasts in search of safety for themselves and their families. My working guess is that the eastern seaboard of dark age America will be sparsely populated, with communities concentrated in those areas where land well above tsunami range lies close to the sea. The Pacific and Gulf coasts will be at much less risk from tsunamis, and so may be more thickly settled; that said, during periods of rapid marine transgression, the mostly flat and vulnerable Gulf Coast may lose a great deal of land, and those who live there will need to be ready to move inland in a hurry.
All these factors make for a shift in the economic and political geography of the continent that will be of quite some importance at a later point in this series of posts. In times of rapid sea level change, maintaining the infrastructure for maritime trade in seacoast ports is a losing struggle; maritime trade is still possible without port infrastructure, but it’s rarely economically viable; and that means that inland waterways with good navigable connections to the sea will take on an even greater importance than they have today. In North America, the most crucial of those are the St. Lawrence Seaway, the Hudson River-Erie Canal linkage to the Great Lakes, and whatever port further inland replaces New Orleans—Baton Rouge is a likely candidate, due to its location and elevation above sea level—once the current Mississippi delta drowns beneath the rising seas.
Even in dark ages, as I’ll demonstrate later on, maritime trade is a normal part of life, and that means that the waterways just listed will become the economic, political, and strategic keys to most of the North American continent. The implications of that geographical reality will be the focus of a number of posts as we proceed.

Dark Age America: Climate

Wed, 2014-07-30 17:24
Over the next year or so, as I’ve mentioned in recent posts, I plan on tracing out as much as possible of what can be known or reasonably guessed about the next five hundred years or so of North American history—the period of the decline and fall of the civilization that now occupies that continent, the dark age in which that familiar trajectory ends, and the first stirrings of the successor societies that will rise out of its ruins. That’s a challenging project, arguably more so than anything else I’ve attempted here, and it also involves some presuppositions that may be unfamiliar even to my regular readers.
To begin with, I’m approaching history—the history of the past as well as of the future—from a strictly ecological standpoint. I’d like to propose, in fact, that history might best be understood as the ecology of human communities, traced along the dimension of time. Like every other ecological process, in other words, it’s shaped partly by the pressures of the environment and partly by the way its own subsystems interact with one another, and with the subsystems of the other ecologies around it. That’s not a common view; most historical writing these days puts human beings at the center of the picture, with the natural world as a supposedly static background, while a minority view goes to the other extreme and fixates on natural catastrophes as the sole cause of this or that major historical change.
Neither of these approaches seems particularly useful to me. As our civilization has been trying its level best not to learn for the last couple of centuries, and thus will be learning the hard way in the years immediately ahead, the natural world is not a static background. It’s an active and constantly changing presence that responds in complex ways to human actions. Human societies, in turn, are equally active and equally changeable, and respond in complex ways to nature’s actions. The strange loops generated by a dance of action and interaction along these lines are difficult to track by the usual tools of linear thinking, but they’re the bread and butter of systems theory, and also of all those branches of ecology that treat the ecosystem rather than the individual organism as the basic unit.
The easiest way to show how this perspective works is to watch it in action, and it so happens that one of the most important factors that will shape the history of North America over the next five centuries is particularly amenable to a systems analysis. The factor I have in mind is climate.
Now of course that’s also a political hot potato just at the moment, due to the unwillingness of a great many people across the industrial world to deal with the hard fact that they can’t continue to enjoy their current lifestyles if they want a climatically and ecologically stable planet to live on. It doesn’t matter how often the planet sets new heat records, nor that the fabled Northwest Passage around the top end of Canada—which has been choked with ice since the beginning of recorded history—is open water every summer nowadays, and an increasingly important route for commercial shipping from Europe to the eastern shores of Asia; every time the planet’s increasingly chaotic weather spits out unseasonably cold days in a few places, you can count on hearing well-paid flacks and passionate amateurs alike insisting at the top of their lungs that this proves that anthropogenic climate change is nonsense.
To the extent that this reaction isn’t just propaganda, it shows a blindness to systems phenomena I’ve discussed here before: a learned inability to recognize that change in complex systems does not follow the sort of nice straight lines our current habits of thought prefer. A simple experiment can help show how complex systems respond in the real world, and in the process make it easier to make sense of the sort of climate phenomena we can count on seeing in the decades ahead.
The next time you fill a bathtub, once you’ve turned off the tap, wait until the water is still. Slip your hand into the water, slowly and gently, so that you make as little disturbance in the water as possible. Then move your hand through the water about as fast as a snail moves, and watch and feel how the water adapts to the movement, flowing gently around your hand.
Once you’ve gotten a clear sense of that, gradually increase the speed with which your hand is moving. After you pass a certain threshold of speed, the movements of the water will take the form of visible waves—a bow wave in front of your hand, a wake behind it in which water rises and falls rhythmically, and wave patterns extending out to the edges of the tub. The faster you move your hand, the larger the waves become, and the more visible the interference patterns as they collide with one another.
Keep on increasing the speed of your hand. You’ll pass a second threshold, and the rhythm of the waves will disintegrate into turbulence: the water will churn, splash, and spray around your hand, and chaotic surges of water will lurch up and down the sides of the tub. If you keep it up, you can get a fair fraction of the bathwater on your bathroom floor, but this isn’t required for the experiment! Once you’ve got a good sense of the difference between the turbulence above the second threshold and the oscillations below it, take your hand out of the water, and watch what happens: the turbulence subsides into wave patterns, the waves shrink, and finally—after some minutes—you have still water again.
This same sequence of responses can be traced in every complex system, governing its response to every kind of disturbance in its surroundings. So long as the change stays below a certain threshold of intensity and rapidity—a threshold that differs for every system and every kind of change—the system will respond smoothly, with the least adjustment that will maintain its own internal balance. Once that threshold is surpassed, oscillations of various kinds spread through the system, growing steadily more extreme as the disturbance becomes stronger, until it passes the second threshold and the system’s oscillations collapse into turbulence and chaos. When chaotic behavior begins to emerge in an oscillating system, in other words, that’s a sign that real trouble may be sitting on the doorstep.
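Readers who want a mathematical analogue of the bathtub sequence can find one in the logistic map, a textbook toy system that settles to a steady state, then oscillates, then goes chaotic as its driving parameter is turned up. It’s a stand-in for the general behavior of pushed systems, not a climate model; the parameter values below are the standard illustrative ones.
```python
def logistic_trajectory(r: float, x0: float = 0.4, skip: int = 500, keep: int = 8) -> list[float]:
    """Iterate x -> r*x*(1-x), discard the transient, return the values the system settles into."""
    x = x0
    for _ in range(skip):
        x = r * x * (1 - x)
    settled = []
    for _ in range(keep):
        x = r * x * (1 - x)
        settled.append(round(x, 3))
    return settled

# r = 2.8: settles to a single value (smooth adjustment)
# r = 3.2: flips between two values (oscillation)
# r = 3.9: never repeats (turbulence and chaos)
for r in (2.8, 3.2, 3.9):
    print(f"r = {r}: {logistic_trajectory(r)}")
```
The same qualitative staircase, from steady response through oscillation to turbulence, is what your hand traces out in the bathtub as you push the water harder.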
If global temperature were increasing in a nice even line, in other words, we wouldn’t have as much to worry about, because it would be clear from that fact that the resilience of the planet’s climate system was well able to handle the changes that were in process. Once things begin to oscillate, veering outside usual conditions in both directions, that’s a sign that the limits to resilience are coming into sight, with the possibility of chaotic variability in the planetary climate as a whole waiting not far beyond that. We can fine-tune the warning signals a good deal by remembering that every system is made up of subsystems, and those of sub-subsystems, and as a general rule of thumb, the smaller the system, the more readily it moves from local adjustment to oscillation to turbulence in response to rising levels of disturbance.
Local climate is sensitive enough, in fact, that ordinary seasonal changes can yield minor turbulence, which is why the weather is so hard to predict; regional climates are more stable, and normally cycle through an assortment of wavelike oscillations; the cycle of the seasons is one, but there are also multiyear and multidecade cycles of climate that can be tracked on a regional basis. It’s when those regional patterns start showing chaotic behavior—when, let’s say, the usually sizzling Texas summer is suddenly broken by a record cold snap in the middle of July, in a summer that’s shaping up globally to be among the hottest ever measured—that you know the whole system is coming under strain.
Ahem.
I’m not generally a fan of Thomas Friedman, but he scored a direct hit when he warned that what we have to worry about from anthropogenic climate change is not global warming but "global weirding": in the terms I’ve used in this post, the emergence of chaotic shifts out of a global climate that’s been hit with too much disturbance too fast. A linear change in global temperatures would be harsh, but it would be possible to some extent to shift crop belts smoothly north in the northern hemisphere and south in the southern. If the crop belts disintegrate—if you don’t know whether the next season is going to be warm or cold, wet or dry, short or long—famines become hard to avoid, and cascading impacts on an already strained global economy add to the fun and games. At this point, for the reasons just shown, that’s the most likely shape of the century or two ahead of us.
In theory, some of that could be avoided if the world’s nations were to stop treating the skies as an aerial sewer in which to dump greenhouse gases. In practice—well, I’ve met far too many climate change activists who still insist that they have to have SUVs to take their kids to soccer practice, and I recall the embarrassed silence that spread a while back when an important British climate scientist pointed out that maybe jetting all over the place to climate conferences was communicating the wrong message at a time when climate scientists and everyone else needed to decrease their carbon footprint. Until the people who claim to be concerned about climate change start showing a willingness to burn much less carbon, it’s unlikely that anyone else will do so, and so I think it’s a pretty safe bet that fossil fuels will continue to be extracted and burnt as long as geological and economic realities permit.
The one bleak consolation here is that those realities are a good deal less flexible than worst-case scenarios generally assume. There are two factors in particular to track here, and both unfold from net energy—the difference between the energy content of fossil fuels as they reach the end consumer and the energy input needed to get them all the way there. The first factor is simply that if a deposit of fossil carbon takes more energy to extract, process, and transport to the end user than the end user can get by burning it, the fossil carbon will stay in the ground. The poster child here is kerogen shale, which has been the bane of four decades of enthusiastic energy projects in the American West and elsewhere. There’s an immense amount of energy locked up in the Green River shale and its equivalents, but every attempt to break into that cookie jar has come to grief on the hard fact that, all things considered, it takes more energy to extract kerogen from shale than you get from burning the kerogen.
The second factor is subtler and considerably more damaging. As fossil fuel deposits with abundant net energy are exhausted, and have to be replaced by deposits with lower net energy, a larger and larger fraction of the total energy supply available to an industrial society has to be diverted from all other economic uses to the process of keeping the energy flowing.  Thus it’s not enough to point to high total energy production and insist that all’s well; the logic of net energy has to be applied here as well, and the total energy input to energy production, processing, and distribution subtracted from total energy production, to get a realistic sense of how much energy is available to power the rest of the economy—and the rest of the economy, remember, is what produces the wealth that makes it possible for individuals, communities, and nations to afford fossil fuels in the first place.
 Long before the last physically extractable deposit of fossil fuel is exhausted, in other words, fossil fuel extraction will have to stop because it’s become an energy sink rather than an energy source. Well before that point is reached, furthermore, the ability of global and national economies to meet the energy costs of fossil fuel extraction will slam face first into hard limits. Demand destruction, which is what economists call the process by which people who can’t afford to buy a product stop using it, is as important here as raw physical depletion; as economies reel under the twin burdens of depleting reserves and rising energy costs for energy production, carbon footprints will shrink willy-nilly as rapid downward mobility becomes the order of the day for most people.
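The arithmetic behind that claim is simple enough to put in a few lines. The figures below are invented purely to show the shape of the curve: hold gross energy production steady and let the energy return on energy invested (EROEI) of the remaining deposits decline, and the energy left over for the rest of the economy shrinks far faster than gross production does.
```python
def energy_for_the_rest_of_the_economy(gross: float, eroei: float) -> float:
    """Net energy left after the energy sector pays its own energy bill.
    gross / eroei approximates the energy reinvested in extraction,
    processing, and distribution."""
    return gross - gross / eroei

# Illustrative numbers only: constant gross output, declining EROEI.
for eroei in (30, 15, 8, 4, 2, 1):
    net = energy_for_the_rest_of_the_economy(100.0, eroei)
    print(f"EROEI {eroei:>2}: {net:5.1f} of every 100 units reaches the wider economy")
```
At an EROEI of 1, nothing at all is left over; that’s the point at which an energy source has become an energy sink in the sense used above.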
Combine these factors with the economic impacts of "global weirding" itself and you’ve got a good first approximation of the forces that are already massing to terminate the fossil fuel economy with extreme prejudice in the decades ahead. How those forces are likely to play out in the future we’re facing will be discussed at length in several future posts. For the time being, I’ll just note that I expect global fossil fuel consumption and CO2 emissions to peak within a decade or so to either side of 2030, and then tip over into a ragged and accelerating decline, punctuated by economic and natural disasters, that will reach the zero point of the scale well before 2100.
What that means for the future climate of North America is difficult to predict in detail but not so hard to trace in outline. From now until the end of the 21st century, perhaps longer, we can expect climate chaos, accelerating in its geographical spread and collective impact until a couple of decades after CO2 emissions peak, due to the lag time between when greenhouse gases hit the atmosphere and when their effects finally peak. As the rate of emissions slows thereafter, the turbulence will gradually abate, and some time after that—exactly when is anybody’s guess, but 2300 or so is as good a guess as any—the global climate will have settled down into a "new normal" that won’t be normal by our standards at all. Barring further curveballs from humanity or nature, that "new normal" will remain until enough excess CO2 has been absorbed by natural cycles to matter—a process that will take several millennia at least, and therefore falls outside the range of the five centuries or so I want to consider here.
An educated guess at the shape of the "new normal" is possible, because for the last few million years or so, the paleoclimatology of North America has shown a fairly reliable pattern. The colder North America has been, by and large, the heavier the rainfall in the western half of the continent. During the last Ice Age, for example, rainfall in what’s now the desert Southwest was so heavy that it produced a chain of huge pluvial (that is, rain-fed) lakes and supported relatively abundant grassland and forest ecosystems across much of what’s now sagebrush and cactus country.  Some measure of the difference can be caught from the fact that 18,000 years ago, when the last Ice Age was at its height, Death Valley was a sparkling lake surrounded by pine forests. By contrast, the warmer North America becomes, the dryer the western half of the continent gets, and the drying effect spreads east a very long ways.
After the end of the last Ice Age, for example, the world entered what nowadays gets called the Holocene Climatic Optimum; that term’s a misnomer, at least for this continent, because conditions over a good bit of North America then were optimum only for sand fleas and Gila monsters. There’s been a running debate for several decades about whether the Hypsithermal, to use the so-called Optimum’s other name, was warmer than today all over the planet or just in some regions.  Current opinion tends to favor the latter, but the difference doesn’t actually have that much impact on the issue we’re considering:  the evidence from a broad range of sources shows that North America was significantly warmer in the Hypsithermal than it is today, and so that period makes a fairly good first approximation of the conditions this continent is likely to face in a warmer world.
To make sense of the long-term change to North American climates, it’s important to remember that rainfall is far more important than temperature as a determining factor for local ecosystems. If a given region gets more than about 40 inches of rain a year, no matter what the temperature, it’ll normally support some kind of forest; if it gets between 10 and 40 inches a year, you’ve got grassland or, in polar regions, mosses and lichens; if it gets less than 10 inches a year, you’ve got desert, whether it’s as hot as the Sahara or as bitterly cold as the Takla Makan. In the Hypsithermal, as the west dried out, tallgrass prairie extended straight across the Midwest to western Pennsylvania, and much of the Great Plains was desert, complete with sand dunes.
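Since those thresholds do most of the work in the paragraphs that follow, here they are as an explicit lookup; it’s a deliberately crude first cut, since real ecosystems also care about the seasonal timing of the rain, the soils, and a good deal else.
```python
def biome_from_rainfall(inches_per_year: float) -> str:
    """Crude first-cut classification using the 10-inch and 40-inch thresholds cited above."""
    if inches_per_year < 10:
        return "desert"
    if inches_per_year <= 40:
        return "grassland (or tundra mosses and lichens near the poles)"
    return "forest of some kind"

for rainfall in (5, 18, 45):
    print(f"{rainfall:2d} in/yr -> {biome_from_rainfall(rainfall)}")
```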
In a world with ample fossil fuel supplies, it’s been possible to ignore such concerns, to the extent of pumping billions of gallons of water a year from aquifers or distant catchment basins to grow crops in deserts and the driest of grasslands, but as fossil fuel supplies sunset out, the shape of human settlement will once again be a function of annual rainfall, as it was everywhere on the planet before 1900 or so. If the Hypsithermal’s a valid model, as seems most likely, most of North America from the Sierra Nevada and Cascade ranges east across the Great Basin and Rocky Mountains to the Great Plains will be desert, as inhospitable as any on Earth, and human settlement will be accordingly sparse: scattered towns in those few places where geology allows a permanent water supply, separated by vast desolate regions inhabited by few hardy nomads or by no one at all.
East of the Great Desert, grassland will extend for a thousand miles or more, east to the Allegheny foothills, north to a thinner and dryer boreal forest belt shifted several hundred miles closer to the Arctic Ocean, and south to the tropical jungles of the Gulf coast. Further south, in what’s now Mexico, the tropical rain belt will move northwards with shifts in the global atmospheric circulation, and the Gulf coast east of the Sierra Madre Oriental will shift to tropical ecosystems all the way north to, and beyond, the current international border. Between the greatly expanded tropical zone in the south and east and the hyperarid deserts of the north, Mexico will be a land of sharp ecological contrasts.
Factor in sea level rise, on the one hand, and the long-term impacts of soil depletion and of toxic and radioactive wastes on the other—issues complicated enough in their causes, trajectory, and results that they’re going to require separate posts—and you’ve got a fairly limited set of regions in which agriculture will be possible in a post-fossil fuel environment: basically, the eastern seaboard from the new coast west to the Alleghenies and the Great Lakes, and river valleys in the eastern half of the Mississippi basin. The midwestern grasslands will support pastoral grazing, and the jungle belts around the new Gulf coast and across southern Mexico will be suitable for tropical crops once the soil has a chance to recover, but the overall human carrying capacity of the continent will be significantly smaller than it was before the industrial age began.
Climate isn’t the only force pushing in that direction, either. We’ll get to the others in the weeks ahead as we continue exploring the deindustrial landscapes of dark age America.

The Gray Light of Morning

Wed, 2014-07-23 16:18
I try to wear my archdruid’s hat lightly in these essays, but every so often I field questions that touch directly on the issues of ultimate meaning that our culture, however clumsily, classifies as “religious.” Two comments in response to the post here two weeks ago raised such issues, in a way that’s relevant enough to this series of posts and important enough to the broader project of this blog to demand a response.
One of them—tip of the aforementioned archdruid’s hat to Repent—asked, “As a Druid, what are your thoughts about divine purpose, reincarnation, and our purpose in the eyes of God? What do you think future ‘ecotechnic’ societies have yet to achieve that will be worthwhile to pursue, that our descendants should suffer through the dark age towards?” The other—tip of the hat to Yupped—asked, “What do you do if you see the big picture of what’s happening around you? How did those early adopters of decline in other collapsing societies maintain their sanity when they knew what was coming? I don’t think I have the mind or the temperament to tell myself stories about the transcendent meaning of suffering in an age of social collapse.”
Those are serious questions, and questions like them are being raised more and more often these days, on this blog and in a great many other places as well. People are beginning to come to grips with the fact that they can no longer count on faith in limitless technological progress to give them an easy answer to the enduring questions of human existence.  As they do that, they’re also having to confront those questions all over again, and finding out in the process that the solution that modern industrial civilization claimed to offer for those same questions was never actually a solution at all.
Psychologists have a concept they call “provisional living.” That’s the insistence, so often heard from people whose lives are stuck on a dysfunctional merry-go-round of self-inflicted crisis, that everything they don’t like about their lives will change just as soon as something else happens: as soon as they lose twenty pounds, get a divorce, quit their lousy job, or what have you. Of course the weight never goes away, the divorce papers never get filed, and so on, because the point of the exercise is to allow daydreams of an imaginary life in which they get everything they think they want to take the place of the hard work and hard choices inseparable from personal change in the real world. What provisional living offers the individual neurotic, in turn, faith in the inevitability and beneficence of progress offers industrial society as a whole—or, more precisely, faith in progress used to offer that, back when the promises made in its name didn’t yet look quite so threadbare as they do today.
There was always a massive political subtext in those promises. The poor were encouraged to believe that technological progress would someday generate so much wealth that their children and grandchildren would be rich; the sick and dying, to dream about a future where medical progress would make every disease curable; the oppressed, to hope for a day when social progress would grant everyone the fair treatment they can’t reliably get here and now, and so on. Meanwhile, and crucially, members of the privileged classes who became uncomfortable with the mismatch between industrial civilization’s glittering rhetoric and its tawdry reality were encouraged to see that mismatch as a passing phase that would be swept away by progress at some undefined point in the future, and thus to limit their efforts to change the system to the sort of well-meaning gestures that don’t seriously inconvenience the status quo.
As real as the political subtext was, it’s a mistake to see the myth of progress purely as a matter of propaganda. During the heyday of industrialism, that myth was devoutly believed by a great many people, at all points along the social spectrum, many of whom saw it as the best chance they had for positive change. Faith in progress was a social fact of vast importance, one that shaped the lives of individuals, communities, and nations. The hope of upward mobility that inspired the poor to tolerate the often grueling conditions of their lives, the dream of better living through technology that kept the middle classes laboring at the treadmill, the visions of human destiny that channeled creative minds into the service of  existing institutions—these were real and powerful forces in their day, and drew on high hopes and noble ideals as well as less exalted motives.
The problem that we face now is precisely that those hopes and dreams and visions have passed their pull date. With each passing year, more people have noticed the widening gap between the future we were supposed to get and the one that’s actually been delivered to our doorstep; with each passing year, the voices raised in defense of the old rhetoric of perpetual progress get more defensive, and the once-sparkling imagery they offer for our contemplation looks more and more shopworn. One by one, we are waking up in a cold and unfamiliar place, and the gray light of morning does not bring us good news.
It would be hard enough to face the difficult future ahead of us if we came to the present moment out of an era of sober realism and close attention to the hard facts of the human condition. It’s far harder to find ourselves where we are when that forces us to own up to the hard fact that we’ve been lying to ourselves for three hundred years. Disillusionment is a bitter pill at the best of times.  When the illusion that’s just been shattered has been telling us that the future is obliged to conform to our fondest fantasies, whatever those happen to be, it’s no wonder that it’s as unwelcome as it is.
Bitter though the pill may be, though, it’s got to be choked down, and like the bitter medicines of an earlier day, it has a tonic effect. Come to terms with the fact that faith in progress was always destined to be disappointed, that the law of diminishing returns and the hard limits of thermodynamics made the dream of endless guaranteed betterment a delusion—an appealing delusion, but a delusion all the same—and after the shock wears off, you’ll find yourself standing on common ground shared with the rest of your species, asking questions that they asked and answered in their time.
Most of the people who have ever lived, it bears remembering, had no expectation that the future would be any better than the world that they saw around them. The majority of them assumed as a matter of course that the future would be much like the present, while quite a few of them believed instead that it would be worse.  Down through the generations, they faced the normal human condition of poverty, sickness, toil, grief, injustice, and the inevitability of their own deaths, and still found life sufficiently worth living to meet the challenges of making a living, raising families, and facing each day as it came.
That’s normal for our species.  Buying into a fantasy that insists that the universe is under an obligation to fulfill your daydreams is not. Get past that fantasy, and past the shock of disillusionment that follows its departure, and it’s not actually that difficult to make sense of a world that doesn’t progress and shows no interest in remaking itself to fit an overdeveloped sense of human entitlement. The downside is that you have to give up any attempt to smuggle the same fantasy back into your mind under some other name or form, and when some such belief system has been central to the worldview of your culture for the last three centuries or so, it’s always tempting to find some way to retrieve the fantasy. Still, falling in with that temptation  just lands you back where you were, waiting for a future the universe is serenely unwilling to provide.
It’s probably worth noting that you also have to give up the equal and opposite fantasy that claims that the universe is under an obligation to fulfill a different set of daydreams, the kind that involves the annihilation of everything you don’t like in the universe, whether or not that includes yourself. That’s simply another way of playing the game of provisional living: “I don’t have to do anything because X is supposed to happen (and it won’t)” amounts in practice to the same thing as “I won’t do anything until X happens (and it won’t)”—that is to say, it’s just one more comfortable evasion of responsibility.
There are more constructive ways to deal with the decidedly mixed bag that human existence hands us. If I may risk a significant oversimplification, there are broadly speaking three ways that work. It so happens that the ancient Greeks, who grappled just as incisively with these issues as they did with so much else, evolved three schools of philosophy, each of which took one of these three ways as its central theme. They weren’t the only ones to do that in a thoughtful fashion; those of my readers who know their way around the history of ideas will be able to name any number of examples from other societies and other ages.  I propose to use Greek examples here simply because they’re the schools with which I’m most familiar. As Charles Fort said, one traces a circle beginning anywhere.
The first of the three approaches I have in mind starts with the realization that for most of us, all things considered, being alive beats the stuffing out of the alternative. While life contains plenty of sources of misery, it also contains no shortage of delights, even when today’s absurdly complex technostructure isn’t there to provide them; furthermore, the mind that pays close attention to its own experiences will soon notice that a fairly large percentage of its miseries are self-inflicted, born of pointless worrying about future troubles or vain brooding over past regrets. Unlearn those habits, stop insisting that life is horrible because it isn’t perfect, and it’s generally not too hard to learn to enjoy the very real pleasures that life has to offer and to tolerate its less pleasant features with reasonable grace.
That’s the approach taught by Epicurus, the founder of the Epicurean school of philosophy in ancient Greece. It’s also the foundation of what William James called the healthy-minded way of thinking, the sort of calm realism you so often see in people who’ve been through hard times and come out the other side in one piece. Just now, it’s a very difficult philosophy for many people in the world’s industrial nations to take up, precisely because most of us haven’t been through hard times; we’ve been through an age of extravagance and excess, and like most people in that position, we’re finding the letdown at the party’s end far more difficult to deal with than any actual suffering we might be facing. Get past that common reaction, and the Epicurean way has much to offer.
If it has a weakness, it’s that attending to the good things in life can be very hard work when those good things are in short supply. That’s when the second approach comes into its own. It starts from the  realization that whether life is good or not, here we are, and we each have to choose how we’re going to respond to that stark fact. The same unlearning that shows the Epicurean to avoid self-inflicted misery is a first step, a clearing of the decks that makes room for the decisions that matter, but once this is taken care of, the next step is to face up to the fact that there are plenty of things in the world that could and should be changed, if only someone were willing to get up off the sofa and make the effort required. The second approach thus becomes a philosophy of action, and when action requires risking one’s life—and in really hard times, it very often does—those who embrace the second approach very often find themselves saying, “Well, what of it? I’m going to die sooner or later anyway.”
That’s the approach taught by Zeno, the founder of the Stoic school of philosophy in ancient Greece. It’s among the most common ways of thought in dark ages, sometimes worked out as a philosophy, sometimes expressed in pure action: the ethos of the Spartans and the samurai. That way of thinking about life is taken to its logical extreme in the literature of the pagan Teutonic peoples: you will die, says the Elder Edda, the world will die, even the gods will die, and none of that matters. All that matters is doing the right thing, because it’s the right thing, and because you’ve learned to embrace the certainty of your death and so don’t have to worry about anything but doing the right thing. 
Now of course the same choice can express itself in less stark forms. Every one of my readers who’s had the experience of doing something inconvenient or unpleasant just because it’s the right thing to do has some sense of how that works, and why.  In a civilization on the downward arc, there are many inconvenient or unpleasant things that very badly need to be done, and choosing one of them and doing it is a remarkably effective response to the feelings of meaninglessness and helplessness that afflict so many people just now.  Those who argue that you don’t know whether or not your actions will have any results in the long run are missing the point, because from the perspective I’ve just sketched out, the consequences don’t matter either.  Fiat iustitia, ruat caelum, as the Roman Stoics liked to say:  let justice be done, even if it brings the sky crashing down. 
So those, broadly speaking, are the first two ways that people have dealt constructively with the human condition: in simplest terms, either learn to live with what life brings you, or decide to do something about it. The first choice may seem a little simplistic and the second one may seem a little stark, but both work—that is, both are psychologically healthy responses that often yield good results, which is more than can be said for habits of thought that require the universe to either cater to our fantasies of entitlement or destroy itself to satisfy our pique. Both also mesh fairly well with the habitual material-mindedness of contemporary culture, the assumption that the only things that really matter are those you can hit with a stick, which is common to most civilizations toward the end of their history.
The third option I have in mind also works, but it doesn’t mesh at all with the assumption just noted. Current confusions about the alternatives to that assumption run deep enough that some care will be needed in explaining just what I mean.
The third option starts with the sense that the world as we normally perceive it is not quite real—not illusory, strictly speaking, but derivative. It depends on something else, something that stands outside the world of our ordinary experience and differs from that world not just in detail but in kind.  Since this “something else” is apart from the things we normally use language to describe, it’s remarkably difficult to define or describe in any straightforward way, though something of its nature can be shared with other people through the more roundabout means of metaphor and symbol. Elusive as it is, it can’t simply be ignored, because it shapes the world of our ordinary experience, not according to some human agenda but according to a pattern of its own.
I’d encourage my readers to notice with some care what’s not being said here. The reality that stands behind the world of our ordinary experience is not subject to human manipulation; it isn’t answerable to our fantasies or to our fears. The viewpoint I’m suggesting is just about as far as you can get from the fashionable notion that human beings create their own reality—which, by the way, is just one more way our overdeveloped sense of entitlement shapes our habits of thinking. As objects of our own and others’ perceptions, we belong to the world of the not quite real. Under certain circumstances, though, human beings can move into modes of nonordinary perception in which the presence of the underlying reality stops being a theory and becomes an experience, and when this happens a great many of the puzzles and perplexities of human existence suddenly start making sense.
There’s a certain irony in the fact that in ancient Greek culture, the philosophical movement that came to embody this approach to the world took its name from a man named Aristocles, whose very broad shoulders gave him the nickname Plato. That’s ironic because Plato was a transitional figure; behind him stood a long line of Orphic and Pythagorean mystics, whose insights he tried to put into rational form, not always successfully; after him came an even longer line of thinkers, the Neoplatonists, who completed the job he started and worked out a coherent philosophy that relates the world of reality to the world of appearance through the lens of human consciousness.
The Platonist answer isn’t limited to Platonism, of course, any more than the Stoic or Epicurean answer is found only in those two Greek philosophical schools. Implicitly or explicitly, it’s present in most religious traditions that grapple with philosophical issues and manage not to fall prey to the easy answers of apocalyptic fantasy. In the language of mainstream Western religion, we can say that there’s a divine reality, and then there’s a created world and created beings—for example, the author and readers of this blog—which depend for their existence on the divine reality, however this is described. Still, that’s far from the only language in which this way of thinking about the world can be framed.
The Epicurean and Stoic approaches to facing an imperfect and challenging world, as already discussed, take that world as it is, and propose ways to deal with it. That’s a wholly reasonable approach from within the sort of worldview that those traditions generally embrace. The Platonic approach, by contrast, proposes that the imperfect and challenging world we encounter is only part of the picture, and that certain disciplines of consciousness allow us to take the rest of the picture into account, not as a policy of blind trust, but as an object of personal experience. As already suggested, it’s difficult to communicate in ordinary language just what that experience has to say about the reality behind such phrases as “divine purpose,” which is why those who pursue such experiences tend to focus on teaching other people how to do it, and let them make their own discoveries as they do the work.
Knowing the rest of the picture, for that matter, doesn’t make the imperfections and challenges go away.  There are many situations in which either an Epicurean or a Stoic tactic is the best bet even from within a Platonic view of the cosmos—it’s a matter of historical fact that much of the best of the Epicurean and Stoic traditions were absorbed into the classical Neoplatonic synthesis for exactly this reason. The difference is simply that to glimpse something of the whole picture, and to pursue those disciplines that bring such glimpses within reach, provide a perspective that makes sense of the texture of everyday experience as it is, without expecting it to act out human fears and fantasies. That approach isn’t for everyone, but it’s an option, and it’s the one that I tend to trust.
And with that, I’ll set aside my archdruid’s hat again and return to the ordinary business of chronicling the decline and fall of industrial civilization.

Smile For The Aliens

Wed, 2014-07-16 16:44
Last week’s post, with its uncompromising portrayal of what descent into a dark age looks like, fielded the usual quota of voices insisting that it’s different this time. It’s a familiar chorus, and I confess to a certain wry amusement in watching so many changes get rung on what, after all, is ultimately a non sequitur. Grant that it’s different this time: so?  It’s different every time, and it always has been, yet those differences have never stopped history’s remarkably diverse stable of civilizations from plodding down the self-same track toward their common destiny.
It may also have occurred to my readers, and it has certainly occurred to me, that the legions of bloggers and pundits who base their reasonings on the claim that history has nothing to teach us don’t have to face a constant barrage of comments insisting that it’s the same this time. “It’s different this time” isn’t simply one opinion among others, after all; it’s one of the basic articles of faith of the contemporary industrial world, and questioning it reliably elicits screams of outrage even from those who like to tell themselves that they’ve rejected the conventional wisdom of the present day.
Yet that raises another question, one that’s going to bear down with increasing force in the years ahead of us: just how will people cope when some of their most cherished beliefs have to face a cage match with reality, and come out second best?
Such issues are rather on my mind just at the moment. Regular readers may recall that a while back I published a book, The UFO Phenomenon, which managed the not inconsiderable feat of offending both sides of the UFO controversy. It did so by the simple expedient of setting aside the folk mythology that’s been heaped up with equal enthusiasm by true believers in extraterrestrial visitation and true believers in today’s fashionable pseudoskeptical debunkery. After getting past that and a few other sources of confusion, I concluded that the most likely explanation for the phenomenon was that US military and intelligence agencies invented it out of whole cloth after the Second World War, as protective camouflage for an assortment of then-secret aerospace technologies.
That wasn’t the conclusion I expected to reach when I began work on the project; I had several other hypotheses in mind, all of which had to be considerably modified as the research proceeded. It was just too hard not to notice the way that the typical UFO sightings reported in any given decade so closely mimicked whatever the US was testing in secret at any given time—silvery dots or spheres in the late 1940s, when high-altitude balloons were the latest thing in aerial reconnaissance; points or tiny blobs of light high in the air in the 1950s, when the U-2 was still top secret; a phantasmagoria of flying lights and things dropping from the sky in the 1960s, when the SR-71 and the first spy satellites entered service; black triangles in the 1980s, when the first stealth aircraft were being tested, and so on. An assortment of further evidence pointing the same way, not to mention the significant parallels between the UFO phenomenon and those inflatable tanks and nonexistent battalions that tricked the Germans into missing the real preparations for D-Day, was icing on a saucer-shaped cake.
To call that an unpopular suggestion is to understate the case considerably, though I’m pleased to say it didn’t greatly hurt sales of the book.  In the years since The UFO Phenomenon saw print, though, there’s been a steady stream of declassified documents from US intelligence agencies admitting that, yes, a lot of so-called UFOs were perfectly identifiable if you happened to know what classified projects the US government had in the air just then. It turns out, for example, that roughly half the UFO sightings reported to the Air Force’s Project Blue Book between 1952 and 1969 were CIA spyplanes; the officers in charge of Blue Book used to call the CIA when sightings came in, and issue bogus “explanations” to provide cover for what was, at the time, a top secret intelligence project. I have no reason to think that the publication of The UFO Phenomenon had anything to do with the release of all this data, but it was certainly a welcome confirmation of my analysis.
The most recent bit of confirmation hit the media a few weeks back. Connoisseurs of UFO history know that the Scandinavian countries went through a series of major “flaps”—periods in which many UFO sightings occurred in a short time—in the 1950s and 1960s. The latest round of declassified data confirmed that these were sightings of US spyplanes snooping on the Soviet Union. The disclosures didn’t happen to mention whether CIA assets also spread lurid accounts of flying saucer sightings and alien visitations to help muddy the waters. My hypothesis is that that’s what was going on all the way through the history of the UFO phenomenon: fake stories and, where necessary, faked sightings kept public attention fixated on a manufactured mythology of flying saucers from outer space, so that the signal of what was actually happening never made it through the noise.
Many of my readers will already have guessed how the two sides of the UFO controversy responded to the disclosures just mentioned:  by and large, they haven’t responded to them at all. Believers in the extraterrestrial origin of UFOs are still insisting at the top of their lungs that some day very soon, the US government will be forced to ‘fess up to the reality of alien visitation—yes, I field emails from such people regularly. Believers in the null hypothesis, the claim that all UFO sightings result from hoaxes, illusions, or misidentification of ordinary phenomena, are still rehashing the same old arguments when they haven’t gone off to play at being skeptical about something else. That’s understandable, as both sides have ended up with substantial amounts of egg on their face.
Mind you, the believers in the extraterrestrial hypothesis were right about a great many more things than their rivals, and they deserve credit for that. They were right, for example, that people really were seeing unusual things in the skies; they were right that there was a coverup orchestrated by the US government, and that the Air Force was handing out explanations that it knew to be fake; they were even right in guessing that the Groom Lake airfield in Nevada, the legendary “Area 51,” was somehow central to the mystery—that was the main US spyplane testing and training base straight through the decades when the UFO mystery was at its peak. The one thing they got wrong was the real origin of the UFO phenomenon, but for them, unfortunately, that was the one thing that mattered.
The believers in the null hypothesis don’t have much reason to cheer, even though they turned out to be right about that one point. The disclosures have shown with uncomfortable clarity that a good many of the explanations offered by UFO skeptics were actually nonsense, just as their opponents had been pointing out all along. In 1981, for example, Philip Klass, James Oberg, and Robert Sheaffer claimed that they’d identified all the cases  that Project Blue Book labeled as “unknown.” As it happens, they did nothing of the kind; what they actually did was offer untested ad hoc hypotheses to explain away the unknowns, which is not exactly the same thing. It hardly needs to be said that CIA spyplanes played no part in those explanations, and if the “unknown” cases contained the same proportion of spyplanes as the whole collection, as seems likely, roughly half their explanations are wrong—a point that doesn’t exactly do much to inspire confidence in other claims made on behalf of the debunking crusade.
So it’s not surprising that neither side in the controversy has had the least interest in letting all this new data get in the way of keeping up the old argument. The usual human reaction to cognitive dissonance is to exclude the information that’s causing the dissonance, and that’s precisely what both sides, by and large, have done. As the dissonance builds, to be sure, people on the fringes of both scenes will quietly take their leave, new recruits will become few and far between, and eventually surviving communities of believers and debunkers alike will settle into a common pattern familiar to any of my readers acquainted with Spiritualist churches, Marxist parties, or the flotsam left behind by the receding tide of other once-influential movements in American society: little circles of true believers fixated on the disputes of an earlier day, hermetically sealed against the disdain and disinterest of the wider society.
They have the freedom to do that, because the presence or absence of alien saucers in Earth’s skies simply doesn’t have that much of an impact on everyday life. Like Spiritualists or Marxists, believers in alien contact and their debunking foes by and large can avoid paying more than the most cursory attention to the failure of their respective crusades. The believers can take comfort in the fact that even in the presence of overwhelming evidence, it’s notoriously hard to prove a negative; the debunkers can take comfort in the fact that, however embarrassing their logical lapses and rhetorical excesses, at least they were right about the origins of the phenomenon.
That freedom isn’t always available to those on the losing side of history. It’s one thing to keep the faith if you aren’t having your nose rubbed in the reality of your defeat on a daily basis, and quite another to cope with the ongoing, overwhelming disconfirmation of beliefs on which you’ve staked your pride, your values, and your sense of meaning and purpose in life. What would life be like these days for the vocal UFO debunkers of recent decades, say, if the flying saucers had turned out to be alien spacecraft after all, the mass saucer landing on the White House lawn so often and so vainly predicted had finally gotten around to happening, and Philip Klass and his fellow believers in the null hypothesis had to field polite requests on a daily basis to have their four-dimensional holopictures taken by giggling, gray-skinned tourists from Zeta Reticuli?
For a living example of the same process at work, consider the implosion of the New Age scene that’s well under way just now. In the years before the 2008 crash, as my readers will doubtless remember, tens of thousands of people plunged into real estate speculation with copies of Rhonda Byrne’s meretricious The Secret or similar works of New Age pseudophilosophy clutched in their sweaty hands, convinced that they knew how to make the universe make them rich. I knew a fair number of them—Ashland, Oregon, where I lived at the time, had a large and lucrative New Age scene—and so I had a ringside seat as their pride went before the real estate market’s fall. That was a huge blow to the New Age movement, and it was followed in short order by the self-inflicted humiliation of the grand nonevent of December 21, 2012.
Those of my readers who don’t happen to follow trends in the publishing industry may be interested to know that sales of New Age books peaked in 2007 and have been plunging since then; so has the take from New Age seminars, conferences, and a galaxy of other products hawked under the same label. There hadn’t been any shortage of disconfirmations in the previous history of the New Age scene, to be sure, but these two seem to have been just that little bit more than most of the movement’s adherents can gloss over. No doubt the New Age movement will spawn its share of little circles of true believers—the New Thought movement, which was basically the New Age’s previous incarnation, did exactly that when it imploded at the end of the 1920s, and many of those little circles ended up contributing to the rise of the New Age decades later—but as a major cultural phenomenon, it’s circling the drain.
One of the central themes of this blog, in turn, is that an embarrassment on much this same scale waits for all those who’ve staked their pride, their values, and their sense of meaning and purpose in life on the belief that it’s different this time, that our society somehow got an exemption from the common fate of civilizations. If industrial society ends up following the familiar arc of decline and fall into yet another dark age, if all the proud talk about man’s glorious destiny among the stars turns out to be empty wind, if we don’t even get the consolation prize of a downfall cataclysmic enough to drag the rest of the planet down with us—what then?
I’ve come to think that’s what lies behind the steady drumbeat of emails and comments I field week after week insisting that it’s different this time, that it has to be different this time, and clutching at the most remarkable assortment of straws in an attempt to get me to agree with them that it’s different this time. That increasingly frantic chorus has many sources, but much of it is, I believe, a response to a simple fact:  most of the promises made by authoritative voices in contemporary industrial society about the future we’re supposed to get have turned out to be dead wrong.
Given the number of people who like to insist that every technological wet dream will eventually be fulfilled, it’s worth taking the time to notice just how poorly earlier rounds of promises have measured up to the inflexible yardstick of reality. Of all the gaudy and glittering technological breakthroughs that have been promised with so much confidence over the last half dozen decades or so, from cities on the Moon and nuclear power too cheap to meter straight through to 120-year lifespans and cures for cancer and the common cold, how many have actually panned out? Precious few. Meanwhile most measures of American public health are slipping further into Third World territory with every year that passes, our national infrastructure is sinking into a morass of malign neglect, and the rising curve of prosperity that was supposed to give every American access to middle class amenities has vanished in a haze of financial fraud, economic sclerosis, and official statistics so blatantly faked that only the media pretends to believe them any more.
For many Americans these days, furthermore, those broken promises have precise personal equivalents. A great many of the people who were told by New Age authors that they could get rich easily and painlessly by visualizing abundance while investing in dubious real estate ventures found out the hard way that believing those promises amounted to being handed a one-way nonstop ticket to poverty. A great many of the people who were told by equally respected voices that they would attain financial security by mortgaging their futures for the benefit of a rapacious and corrupt academic industry and its allies in the banking sphere are finding out the same thing about the reassuring and seemingly authoritative claims that they took at face value.  For that matter, I wonder how many American voters feel they benefited noticeably from the hope and change that they were promised by the sock puppet they helped put into the White House in 2008 and 2012.
The promises that framed the housing bubble, the student loan bubble, and the breathtaking cynicism of Obama’s campaign, after all, drew on the same logic and the same assumptions that guided all that grand and vaporous talk about the inevitability of cities on the Moon and commuting by jetpack. They all assumed that history is a one-way street that leads from worse to better, to more, bigger, louder, gaudier, and insisted that of course things would turn out that way. Things haven’t turned out that way, they aren’t turning out that way, and it’s becoming increasingly clear that things aren’t going to turn out that way any time this side of the twelfth of Never. I’ve noted here several times now that if you want to predict the future, paying attention to the reality of ongoing decline pretty reliably gives you better results than trusting that the decline won’t continue in its current course.
 The difficulty with that realization, of course, is precisely that so many people have staked their pride, their values, and their sense of meaning and purpose in life on one or another version of the logic I’ve just sketched out. Admitting that the world is under no compulsion to change in the direction they think it’s supposed to change, that it’s currently changing in a direction that most people find acutely unwelcome, and that there are good reasons to think the much-ballyhooed gains of the recent past were the temporary products of the reckless overuse of irreplaceable energy resources, requires the surrender of a deeply and passionately held vision of time and human possibility. Worse, it lands those who do so in a situation uncomfortably close to the crestfallen former UFO debunkers I joked about earlier in this post, having to cope on an everyday basis with a world full of flying saucers and tourists from the stars.
Beneath the farcical dimensions of that image lies a sobering reality. Human beings can’t live for long without some source of values and some sense of meaning in their lives.  That’s why people respond to cognitive dissonance affecting their most cherished values by shoving away the unwelcome data so forcefully, even in the teeth of the evidence. Resistance to cognitive dissonance has its limits, though, and when people have their existing sources of meaning and value swept away by a sufficiently powerful flood of contradictions, they will seek new sources of meaning and value wherever they can find them—no matter how absurd, dysfunctional, or demonic those new meanings and values might look to an unsympathetic observer.  The mass suicide of the members of the Heaven’s Gate UFO cult in 1997 offers one measure of just how far astray those quests for new sources of meaning can go; so, on a much larger scale, does the metastatic nightmare of Nazi Germany.
I wrote in an earlier post this month about the implosion of the sense of political legitimacy that’s quietly sawing the props out from underneath the US federal government, and convincing more and more Americans that the people who claim to represent and govern them are a pack of liars and thieves. So broad and deep a loss of legitimacy is political dynamite, and normally results before very long in the collapse of the government in question. There are no guarantees, though, that whatever system replaces a delegitimized government will be any better.
That same principle applies with equal force to the collapse of the fundamental beliefs of a civilization. In next week’s post, with this in mind, I plan on talking about potential sources of meaning, purpose and value in a world on its way into a global dark age.

Bright Were The Halls Then

Wed, 2014-07-09 16:51
Arnold Toynbee, whose magisterial writings on history have been a recurring source of inspiration for this blog, has pointed out an intriguing difference between the way civilizations rise and the way they fall. On the way up, he noted, each civilization tends to diverge not merely from its neighbors but from all other civilizations throughout history.  Its political and religious institutions, its arts and architecture, and all the other details of its daily life take on distinctive forms, so that as it nears maturity, even the briefest glance at one of its creations is often enough to identify its source.
Once the peak is past and the long road down begins, though, that pattern of divergence shifts into reverse, slowly at first, and then with increasing speed. A curious sort of homogenization takes place: distinctive features are lost, and common patterns emerge in their place. That doesn’t happen all at once, and different cultural forms lose their distinctive outlines at different rates, but the further down the trajectory of decline and fall a civilization proceeds, the more it resembles every other civilization in decline. By the time that trajectory bottoms out, the resemblance is all but total; compare one postcollapse society to another—the societies of post-Roman Europe, let’s say, with those of post-Mycenaean Greece—and it can be hard to believe that dark age societies so similar could have emerged out of the wreckage of civilizations so different.
It’s interesting to speculate about why this reversion to the mean should be so regular a theme in the twilight and aftermath of so many civilizations. Still, the recurring patterns of decline and fall have another implication—or, if you will, another application. I’ve noted here and elsewhere that modern industrial society, especially but not only here in North America, is showing all the usual symptoms of a civilization on its way toward history’s compost bin. If we’ve started along the familiar track of decline and fall—and I think a very good case can be made for that hypothesis—it should be possible to map the standard features of the way down onto the details of our current situation, and come up with a fairly accurate sense of the shape of the future ahead of us.
All the caveats raised in last week’s Archdruid Report post deserve repetition here, of course. The part of history that can be guessed in advance is a matter of broad trends and overall patterns, not the sort of specific incidents that make up so much of history as it happens.  Exactly how the pressures bearing down on late industrial America will work out in the day-by-day realities of politics, economics, and society will be determined by the usual interplay of individual choices and pure dumb luck. That said, the broad trends and overall patterns are worth tracking in their own right, and some things that look as though they ought to belong to the realm of the unpredictable—for example, the political and military dynamics of border regions, or the relations among the imperial society’s political class, its increasingly disenfranchised lower classes, and the peoples outside its borders—follow predictable patterns in case after case in history, and show every sign of doing the same thing this time around too.
What I’m suggesting, in fact, is that in a very real sense, it’s possible to map out the history of North America over the next five centuries or so in advance. That’s a sweeping claim, and I’m well aware that the immediate response of at least some of my readers will be to reject the possibility out of hand. I’d like to encourage those who have this reaction to try to keep an open mind. In the posts to come, I plan on illustrating every significant point I make with historical examples from the twilight years of other civilizations, as well as evidence from the current example insofar as that’s available yet.  Thus it should be possible for my readers to follow the argument as it unfolds and see how it hangs together.
Now of course all this presupposes that the lessons of the past actually have some relevance to our future. I’m aware that that’s a controversial proposal these days, but to my mind the controversy says more about the popular idiocies of our time than it does about the facts on the ground. I’ve discussed in previous posts how people in today’s America have taken to using thoughtstoppers such as "but it’s different this time!" to protect themselves from learning anything from history—a habit that no doubt does wonders for their peace of mind today, though it pretty much guarantees them a face-first collision with a brick wall of misery and failure not much further down time’s road. Those who insist on clinging to that habit are not going to find the next year or so of posts here to their taste.
They won’t be the only ones. Among the resources I plan on using to trace out the history of the next five centuries is the current state of the art in the environmental sciences, and that includes the very substantial body of evidence and research on anthropogenic climate change. I’m aware that some people consider that controversial, and of course some very rich corporate interests have invested a lot of money into convincing people that it’s controversial, but I’ve read extensively on all sides of the subject, and the arguments against taking anthropogenic climate change seriously strike me as specious. I don’t propose to debate the matter here, either—there are plenty of forums for that. While I propose to leaven current model-based estimates on climate change and sea level rise with the evidence from paleoclimatology, those who insist that there’s nothing at all the matter with treating the atmosphere as an aerial sewer for greenhouse gases are not going to be happy with the posts ahead.
I also propose to discuss industrial civilization’s decline and fall without trying to sugarcoat the harsher dimensions of that process, and that’s going to ruffle yet another set of feathers. Regular readers will recall a post earlier this year discussing the desperate attempts to insist that it won’t be that bad, really it won’t, that were starting to show up in the flurry of criticism each of these weekly essays reliably fields.  That’s even more common now than it was then; nowadays, in fact, whenever one of my posts uses words such as "decline" or "dark age," I can count on being taken to task by critics who insist earnestly that such language is too negative, that of course we’re facing a shift to a different kind of society but I shouldn’t describe it in such disempowering terms, and so on through the whole vocabulary of the obligatory optimism that’s so fashionable among the privileged these days.
I’m pretty sure, as noted in the blog post just cited, that this marks the beginning of a shift by the peak oil community as a whole out of the second of Elisabeth Kubler-Ross’ famous five stages, the stage of anger, into the third stage of bargaining. That’s welcome, in that it brings us closer to the point at which people have finished dealing with their own psychological issues and can get to work coping with the predicament of our time, but it’s still as much an evasion of that predicament as denial and anger were. The fall of a civilization is not a pleasant prospect—and that’s what we’re talking about, of course: the decline and fall of industrial civilization, the long passage through a dark age, and the first stirrings of the successor societies that will build on our ruins. That’s how the life cycle of a civilization ends, and it’s the way that ours is ending right now.
What that means in practice is that most of the familiar assumptions people in the industrial world like to make about the future will be stood on their heads in the decades and centuries ahead. Most of the rhetoric being splashed about these days in support of this or that or the other Great Turning that will save us from the consequences of our own actions assumes, as a matter of course, that a majority of people in the United States—or, heaven help us, in the whole industrial world—can and will come together around some broadly accepted set of values and some agreed-upon plan of action to rescue industrial civilization from the rising spiral of crises that surrounds it. My readers may have noticed that things seem to be moving in the opposite direction, and history suggests that they’re quite correct.
Among the standard phenomena of decline and fall, in fact, is the shattering of the collective consensus that gives a growing society the capacity to act together to accomplish much of anything at all. The schism between the political class and the rest of the population—you can certainly call these "the 1%" and "the 99%" if you wish—is simply the most visible of the fissures that spread through every declining civilization, breaking it into a crazy quilt of dissident fragments pursuing competing ideals and agendas. That process has a predictable endpoint, too: as the increasingly grotesque misbehavior of the political class costs it whatever respect and loyalty it once received from the rest of society, and the masses abandon their trust in the political institutions of their society, charismatic leaders from outside the political class fill the vacuum, violence becomes the normal arbiter of power, and the rule of law becomes a polite fiction when it isn’t simply abandoned altogether.
The economic sphere of a society in decline undergoes a parallel fragmentation for different reasons. In ages of economic expansion, the labor of the working classes yields enough profit to cover the costs of a more or less complex superstructure, whether that superstructure consists of the pharaohs and priesthoods of ancient Egypt or the bureaucrats and investment bankers of late industrial America. As expansion gives way to contraction, the production of goods and services no longer yields the profit it once did, but the members of the political class, whose power and wealth depend on the superstructure, are predictably unwilling to lose their privileged status and have the power to keep themselves fed at everyone else’s expense. The reliable result is a squeeze on productive economic activity that drives a declining civilization into one convulsive financial crisis after another, and ends by shredding its capacity to produce even the most necessary goods and services.
In response, people begin dropping out of the economic mainstream altogether, because scrabbling for subsistence on the economic fringes is less futile than trying to get by in a system increasingly rigged against them. Rising taxes, declining government services, and systematic privatization of public goods by the rich compete to alienate more and more people from the established order, and the debasement of the money system in an attempt to make up for faltering tax revenues drives more and more economic activity into forms of exchange that don’t involve money at all. As the monetary system fails, in turn, economies of scale become impossible to exploit; the economy fragments and simplifies until bare economic subsistence on local resources, occasionally supplemented by plunder, becomes the sole surviving form of economic activity.
Taken together, these patterns of political fragmentation and economic unraveling send the political class of a failing civilization on a feet-first journey through the exit doors of history.  The only skills its members have, by and large, are those needed to manipulate the complex political and economic levers of their society, and their power depends entirely on the active loyalty of their subordinates, all the way down the chain of command, and the passive obedience of the rest of society.  The collapse of political institutions strips the political class of any claim to legitimacy, the breakdown of the economic system limits its ability to buy the loyalty of those that it can no longer inspire, the breakdown of the levers of control strips its members of the only actual power they’ve got, and that’s when they find themselves having to compete for followers with the charismatic leaders rising just then from the lower echelons of society. The endgame, far more often than not, comes when the political class tries to hire the rising leaders of the disenfranchised as a source of muscle to control the rest of the populace, and finds out the hard way that it’s the people who carry the weapons, not the ones who think they’re giving the orders, who actually exercise power.
The implosion of the political class has implications that go well beyond a simple change in personnel at the upper levels of society. The political and social fragmentation mentioned earlier applies just as forcefully to the less tangible dimensions of human life—its ideas and ideals, its beliefs and values and cultural practices. As a civilization tips over into decline, its educational and cultural institutions, its arts, literature, sciences, philosophies and religions all become identified with its political class; this isn’t an accident, as the political class generally goes out of its way to exploit all these things for the sake of its own faltering authority and influence. To those outside the political class, in turn, the high culture of the civilization becomes alien and hateful, and when the political class goes down, the cultural resources that it harnessed to its service go down with it.
Sometimes, some of those resources get salvaged by subcultures for their own purposes, as Christian monks and nuns salvaged portions of classical Greek and Roman philosophy and science for the greater glory of God. That’s not guaranteed, though, and even when it does happen, the salvage crew picks and chooses for its own reasons—the survival of classical Greek astronomy in the early medieval West, for example, happened simply because the Church needed to know how to calculate the date of Easter. Where no such motive exists, losses can be total: of the immense corpus of Roman music, the only thing that survives is a fragment of one tune that takes about 25 seconds to play, and there are historical examples in which even the simple trick of literacy got lost during the implosion of a civilization, and had to be imported centuries later from somewhere else.
All these transformations impact the human ecology of a falling civilization—that is, the basic relationships with the natural world on which every human society depends for day to day survival. Most civilizations know perfectly well what has to be done to keep topsoil in place, irrigation water flowing, harvests coming in, and all the other details of human interaction with the environment on a stable footing. The problem is always how to meet the required costs as economic growth ends, contraction sets in, and the ability of central governments to enforce their edicts begins to unravel. The habit of feeding the superstructure at the expense of everything else impacts the environment just as forcefully as it does the working classes:  just as wages drop to starvation levels and keep falling, funding for necessary investments in infrastructure, fallow periods needed for crop rotation, and the other inputs that keep an agricultural system going in a sustainable manner all get cut. 
As a result, topsoil washes away, agricultural hinterlands degrade into deserts or swamps, vital infrastructure collapses from malign neglect, and the ability of the land to support human life starts on the cascading descent that characterizes the end stage of decline—and so, in turn, does population, because human numbers in the last analysis are a dependent variable, not an independent one. Populations don’t grow or shrink because people just up and decide one day to have more or fewer babies; they’re constrained by ecological limits. In an expanding civilization, as its wealth and resource base increases, the population expands as well, since people can afford to have more children, and since more of the children born each year have access to the nutrition and basic health care that let them survive to breeding age themselves.  When growth gives way to decline, population typically keeps rising for another generation or so due to sheer demographic momentum, and then begins to fall.
The consequences can be traced in the history of every collapsing civilization.  As the rural economy implodes due to agricultural failure on top of the more general economic decline, a growing fraction of the population concentrates in urban slum districts, and as public health measures collapse, these turn into incubators for infectious disease. Epidemics are thus a common feature in the history of declining civilizations, and of course war and famine are also significant factors, but an even larger toll is taken by the constant upward pressure exerted on death rates by poverty, malnutrition, crowding, and stress. As deaths outnumber births, population goes into a decline that can easily continue for centuries. It’s far from uncommon for the population of an area in the wake of a civilization to equal less than 10% of the figure it reached at the precollapse peak.
Factor these patterns together, follow them out over the usual one to three centuries of spiralling decline, and you have the standard picture of a dark age society: a mostly deserted countryside of small and scattered villages where subsistence farmers, illiterate and impoverished, struggle to coax fertility back into the depleted topsoil. Their governments consist of the personal rule of local warlords, who take a share of each year’s harvest in exchange for protection from raiders and rough justice administered in the shade of any convenient tree. Their literature consists of poems, lovingly memorized and chanted to the sound of a simple stringed instrument, recalling the great deeds of the charismatic leaders of a vanished age, and these same poems also contain everything they know about their history. Their health care consists of herbs, a little rough surgery, and incantations cannily used to exploit the placebo effect. Their science—well, I’ll let you imagine that for yourself.
And the legacy of the past? Here’s some of what an anonymous poet in one dark age had to say about the previous civilization:
Bright were the halls then, many the bath-houses,
High the gables, loud the joyful clamor,
Many the meadhalls full of delights
Until mighty Fate overthrew it all.
Wide was the slaughter, the plague-time came,
Death took away all those brave men.
Broken their ramparts, fallen their halls,
The city decayed; those who built it
Fell to the earth. Thus these courts crumble,
And roof-tiles fall from this arch of stone.
Fans of Anglo-Saxon poetry will recognize that as a passage from "The Ruin." If the processes of history follow their normal pattern, they will be chanting poems like this about the ruins of our cities four or five centuries from now. How we’ll get there, and what is likely to happen en route, will be the subject of most of the posts here for the next year or so.

In a Handful of Dust

Wed, 2014-07-02 17:03
All things considered, it’s a good time to think about how much we can know about the future in advance. A hundred years ago last Saturday, as all my European readers know and a few of my American readers might have heard, a young Bosnian man named Gavrilo Prinzip lunged out of a crowd in Sarajevo and emptied a pistol into the Archduke Franz Ferdinand and his wife Sophie, who were touring that corner of the ramshackle Austro-Hungarian empire they were expected to inherit in due time. Over the summer months that followed, as a direct result of those gunshots, most of the nations of Europe went to war with one another, and the shockwaves set in motion by that war brought a global order centuries old crashing down.
In one sense, none of this was a surprise. Perceptive observers of the European scene had been aware for decades of the likelihood of a head-on crash between the rising power of Germany and the aging and increasingly fragile British Empire. The decade and a half before war actually broke out had seen an increasingly frantic scramble for military alliances that united longtime rivals Britain and France in a political marriage of convenience with the Russian Empire, in the hope of containing Germany’s growing economic and military might. Every major power poured much of its wealth into armaments, sparking an arms race so rapid that the most powerful warship on the planet in 1906, Britain’s mighty HMS Dreadnought, was hopelessly obsolete when war broke out eight years later.
Inquiring minds could read learned treatises by Halford Mackinder and many other scholars, explaining why conflict between Britain and Germany was inevitable; they could also take in serious fictional treatments of the subject such as George Chesney’s The Battle of Dorking and Saki’s When William Came, or comic versions such as P.G. Wodehouse’s The Swoop!. Though most military thinkers remained stuck in the Napoleonic mode of conflict chronicled in the pages of Karl von Clausewitz’ On War, those observers of the military scene who paid attention to the events of the American Civil War’s closing campaigns might even have been able to sense something of the trench warfare that would dominate the coming war on the western front.
It’s only fair to remember that a great many prophecies in circulation at that same time turned out to be utterly mistaken. Most of them, however, had a theme in common that regular readers of this blog will find quite familiar: the claim that because of some loudly ballyhooed factor or other, it really was different this time. Thus, for example, plenty of pundits insisted in the popular media that economic globalization had made the world’s economies so interdependent that war between the major powers was no longer possible. Equally, there was no shortage of claims that this or that or the other major technological advance had either rendered war impossible, or guaranteed that a war between the great powers would be over in weeks. Then as now, those who knew their history knew that any claim about the future that begins “It’s different this time” is almost certain to be wrong.
All things considered, it was not exactly difficult in the late spring of 1914, for those who were willing to do so, to peer into the future and see the shadow of a major war between Britain and Germany rising up to meet them. There were, in fact, many people who did just that. To go further and guess how it would happen, though, was quite another matter.  Some people came remarkably close; Bismarck, who was one of the keenest political minds of his time, is said to have commented wearily that the next great European war would probably be set off by some idiotic event in the Balkans.  Still, not even Bismarck could have anticipated the cascade of misjudgments and unintended consequences that sent this particular crisis spinning out of control in a way that half a dozen previous crises had not done.
What’s more, the events that followed the outbreak of war in the summer of 1914 quickly flung themselves off the tracks intended for them by the various political leaders and high commands, and carved out a trajectory of their own that nobody anywhere seems to have anticipated. That the Anglo-French alliance would squander its considerable military and economic superiority by refusing to abandon a bad strategy no matter how utterly it failed or how much it cost; that Russia’s immense armies would prove so feeble under pressure; that Germany would combine military genius and political stupidity in so stunningly self-defeating a fashion; that the United States would turn out to be the wild card in the game, coming down decisively on the Allied side just when the war had begun to turn in Germany’s favor—none of that was predicted, or could have been predicted, by anyone.
Nor were the consequences of the war any easier to foresee. On that bright summer day in 1914 when Gavrilo Prinzip burst from the crowd with a pistol in his hand, who could have anticipated the Soviet Union, the Great Depression, the blitzkrieg, or the Holocaust? Who would have guessed that the victor in the great struggle between Britain and Germany would turn out to be the United States? The awareness that Britain and Germany were racing toward a head-on collision did not provide any certain knowledge about how the resulting crash would turn out, or what its consequences would be; all that could be known for sure was that an impact was imminent and the comfortable certainties of the prewar world would not survive the shock.
That dichotomy, between broad patterns that are knowable in advance and specific details that aren’t, is very common in history. It’s possible, for example, that an impartial observer who assessed the state of the Roman Empire in 400 or so could have predicted the collapse of Roman power outside the Eastern Mediterranean littoral. As far as I know, no one did so—the ideological basis of Roman society made the empire’s implosion just as unthinkable then as the end of progress is today—but the possibility was arguably there. Even if an observer had been able to anticipate the overall shape of the Roman and post-Roman future, though, that anticipation wouldn’t have reached as far as the specifics of the collapse, and let’s not even talk about whether our observer might have guessed that the last Emperor of Rome in the west would turn out to be the son of Attila the Hun’s secretary, as in fact he was.
Such reflections are on my mind rather more than usual just now, for reasons that will probably come as no surprise to regular readers of this blog. For a variety of reasons, a few of which I’ll summarize in the paragraphs ahead, I think it’s very possible that the United States and the industrial world in general are near the brink of a convulsive era of crisis at least as severe as the one that began in the summer of 1914. It seems very likely to me that in the years immediately ahead, a great many of the comfortable certainties of the last half century or so are going to be thrown overboard once and for all, as waves of drastic political, economic, military, social, and ecological change slam into societies that, despite decades of cogent warnings, have done precisely nothing to prepare for them.
I want to review here some of the reasons why I expect an era of crisis to arrive sooner rather than later. One of the most important of those reasons is the twilight of the late (and soon to be loudly lamented) fracking bubble. I’ve noted in previous posts here that the main product of the current fracking industry is neither oil nor gas, but the same sort of dubiously priced financial paper we all got to know and love in the aftermath of last decade’s real estate bubble. These days, the rickety fabric of American finance depends for its survival on a steady flow of hallucinatory wealth, since the production of mere goods and services no longer produces enough profit to support the Brobdingnagian superstructure of the financial industry and its swarm of attendant businesses. These days, too, an increasingly brittle global political order depends for its survival on the pretense that the United States is still the superpower it was decades ago, and all those strident and silly claims that the US is about to morph into a "Saudi America" flush with oil wealth are simply useful evasions that allow the day of reckoning, with its inevitable reshuffling of political and economic status, to be put off a little longer.
Unfortunately for all those involved, the geological realities on which the fracking bubble depends are not showing any particular willingness to cooperate. The downgrading of the Monterey Shale not long ago was just the latest piece of writing on the wall: one more sign that we’re scraping the bottom of the oil barrel under the delusion that this proves the barrel is still full. The fact that most of the companies in the fracking industry are paying their bills by running up debt, since their expenses are considerably greater than their earnings, is another sign of trouble that ought to be very familiar to those of us who watched the housing bubble go through its cycle of boom and bust.
Bubbles are like empires; if you watch one rise, you can be sure that it’s going to fall. What you don’t know, and can’t know, is when and how. That’s a trap that catches plenty of otherwise savvy investors. They see a bubble get under way, recognize it as a bubble, put money into it under the fond illusion that they can anticipate the bust and pull their money out right before the bottom drops out...and then, like everyone else, they get caught flatfooted by the end of the bubble and lose their shirts. That’s one of the great and usually unlearned lessons of finance: when a bubble gets going, it’s the pseudo-smart money that piles into it—the really smart money heads for the hills.
So it’s anyone’s guess when exactly the fracking bubble is going to pop, and even more uncertain how much damage it’s going to do to what remains of the US economy. A good midrange guess might be that it’ll have roughly the same impact that the popping of the housing bubble had in 2008 and 2009, but it could be well to either side of that estimate. Crucially, though, the damage that it does will be landing on an economy that has never really recovered from the 2008-2009 housing crash, in which actual joblessness (as distinct from heavily manipulated unemployment figures) is at historic levels and a very large number of people are scrambling for survival. At this point, another sharp downturn would make things much worse for a great many millions whose prospects aren’t that good to begin with, and that has implications that cross the border from economics into politics.
Meanwhile, the political scene in the United States is primed for an explosion. One of my regular readers—tip of the archdruid’s hat to Andy Brown—is a research anthropologist who recently spent ten weeks traveling around the United States asking people about their opinions and feelings concerning government. What he found was that, straight across geographical, political, and economic dividing lines, everyone he interviewed described the US government as the corrupt sock puppet of wealthy interests. He noted that he couldn’t recall ever encountering so broad a consensus on any political subject, much less one as explosive as this. 
Recent surveys bear him out. Only 7% of Americans feel any significant confidence in Congress.  Corresponding figures for the presidency and the Supreme Court are 29% and 30% respectively; fewer than a third of Americans, that is, place much trust in the political institutions whose birth we’ll be celebrating in a few days. This marks a tectonic shift of immense importance.  Not that many decades ago, substantial majorities of Americans believed in the essential goodness of the institutions that governed their country. Even those who condemned the individuals running those institutions—and of course that’s always been one of our national sports—routinely phrased those condemnations in terms reflecting a basic faith in the institutions themselves, and in the American experiment as a whole.
Those days are evidently over. The collapse of legitimacy currently under way in the United States is a familiar sight to students of history, who can point to dozens of comparable examples; each of these was followed, after no very long delay, by the collapse of the system of government whose legitimacy in the eyes of its people had gone missing in action. Those of my readers who are curious about such things might find it educational to read a good history of the French or the Russian revolutions, the collapse of the Weimar Republic or the Soviet Union, or any of the other implosions of political authority that have littered the last few centuries with rubble: when a system loses legitimacy in the eyes of the people it claims to lead, the end of that system is on its way.
The mechanics behind the collapse are worth a glance as well. Whether or not political power derives from the consent of the governed, as American political theory insists, it’s unarguably true that political power depends from moment to moment on the consent of the people who do the day-to-day work of governing: the soldiers, police officers, bureaucrats and clerks whose job is to see to it that orders from the leadership get carried out. Their obedience is the linchpin on which the survival of a regime rests, and it’s usually also the fault line along which regimes shatter, because these low-ranking and poorly paid functionaries aren’t members of the elite. They’re ordinary working joes and janes, subject to the same cultural pressures as their neighbors, and they generally stop believing in the system they serve about the same time as their neighbors do. That doesn’t stop them from serving it, but it does very reliably make them unwilling to lay down their lives in its defense, and if a viable alternative emerges, they’re rarely slow to jump ship.
Here in America, as a result of the processes just surveyed, we’ve got a society facing a well-known pattern of terminal crisis, with a gridlocked political system that’s lost its legitimacy in the eyes of the people it governs, coupled with a baroque and dysfunctional economic system lurching toward another cyclical collapse under the weight of its own hopelessly inefficient management of wealth. This is not a recipe for a comfortable future. The situation has become dire enough that some of the wealthiest beneficiaries of the system—usually the last people to notice what’s happening, until the mob armed with torches and pitchforks shows up at their mansion’s front door—have belatedly noticed that robbing the rest of society blind is not a habit with a long shelf life, and have begun to suggest that if the rich don’t fancy the thought of dangling from lampposts, they might want to consider a change in approach. 
In its own way, this recognition is a promising sign. Similar realizations some seventy years ago put Franklin Roosevelt in the White House and spared the United States the hard choice between civil war and authoritarian rule that so many other countries were facing just then.  Unless a great many more members of our kleptocratic upper class experience the same sort of wake-up call in a hurry, though, the result this time is likely to be far too little and much too late.
Here again, though, a recognition that some kind of crash is coming doesn’t amount to foreknowledge of when it’s going to hit, how it’s going to play out, or what the results will be. If the implosion of the fracking bubble leads to one more round of bailouts for the rich and cutbacks for the poor, we could see the inner cities explode as they did in the long hot summers of the 1960s, setting off the insurgency that was so narrowly avoided in those years, and plunging the nation into a long nightmare of roadside bombs, guerrilla raids, government reprisals, and random drone strikes. If a talented demagogue shows up in the right place and time, we might instead see the rise of a neofascist movement that would feed on the abandoned center of American politics and replace the rusted scraps of America’s democratic institutions with a shiny new dictatorship.
If the federal government’s gridlock stiffens any further toward rigor mortis, for that matter, we could see the states force a constitutional convention that could completely rewrite the terms of our national life, or simply dissolve the Union and allow new regional nations to take shape.  Alternatively, if a great many factors break the right way, and enough people in and out of the corridors of power take the realities of our predicament seriously and unexpectedly grow some gonads—either kind, take your pick—we might just be able to stumble through the crisis years into an era of national retrenchment and reassessment, in which many of the bad habits picked up during America’s century of empire get chucked in history’s compost bin, and some of the ideals that helped inspire this country get a little more attention for a while. That may not be a likely outcome, but I think it’s still barely possible.
All we can do is wait and see what happens, or try to take action in the clear awareness that we can’t know what effects our actions will have. Thinking about that predicament, I find myself remembering lines from the bleak and brilliant poetic testament of the generation that came of age in the aftermath of those gunshots in Sarajevo, T.S. Eliot’s The Waste Land:
What are the roots that clutch, what branches grow
Out of this stony rubbish? Son of man,
You cannot say, or guess, for you know only
A heap of broken images, where the sun beats,
And the dead tree gives no shelter, the cricket no relief,
And the dry stone no sound of water. Only
There is shadow under this red rock
(Come in under the shadow of this red rock),
And I will show you something different from either
Your shadow at morning striding behind you
Or your shadow at evening rising up to meet you:
I will show you fear in a handful of dust.
It’s a crisp metaphor for the challenges of our time, as it was of those in the time about which Eliot wrote. For that matter, the quest to see something other than our own shadows projected forward on the future or backward onto the past has a broader significance for the project of this blog. With next week’s post, I plan on taking that quest a step further. The handful of dust I intend to offer my readers for their contemplation is the broader trajectory of which the impending crisis of the United States is one detail: the descent of industrial civilization over the next few centuries into a deindustrial dark age.

The Broken Thread of Culture

Wed, 2014-06-25 17:38
There are times when the deindustrial future seems to whisper in the night like a wind blowing through the trees, sending the easy certainties of the present spinning like dead leaves. I had one of those moments recently, courtesy of a news story from 1997 that a reader forwarded me, about the spread of secret stories among homeless children in Florida’s Dade County.  These aren’t your ordinary children’s stories: they’re myths in the making, a bricolage of images from popular religion and folklore torn from their original contexts and pressed into the service of a harsh new vision of reality.
God, according to Dade County’s homeless children, is missing in action; demons stormed Heaven a while back and God hasn’t been seen since. The mother of Christ murdered her son and morphed into the terrifying Bloody Mary, a nightmare being who weeps blood from eyeless sockets and seeks out children to kill them.  Opposing her is a mysterious spirit from the ocean who takes the form of a blue-skinned woman, and who can protect children who know her secret name. The angels, though driven out of Heaven, haven’t given up; they carry on their fight against the demons from a hidden camp in the jungle somewhere outside Miami, guarded by friendly alligators who devour hostile intruders. The spirits of children who die in Dade County’s pervasive gang warfare can go to the camp and join the war against the demons, so long as someone who knows the stories puts a leaf on their graves.
This isn’t the sort of worldview you’d expect from people living in a prosperous, scientifically literate industrial society, but then the children in Dade County’s homeless shelters don’t fit that description in any meaningful sense. They live in conditions indistinguishable from the worst end of the Third World; their lives are defined by poverty, hunger, substance abuse, shattered families, constant uncertainty, and lethal violence dispensed at random. If, as Bruce Sterling suggested, the future is already here, just not evenly distributed yet, they’re the involuntary early adopters of a future very few people want to think about just now, but many of us will experience in the decades ahead, and most of humanity will face in the centuries that follow: a future we may as well call by the time-honored label "dark age."
That label actually dates from before the period most often assigned it these days. Marcus Terentius Varro, who was considered the most erudite Roman scholar of his time, divided up the history known to him into three ages—an age of history, for which there were written records; before that, an age of fable, from which oral traditions survived; and before that, a dark age, about which no one knew anything at all. It’s a simple division but a surprisingly useful one; even in those dark ages where literacy survived as a living tradition, records tend to be extremely sparse and unhelpful, and when records pick up again they tend to be thickly frosted with fable and legend for a good long while thereafter. In a dark age, the thread of collective memory and cultural continuity snaps, the ends are lost, and a new thread must be spun from whatever raw materials happen to be on hand.
There are many other ways to talk about dark ages, and we’ll get to those in later posts, but I want to focus on this aspect for the moment. Before the Greco-Roman world Varro knew, an earlier age of complex, literate civilizations had flourished and then fallen, and the dark age that followed was so severe that in many regions—Greece was one of them—even the trick of written language was lost, and had to be imported from elsewhere centuries afterwards. The dark age following Varro’s time wasn’t quite that extreme, but it was close enough; literacy became a rare attainment, and vast amounts of scientific, technical, and cultural knowledge were lost. To my mind, that discontinuity demands more attention than it’s usually been given.  What is it that snaps the thread that connects past to present, and allows the accumulated knowledge of an entire civilization to fall into oblivion?
A recurring historical process lies behind that failure of transmission, and it’s one that can be seen at work in those homeless children of Dade County, whispering strange stories to one another in the night.
Arnold Toynbee, whose monumental work A Study of History has been a major inspiration to this blog’s project, proposed that civilizations on the way to history’s compost heap always fail in the same general way. The most important factor that makes a rising civilization work, he suggested, is mimesis—the universal human habit by which people imitate the behavior and attitudes of those they admire. As long as the political class of a civilization can inspire admiration and affection from those below it, the civilization thrives, because the shared sense of values and purpose generated by mimesis keeps the pressures of competing class interests from tearing it apart.
Civilizations fail, in turn, because their political classes lose the ability to inspire mimesis, and this happens in turn because members of the elite become so fixated on maintaining their own power and privilege that they stop doing an adequate job of addressing the problems facing their society.  As those problems spin further and further out of control, the political class loses the ability to inspire and settles instead for the ability to dominate. Outside the political class and its hangers-on, in turn, more and more of the population becomes what Toynbee calls an internal proletariat, an increasingly sullen underclass that still provides the political class with its cannon fodder and labor force but no longer sees anything to admire or emulate in those who order it around.
It can be an unsettling experience to read American newspapers or wide-circulation magazines from before 1960 or so with eyes sharpened by Toynbee’s analysis.  Most newspapers included a feature known as the society pages, which chronicled the social and business activities of the well-to-do, and those were read, with a sort of fascinated envy, very far down the social pyramid. Established figures of the political and business world were treated with a degree of effusive respect you won’t find in today’s media, and even those who hoped to shoulder aside this politician or that businessman rarely dreamed of anything more radical than filling the same positions themselves. Nowadays? Watching politicians, businesspeople, and celebrities get dragged down by some wretched scandal or other is this nation’s most popular spectator sport.
That’s what happens when mimesis breaks down. The failure to inspire has disastrous consequences for the political class—when the only things left that motivate people to seek political office are cravings for power or money, you’ve pretty much guaranteed that the only leaders you’ll get are the sort of incompetent hacks who dominate today’s political scene—but I want to concentrate for a moment on the effects on the other end of the spectrum. The failure of the political class to inspire mimesis in the rest of society doesn’t mean that mimesis goes away. The habit of imitation is as universal among humans as it is among other social primates. The question becomes this:  what will inspire mimesis among the internal proletariat? What will they use as the templates for their choices and their lives?
That’s a crucial question, because it’s not just social cohesion that depends on mimesis.  The survival of the collective knowledge of a society—the thread connecting past with present I mentioned earlier—also depends on the innate habit of imitation. In most human societies, children learn most of what they need to know about the world by imitating parents, older siblings, and the like, and in the process the skills and knowledge base of the society is passed on to each new generation. Complex societies like ours do the same thing in a less straightforward way, but the principle is still the same. Back in the day, what motivated so many young people to fiddle with chemistry sets? More often than not, mimesis—the desire to be just like a real scientist, making real discoveries—and that was reasonable in the days when a significant fraction of those young people could expect to grow up to be real scientists.
That still happens, but it’s less and less common these days, and for those who belong to the rapidly expanding underclass of American society—the homeless children in Dade County I mentioned at the beginning of this essay, for example—the sort of mimesis that might lead to a career in science isn’t even an option. A great many of those children won’t live to reach adulthood, and they know it; those who do manage to dodge the stray bullets and the impact of collapsing public health, by and large, will spend their days in the crumbling, crowded warehouse facilities that substitute for schools in this country’s poorer neighborhoods, where maybe half of each graduating high school class comes out functionally illiterate; their chances of getting a decent job of any kind weren’t good even before the global economy started unraveling, and let’s not even talk about those chances now.
When imitating the examples offered by the privileged becomes a dead end, in other words, people find other examples to imitate. That’s one of the core factors, I’m convinced, behind the collapse of the reputation of the sciences in contemporary American society, which is so often bemoaned by scientists and science educators. Neil deGrasse Tyson, say, may rhapsodize about the glories of science, but what exactly do those glories have to offer children huddling in an abandoned house in some down-at-heels Miami suburb, whose main concerns are finding ways to get enough to eat and stay out of the way of the latest turf war between the local drug gangs?
Now of course there’s been a standard kneejerk answer to such questions for the last century or so. That answer was that science and technology would eventually create such abundance that everyone in the world would be able to enjoy a middle-class lifestyle and its attendant opportunities.  That same claim can still be heard nowadays, though it’s grown shrill of late after repeated disconfirmation. In point of fact, for the lower 80% of Americans by income, the zenith of prosperity was reached in the third quarter of the 20th century, and it’s all been downhill from there. This isn’t an accident; what the rhetoric of progress through science misses is that the advance of science may have been a necessary condition for the boomtimes of the industrial age, but it was never a sufficient condition in itself.
The other half of the equation was the resource base on which industrial civilization depended. Three centuries ago, as industrialism got under way, it could draw on vast amounts of cheap, concentrated energy in the form of fossil fuels, which had been stored up in the Earth’s crust over the previous half billion years or so. It could draw on equally huge stocks of raw materials of various kinds, and it could also make use of a biosphere whose capacity to absorb pollutants and other environmental insults hadn’t yet been overloaded to the breaking point by human activity. None of those conditions still obtain, and the popular insistence that the economic abundance of the recent past must inevitably be maintained in the absence of the material conditions that made it possible—well, let’s just say that makes a tolerably good example of faith-based thinking.
Thus Tyson is on one side of the schism Toynbee traced out, and the homeless children of Dade County and their peers and soon-to-be-peers elsewhere in America and the world are on the other. He may denounce superstition and praise reason and science until the cows come home, but again, what possible relevance does that have for those children? His promises are for the privileged, not for them; whatever benefits further advances in technology might still have to offer will go to the dwindling circle of those who can still afford such things, not to the poor and desperate.  Of course that simply points out another way of talking about Toynbee’s schism:  Tyson thinks he lives in a progressing society, while the homeless children of Dade County know that they live in a collapsing one.
As the numbers shift toward the far side of that dividing line, and more and more Americans find themselves struggling to cope with a new and unwelcome existence in which talk about progress and prosperity amounts to a bad joke, the failure of mimesis—as in the fallen civilizations of the past—will become a massive social force. If the usual patterns play themselves out, there will be a phase when the  leaders of successful drug gangs, the barbarian warbands of our decline and fall, will attract the same rock-star charisma that clung to Attila, Alaric, Genseric and their peers. The first traces of that process are already visible; just as young Romans in the fourth century adopted the clothes and manners of Visigoths, it’s not unusual to see the children of white families in the suburban upper middle class copying the clothing and culture of inner city gang members.
Eventually, to judge by past examples, this particular mimesis is likely to extend a great deal further than it has so far. It’s when the internal proletariat turns on the failed dominant minority and makes common cause with what Toynbee calls the external proletariat—the people who live just beyond the borders of the falling civilization, who have been shut out from its benefits but burdened with many of its costs, and who will eventually tear the corpse of the civilization to bloody shreds—that civilizations make the harsh transition from decline to fall. That transition hasn’t arrived yet for our civilization, and exactly when it will arrive is by no means a simple question, but the first whispers of its approach are already audible for those who know what to listen for and are willing to hear.
The age of charismatic warlords, though, is an epoch of transition rather than an enduring reality.  The most colorful figures of that age, remade by the workings of the popular imagination, become the focus of folk memories and epic poetry in the ages that follow; Theodoric the Ostrogoth becomes Dietrich von Bern and the war leader Artorius becomes the courtly King Arthur, taking their place alongside Gilgamesh, Arjuna, Achilles, Yoshitsune, and their many equivalents. In their new form as heroes of romance, they have a significant role to play as objects of mimesis, but it tends to be restricted to specific classes, and finds a place within broader patterns of mimesis that draw from other sources.
And those other sources?  What evidence we have—for the early stages of their emergence are rarely well documented—suggests that they begin as strange stories whispered in the night, stories that deliberately devalue the most basic images and assumptions of a dying civilization to find meaning in a world those images and assumptions no longer explain.
Two millennia ago, for example, the classical Greco-Roman world imagined itself seated comfortably at the summit of history.  Religious people in that culture gloried in gods that had reduced primal chaos to permanent order and exercised a calm rulership over the cosmos; those who rejected traditional religion in favor of rationalism—and there was no shortage of those, any more than there is today; it’s a common stage in the life of every civilization—rewrote the same story in secular terms, invoking various philosophical principles of order to fill the role of the gods of Olympus; political thinkers defined history in the same terms, with the Roman Empire standing in for Jupiter Optimus Maximus. It was a very comforting way of thinking about the world, if you happened to be a member of the gradually narrowing circle of those who benefited from the existing order of society.
To those who formed the nucleus of the Roman Empire’s internal proletariat, though, to slaves and the urban poor, that way of thinking communicated no meaning and offered no hope. The scraps of evidence that survived the fall of the Roman world suggest that a great many different stories got whispered in the darkness, but those stories increasingly came to center around a single narrative—a story in which the God who created everything came down to walk the earth as a man, was condemned by a Roman court as a common criminal, and was nailed to a cross and left hanging there to die.
That’s not the sort of worldview you’d expect from people living in a prosperous, philosophically literate classical society, but then the internal proletariat of the Roman world increasingly didn’t fit that description. They were the involuntary early adopters of the post-Roman future, and they needed stories that would give meaning to lives defined by poverty, brutal injustice, uncertainty, and violence. That’s what they found in Christianity, which denied the most basic assumptions of Greco-Roman culture in order to give value to the lived experience of those for whom the Roman world offered least.
This is what the internal proletariat of every collapsing civilization finds in whatever stories become central to the faith of the dark age to come.  It’s what Egyptians in the last years of the Old Kingdom found by abandoning the official Horus-cult in favor of the worship of Osiris, who walked the earth as a man and suffered a brutal death; it’s what many Indians in the twilight of the Guptas and many Chinese in the aftermath of the Han dynasty found by rejecting their traditional faiths in favor of reverence for the Buddha, who walked away from a royal lifestyle to live by his begging bowl and search for a way to leave the miseries of existence behind forever.  Those and the many more examples like them inspired mimesis among those for whom the official beliefs of their civilizations had become a closed book, and became the core around which new societies emerged.
The stories being whispered from one homeless Dade County child to another probably aren’t the stories that will serve that same function as our civilization follows the familiar trajectory of decline and fall. That’s my guess, at least, though of course I could be wrong. What those whispers in the night seem to be telling me is that the trajectory in question is unfolding in the usual way—that those who benefit least from modern industrial civilization are already finding meaning and hope in narratives that deliberately reject our culture’s traditional faiths and overturn the most fundamental presuppositions of our age. As more and more people find themselves in similar straits, in turn, what are whispers in the night just now will take on greater and greater volume, until they drown out the stories that most people take on faith today.

The Stories of our Grandchildren

Wed, 2014-06-18 17:05
Over the last six weeks, in what spare time I could find, I’ve glanced back over the last eight years of weekly Archdruid Report posts, trying to get some sense of where this blog has been and where it might head in the months and years to come. In language the Grateful Dead made famous—well, among those of us in a certain generation, at least—it’s been a long strange trip, crossing terrain not often included in tours of the future of our faltering industrial civilization.
Among those neglected landscapes of the mind, though, the territory that seems most crucial to me involves the role that stories play in shaping our ideas and expectations about the future, and thus our future itself. It’s a surprisingly difficult issue for many people these days to grapple with. Each time I raise it, I can count on hearing from readers who don’t get what I’m saying, usually because they’ve lost track of the distinction between whatever story they’ve gotten stuck in their minds and the far more diffuse and shapeless experiences that the story claims to explain. We tell ourselves stories to explain the world; that much is normal among human beings, and inevitable. The problem creeps in when we lose track of the difference between the narrative map and the experiential territory, and treat (for example) progress as a simple reality, rather than the complex and nuanced set of interpretations we lay over the facts of history to turn them into incidents in a familiar narrative.
During the time just past, I’ve had several reminders of the power of stories to shape the world of human experience, and the way those stories can get out of step with the facts on the ground. I’d like to take a moment to talk about a couple of those just now.
The first reminder made quite a splash in the news media a couple of weeks ago, when the Energy Information Administration (EIA)—the US bureaucracy that publishes statistics about American energy resources and production—was forced to admit in public that, well, actually, there was only about 4% as much economically extractable oil in the Monterey Shale in California as they’d claimed a few years earlier. Given that this same Monterey Shale was supposed to provide us with around two-thirds of the oil that was allegedly going to turn the United States into a major oil exporter again by 2020, this was not precisely a minor issue. How many other oil shale deposits are facing similar downgrades? That’s a good question, and one which the EIA seems noticeably unwilling to address.
Bertram Gross pointed out a good many years ago that economic indicators were becoming “economic vindicators,” meant to justify government policy instead of providing accurate glimpses into what’s actually happening in the economic sphere. That certainly seems to have been one of the things behind the EIA’s stratospherically overenthusiastic estimates.  Equally, the US government seems to have responded to the current boom in shale with exactly the same sort of mindless cheerleading it displayed during the housing bubble that popped in 2008 and the tech stock bubble that popped in 2001. I trust it hasn’t escaped the attention of my readers that the key product of the shale oil boom hasn’t been oil or natural gas, but bundled shale leases and similar scraps of overpriced paper, sold to gullible investors with the same gaudy promises of fast wealth and starry-eyed disdain for mere economic reality that fed those earlier bubbles, and drove the market crashes that followed.
Still, there’s more going on here than the common or garden variety political puffery and securities fraud that makes up so much of business as usual in America’s years of decline. The question that needs asking is this:  why are investors who watched those two earlier booms go bust, who either lost money in them or saw many others do so, lining up so eagerly to put their nest eggs into shale-oil companies that are losing money quarter after quarter, and can only stay in business by loading on more and more debt?  Why is the same weary drivel about a new economic era of perpetual prosperity being lapped up so uncritically for a third time in fifteen years, when anyone in possession of three functioning neurons ought to be able to recognize it as a rehash of the same failed hype paraded about in previous bubbles, all the way back to the tulip bubble in the 17th-century Netherlands?
That’s not a rhetorical question; it has an answer, and the answer follows from one of the most popular stories of our culture, the story that says that getting rich is normal. From Horatio Alger right on down to the present, our entertainment media have been overloaded with tales about people who rose up out of poverty and became prosperous. What’s more, during the boom times that made up so much of the 20th century, a modest fraction of those tales were true, or at least not obviously false. Especially but not only  in the United States, you could find people who were born poor and died rich. An expanding economy brings that option within reach for some, though—despite the propaganda—never for all.
The story was always at least a little dishonest, as the golden road up from poverty was never normal for more than a certain fraction of the population, and the wealth of the few always depended, as it always does depend in the real world, on the impoverishment of the many. During their 20th century heyday, the world’s industrial societies could pretend that wasn’t the case by the simple expedient of offshoring their poverty to the Third World, and supporting their standards of living at home on the backs of sharecroppers and sweatshop workers overseas. Still, in those same industrial nations, it was possible to ignore that for a while, and to daydream about a future in which every last human being on earth would get to enjoy the benefits of upward mobility in a rapidly expanding economy.
That dream is over and done with. To begin with, the long arc of economic expansion is over; subtract the fake wealth generated by the mass production of unpayable IOUs—the one real growth industry in our economy these days—and we live in an economy in decline, in which real wealth trickles away and  the fraction of the population permanently shut out of the workforce rises year after year.  Downward mobility, not upward mobility, has become a central fact of our time.  The reality has changed, but the story hasn’t, and so investors convinced that their money ought to make them money are easy prey for some grifter in a Brooks Brothers suit who insists that tech stocks, or real estate, or oil shales will inevitably bring them the rising incomes and painless prosperity that the real world no longer provides.
The same sort of mismatch between a popular story and an unwelcome reality defines the second reminder I want to discuss, which popped up during and after the Age of Limits conference late last month in the woods of south central Pennsylvania. That was a very lively and enjoyable event; when Dr. Dennis Meadows, this year’s special guest, noted how pleasant it was to speak to an audience that didn’t have to be convinced of the reality of limits to growth, he spoke for all the presenters and a great many of the attendees as well. For a few days, those of us who attended had the chance to talk about the most important reality of our age—the decline and impending fall of modern industrial civilization—without having to contend minute by minute with the thirty-one flavors of denial so many people use to evade that reality and the responsibilities it brings with it.
That said, there were a few jarring moments, and one of them happened in the interval between my talk on dark ages and Dr. Mark Cochrane’s excellent presentation on the realities of climate change. In the Q&A session after my talk, in response to a question from the audience, I noted how the prestige of science among the general public had taken a beating due to the way that scientific opinions handed down to the public as proven fact so often get retracted after a decade or so, a habit that has caused  many people outside the scientific community to treat all scientific pronouncements with skepticism. I cited several examples of this, and one of them was the way that popular works on climate science in the 1970s and 1980s routinely claimed that the world was on the brink of a new ice age.
Mention the existence of those claims nowadays and you’ll inevitably get denounced as a climate denialist. As my regular readers know, I’m nothing of the kind; I’ve written extensively about the impacts of anthropogenic climate change on the decades and centuries ahead, and my recently published science fiction novel Star’s Reach takes place in a 25th-century America ravaged by the impacts of climate change, in which oranges are grown in what’s now Illinois and Memphis has become a seaport. It’s become popular, for that matter, to insist that those claims of a new ice age never happened; I’d be happy, if anyone’s curious, to cite books published in the 1970s and 1980s for the general public, written by eminent scientists and respected science writers, that described the imminent ice age as a scientifically proven fact, since I have several on my bookshelf.
What I found interesting is that Dr. Cochrane, who is a more than usually careful scholar, jumped to the conclusion that my reference to these popular works of a bygone decade meant that I must be a climate denialist. I corrected him, and he accepted the correction gracefully.  Yet permaculturist and peak oil author Albert Bates then proceeded to miss my point in exactly the same way in his blog post on the event. Bates was present at the discussion, and presumably heard the whole exchange. He’s neither a stupid man nor a malicious one; why, then, so embarrassing and so public a misstatement?
This isn’t a rhetorical question, either; it has an answer, and the answer follows from another of the most popular stories of our culture, the story that says that having the right answer is all you need to get people to listen to you. You’ll find narratives with that theme straight through the popular culture of the last two centuries and more, and it also pervades the rhetoric of science and of scientific history: once the protagonist figures out what’s really going on, whether it’s a murder mystery or the hunt for the molecular structure of DNA, everything falls promptly into place.
Now of course in the real world, things aren’t generally so easy. That was precisely the point I was trying to make in the discussion at the Age of Limits conference:  however convincing the evidence for anthropogenic climate change may be to scientists, it’s failed to convince a great many people outside the scientific enterprise, and one of the things that’s driven that failure is the accelerating decline in the prestige of science in modern industrial society as a whole. Among the roots of that decline, in turn, is the dogmatic tone so often taken when scientists and science writers set out to communicate current scientific opinions to the general public—a tone that differs sharply, it bears remembering, from the far more tentative habits of communication practiced within the scientific community itself.
When climate scientists today insist that they’ve determined conclusively that we’ve entered an age of rising temperatures, I see no reason to doubt them—but they need to recall that many people still remember when writers and speakers with equally impressive scientific credentials insisted with equal vigor that it was just as certain that we’d entered an age of cooling temperatures.  Scientists in the relevant fields know what’s behind the change, but people outside the scientific community don’t; all they see is a flip-flop, and since such flip-flops of scientific opinion have been fairly common in recent decades, members of the general public are by no means as quick as they once were to take scientists at their word. For that matter, when spokespeople for the scientific community insist to the general public nowadays that the flip-flop never took place—that, for example, no reputable scientist or science writer ever claimed to the general public that a new ice age was imminent—those spokespeople simply leave themselves and the scientific community wide open to accusations of bad faith.
We don’t talk about the political dimensions of scientific authority in the modern industrial world. That’s what lies behind the convenient and inaccurate narrative I mentioned earlier, the one that claims that all you have to do to convince people is speak the truth. Question that story, and you have to deal with the mixed motives and tangled cultural politics inseparable from science as a human activity, and above all, you have to discuss the much-vexed relationship between the scientific community and a general public that has become increasingly suspicious of the rhetoric of expertise in contemporary life.
That relationship has dimensions that I don’t think anyone in the scientific community these days has quite grasped. I’ve been told privately by several active online proponents of creationism, for example, that they don’t actually care that much about how the world’s current stock of life forms got there; it’s just that the spluttering Donald Duck frenzy that can reliably be elicited from your common or garden variety rationalist atheist by questioning Darwin’s theory is too entertaining to skip.
Such reflections lead in directions most Americans aren’t willing to go, because they can’t be discussed without raising deeply troubling issues about the conflict between the cult of expertise and what’s left of the traditions of American democracy, and about the social construction of what’s considered real in this as in every other human culture. It’s much easier, and much more comfortable, to insist that the people on the other side of the divide just mentioned are simply stupid and evil, and—as in the example I cited earlier—to force any attempt to talk about the faltering prestige of science in today’s America into a more familiar discourse about who’s right and who’s wrong.
Equally, it’s much easier, and much more comfortable, to insist that the ongoing decline in standards of living here in America is either the fault of the poor or the fault of the rich. Either evasion makes it possible to ignore all the evidence that suggests that what most Americans think of as a normal standard of living is actually an absurd degree of extravagance, made possible only briefly by the reckless squandering of the most lavish energy resource our species will ever know.
One of the crucial facts of our age is thus that the stories we tell ourselves, the narratives we use to make sense of the events of our lives, have passed their pull date and no longer make sense of the world we experience. The stories our grandchildren use to make sense of their world will be different from ours, because they will be living in the world that the misguided choices of the last four decades or so will have made—a world that is beginning to take shape around us already, even though most people nowadays are doing their level best not to notice that awkward fact.
Meanwhile, those new stories, the stories of our grandchildren, may already be stirring in the crawlspaces of our collective imagination. In future posts, I’ll be talking about some of the more troubling of those, but this week I’m pleased to have the chance to discuss something a little more cheerful along those lines:  the outcome of this year’s “Space Bats” deindustrial science fiction contest.
Regular readers of this blog will remember that back in the fall of 2011, in the course of discussing the role that the science fiction of previous decades played in shaping our expectations of the future, I put out a call for SF short stories set in a world on the far side of peak oil and climate change. I was delighted by the response: over the five months or so that followed, 63 stories were submitted, and I duly assembled an anthology: After Oil: SF Stories of a Post-Petroleum Future. This January, I announced a second contest of the same sort, with a three-month window in which stories would be accepted.
The response was even more impressive this time around. Over those three months I received 92 story submissions, some from Archdruid Report regulars but many others from people I didn’t know from Robert Heinlein’s off ox, and a remarkably large fraction of them were not only publishable but of very high quality. I despaired of winnowing down the input to one anthology’s worth; fortunately, the publisher came to the rescue by proposing a slight change in plans.
I’m therefore delighted to announce that there will be not one but two new anthologies—one of stories set in the twilight years of our own civilization, one of stories set in the new societies that will rise after the industrial world is a fading memory. The first one, After Oil 2: The Years of Crisis, will include the following stories:
Grant Canterbury’s "Dreaming"
Walt Freitag’s "A Mile a Minute"
Matthew Griffith’s "Promised Land"
Diana Haugh’s "The Big Quiet"
Martin Hensher’s "Crown Prerogative"
J.M. Hughes’ "Byte Heist"
Calvin Jennings’ "A Dead Art Form"
Joseph Nemeth’s "A Break with the Past"
N.N. Scott’s "When It Comes a Gully-Washer"
David Trammel’s "A Fish Tale"
Tony Whelk’s "Al-Kimiya"
Rachel White’s "Story Material"
The second new anthology, After Oil 3: The Years of Rebirth, will include the following stories:
Bill Blondeau’s "The Borax Road Affair"
Phil Harris’ "North of the Wall"
Wylie Harris’ "Dispatches"
Diana Haugh’s "Silver Survivor"
Jason Heppenstall’s "Saga and the Bog People"
J.M. Hughes’ "Dahamri"
Gaianne Jenkins’ "Midwinter Eclipse"
Troy Jones’ "For Our Mushrooms"
Catherine McGuire’s "Singing the World"
David Senti’s "Nuala Thrives"
Al Sevcik’s "Community"
Eric Singletary’s "City of Spirits"
Once again, I’d like to thank everyone who contributed a story to the contest; even with a spare anthology to fill, it wasn’t easy to choose among the entries. I’m looking into whether it might be possible to launch a quarterly magazine for deindustrial SF:  there’s clearly an ample supply of good writers who want to tell such stories, and (to judge from sales of the original anthology, and of my deindustrial SF novel Star’s Reach) plenty of people who want to read them as well.
That strikes me as a very good sign. We may not yet be in a position to guess at the stories our grandchildren will tell each other to make sense of the world, but the fact that so many people are already eager to write and read stories about a world on the far side of progress gives me hope that the failed narratives of the past are losing their grip on the collective imagination of our age—and that we may be starting to tell at least a few of the new stories that will make sense of the world after oil.