AODA Blog

Facts, Values, and Dark Beer

Wed, 2014-11-19 19:40
Over the last eight and a half years, since I first began writing essays on The Archdruid Report, I’ve fielded a great many questions about what motivates this blog’s project. Some of those questions have been abusive, and some of them have been clueless; some of them have been thoughtful enough to deserve an answer, either in the comments or as a blog post in its own right. Last week brought one of that last category. It came from one of my European readers, Ervino Cus, and it read as follows:
“All considered (the amount of weapons—personal and of MD—around today; the population numbers; the environmental pollution; the level of lawlessness we are about to face; the difficulty to have a secure form of life in the coming years; etc.) plus the ‘low’ technical level of possible development of the future societies (I mean: no more space flight? no more scientific discovery about the ultimate structure of the Universe? no genetic engineering to modify the human genome?) the question I ask to myself is: why bother?
“Seriously: why one should wish to plan for his/her long term survival in the future that await us? Why, when all goes belly up, don't join the first warlord band available and go off with a bang, pillaging and raping till one drops dead?
“If the possibilities for a new stable civilization are very low, and it's very probable that such a civilization, even if created, will NEVER be able to reach even the technical level of today, not to mention to surpass it, why one should want to try to survive some more years in a situation that becomes every day less bright, without ANY possibilities to get better in his/her lifetime, and with, as the best objective, only some low-tech rural/feudal state waaay along the way?
“Dunno you, but for me the idea that this is the last stop for the technological civilization, that things as a syncrothron or a manned space flight are doomed and never to repeat, and that the max at which we, as a species and as individuals, can aspire from now on is to have a good harvest and to ‘enjoy’ the same level of knowledge of the structure of the Universe of our flock of sheeps, doesen't makes for a good enough incentive to want to live more, or to give a darn if anybody other lives on.
“Apologies if my word could seem blunt (and for my far than good English: I'm Italian), but, as Dante said:
“Considerate la vostra semenza: / fatti non foste a viver come bruti, / ma per seguir virtute e canoscenza.” (Inferno - Canto XXVI - vv. 118-120)
“If our future is not this (and unfortunately I too agree with you that at this point the things seems irreversibles) I, for one, don't see any reason to be anymore compelled by any moral imperative... :-(
“PS: Yes, I know, I pose some absolutes: that a high-tech/scientific civilization is the only kind of civilization that enpowers us to gain any form of ‘real’ knowledge of the Universe, that this knowledge is a ‘plus’ and that a life made only of ‘birth-reproduction-death’ is a life of no more ‘meaning’ than the one of an a plant.
“Cheers, Ervino.”
It’s a common enough question, though rarely expressed as clearly or as starkly as this. As it happens, there’s an answer to it, or rather an entire family of answers, but the best way to get there is to start by considering the presuppositions behind it. Those aren’t adequately summarized by Ervino’s list of ‘absolutes’—the latter are simply restatements of his basic argument.
What Ervino is suggesting, rather, presupposes that scientific and technological progress are the only reasons for human existence. Lacking those—lacking space travel, cyclotrons, ‘real’ knowledge about the universe, and the rest—our existence is a waste of time and we might as well just lie down and die or, as he suggests, run riot in anarchic excess until death makes the whole thing moot. What’s more, only the promise of a better future gives any justification for moral behavior—consider his comment about not feeling compelled by any moral imperative if no better future is in sight.
Those of my readers who recall the discussion of progress as a surrogate religion in last year’s posts here will find this sort of thinking very familiar, because the values being imputed to space travel, cyclotrons et al. are precisely those that used to be assigned to more blatantly theological concepts such as God and eternal life. Still, I want to pose a more basic question: is this claim—that the meaning and purpose of human existence and the justification of morality can only be found in scientific and technological progress—based on evidence? Are there, for example, double-blinded, controlled studies by qualified experts that confirm this claim?
Of course not. Ervino’s claim is a value judgment, not a statement of fact.  The distinction between facts and values was mentioned in last week’s post, but probably needs to be sketched out here as well; to summarize a complex issue somewhat too simply, facts are the things that depend on the properties of perceived objects rather than perceiving subjects. Imagine, dear reader, that you and I were sitting in the same living room, and I got a bottle of beer out of the fridge and passed it around.  Provided that everyone present had normally functioning senses and no reason to prevaricate, we’d be able to agree on certain facts about the bottle: its size, shape, color, weight, temperature, and so on. Those are facts.
Now let’s suppose I got two glasses, poured half the beer into each glass, handed one to you and took the other for myself. Let’s further suppose that the beer is an imperial stout, and you can’t stand dark beer. I take a sip and say, “Oh, man, that’s good.” You take a sip, make a face, and say, “Ick. That’s awful.” If I were to say, “No, that’s not true—it’s delicious,” I’d be talking nonsense of a very specific kind: the nonsense that pops up reliably whenever someone tries to treat a value as though it’s a fact.
“Delicious” is a value judgment, and like every value judgment, it depends on the properties of perceiving subjects rather than perceived objects. That’s true of all values without exception, including those considerably more important than those involved in assessing the taste of beer. To say “this is good” or “this is bad” is to invite the question “according to whose values?”—which is to say, every value implies a valuer, just as every judgment implies a judge.
Now of course it’s remarkably common these days for people to insist that their values are objective truths, and values that differ from theirs objective falsehoods. That’s a very appealing sort of nonsense, but it’s still nonsense. Consider the claim often made by such people that if values are subjective, that would make all values, no matter how repugnant, equal to one another. Equal in what sense? Why, equal in value—and of course there the entire claim falls to pieces, because “equal in value” invites the question already noted, “according to whose values?” If a given set of values is repugnant to you, then pointing out that someone else thinks differently about those values doesn’t make them less repugnant to you.  All it means is that if you want to talk other people into sharing those values, you have to offer good reasons, and not simply insist at the top of your lungs that you’re right and they’re wrong.
To say that values depend on the properties of perceiving subjects rather than perceived objects does not mean that values are wholly arbitrary, after all. It’s possible to compare different values to one another, and to decide that one set of values is better than another. In point of fact, people do this all the time, just as they compare different claims of fact to one another and decide that one is more accurate than another. The scientific method itself is simply a relatively rigorous way to handle this latter task: if fact X is true, then fact Y would also be true; is it? In the same way, though contemporary industrial culture tends to pay far too little attention to this, there’s an ethical method that works along the same lines: if value X is good, then value Y would also be good; is it?
Again, we do this sort of thing all the time. Consider, for example, why it is that most people nowadays reject the racist claim that some arbitrarily defined assortment of ethnicities—say, “the white race”—is superior to all others, and ought to have rights and privileges that are denied to everyone else. One reason why such claims are rejected is that they conflict with other values, such as fairness and justice, that most people consider to be important; another is that the history of racial intolerance shows that people who hold the values associated with racism are much more likely than others to engage in activities, such as herding their neighbors into concentration camps, which most people find morally repugnant. That’s the ethical method in practice.
With all this in mind, let’s go back to Ervino’s claims. He proposes that in all the extraordinary richness of human life, out of all its potentials for love, learning, reflection, and delight, the only thing that can count as a source of meaning is the accumulation of “‘real’ knowledge of the Universe,” defined more precisely as the specific kind of quantitative knowledge about the behavior of matter and energy that the physical sciences of the world’s industrial societies currently pursue. That’s his value judgment on human life. Of course he has the right to make that judgment; he would be equally within his rights to insist that the point of life is to see how many orgasms he can rack up over the course of his existence; and it’s by no means obvious why one of these ambitions is any more absurd than the other.
Curiosity, after all, is a biological drive, one that human beings share in a high degree with most other primates. Sexual desire is another such drive, rather more widely shared among living things. Grant that the fulfillment of some such drive can be seen as the purpose of life, why not another? For that matter, why not more than one, or some combination of biological drives and the many other incentives that are capable of motivating human beings?
For quite a few centuries now, though, it’s been fashionable for thinkers in the Western world to finesse such issues, and insist that some biological drives are “noble” while others are “base,” “animal,” or what have you. Here again, we have value judgments masquerading as statements of fact, with a hearty dollop of class prejudice mixed in—for “base,” “animal,” etc., you could as well put “peasant,” which is of course the literal opposite of “noble.” That’s the sort of thinking that appears in the bit of Dante that Ervino included in his comment. His English is better than my Italian, and I’m not enough of a poet to translate anything but the raw meaning of Dante’s verse, but this is roughly what the verses say:
“Consider your lineage; / You were not born to live as animals, / But to seek virtue and knowledge.”
It’s a very conventional sentiment. The remarkable thing about this passage, though, is that Dante was not proposing the sentiment as a model for others to follow. Rather, this least conventional of poets put those words in the mouth of Ulysses, who appears in this passage of the Inferno as a damned soul frying in the eighth circle of Hell. Dante has it that after the events of Homer’s poem, Ulysses was so deeply in love with endless voyaging that he put to sea again, and these are the words with which he urged his second crew to sail beyond all known seas—a voyage which took them straight to a miserable death, and sent Ulysses himself tumbling down to eternal damnation.
This intensely equivocal frame story is typical of Dante, who delineated as well as any poet ever has the many ways that greatness turns into hubris, that useful Greek concept best translated as the overweening pride of the doomed. The project of scientific and technological progress is at least as vulnerable to that fate as any of the acts that earned the damned their places in Dante’s poem. That project might fail irrevocably if industrial society comes crashing down and no future society will ever be able to pursue the same narrowly defined objectives that ours has valued. In that case—at least in the parochial sense just sketched out—progress is over. Still, there’s at least one more way the same project would come to a screeching and permanent halt: if it succeeds.
Let’s imagine, for instance, that the fantasies of our scientific cornucopians are right and the march of progress continues on its way, unhindered by resource shortages or destabilized biospheres. Let’s also imagine that right now, some brilliant young physicist in Mumbai is working out the details of the long-awaited Unified Field Theory. It sees print next year; there are furious debates; the next decade goes into experimental tests of the theory, which prove that it’s correct. The relationship of all four basic forces of the cosmos—the strong force, the weak force, electromagnetism, and gravity—is explained clearly once and for all. With that in place, the rest of physical science falls into place step by step over the next century or so, and humanity learns the answers to all the questions that science can pose.
It’s only in the imagination of true believers in the Singularity, please note, that everything becomes possible once that happens. Many of the greatest achievements of science can be summed up in the words “you can’t do that;” the discovery of the laws of thermodynamics closed the door once and for all on perpetual motion, just as the theory of relativity put a full stop to the hope of limitless velocity. (“186,282 miles per second: it’s not just a good idea, it’s the law.”) Once the sciences finish their work, the technologists will have to scramble to catch up with them, and so for a while, at least, there will be no shortage of novel toys to amuse those who like such things; but sooner or later, all of what Ervino calls “‘real’ knowledge about the Universe” will have been learnt; at some point after that, every viable technology will have been refined to the highest degree of efficiency that physical law allows.
What then? The project of scientific and technological progress will be over. No one will ever again be able to discover a brand new, previously unimagined truth about the universe, in any but the most trivial sense—“this star’s mass is 1.000000000000000000006978 times that of this other star,” or the like—and variations in technology will be reduced to shifts in what’s fashionable at any given time. If the ongoing quest for replicable quantifiable knowledge about the physical properties of nature is the only thing that makes human life worth living, everyone alive at that point arguably ought to fly their hovercars at top speed into the nearest concrete abutment and end it all.
One way or another, that is, the project of scientific and technological progress is self-terminating. If this suggests to you, dear reader, that treating it as the be-all and end-all of human existence may not be the smartest choice, well, yes, that’s what it suggests to me as well. Does that make it worthless? Of course not. It should hardly be necessary to point out that “the only thing important in life” and “not important at all” aren’t the only two options available in discussions of this kind.
I’d like to suggest, along these lines, that human life sorts itself out most straightforwardly into an assortment of separate spheres, each of which deals with certain aspects of the extraordinary range of possibilities open to each of us. The sciences comprise one of those spheres, with each individual science a subsphere within it; the arts are a separate sphere, similarly subdivided; politics, religion, and sexuality are among the other spheres. None of these spheres contains more than a fraction of the whole rich landscape of human existence. Which of them is the most important? That’s a value judgment, and thus can only be made by an individual, from his or her own irreducibly individual point of view.
We’ve begun to realize—well, at least some of us have—that authority in one of these spheres isn’t transferable. When a religious leader, let’s say, makes pronouncements about science, those have no more authority than they would if they came from any other more or less clueless layperson, and a scientist who makes pronouncements about religion is subject to exactly the same rule. The same distinction applies with equal force between any two spheres, and as often as not between subspheres of a single sphere as well:  plenty of scientists make fools of themselves, for example, when they try to lay down the law about sciences they haven’t studied.
Claiming that one such sphere is the only thing that makes human life worthwhile is an error of the same kind. If Ervino feels that scientific and technological progress is the only thing that makes his own personal life worth living, that’s his call, and presumably he has reasons for it. If he tries to say that that’s true for me, he’s wrong—there are plenty of things that make my life worth living—and if he’s trying to make the same claim for every human being who will ever live, that strikes me as a profoundly impoverished view of the richness of human possibility. Insisting that scientific and technological progress are the only acts of human beings that differentiate their existence from that of a plant isn’t much better. Dante’s Divina Commedia, to cite the obvious example, is neither a scientific paper nor a technological invention; does that mean that it belongs in the same category as the noise made by hogs grunting in the mud?
Dante Alighieri lived in a troubled age in which scientific and technological progress were nearly absent and warfare, injustice, famine, pestilence, and the collapse of widely held beliefs about the world were matters of common experience. From that arguably unpromising raw material, he brewed one of the great achievements of human culture. It may well be that the next few centuries will be far from optimal for scientific and technological progress; it may well be that the most important thing that can be done by people who value science and technology is to figure out what can be preserved through the difficult times ahead, and do their best to see that these things reach the waiting hands of the future. If life hands you a dark age, one might say, it’s probably not a good time to brew lite beer, but there are plenty of other things you can still brew, bottle and drink.
As for me—well, all things considered, I find that being alive beats the stuffing out of the alternative, and that’s true even though I live in a troubled age in which scientific and technological progress show every sign of grinding to a halt in the near future, and in which warfare, injustice, famine, pestilence, and the collapse of widely held beliefs are matters of common experience. The notion that life has to justify itself to me seems, if I may be frank, faintly silly, and so does the comparable claim that I have to justify my existence to it, or to anyone else. Here I am; I did not make the world; quite the contrary, the world made me, and put me in the irreducibly personal situation in which I find myself. Given that I’m here, where and when I happen to be, there are any number of things that I can choose to do, or not do; and it so happens that one of the things I choose to do is to prepare, and help others prepare, for the long decline of industrial civilization and the coming of the dark age that will follow it.
And with that, dear reader, I return you to your regularly scheduled discussion of decline and fall on The Archdruid Report.

Dark Age America: The Hoard of the Nibelungs

Wed, 2014-11-12 15:46
Of all the differences that separate the feudal economy sketched out in last week’s post from the market economy most of us inhabit today, the one that tends to throw people for a loop most effectively is the near-total absence of money in everyday medieval life. Money is so central to current notions of economics that getting by without it is all but unthinkable these days.  The fact—and of course it is a fact—that the vast majority of human societies, complex civilizations among them, have gotten by just fine without money of any kind barely registers in our collective imagination.
One source of this curious blindness, I’ve come to think, is the way that the logic of money is presented to students in school. Those of my readers who sat through an Economics 101 class will no doubt recall the sort of narrative that inevitably pops up in textbooks when this point is raised. You have, let’s say, a pig farmer who has bad teeth, but the only dentist in the village is Jewish, so the pig farmer can’t simply swap pork chops and bacon for dental work. Barter might be an option, but according to the usual textbook narrative, that would end up requiring some sort of complicated multiparty deal whereby the pig farmer gives pork to the carpenter, who builds a garage for the auto repairman, who fixes the hairdresser’s car, and eventually things get back around to the dentist. Once money enters the picture, by contrast, the pig farmer sells bacon and pork chops to all and sundry, uses the proceeds to pay the dentist, and everyone’s happy. Right?
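A quick toy sketch may help show what the textbook has in mind here. This is my own illustration, not anything from the original narrative; the names and the chain of wants are invented for the example. In a pure barter setting, a trade only goes through when there is a closed chain in which each participant wants what the previous one has to offer:

```python
# Toy sketch of the textbook barter problem (illustration only): with no
# money, a trade only happens along a closed chain in which each party
# wants what the previous one offers.

def find_trade_cycle(wants, start):
    """Follow the 'whose goods go to whom' chain until it loops back.
    Assumes the chain does eventually return to the starting party."""
    chain = [start]
    current = wants[start]
    while current != start:
        chain.append(current)
        current = wants[current]
    return chain

# Hypothetical village: each person's goods or services go to the next.
wants = {
    "pig farmer": "carpenter",        # pork for the carpenter
    "carpenter": "auto repairman",    # who gets a garage built
    "auto repairman": "hairdresser",  # who gets her car fixed
    "hairdresser": "dentist",         # an invented link to close the loop
    "dentist": "pig farmer",          # who finally fixes the farmer's teeth
}
print(" -> ".join(find_trade_cycle(wants, "pig farmer")))
# pig farmer -> carpenter -> auto repairman -> hairdresser -> dentist
```

Finding such a chain among many participants is exactly the “complicated multiparty deal” the textbook invokes; money, by contrast, lets each leg of the chain settle on its own.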
Well, maybe. Let’s stop right there for a moment, and take a look at the presuppositions hardwired into this little story. First of all, the narrative assumes that participants have a single rigidly defined economic role: the pig farmer can only raise pigs, the dentist can only fix teeth, and so on. Furthermore, it assumes that participants can’t anticipate needs and adapt to them: even though he knows the only dentist in town is Jewish, the pig farmer can’t do the logical thing and start raising lambs for Passover on the side, or what have you. Finally, the narrative assumes that participants can only interact economically through market exchanges: there are no other options for meeting needs for goods and services, no other way to arrange exchanges between people other than market transactions driven by the law of supply and demand.
Even in modern industrial societies, these three presuppositions are rarely true. I happen to know several pig farmers, for example, and none of them are so hyperspecialized that their contributions to economic exchanges are limited to pork products; garden truck, fresh eggs, venison, moonshine, and a good many other things could come into the equation as well. For that matter, outside the bizarre feedlot landscape of industrial agriculture, mixed farms raising a variety of crops and livestock are far more resilient than single-crop farms, and thus considerably more common in societies that haven’t shoved every economic activity into the procrustean bed of the money economy.
As for the second point raised above, the law of supply and demand works just as effectively in a barter economy as in a money economy, and successful participants are always on the lookout for a good or service that’s in short supply relative to potential demand, and so can be bartered with advantage. It’s no accident that traditional village economies tend to be exquisitely adapted to produce exactly that mix of goods and services the inhabitants of the village need and want.
Finally, of course, there are many ways of handling the production and distribution of goods and services without engaging in market exchanges. The household economy, in which members of each household produce goods and services that they themselves consume, is the foundation of economic activity in most human societies, and still accounted for the majority of economic value produced in the United States until not much more than a century ago. The gift economy, in which members of a community give their excess production to other members of the same community in the expectation that the gift will be reciprocated, is immensely common; so is the feudal economy delineated in last week’s post, with its systematic exclusion of market forces from the economic sphere. There are others, plenty of them, and none of them require money at all.
Thus the logic behind money pretty clearly isn’t what the textbook story claims it is. That doesn’t mean that there’s no logic to it at all; what it means is that nobody wants to talk about what it is that money is actually meant to do. Fortunately, we’ve discussed the relevant issues in last week’s post, so I can sum up the matter here in a single sentence: the point of money is that it makes intermediation easy.
Intermediation, for those of my readers who weren’t paying attention last week, is the process by which other people insert themselves between the producer and the consumer of any good or service, and take a cut of the proceeds of the transaction. That’s very easy to do in a money economy, because—as we all know from personal experience—the intermediaries can simply charge fees for whatever service they claim to provide, and then cash in those fees for whatever goods and services they happen to want.
Imagine, by way of contrast, the predicament of an intermediary who wanted to insert himself into, and take a cut out of, a money-free transaction between the pig farmer and the dentist. We’ll suppose that the arrangement the two of them have worked out is that the pig farmer raises enough lambs each year that all the Jewish families in town can have a proper Passover seder, the dentist takes care of the dental needs of the pig farmer and his family, and the other families in the Jewish community work things out with the dentist in exchange for their lambs—a type of arrangement, half barter and half gift economy, that’s tolerably common in close-knit communities.
Intermediation works by taking a cut from each transaction. The cut may be described as a tax, a fee, an interest payment, a service charge, or what have you, but it amounts to the same thing: whenever money changes hands, part of it gets siphoned off for the benefit of the intermediaries involved in the transaction. The same thing can be done in some money-free transactions, but not all. Our intermediary might be able to demand a certain amount of meat from each Passover lamb, or require the pig farmer to raise one lamb for the intermediary per six lambs raised for the local Jewish families, though this assumes that he either likes lamb chops or can swap the lamb to someone else for something he wants.
What on earth, though, is he going to do to take a cut from the dentist’s side of the transaction?  There wouldn’t be much point in demanding one tooth out of every six the dentist extracts, for example, and requiring the dentist to fill one of the intermediary’s teeth for every twenty other teeth he fills would be awkward at best—what if the intermediary doesn’t happen to need any teeth filled this year? What’s more, once intermediation is reduced to such crassly physical terms, it’s hard to pretend that it’s anything but a parasitic relationship that benefits the intermediary at everyone else’s expense.
What makes intermediation seem to make sense in a money economy is that money is the primary intermediation. Money is a system of arbitrary tokens used to facilitate exchange, but it’s also a good deal more than that. It’s the framework of laws, institutions, and power relationships that creates the tokens, defines their official value, and mandates that they be used for certain classes of economic exchange. Once the use of money is required for any purpose, the people who control the framework—whether those people are government officials, bankers, or what have you—get to decide the terms on which everyone else gets access to money, which amounts to effective control over everyone else. That is to say, they become the primary intermediaries, and every other intermediation depends on them and the money system they control.
This is why, to cite only one example, British colonial administrators in Africa imposed a hut tax on the native population, even though the cost of administering and collecting the tax was more than the revenue the tax brought in. By requiring the tax to be paid in money rather than in kind, the colonial government forced the natives to participate in the money economy, on terms that were of course set by the colonial administration and British business interests. The money economy is the basis on which nearly all other forms of intermediation rest, and forcing the native peoples to work for money instead of allowing them to meet their economic needs in some less easily exploited fashion was an essential part of the mechanism that pumped wealth out of the colonies for Britain’s benefit.
Watch the way that the money economy has insinuated itself into every dimension of modern life in an industrial society and you’ve got a ringside seat from which to observe the metastasis of intermediation in recent decades. Where money goes, intermediation follows: that’s one of the unmentionable realities of political economy, the science that Adam Smith actually founded, but which was gutted, stuffed, and mounted on the wall—turned, that is, into the contemporary pseudoscience of economics—once it became painfully clear just what kind of trouble got stirred up when people got to talking about the implications of the links between political power and economic wealth.
There’s another side to the metastasis just mentioned, though, and it has to do with the habits of thought that the money economy both requires and reinforces. At the heart of the entire system of money is the concept of abstract value, the idea that goods and services share a common, objective attribute called “value” that can be gauged according to the one-dimensional measurement of price.
It’s an astonishingly complex concept, and so needs unpacking here. Philosophers generally recognize a crucial distinction between facts and values; there are various ways of distinguishing them, but the one that matters for our present purposes is that facts are collective and values are individual. Consider the statement “it rained here last night.” Given agreed-upon definitions of “here” and “last night,” that’s a factual statement; all those who stood outside last night in the town where I live and looked up at the sky got raindrops on their faces. In the strict sense of the word, facts are objective—that is, they deal with the properties of objects of perception, such as raindrops and nights.
Values, by contrast, are subjective—that is, they deal with the properties of perceiving subjects, such as people who look up at the sky and notice wetness on their faces. One person is annoyed by the rain, another is pleased, another is completely indifferent to it, and these value judgments are irreducibly personal; it’s not that the rain is annoying, pleasant, or indifferent, it’s the individuals who are affected in these ways. Nor are these personal valuations easy to sort out along a linear scale without drastic distortion. The human experience of value is a richly multidimensional thing; even in a language as poorly furnished with descriptive terms for emotion as English is, there are countless shades of meaning available for talking about positive valuations, and at least as many more for negative ones.
From that vast universe of human experience, the concept of abstract value extracts a single variable—“how much will you give for it?”—and reduces the answer to a numerical scale denominated in dollars and cents or the local equivalent. Like any other act of reductive abstraction, it has its uses, but the benefits of any such act always have to be measured against the blind spots generated by reductive modes of thinking, and the consequences of that induced blindness must either be guarded against or paid in full. The latter is far and away the more common of the two, and it’s certainly the option that modern industrial society has enthusiastically chosen.
Those of my readers who want to see the blindness just mentioned in full spate need only turn to any of the popular cornucopian economic theorists of our time. The fond and fatuous insistence that resource depletion can’t possibly be a problem, because investing additional capital will inevitably turn up new supplies—precisely the same logic, by the way, that appears in the legendary utterance “I can’t be overdrawn, I still have checks left!”—unfolds precisely from the flattening out of qualitative value into quantitative price just discussed.  The habit of reducing every kind of value to bare price is profitable in a money economy, since it facilitates ignoring every variable that might get in the way of making money off  transactions; unfortunately it misses a minor but crucial fact, which is that the laws of physics and ecology trump the laws of economics, and can neither be bribed nor bought.
The contemporary fixation on abstract value isn’t limited to economists and those who believe them, nor is its potential for catastrophic consequences. I’m thinking here specifically of those people who have grasped the fact that industrial civilization is picking up speed on the downslope of its decline, but whose main response to it consists of trying to find some way to stash away as much abstract value as possible now, so that it will be available to them in some prospective postcollapse society. Far more often than not, gold plays a central role in that strategy, though there are a variety of less popular vehicles that play starring roles in the same sort of plan.
Now of course it was probably inevitable in a consumer society like ours that even the downfall of industrial civilization would be turned promptly into yet another reason to go shopping. Still, there’s another difficulty here, and that’s that the same strategy has been tried before, many times, in the last years of other civilizations. There’s an ample body of historical evidence that can be used to see just how well it works. The short form? Don’t go there.
It so happens, for example, that in there among the sagas and songs of early medieval Europe are a handful that deal with historical events in the years right after the fall of Rome: the Nibelungenlied, Beowulf, the oldest strata of Norse saga, and some others. Now of course all these started out as oral traditions, and finally found their way into written form centuries after the events they chronicle, when their compilers had no way to check their facts; they also include plenty of folktale and myth, as oral traditions generally do. Still, they describe events and social customs that have been confirmed by surviving records and archeological evidence, and offer one of the best glimpses we’ve got into the lived experience of descent into a dark age.
Precious metals played an important part in the political economy of that age—no surprises there, as the Roman world had a precious-metal currency, and since banks had not been invented yet, portable objects of gold and silver were the most common way that the Roman world’s well-off classes stashed their personal wealth. As the western empire foundered in the fifth century CE and its market economy came apart, hoarding precious metals became standard practice, and rural villas, the doomsteads of the day, popped up all over. When archeologists excavate those villas, they routinely find evidence that they were looted and burnt when the empire fell, and tolerably often the archeologists or a hobbyist with a metal detector has located the buried stash of precious metals somewhere nearby, an expressive reminder of just how much benefit that store of abstract wealth actually provided to its owner.
That’s the same story you get from all the old legends: when treasure turns up, a lot of people are about to die. The Volsunga saga and the Nibelungenlied, for example, are versions of the same story, based on dim memories of events in the Rhine valley in the century or so after Rome’s fall. The primary plot engine of those events is a hoard of the usual late Roman kind,  which passes from hand to hand by way of murder, torture, treachery, vengeance, and the extermination of entire dynasties. For that matter, when Beowulf dies after slaying his dragon, and his people discover that the dragon was guarding a treasure, do they rejoice? Not at all; they take it for granted that the kings and warriors of every neighboring kingdom are going to come and slaughter them to get it—and in fact that’s what happens. That’s business as usual in a dark age society.
The problem with stockpiling gold on the brink of a dark age is thus simply another dimension, if a more extreme one, of the broader problem with intermediation. It bears remembering that gold is not wealth; it’s simply a durable form of money, and thus, like every other form of money, an arbitrary token embodying a claim to real wealth—that is, goods and services—that other people produce. If the goods and services aren’t available, a basement safe full of gold coins won’t change that fact, and if the people who have the goods and services need them more than they want gold, the same is true. Even if the goods and services are to be had, if everyone with gold is bidding for the same diminished supply, that gold isn’t going to buy anything close to what it does today. What’s more, tokens of abstract value have another disadvantage in a society where the rule of law has broken down: they attract violence the way a dead rat draws flies.
The fetish for stockpiling gold has always struck me, in fact, as the best possible proof that most of the people who think they are preparing for total social collapse haven’t actually thought the matter through, and considered the conditions that will obtain after the rubble stops bouncing. Let’s say industrial civilization comes apart, quickly or slowly, and you have gold.  In that case, either you spend it to purchase goods and services after the collapse, or you don’t. If you do, everyone in your vicinity will soon know that you have gold, the rule of law no longer discourages people from killing you and taking it in the best Nibelungenlied fashion, and sooner or later you’ll run out of ammo. If you don’t, what good will the gold do you?
The era when Nibelungenlied conditions apply—when, for example, armed gangs move from one doomstead to another, annihilating the people holed up there, living for a while on what they find, and then moving on to the next, or when local governments round up the families of those believed to have gold and torture them to death, starting with the children, until someone breaks—is a common stage of dark ages. It’s a self-terminating one, since sooner or later the available supply of precious metals or other carriers of abstract wealth is spread thin across the available supply of warlords. This can take anything up to a century or two before we reach the stage commemorated in the Anglo-Saxon poem “The Seafarer:” Nearon nú cyningas ne cáseras, ne goldgiefan swylce iú wáeron (No more are there kings or caesars or gold-givers as once there were).
That’s when things begin settling down and the sort of feudal arrangement sketched out in last week’s post begins to emerge, when money and the market play little role in most people’s lives and labor and land become the foundation of a new, impoverished, but relatively stable society where the rule of law again becomes a reality. None of us living today will see that period arrive, but it’s good to know where the process is headed. We’ll discuss the practical implications of that knowledge in a future post.

Dark Age America: The End of the Market Economy

Wed, 2014-11-05 15:49
One of the factors that makes it difficult to think through the economic consequences of the end of the industrial age is that we’ve all grown up in a world where every form of economic activity has been channeled through certain familiar forms for so long that very few people remember that things could be any other way. Another of the factors that make the same effort of thinking difficult is that the conventional economic thought of our time has invested immense effort and oceans of verbiage into obscuring the fact that things could be any other way.
Those are formidable obstacles. We’re going to have to confront them, though, because one of the core features of the decline and fall of civilizations is that most of the habits of everyday life that are standard practice when civilizations are at zenith get chucked promptly into the recycle bin as decline picks up speed. That’s true across the whole spectrum of cultural phenomena, and it’s especially true of economics, for a reason discussed in last week’s post: the economic institutions and habits of a civilization in full flower are too complex for the same civilization to support once it’s gone to seed.
The institutions and habits that contemporary industrial civilization uses to structure its economic life comprise that tangled realm of supposedly voluntary exchanges we call “the market.” Back when the United States was still contending with the Soviet Union for global hegemony, that almost always got rephrased as “the free market;” the adjective still gets some use among ideologues, but by and large it’s dropped out of use elsewhere. This is a good thing, at least from the perspective of honest speaking, because the “free” market is of course nothing of the kind. It’s unfree in at least two crucial senses: first, in that it’s compulsory; second, in that it’s expensive.
“The law in its majestic equality,” Anatole France once noted drolly, “forbids rich and poor alike to urinate in public, sleep under bridges, or beg for bread.” In much the same sense, no one is actually forced to participate in the market economy in the modern industrial world. Those who want to abstain are perfectly free to go looking for some other way to keep themselves fed, clothed, housed, and supplied with the other necessities of life, and the fact that every option outside of the market has been hedged around with impenetrable legal prohibitions if it hasn’t simply been annihilated by legal fiat or brute force is just one of those minor details that make life so interesting.
Historically speaking, there are a vast number of ways to handle exchanges of goods and services between people. In modern industrial societies, on the other hand, outside of the occasional vestige of an older tradition here and there, there’s only one. Exchanging some form of labor for money, on whatever terms an employer chooses to offer, and then exchanging money for goods and services, on whatever terms the seller chooses to offer, is the only game in town. There’s nothing free about either exchange, other than the aforesaid freedom to starve in the gutter. The further up you go in the social hierarchy, to be sure, the less burdensome the conditions on the exchanges generally turn out to be—here as elsewhere, privilege has its advantages—but unless you happen to have inherited wealth or can find some other way to parasitize the market economy without having to sell your own labor, you’re going to participate if you like to eat.
Your participation in the market, furthermore, doesn’t come cheap. Every exchange you make, whether it’s selling your labor or buying goods and services with the proceeds, takes place within a system that has been subjected to the process of intermediation discussed in last week’s post. Thus, in most cases, you can’t simply sell your labor directly to individuals who want to buy it or its products; instead, you are expected to sell your labor to an employer, who then sells it or its product to others, gives you part of the proceeds, and pockets the rest. Plenty of other people are lined up for their share of the value of your labor: bankers, landlords, government officials, and the list goes on. When you go to exchange money for goods and services, the same principle applies; how much of the value of your labor you get to keep for your own purposes varies from case to case, but it’s always less than the whole sum, and sometimes a great deal less.
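To make that compounding of cuts concrete, here’s a minimal back-of-the-envelope sketch. It’s my own illustration; the percentages are invented for the example, and the only point is that successive cuts multiply rather than add:

```python
# Minimal sketch: if each intermediary in the chain takes a fractional
# cut of whatever value passes through, the remaining share shrinks
# multiplicatively. The percentages below are invented for illustration.

def share_retained(cuts):
    """Return the fraction of the original value left after every cut."""
    share = 1.0
    for cut in cuts:
        share *= 1.0 - cut
    return share

# Hypothetical chain of intermediaries: employer's margin, taxes,
# rent, banking and service fees.
cuts = [0.30, 0.25, 0.20, 0.05]
print(f"Producer keeps {share_retained(cuts):.1%} of the value created.")
# Prints: Producer keeps 39.9% of the value created.
```

Four modest-looking cuts already leave the producer with well under half of what his or her labor was worth—which is the arithmetic behind the “sometimes a great deal less” above.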
Karl Marx performed a valuable service to political economy by pointing out these facts and giving them the stress they deserve, in the teeth of savage opposition from the cheerleaders of the status quo who, then as now, dominated economic thought. His proposed solution to the pervasive problems of the (un)free market was another matter.  Like most of his generation of European intellectuals, Marx was dazzled by the swamp-gas luminescence of Hegelian philosophy, and followed Hegel’s verbose and vaporous trail into a morass of circular reasoning and false prophecy from which few of his remaining followers have yet managed to extract themselves.
It’s from Hegel that Marx got the enticing but mistaken notion that history consists of a sequence of stages that move in a predetermined direction toward some as-perfect-as-possible state: the same idea, please note, that Francis Fukuyama used to justify his risible vision of the first Bush administration as the glorious fulfillment of human history. (To borrow a bit of old-fashioned European political jargon, there are right-Hegelians and left-Hegelians; Fukuyama was an example of the former, Marx of the latter.) I’ll leave such claims and the theories founded on them to the true believers, alongside such equally plausible claims as the Singularity, the Rapture, and the lemonade oceans of Charles Fourier; what history itself shows is something rather different.
What history shows, as already noted, is that the complex systems that emerge during the heyday of a civilization are inevitably scrapped on the way back down. Market economies are among those complex systems. Not all civilizations have market economies—some develop other ways to handle the complicated process of allocating goods and services in a society with many different social classes and occupational specialties—but those that do set up market economies inevitably load them with as many intermediaries as the overall complexity of their economies can support.
It’s when decline sets in and maintaining the existing level of complexity becomes a problem that the trouble begins. Under some conditions, intermediation can benefit the productive economy, but in a complex economy, more and more of the intermediation over time amounts to finding ways to game the system, profiting off economic activity without actually providing any benefit to anyone else.  A complex society at or after its zenith thus typically ends up with a huge burden of unproductive economic activity supported by an increasingly fragile foundation of productive activity.
All the intermediaries, the parasitic as well as the productive, expect to be maintained in the style to which they’re accustomed, and since they typically have more wealth and influence than the producers and consumers who support them, they can usually stop moves to block their access to the feed trough. Economic contraction, however, makes it hard to support business as usual on the shrinking supply of real wealth. The intermediaries thus end up competing with the actual producers and consumers of goods and services, and since the intermediaries typically have the support of governments and institutional forms, more often than not it’s the intermediaries who win that competition.
It’s not at all hard to see that process at work; all it takes is a stroll down the main street of the old red brick mill town where I live, or any of thousands of other towns and cities in today’s America. Here in Cumberland, there are empty storefronts all through downtown, and empty buildings well suited to any other kind of economic activity you care to name there and elsewhere in town. There are plenty of people who want to work, wage and benefit expectations are modest, and there are plenty of goods and services that people would buy if they had the chance. Yet the storefronts stay empty, the workers stay unemployed, the goods and services remain unavailable. Why?
The reason is intermediation. Start a business in this town, or anywhere else in America, and the intermediaries all come running to line up in front of you with their hands out. Local, state, and federal bureaucrats all want their cut; so do the bankers, the landlords, the construction firms, and so on down the long list of businesses that feed on other businesses, and can’t be dispensed with because this or that law or regulation requires them to be paid their share. The resulting burden is far too large for most businesses to meet. Thus businesses don’t get started, and those that do start up generally go under in short order. It’s the same problem faced by every parasite that becomes too successful: it kills the host on which its own survival depends.
That’s the usual outcome when a heavily intermediated market economy slams face first into the hard realities of decline. Theoretically, it would be possible to respond to the resulting crisis by forcing  disintermediation, and thus salvaging the market economy. Practically, that’s usually not an option, because the disintermediation requires dragging a great many influential economic and political sectors away from their accustomed feeding trough. Far more often than not, declining societies with heavily intermediated market economies respond to the crisis just described by trying to force the buyers and sellers of goods and services to participate in the market even at the cost of their own economic survival, so that some semblance of business as usual can proceed.
That’s why the late Roman Empire, for example, passed laws requiring that each male Roman citizen take up the same profession as his father, whether he could survive that way or not.  That’s also why, as noted last week, so many American jurisdictions are cracking down on people who try to buy and sell food, medical care, and the like outside the corporate economy. In the Roman case, the attempt to keep the market economy fully intermediated ended up killing the market economy altogether, and in most of the post-Roman world—interestingly, this was as true across much of the Byzantine empire as it was in the barbarian west—the complex money-mediated market economy of the old Roman world went away, and centuries passed before anything of the kind reappeared.
What replaced it is what always replaces the complex economic systems of fallen civilizations: a system that systematically chucks the intermediaries out of economic activity and replaces them with personal commitments set up to block any attempt to game the system: that is to say, feudalism.
There’s enough confusion around that last word these days that a concrete example is probably needed here. I’ll borrow a minor character from a favorite book of my childhood, therefore, and introduce you to Higg son of Snell. His name could just as well be Michio, Chung-Wan, Devadatta, Hafiz, Diocles, Bel-Nasir-Apal, or Mentu-hetep, because the feudalisms that evolve in the wake of societal collapse are remarkably similar around the world and throughout time, but we’ll stick with Higg for now. On the off chance that the name hasn’t clued you in, Higg is a peasant—a free peasant, he’ll tell you with some pride, and not a mere serf; his father died a little while back of what people call “elf-stroke” in his time and we’ve shortened to “stroke” in ours, and he’s come in the best of his two woolen tunics to the court of the local baron to take part in the ceremony at the heart of the feudal system.
It’s a verbal contract performed in the presence of witnesses: in this case, the baron, the village priest, a couple of elderly knights who serve the baron as advisers, and a gaggle of village elders who remember every detail of the local customary law with the verbal exactness common to learned people among the illiterate. Higg places his hands between the baron’s and repeats the traditional pledge of loyalty, coached as needed by the priest; the baron replies in equally formal words, and the two of them are bound for life in the relationship of liegeman to liege lord.
What this means in practice is anything but vague.  As the baron’s man, Higg has the lifelong right to dwell in his father’s house and make use of the garden and pigpen; to farm a certain specified portion of the village farmland; to pasture one milch cow and its calf, one ox, and twelve sheep on the village commons; to gather, on fourteen specified saint’s days, as much wood as he can carry on his back in a single trip from the forest north of the village, but only limbwood and fallen wood; to catch two dozen adult rabbits from the warren on the near side of the stream, being strictly forbidden to catch any from the warren on the far side of the millpond; and, as a reward for a service his great-grandfather once performed for the baron’s great-grandfather during a boar hunt, to take anything that washes up on the weir across the stream between the first  sound of the matin bell and the last of the vespers bell on the day of St. Ethelfrith each year.
In exchange for these benefits, Higg is bound to an equally specific set of duties. He will labor in the baron’s fields, as well as his own and his neighbors’, at seedtime and harvest; his son will help tend the baron’s cattle and sheep along with the rest of the village herd; he will give a tenth of his crop at harvest each year for the support of the village church; he will provide the baron with unpaid labor in the fields or on the great stone keep rising next to the old manorial hall for three weeks each year; if the baron goes to war, whether he’s staging a raid on the next barony over or answering the summons of that half-mythical being, the king, in the distant town of London, Higg will put on a leather jerkin and an old iron helmet, take a stout knife and the billhook he normally uses to harvest wood on those fourteen saint’s days, and follow the baron in the field for up to forty days. None of these benefits and duties are negotiable; all Higg’s paternal ancestors have held their land on these terms since time out of mind; each of his neighbors holds some equivalent set of feudal rights from the baron for some similar set of duties.
Higg has heard of markets. One is held annually on St. Audrey’s day at the king’s town of Norbury, twenty-seven miles away, but he’s never been there and may well never travel that far from home in his life. He also knows about money, and has even seen a silver penny once, but he will live out his entire life without ever buying or selling something for money, or engaging in any economic transaction governed by the law of supply and demand. Not until centuries later, when the feudal economy begins to break down and intermediaries once again begin to insert themselves between producer and consumer, will that change—and that’s precisely the point, because feudal economics is what emerges in a society that has learned about the dangers of intermediation the hard way and sets out to build an economy where that doesn’t happen.
There are good reasons, in other words, why medieval European economic theory focused on the concept of the just price, which is not set by supply and demand, and why medieval European economic practice included a galaxy of carefully designed measures meant to prevent supply and demand from influencing prices, wages, or anything else. There are equally good reasons why lending money at interest was considered a sufficiently heinous sin in the Middle Ages that Dante, in The Inferno, put lenders at the bottom of the seventh circle of hell, below mass murderers, heretics, and fallen angels. The only sinners who go further down than lenders are the practitioners of fraud, in the eighth circle, and traitors, in the ninth: here again, this was a straightforward literary reflection of everyday reality in a society that depended on the sanctity of verbal contracts and the mutual personal obligations that structure feudal relationships.
(It’s probably necessary at this point to note that yes, I’m quite aware that European feudalism had its downsides—that it was rigidly caste-bound, brutally violent, and generally unjust. So is the system under which you live, dear reader, and it’s worth noting that the average medieval peasant worked fewer hours and had more days off than you do. Medieval societies also valued stability or, as today’s economists like to call it, stagnation, rather than economic growth and technological progress; whether that’s a good thing or not probably ought to be left to be decided in the far future, when the long-term consequences of our system can be judged against the long-term consequences of Higg’s.)
A fully developed feudal system takes several centuries to emerge. The first stirrings of one, however, begin to take shape as soon as people in a declining civilization start to realize that the economic system under which they live is stacked against them, and benefits, at their expense, whatever class of parasitic intermediaries their society happens to have spawned. That’s when people begin looking for ways to meet their own economic needs outside the existing system, and certain things reliably follow. The replacement of temporary economic transactions with enduring personal relationships is one of these; so is the primacy of farmland and other productive property to the economic system—this is why land and the like are still referred to legally as “real property,” as though all other forms of property are somehow unreal; in a feudal economy, that’s more or less the case.
A third consequence of the shift of economic activity away from the institutions and forms of a failing civilization has already been mentioned: the abandonment of money as an abstract intermediary in economic activity. That’s a crucial element of the process, and it has even more crucial implications, but those are sweeping enough that the end of money will require a post of its own. We’ll discuss that next week.

******************
Finally, here's a note from the volunteer moderator of the Facebook page for my latest book, Twilight's Last Gleaming. By all means check it out.

"The Facebook Page for Twilight’'s Last Gleaming can be found at https://www.facebook.com/TwilightsLastGleaming.  You are not required to have a Facebook account if you simply want to view the Page.  To promote the book, the plan is for the Page to run a series of posts which will briefly describe interesting events in American history.  Three posts (A.K.A. Status Updates) have already added, two on the Civil War and its aftermath and one about our earliest involvement with Russian Communists.  The readership of this blog is invited to submit entries for these Status Updates.  The entries should be less than 500 words, describe a specific event in American history that should have some relevance to the book, however tenuous, and end with a variant of the following tag line “And to find out what happened XXX years later read Twilight’s Last Gleaming.”  The events described should hopefully crisscross all over the political and social spectrum and be written in a way that engages the reader.

"To submit an entry via Facebook, please use the Message button at the lower right of the Cover Photo on the Facebook Page.  You can also add a photograph or picture file that has a horizontal orientation to the message. That will be become the new Cover Photo while your post is new.  Please only send pictures or photos are not covered via copyright and can be considered in the public domain.  If you want to submit an entry but do not have a Facebook Account, then send it as a comment to this blog with the header “Twilight’s Last Gleaming Post”.  We look forward to seeing your entries!

"The Facebook Page also accepts Posts To Page, which should deal with any other thoughts or issues relating to Twilight’s Last Gleaming.  Posts To Page entries will be moderated with guidelines similar to what JMG uses for comments to this blog.

"We would be delighted if Facebook users Liked the Page so they can get the Status Update posts.  If you  read one that you like, please Share it with your Facebook Friends, or even your non-Facebook friends.  Enjoy the book!"

Dark Age America: Involuntary Simplicity

Wed, 2014-10-29 20:57
The political transformations that have occupied the last four posts in this sequence can also be traced in detail in the economic sphere. A strong case could be made, in fact, that the economic dimension is the more important of the two, and that the political struggles that pit the elites of a failing civilization against the proto-warlords of the nascent dark age reflect deeper shifts in the economic sphere. Whether or not that’s the case—and in some sense, it’s simply a difference in emphasis—the economics of decline and fall need to be understood in order to make sense of the trajectory ahead of us.
One of the more useful ways of understanding that trajectory was traced out some years ago by Joseph Tainter in his book The Collapse of Complex Societies. While I’ve taken issue with some of the details of Tainter’s analysis in my own work, the general model of collapse he offers was also a core inspiration for the theory of catabolic collapse that provides the  basic structure for this series of posts, so I don’t think it’s out of place to summarize his theory briefly here.
Tainter begins with the law of diminishing returns: the rule, applicable to an astonishingly broad range of human affairs, that the more you invest—in any sense—in any one project, the smaller the additional return is on each unit of additional investment. The point at which this starts to take effect is called the point of diminishing returns. Off past that point is a far more threatening landmark, the point of zero marginal return: the point, that is, when additional investment costs as much as the benefit it yields. Beyond that lies the territory of negative returns, where further investment yields less than it costs, and the gap grows wider with each additional increment.
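A toy calculation may help make those three landmarks concrete. The little Python sketch below is mine, not Tainter’s, and its benefit and cost curves are arbitrary assumptions chosen purely for their shape—benefits that grow with the square root of investment, costs that grow in a straight line—so that the marginal return falls, crosses zero, and then turns negative.

```python
# Toy illustration of diminishing, zero, and negative marginal returns.
# The curves are illustrative assumptions, not data: benefit is concave
# (each extra unit of investment yields less), cost is linear.
import math

def benefit(x):
    return 10 * math.sqrt(x)

def cost(x):
    return x

def marginal_return(x, dx=0.01):
    """Approximate net gain per unit from one more increment of investment."""
    def net(v):
        return benefit(v) - cost(v)
    return (net(x + dx) - net(x)) / dx

for x in (1, 5, 25, 50, 100):
    print(f"investment {x:>3}: marginal return {marginal_return(x):+.2f}")

# With these assumed curves the marginal return is strongly positive at
# first, reaches (approximately) zero at x = 25, where benefit and cost
# grow at the same rate, and is negative beyond that point: each further
# unit of investment now costs more than it yields.
```

The square root is nothing special, by the way; any benefit curve that flattens out while costs keep climbing will produce the same three landmarks in the same order.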
The attempt to achieve infinite economic growth on a finite planet makes a fine example of the law of diminishing returns in action. Given the necessary preconditions—a point we’ll discuss in more detail a bit later in this post—economic growth in its early stages produces benefits well in excess of its costs. Once the point of diminishing returns is past, though, further growth brings less and less benefit in any but a purely abstract, financial sense; broader measures of well-being fail to keep up with the expansion of the economy, and eventually the point of zero marginal return arrives and further rounds of growth actively make things worse.
Mainstream economists these days shove these increments of what John Ruskin used to call “illth”—yes, that’s the opposite of wealth—into the category of “externalities,” where they are generally ignored by everyone who doesn’t have to deal with them in person. If growth continues far enough, though, the production of illth overwhelms the production of wealth, and we end up more or less where we are today, where the benefits from continued growth are outweighed by the increasingly ghastly impact of the social, economic, and environmental “externalities” driven by growth itself. As The Limits to Growth  pointed out all those years ago, that’s the nature of our predicament: the costs of growth rise faster than the benefits and eventually force the industrial economy to its knees.
Tainter’s insight was that the same rules can be applied to social complexity. When a society begins to add layers of social complexity—for example, expanding the reach of the division of labor, setting up hierarchies to centralize decisionmaking, and so on—the initial rounds pay off substantially in terms of additional wealth and the capacity to deal with challenges from other societies and the natural world. Here again, though, there’s a point of diminishing returns, after which additional investments in social complexity yield less and less in the way of benefits, and there’s a point of zero marginal return, after which each additional increment of complexity subtracts from the wealth and resilience of the society.
There’s a mordant irony to what happens next. Societies in crisis reliably respond by doing what they know how to do. In the case of complex societies, what they know how to do amounts to adding on new layers of complexity—after all, that’s what’s worked in the past. I mentioned at the beginning of this month, in an earlier post in this sequence, the way this plays out in political terms. The same thing happens in every other sphere of collective life—economic, cultural, intellectual, and so on down the list. If too much complexity is at the root of the problems besetting a society, though, what happens when its leaders keep adding even more complexity to solve those problems?
Any of my readers who have trouble coming up with the answer might find it useful to take a look out the nearest window. Whether or not Tainter’s theory provides a useful description of every complex society in trouble—for what it’s worth, it’s a significant part of the puzzle in every historical example known to me—it certainly applies to contemporary industrial society. Here in America, certainly, we’ve long since passed the point at which additional investments in complexity yield any benefit at all, but the manufacture of further complexity goes on apace, unhindered by the mere fact that it’s making a galaxy of bad problems worse. Do I need to cite the US health care system, which is currently collapsing under the sheer weight of the baroque superstructure of corporate and government bureaucracies heaped on top of what was once the simple process of paying a visit to the doctor?
We can describe this process as intermediation—the insertion of a variety of intermediate persons, professions, and institutions between the producer and the consumer of any given good or service. It’s a standard feature of social complexity, and tends to blossom in the latter years of every civilization, as part of the piling up of complexity on complexity that Tainter discussed. There’s an interesting parallel between the process of intermediation and the process of ecological succession.  Just as an ecosystem, as it moves from one sere (successional stage) to the next, tends to produce ever more elaborate food webs linking the plants whose photosynthesis starts the process with the consumers of detritus at its end, the rise of social complexity in a civilization tends to produce ever more elaborate patterns of intermediation between producers and consumers.
Contemporary industrial civilization has taken intermediation to an extreme not reached by any previous civilization, and there’s a reason for that. White’s Law, one of the fundamental rules of human ecology, states that economic development is a function of energy per capita. The jackpot of cheap concentrated energy that industrial civilization obtained from fossil fuels threw that equation into overdrive, and economic development is simply another name for complexity. The US health care system, again, is one example out of many; as the American economy expanded metastatically over the course of the 20th century, an immense army of medical administrators, laboratory staff, specialists, insurance agents, government officials, and other functionaries inserted themselves into the notional space between physician and  patient, turning what was once an ordinary face to face business transaction into a bureaucratic nightmare reminiscent of Franz Kafka’s The Castle.
In one way or another, that’s been the fate of every kind of economic activity in modern industrial society. Pick an economic sector, any economic sector, and the producers and consumers of the goods and services involved in any given transaction are hugely outnumbered by the people who earn a living from that transaction in some other way—by administering, financing, scheduling, regulating, taxing, approving, overseeing, facilitating, supplying, or in some other manner getting in there and grabbing a piece of the action. Take the natural tendency for social complexity to increase over time, and put it to work in a society that’s surfing a gargantuan tsunami of cheap energy, in which most work is done by machines powered by fossil fuels and not by human hands and minds, and that’s pretty much what you can expect to get.
That’s also a textbook example of the sort of excess complexity Joseph Tainter discussed in The Collapse of Complex Societies, but industrial civilization’s dependence on nonrenewable energy resources puts the entire situation in a different and even more troubling light. On the one hand, continuing increases in complexity in a society already burdened to the breaking point with too much complexity pretty much guarantees a rapid decrease in complexity not too far down the road—and no, that’s not likely to unfold in a nice neat orderly way, either. On the other, the ongoing depletion of energy resources and the decline in net energy that unfolds from that inescapable natural process means that energy per capita will be decreasing in the years ahead—and that, according to White’s Law, means that the ability of industrial society to sustain current levels of complexity, or anything like them, will be going away in the tolerably near future.
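For what it’s worth, the interaction of White’s Law with declining net energy can be caricatured in a few lines of code. Everything numeric here is an assumption of mine for illustration—the proportionality constant, the starting index, and the assumed three per cent annual decline in net energy per capita—not a claim from this post or from White.

```python
# Crude toy model: the complexity a society can sustain, treated (per
# White's Law, in caricature) as proportional to net energy per capita,
# while energy per capita declines at an assumed fixed rate. All numbers
# here are illustrative assumptions, not estimates.

ENERGY_INDEX_YEAR_ZERO = 100.0  # arbitrary index of net energy per capita
ANNUAL_DECLINE = 0.03           # assumed 3% yearly decline from depletion
K = 0.5                         # assumed proportionality constant

def sustainable_complexity(energy_per_capita):
    """Toy form of White's Law: sustainable complexity scales with
    the energy available per person."""
    return K * energy_per_capita

for year in (0, 10, 20, 30, 40, 50):
    energy = ENERGY_INDEX_YEAR_ZERO * (1 - ANNUAL_DECLINE) ** year
    print(f"year {year:>2}: energy/capita {energy:6.1f}, "
          f"sustainable complexity {sustainable_complexity(energy):5.1f}")

# A society that keeps adding complexity while this curve falls is moving
# further past the point of negative returns with every passing year.
```

The point of the caricature is simply that the gap widens from both sides at once: complexity keeps being piled on from above while the level the energy supply can support sinks away from below.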
Add these trends together and you have a recipe for the radical simplification of the economy. The state of affairs in which most people in the work force have only an indirect connection to the production of concrete goods and services to meet human needs is, in James Howard Kunstler’s useful phrase, an arrangement without a future. The unraveling of that arrangement, and the return to a state of affairs in which most people produce goods and services with their own labor for their own, their families’, and their neighbors’ use, will be the great economic trend of the next several centuries.
That’s not to say that this unraveling will be a simple process. All those millions of people whose jobs depend on intermediation, and thus on the maintenance of current levels of economic complexity, have an understandable interest in staying employed. That interest in practice works out to an increasingly frantic quest to keep people from sidestepping the baroque corporate and bureaucratic economic machine and getting goods and services directly from producers.
That’s a great deal of what drives the ongoing crusade against alternative health care—every dollar spent on herbs from a medical herbalist or treatments from an acupuncturist is a dollar that doesn’t go into feeding the gargantuan corporations and bureaucracies that are supposed to provide health care for Americans, and sometimes even do so. The same thing is driving corporate and government attacks on local food production, since every dollar a consumer spends buying zucchini from a backyard farmer doesn’t prop up the equally huge and tottering mass of institutions that attempt to control the production and sale of food in America.
It’s not uncommon for those who object to these maneuvers to portray them as the acts of a triumphant corporate despotism on the brink of seizing total power over the planet. I’d like to suggest that they’re something quite different. While the American and global economies are both still growing in a notional sense, the measures of growth that yield that result factor in such things as the manufacture of derivatives and a great many other forms of fictive wealth.
Subtract those from the national and global balance sheet, and the result is an economy in contraction. The ongoing rise in the permanently jobless, the epidemic of malign neglect affecting even the most crucial elements of America’s infrastructure, and the ongoing decline in income and living standards among all those classes that lack access to fictive wealth, among many other things, all tell the same story. Thus it’s far from surprising that all the people whose jobs are dependent on intermediation, all the way up the corporate food chain to the corner offices, are increasingly worried about the number of people who are trying to engage in disintermediation—to buy food, health care, and other goods and services directly from the producers.
Their worries are entirely rational.  One of the results of the contraction of the real economy is that the costs of intermediation, financial and otherwise, have not merely gone through the roof but zoomed off into the stratosphere, with low earth orbit the next logical stop. Health care, again, is among the most obvious examples. In most parts of the United States, for instance, a visit to the acupuncturist for some ordinary health condition will typically set you back well under $100, while if you go to an MD for the same thing you’ll be lucky to get away for under $1000, counting lab work and other costs—and you can typically count on thirty or forty minutes of personal attention from the acupuncturist, as compared to five or ten minutes with a harried and distracted MD. It’s therefore no surprise that more and more Americans are turning their backs on the officially sanctioned health care industry and seeking out alternative health care instead.
They’d probably be just as happy to go to an ordinary MD who offered medical care on the same terms as the acupuncturist, which happen to be the same terms that were standard a century ago for every kind of health care. As matters stand, though, physicians are dependent on the system as it presently exists; their standing with their peers, and even their legal right to practice medicine, depends on their willingness to play by the rules of intermediation—and of course it’s also true that acupuncturists don’t generally make the six-figure salaries that so many physicians do in America. A hundred years ago, the average American doctor didn’t make that much more than the average American plumber; many of the changes in the US health care system since that time were quite openly intended to change that fact.
A hundred years ago, as the United States moved through the early stages of its age of imperial excess, that was something the nation could afford. Equally, all the other modes of profiteering, intermediation, and other maneuvers aimed at maximizing the take of assorted economic sectors were viable then, since a growing economy provides plenty of slack for such projects. As the economics of growth gave way to the economics of stagnation in the last quarter of the 20th century, such things became considerably more burdensome. As stagnation gives way to contraction, and the negative returns on excess complexity combine with the impact of depleting nonrenewable resources, the burden is rapidly becoming more than the US economy or the wider society can bear.
The result, in one way or another, will be disintermediation: the dissolution of the complex relations and institutions that currently come between the producer and the consumer of goods and services, and their replacement by something much less costly to maintain. “In one way or another,” though, covers a great deal of ground, and it’s far from easy to predict exactly how the current system will come unglued in the United States or, for that matter, anywhere else.
Disintermediation might happen quickly, if a major crisis shatters some central element of the US economic system—for example, the financial sector—and forces the entire economy to regroup around less abstract and more local systems of exchange. It might happen slowly, as more and more of the population can no longer afford to participate in the intermediated economy at all, and have to craft their own localized economies from the bottom up, while the narrowing circle of the well-to-do continue to make use of some equivalent of the current system for a long time to come. It might happen at different rates in different geographical areas—for example, cities and their suburbs might keep the intermediated economy going long after rural areas have abandoned it, or what have you.
Plenty of people these days like to look forward to some such transformation, and not without reason. Complexity has long since passed the point of negative returns in the US economy, as in most other aspects of American society, and the coming of disintermediation across a wide range of economic activities will arguably lead to significant improvements in many aspects of our collective life. That said, it’s not all roses and affordable health care. The extravagant rates of energy per capita that made today’s absurdly complex economy possible also made it possible for millions of Americans to make their living working in offices and other relatively comfortable settings, rather than standing hip deep in hog manure with a shovel in their hands, and it also allowed them to earn what currently passes for a normal income, rather than the bare subsistence that’s actually normal in societies that haven’t had their economies inflated to the bursting point by a temporary glut of cheap energy.
It was popular a number of years back for the urban and suburban middle classes, most of whom work in jobs that only exist due to intermediation, to go in for “voluntary simplicity”—at best a pallid half-equivalent of Thoreau’s far more challenging concept of voluntary poverty, at worst a marketing gimmick for the consumption of round after round of overpriced “simple” products. For all its more embarrassing features, the voluntary simplicity movement was at least occasionally motivated by an honest recognition of the immediate personal implications of Tainter’s fundamental point—that complexity taken past the point of diminishing returns becomes a burden rather than a benefit.
In the years ahead of us, a great many of these same people are going to experience what I suppose might best be called involuntary simplicity: the disintermediation of most aspects of economic life, the departure of lifestyles that can only be supported by the cheap abundant energy of the recent past, and a transition to the much less complex—and often, much less comfortable—lifestyles that are all that’s possible in a deindustrial world. There may be a certain entertainment value in watching what those who praised voluntary simplicity to the skies think of simple living when it’s no longer voluntary, and there’s no way back to the comforts of a bygone era.
That said, the impact of involuntary simplicity on the economic sphere won’t be limited to the lifestyles of the formerly privileged. It promises to bring an end to certain features of economic life that contemporary thought assumes are fixed in place forever: among them, the market economy itself. We’ll talk about that next week.

************
In other news, I'm pleased to report that Twilight's Last Gleaming, my novel of the fall of America's empire based on 2012's "How It Could Happen" series of posts, is hot off the press and available from the publisher with free shipping worldwide. The novel also has its own Facebook page for fans of social media. By all means check it out.

A Pink Slip for the Progress Fairy

Wed, 2014-10-22 15:24
If you’ve ever wondered just how powerfully collective thinking grips most members of our species—including, by and large, those who most forcefully insist on the originality of their thinking—I have an experiment to recommend: go out in public and advocate an idea about the future that isn’t part of the conventional wisdom, and see what kind of reaction you field. If your experience is anything like mine, you’ll get some anger, some argument, and some blank stares, but the most telling reaction will come from people who try to force what you’re saying into the Procrustean bed of the conventional wisdom, no matter how thoroughly they have to stretch and chop what you’ve said to make it fit.
Now of course the project of this blog is guaranteed to field such reactions, since the ideas explored here don’t just ignore the conventional wisdom, they fling it to the floor and dance on the crumpled remains. When I mention that I expect the decline and fall of industrial civilization to take centuries, accordingly, people take this to mean that I expect a smooth, untroubled descent. When I mention that I expect crisis before this decade is finished, in turn, people take this to mean that I expect industrial civilization to crash into ruin in the next few years. Some people, for that matter, slam back and forth from one of these presuppositions to another, as though they can’t fit the concepts of prolonged decline and imminent crisis into their heads at the same moment.
That sort of response has become more common than usual in recent months, and part of the reason may be that it’s been a while since I’ve sketched out the overall shape of the future as I see it.  Some of my readers may have lost track of the broader picture, and more recent readers of this blog may not have encountered that picture at all. For that reason among others, I’m going to spend this week’s post summarizing the decline and fall of industrial civilization.
Yes, I’m aware that many people believe that such a thing can’t happen:  that science, technology, or some other factor has made progress irreversible. I’m also aware that many people insist that progress may not be irreversible yet but will be if we all just do that little bit more. These are—well, let’s be charitable and call them faith-based claims. Generalizing from a sample size of one when the experiment hasn’t yet run its course is poor scientific procedure; insisting that just this once, the law of diminishing returns will be suspended for our benefit is the antithesis of science. It amounts to treating progress as some sort of beneficent fairy who can be counted on to tap us with her magic wand and give us a wonderful future, just because we happen to want one.
The overfamiliar cry of “but it’s different this time!” is popular, it’s comforting, but it’s also irrelevant. Of course it’s different this time; it was different every other time, too. Neolithic civilizations limited to one river valley and continental empires with complex technologies have all declined and fallen in much the same way and for much the same reasons. It may appeal to our sense of entitlement to see ourselves as destiny’s darlings, to insist that the Progress Fairy has promised us a glorious future out there among the stars, or even to claim that it’s humanity’s mission to populate the galaxy, but these are another set of faith-based claims; it’s a little startling, in fact, to watch so many people who claim to have outgrown theology clinging to such overtly religious concepts as humanity’s mission and destiny.
In the real world, when civilizations exhaust their resource bases and wreck the ecological cycles that support them, they fall. It takes between one and three centuries on average for the fall to happen—and no, big complex civilizations don’t fall noticeably faster or slower than smaller and simpler ones.  Nor is it a linear decline—the end of a civilization is a fractal process composed of crises on many different scales of space and time, with equally uneven consequences. An effective response can win a breathing space; in the wake of a less effective one, part of what used to be normal goes away for good. Sooner or later, one crisis too many overwhelms the last defenses, and the civilization falls, leaving scattered remnants of itself that struggle and gleam for a while until the long night closes in.
The historian Arnold Toynbee, whose study of the rise and fall of civilizations is the most detailed and cogent for our purpose, has traced a recurring rhythm in this process.  Falling civilizations oscillate between periods of intense crisis and periods of relative calm, each such period lasting anywhere from a few decades to a century or more—the pace is set by the speed of the underlying decline, which varies somewhat from case to case. Most civilizations, he found, go through three and a half cycles of crisis and stabilization—the half being, of course, the final crisis from which there is no recovery. 
That’s basically the model that I’m applying to our future. One wrinkle many people miss is that we’re not waiting for the first of the three and a half rounds of crisis and recovery to hit; we’re waiting for the second. The first began in 1914 and ended around 1954, driven by the downfall of the British Empire and the collapse of European domination of the globe. During the forty years between Sarajevo and Dien Bien Phu, the industrial world was hammered by the First World War, the Spanish Flu pandemic, the Great Depression, millions of political murders by the Nazi and Soviet governments, the Second World War, and the overthrow of European colonial empires around the planet.
That was the first era of crisis in the decline and fall of industrial civilization. The period from 1954 to the present was the first interval of stability and recovery, made more prosperous and expansive than most examples of the species by the breakneck exploitation of petroleum and other fossil fuels, and a corresponding boom in technology. At this point, as fossil fuel reserves deplete, the planet’s capacity to absorb carbon dioxide and other pollutants runs up against hard limits, and a galaxy of other measures of impending crisis move toward the red line, it’s likely that the next round of crisis is not far off.
What will actually trigger that next round, though, is anyone’s guess. In the years leading up to 1914, plenty of people sensed that an explosion was coming, some guessed that a general European war would set it off, but nobody knew that the trigger would be the assassination of an Austrian archduke on the streets of Sarajevo. The Russian Revolution, the March on Rome, the crash of ‘29, Stalin, Hitler, Pearl Harbor, Auschwitz, Hiroshima? No one saw those coming, and only a few people even guessed that something resembling one or another of these things might be in the offing.
Thus trying to foresee the future of industrial society in detail is an impossible task. Sketching out the sort of future that we could get is considerably less challenging. History has plenty to say about the things that happen when a civilization begins its long descent into chaos and barbarism, and it’s not too difficult to generalize from that evidence. I don’t claim that the events outlined below are what will happen, but I expect things like them to happen; further than that, the lessons of history will not go.
With those cautions, here’s a narrative sketch of the kind of future that waits for us.
*************************

The second wave of crisis began with the Ebola pandemic, which emerged in West Africa early in 2014. Efforts to control the outbreak in its early phases were ineffective and hopelessly underfunded. By the early months of 2015, the first cases appeared in India, Egypt, and the Caribbean, and from there the pandemic spread to much of the world. In August 2015 a vaccine passed its clinical trials, but scaling up production and distribution of the vaccine to get in front of a fast-spreading pandemic took time, and it was early 2018 before the pandemic was finally under control everywhere in the world. By then 1.6 billion people had died of the disease, and another 210 million had died as a result of the collapse of food distribution and health care across large areas of the Third World.
The struggle against Ebola was complicated by the global economic depression that got under way in 2015 as the “fracking” boom imploded and travel and tourist industries collapsed in the face of the pandemic. Financial markets were stabilized by vast infusions of government debt, as they had been in the wake of the 2008 crash, but the real economy of goods and services was not so easily manipulated; joblessness soared, tax revenues plunged, and a dozen nations defaulted on their debts. Politicians insisted, as they had done for the past decade, that giving more handouts to the rich would restore prosperity; their failure to take any constructive action set the stage for the next act in the tragedy.
The first neofascist parties were founded in Europe before the end of the pandemic, and grew rapidly in the depression years. In 2020 and 2021, neofascists took power in three European nations on anti-immigration, anti-EU and anti-banking industry platforms; their success emboldened similar efforts elsewhere. Even so, the emergence of the neofascist American Peoples Party as a major force in the 2024 US elections stunned most observers. Four years later the APP swept the elections, and forced through laws that turned Congress into an advisory body and enabled rule by presidential decree. Meanwhile, as more European nations embraced neofascism, Europe split into hostile blocs, leading to the dissolution of the European Union in 2032 and the European War of 2035-2041.
By the time war broke out in Europe, the popularity of the APP had fallen drastically due to ongoing economic troubles, and insurgencies against the new regime had emerged in the South and mountain West.  Counterinsurgency efforts proved no more effective than they had in Iraq or Afghanistan, and over the next decade much of the US sank into failed-state conditions. In 2046, after the regime used tactical nuclear weapons on three rebel-held cities, a dissident faction of the US military launched a nuclear strike on Washington DC, terminating the APP regime. Attempts to establish a new federal government failed over the next two years, and the former United States broke into seven nations.
Outside Europe and North America, changes were less dramatic, with the Iranian civil war of 2027-2034 and the Sino-Japanese war of 2033-2035 among the major incidents. Most of the Third World was prostrate in the wake of the Ebola pandemic, and world population continued to decline gradually as the economic crisis took its toll and the long-term effects of the pandemic played out. By 2048 roughly fifteen per cent of the world’s people lived in areas no longer governed by a nation-state.
The years from 2048 to 2089 were an era of relative peace under Chinese global hegemony. The chaos of the crisis years eliminated a great many wasteful habits, such as private automobiles and widespread air travel, and renewable resources padded out with what was left of the world’s fossil fuel production were able to meet the reduced needs of a smaller and less extravagant global population. Sea levels had begun rising steadily during the crisis years; ironically, the need to relocate ports and coastal cities minimized unemployment in the 2050s and 2060s, bringing relative prosperity to the laboring classes. High and rising energy prices spurred deautomation of many industries, with similar effects.
The pace of climate change accelerated, however, as carbon dioxide from the reckless fossil fuel use of the crisis years had its inevitable effect, pushing the polar ice sheets toward collapse and making harvests unpredictable around the globe. Drought gripped the American Southwest, forcing most of the region’s population to move and turning the region into a de facto stateless zone.  The same process destabilized much of the Middle East and south Asia, laying the groundwork for renewed crisis.  
Population levels stabilized in the 2050s and 2060s and began to contract again thereafter. The primary culprit was once again disease, this time from a gamut of pathogens. The expansion of tropical diseases into formerly temperate regions, the spread of antibiotic resistance to effectively all bacterial pathogens, and the immense damage to public health infrastructure during the crisis years all played a part in that shift. The first migrations of climate refugees also helped spread disease and disruption.
The last decade before 2089 was a time of renewed troubles, with political tensions pitting China and its primary allies, Australia and Canada, against the rising power of the South American Union (formed by 2067’s Treaty of Montevideo between Argentina, Chile, Uruguay and Paraguay), and  insurgencies in eastern Europe that set the stage for the Second European War. Economic troubles driven by repeated crop failures in North America and China added to the strains, and kept anyone but scientists from noticing what was happening to the Greenland ice sheet until it was too late.
The collapse of the Greenland ice sheet, which began in earnest in the summer of 2089, delivered a body blow to an already fraying civilization. Meltwater pouring into the North Atlantic shut down the thermohaline circulation, the main driver of the world’s ocean currents, unleashing drastic swings in weather across most of the world’s climate zones, while sea levels jolted upwards. As these trends worsened, climate refugees fled drought, flood, or famine in any direction that promised survival—a promise that in most cases would not be kept. Those nations that opened their borders collapsed under the influx of millions of starving migrants; those who tried to close their borders found themselves at war with entire peoples on the move, in many cases armed with the weapons of pre-crisis armies.
The full impact of the Greenland disaster took time to build, but the initial shock to weather patterns was enough to help trigger the Second European War of 2091-2111. The Twenty Years War, as it was called, pitted most of the nations of Europe against each other in what began as a struggle for mastery and devolved into a struggle for survival. As the fighting dragged on, mercenaries from the Middle East and Africa made up an ever larger fraction of the combatants. The final defeat of the Franco-Swedish alliance in 2111, though it ended the war, left Europe a shattered wreck unable to stem the human tide from the devastated regions further south and east.
Elsewhere, migration and catastrophic climate change brought down most of the nations of North America, while China dissolved in civil war. Australia and the South American Union both unexpectedly benefited as rainfall increased over their territory; both nations survived the first wave of troubles more or less intact, only to face repeated invasions by armed migrants in the following decades. Neither quite succumbed, but most of their resources went into the fight for survival.
Historians attempting to trace the course of events in most of the world are hampered by sparse and fragmentary records, as not only nation-states and their institutions but even basic literacy evaporated in many regions. As long as the migrations continued, settled life was impossible anywhere close to the major corridors of population movement; elsewhere, locals and migrants worked or fought their way to a modus vivendi, or failing that, exterminated one another. Violence, famine and disease added their toll and drove the population of the planet below two billion.
By the 2160s, though, the mass migrations were mostly at an end, and relative stability returned to many parts of the planet. In the aftermath, the South American Union became the world’s dominant power, though its international reach was limited to a modest blue-water navy patrolling the sea lanes and a network of alliances with the dozen or so functioning nation-states that still existed. Critical shortages of nonrenewable resources made salvage one of the few growth industries of the era; an enterprising salvage merchant who knew how to barter with the villagers and nomads of the stateless zones for scrap technology from abandoned cities could become rich in a single voyage.
Important as they were, these salvaged technologies were only accessible to the few.  The Union and a few other nation-states still kept some aging military aircraft operational, but maritime traffic once again was carried by tall ships, and horse-drawn wagons became a standard mode of transport on land away from the railroads. Radio communication had long since taken over from the last fitful fragments of the internet, and electric grids were found only in cities. As for the high-end technologies of a century and a half before, few people even remembered that they had ever existed at all.
In the end, though, the era of Union supremacy was little more than a breathing space, made possible only by the collapse of collective life in the stateless zones. As these began to recover from the era of migrations, and control over salvage passed into the hands of local warlords, the frail economies of the nation-states suffered. Rivalry over access to salvage sites still available for exploitation led to rising tensions between the Union and Australia, and thus to the last act of the tragedy.
This was set in motion by the Pacific War between the Union and Australia, which broke out in 2238 and shredded the economies of both nations.  After the disastrous Battle of Tahiti in 2241, the Union navy’s power to keep sea lanes open and free of piracy was a thing of the past. Maritime trade collapsed, throwing each region onto its own limited resources and destabilizing those parts of the stateless zones that had become dependent on the salvage industry. Even those nations that retained the social forms of the industrial era transformed themselves into agrarian societies where all economics was local and all technology handmade.
The negotiated peace of 2244 brought only the briefest respite: a fatally weakened Australia was overrun by Malik Ibrahim’s armies after the Battle of Darwin in 2251, and the Union fragmented in the wake of the coup of 2268 and the civil war that followed. Both nations had become too dependent on the salvaged technologies of an earlier day; the future belonged to newborn successor cultures in various corners of the world, whose blacksmiths learned how to hammer the scrap metal of ruined cities into firearms, wind turbines, fuel-alcohol stills, and engines to power handbuilt ultralight aircraft. The Earth’s first global civilization had given way to its first global dark age, and nearly four centuries would pass before new societies would be stable enough to support the amenities of civilization.
*************************

I probably need to repeat that this is the kind of future I expect, not the specific future I foresee; the details are illustrative, not predictive. Whether the Ebola epidemic spins out of control or not, whether the United States undergoes a fascist takeover or runs headlong into some other disaster, whether China or some other nation becomes the stabilizing hegemon in the next period of relative peace—all these are anyone’s guess. All I’m suggesting is that events like the ones I’ve outlined are likely to occur as industrial civilization stumbles down the curve of decline and fall.
In the real world, in the course of ordinary history, these things happen. So does the decline and fall of civilizations that deplete their resource bases and wreck the ecological cycles that support them. As I noted above, I’m aware that true believers in progress insist that this can’t happen to us, but a growing number of people have noticed that the Progress Fairy got her pink slip some time ago, and ordinary history has taken her place as the arbiter of human affairs. That being the case, getting used to what ordinary history brings may be a highly useful habit to cultivate just now.

Dark Age America: The Hour of the Knife

Wed, 2014-10-15 20:15
It was definitely the sort of week that could benefit from a little comic relief. The Ebola epidemic marked another week of rising death tolls and inadequate international response. Bombs rained down ineffectually on various corners of Iraq and Syria as the United States and an assortment of putative allies launched air strikes at the Islamic State insurgents; since air strikes by themselves don’t win wars, and none of the combatants except Islamic State and the people they’re attacking have shown any inclination to put boots on the ground, that high-tech tantrum also counts in every practical sense as an admission of defeat, a point which is doubtless not lost on Islamic State. Meanwhile stock markets worldwide plunged on an assortment of ghastly economic news, with most indexes giving up their 2014 gains and then some, and oil prices dropped on weakening demand, reaching levels that put a good many fracking firms in imminent danger of bankruptcy.
In the teeth of all this bad news, I’m pleased to say, Paul Krugman rose to the occasion and gave all of us in the peak oil scene something to laugh about.  My regular readers will recall that Krugman assailed Post Carbon Institute a couple of weeks ago for having the temerity to point out that transitioning away from fossil fuels was, ahem, actually going to cost money. His piece was rebutted at once by Post Carbon’s Richard Heinberg and others, who challenged Krugman’s crackpot optimism and pointed out that the laws of physics and geology really do trump those of economics.
Krugman’s response—it really is a comic masterpiece, better than anything I’ve seen since the heyday of Francis Fukuyama—involved, among other non sequiturs and dubious claims, assailing mere scientists for thinking that they know more than economists. Er, let’s see: which of these two groups of people is expected to test their predictions against hard facts and discard a theory that produces inaccurate predictions? That’s what scientists do every working day, while economists apparently have something else to occupy their time. This may be why, when it comes to predicting macroeconomic conditions, economists these days are rarely as accurate as a tossed coin: consider the IMF’s continued advocacy of austerity programs as the road to prosperity when no country that has ever implemented them has ever achieved prosperity thereby, or for that matter the huge majority of economists who insisted the housing bubble wasn’t a bubble and wouldn’t crash, right up until the bottom dropped out.
Like so much great comedy, though, Krugman’s jest has its serious side. He sees a permanent condition of economic growth as the normal, indeed the inevitable state of affairs; it has doubtless never occurred to him that it might merely be a temporary anomaly, made possible only by the reckless extraction and consumption of half a billion years of fossil sunlight in a few short centuries. That the needle on the world’s fossil fuel gauge is swinging inexorably over toward E, to him, thus can only mean that some other source of cheap, abundant, highly concentrated energy will have to be found to keep the engines of economic growth roaring on at full throttle. That there may be no such replacement for fossil fuels ready and waiting in Nature’s cookie jar, and that economic growth can thus give way to an economic contraction extending over decades and centuries to come, has never entered his darkest dream.
That is to say, Krugman is still thinking the thoughts of a bygone era when the assumptions guiding those thoughts are long past their pull date and a very different era is taking shape around him. That’s a common source of confusion in times of rapid change, and never more so than in the decline and fall of civilizations—the theme of the current series of posts here. One specific form of that confusion very often becomes the mechanism by which the governing elite of a society in decline removes itself from power, and that mechanism is what I want to discuss this week.
To make sense of that process, it’s going to be necessary to take a step back and revisit some of the points made in an earlier post in this series. I discussed there the way that the complex social hierarchies common to mature civilizations break down into larger and less stable masses in which new loyalties and hatreds more easily build to explosive intensity. America’s as good an example of that as any.  A century ago, for example, racists in this country were at great pains to distinguish various classes of whiteness, with people of Anglo-Saxon ancestry at the pinnacle of whiteness and everybody else fitted into an intricate scheme of less-white categories below. Over the course of the twentieth century, those categories collapsed into a handful of abstract ethnicities—white, black, Hispanic, Asian—and can be counted on to collapse further as we proceed, until there are just two categories left, which are not determined by ethnicity but purely by access to the machinery of power.
Arnold Toynbee, whose immensely detailed exploration of this process remains the best account for our purposes, called those two the dominant minority and the internal proletariat. The dominant minority is the governing elite of a civilization in its last phases, a group of people united not by ethnic, cultural, religious, or ideological ties, but purely by their success in either clawing their way up the social ladder to a position of power, or hanging on to a position inherited from their forebears. Toynbee draws a sharp division between a dominant minority and the governing elite of a civilization that hasn’t yet begun to decline, which he calls a creative minority. The difference is that a creative minority hasn’t yet gone through the descent into senility that afflicts elites, and still recalls its dependence on the loyalty of those further down the social ladder; a dominant minority or, in my terms, a senile elite has lost track of that, and has to demand and enforce obedience because it can no longer inspire respect.
Everyone else in a declining civilization belongs to the second category, the internal proletariat. Like the dominant minority, the internal proletariat has nothing to unite it but its relationship to political power: it consists of all those people who have none. In the face of that fact, other social divisions gradually evaporate.  Social hierarchies are a form of capital, and like any form of capital, they have maintenance costs, which are paid out in the form of influence and wealth.   The higher someone stands in the social hierarchy, the more access to influence and wealth they have; that’s their payoff for cooperating with the system and enforcing its norms on those further down.
As resources run short and a civilization in decline has to start cutting its maintenance costs, though, the payoffs get cut. For obvious reasons, the higher someone is on the ladder to begin with, the more influence they have over whose payoffs get cut, and that reliably works out to “not mine.” The further down you go, by contrast, the more likely people are to get the short end of the stick. That said, until the civilization actually comes apart, there’s normally a floor to the process, somewhere around the minimum necessary to actually sustain life; an unlucky few get pushed below this, but normally it’s easier to maintain social order when the very poor get just enough to survive. Thus social hierarchies disintegrate from the bottom up, as more and more people on the lower rungs of the ladder are pushed down to the bottom, erasing the social distinctions that once differentiated them from the lowest rung.
That happens in society as a whole; it also happens in each of the broad divisions of the caste system—in the United States, those would be the major ethnic divisions. The many shades of relative whiteness that used to divide white Americans into an intricate array of castes, for instance, have almost entirely gone by the boards; you have to go pretty far up the ladder to find white Americans who differentiate themselves from other white Americans on the basis of whose descendants they are. Further down the ladder, Americans of Italian, Irish, and Polish descent—once strictly defined castes with their own churches, neighborhoods, and institutions—now as often as not think of themselves as white without further qualification.
The same process has gotten under way to one extent or another in the other major ethnic divisions of American society, and it’s also started to dissolve even those divisions among the growing masses of the very poor.  I have something of a front-row seat on that last process; I live on the edge of the low-rent district in an old mill town in the Appalachians, and shopping and other errands take me through the neighborhood on foot quite often. I walk past couples pushing baby carriages, kids playing in backyards or vacant lots, neighbors hanging out together on porches, and as often as not these days the people in these groups don’t all have the same skin color. Head into the expensive part of town and you won’t see that; the dissolution of the caste system hasn’t extended that far up the ladder—yet.
This is business as usual in a collapsing civilization.  Sooner or later, no matter how intricate the caste system you start with, you end up with a society divided along the lines sketched out by Toynbee, with a dominant minority defined solely by its access to power and wealth and an internal proletariat defined solely by its exclusion from these things. We’re not there yet, not in the United States; there are still an assortment of intermediate castes between the two final divisions of society—but as Bob Dylan said a long time ago, you don’t have to be a weatherman to know which way the wind is blowing.
The political implications of this shift are worth watching. As I’ve noted here more than once, ruling elites in mature civilizations don’t actually exercise power themselves; they issue general directives to their immediate subordinates, who hand them further down the pyramid; along the way the general directives are turned into specific orders, which finally go to the ordinary working Joes and Janes who actually do the work of maintaining the status quo against potential rivals, rebels, and dissidents. A governing elite that hasn’t yet gone senile knows that it has to keep the members of its overseer class happy, and provides them with appropriate perks and privileges toward this end. As the caste system starts to disintegrate due to a shortage of resources to meet maintenance costs, though, the salaries and benefits at the bottom of the overseer class get cut, and more and more of the work of maintaining the system is assigned to poorly paid, poorly trained, and poorly motivated temp workers whose loyalties don’t necessarily lie with their putative masters.
You might think that even an elite gone senile would have enough basic common sense left to notice that losing the loyalty of the people who keep the elite in power is a fatal error.  In practice, though, the disconnection between the world of the dominant elite and the world of the internal proletariat quickly becomes total, and the former can be completely convinced that everything is fine when the latter know otherwise. As I write this, there’s a timely example unfolding at Texas Health Presbyterian Hospital in Dallas, where hospital administrators have been insisting at the top of their lungs that every possible precaution was taken when the late Thomas Duncan was being treated there for Ebola. According to the nursing staff—two of whom have now come down with the disease—“every possible precaution” amounted to no training, inadequate protective gear, and work schedules that had nurses who treated Duncan go on to tend other patients immediately thereafter.
A few weeks ago, the US media was full of confident bluster about how our high-tech medical industry would swing into action and stop the disease in its tracks; the gap between those easy assurances and the Keystone Kops response currently under way in Dallas is the same, mutatis mutandis, as the gap between the august edicts proclaimed in the capital during the last years of every civilization and the chaos in the streets and on the borders. You can see the same gap at work every time the US government trots out the latest round of heavily massaged economic statistics claiming that prosperity is just around the corner, or—well, I could go on listing examples for any number of pages.
So the gap that opens up between the dominant minority and the internal proletariat is much easier to see from below than from above. Left to itself, that gap would probably keep widening until the dominant minority toppled into it. It’s an interesting regularity of history, though, that this process is almost never left to run its full length. Instead, another series of events overtakes it, with the same harsh consequences for the dominant minority.
To understand this it’s necessary to include another aspect of Toynbee’s analysis, and look at what’s going on just outside the borders of a civilization in decline. Civilizations prosper by preying on their neighbors; the mechanism may be invasion and outright pillage, demands for tribute backed up by the threat of armed force, unbalanced systems of exchange that concentrate wealth in an imperial center at the expense of the periphery, or what have you, but the process is the same in every case, and so are the results. One way or another, the heartland of every civilization ends up surrounded by an impoverished borderland, scaled according to the transport technologies of the era.  In the case of the ancient Maya, the borderland extended only a modest distance in any direction; in the case of ancient Rome, it extended north to the Baltic Sea and east up to the borders of Parthia; in the case of modern industrial society, the borderland includes the entire Third World.
However large the borderland may be, its inhabitants fill a distinctive role in the decline and fall of a civilization. Toynbee calls them the external proletariat; as a civilization matures, their labor provides a steadily increasing share of the wealth that keeps the civilization and its dominant elite afloat, but they receive essentially nothing in return, and they’re keenly aware of this. Civilizations in their prime keep their external proletariats under control by finding and funding compliant despots to rule over the borderlands and, not incidentally, redirect the rage of the external proletariat toward some target more expendable than the civilization’s dominant minority. Here again, though, maintenance costs are the critical issue. When a dominant minority can no longer afford the subsidies and regular military expeditions needed to keep its puppet despots on their thrones, and tries to maintain peace along the borders on the cheap, it invariably catalyzes the birth of the social form that brings it down.
Historians call it the warband: a group of young men whose sole trade is violence, gathered around a charismatic leader.  Warbands spring up in the borderlands of a civilization as the dominant minority or its pet despots lose their grip, and go through a brutally Darwinian process of evolution thereafter in constant struggle with each other and with every other present or potential rival in range. Once they start forming, there seems to be little that a declining civilization can do to derail that evolutionary process; warbands are born of chaos, their activities add to the chaos, and every attempt to pacify the borderlands by force simply adds to the chaos that feeds them. In their early days, warbands cover their expenses by whatever form of violent activity will pay the bills, from armed robbery to smuggling to mercenary service; as they grow, raids across the border are the next step; as the civilization falls apart and the age of migrations begins, warbands are the cutting edge of the process that shreds nations and scatters their people across the map.
The process of warband formation itself can quite readily bring a civilization down. Very often, though, the dominant minority of the declining civilization gives the process a good hard shove. As the chasm between the dominant minority and the internal proletariat becomes wider, remember, the overseer class that used to take care of crowd control and the like for the dominant minority becomes less and less reliable, as its morale and effectiveness are hammered by ongoing budget cuts, and the social barriers that once divided it from the people it is supposed to control have begun to dissolve, if they haven’t given way entirely. What’s the obvious option for a dominant minority that is worried about its ability to control the internal proletariat, can no longer rely on its own overseer class, and also has a desperate need to find something to distract the warbands on its borders?
They hire the warbands, of course.
That’s what inspired the Romano-British despot Vortigern to hire the Saxon warlord Hengist and three shiploads of his heavily armed friends to help keep the peace in Britannia after the legions departed. That’s what led the Fujiwara family, the uncrowned rulers of Japan, to hire uncouth samurai from the distant, half-barbarous Kanto plain to maintain peace in the twilight years of the Heian period. That’s why scores of other ruling elites have made the obvious, logical, and lethal choice to hire their own replacements and hand over the actual administration of power to them.
That latter moment is the one toward which all the political trends examined in the last four posts in this sequence converge. The disintegration of social hierarchies, the senility of ruling elites, and the fossilization of institutions all lead to the hour of the knife, the point at which those who think they still rule a civilization discover the hard way—sometimes the very hard way—that effective power has transferred to new and more muscular hands. Those of the elites who attempt to resist this transfer rarely survive the experience. Those who accommodate themselves to the new state of affairs may be able to prosper for a time, but only so long as their ability to manipulate what’s left of the old system makes them useful to its new overlords. As what was once a complex society governed by bureaucratic institutions dissolves into a much simpler society governed by the personal rule of warlords, that skill set does not necessarily wear well.
In some cases—Hengist is an example—the warlords allow the old institutions to fall to pieces all at once, and the transition from an urban civilization to a protofeudal rural society takes place in a few generations at most. In others—the samurai of the Minamoto clan, who came out on top in the furious struggles that surrounded the end of the Heian period, are an example here—the warlords try to maintain the existing order of society as best they can, and get dragged down by the same catabolic trap that overwhelmed their predecessors. In an unusually complex case—for example, post-Roman Italy—one warlord after another can seize what’s left of the institutional structure of a dead empire, try to run it for a while, and then get replaced by someone else with the same agenda, each change driving one more step down the long stair that turned the Forum into a sheep pasture.
Exactly how this process will play out in the present case is impossible to predict in advance. We’ve got warband formation well under way in quite a few corners of industrial civilization’s borderlands, the southern border of the United States among them; we’ve got a dominant minority far advanced in the state of senility described in an earlier post; we’ve got a society equally well advanced in the dissolution of castes into dominant minority and internal proletariat. Where we are now in the process is clear enough; what will come out the other side, which will be discussed in a future post, is equally clear; the exact series of steps between them is of less importance—except, of course, to those who have the most to fear when the hour of the knife arrives.

****************
In other news, I'm pleased to announce that my latest book from New Society Publishers, After Progress: Reason, Religion, and the End of the Industrial Age, is now available for preorder, with a 20% discount off the cover price as an additional temptation. Those readers who enjoyed last year's series of posts on religion and the end of progress will find this very much to their taste.

Dark Age America: The Collapse of Political Complexity

Wed, 2014-10-08 16:10
The senility that afflicts ruling elites in their last years, the theme of the previous post in this sequence, is far from the only factor leading the rich and influential members of a failing civilization to their eventual destiny as lamppost decorations or some close equivalent. Another factor, at least as important, is a lethal mismatch between the realities of power in an age of decline and the institutional frameworks inherited from a previous age of ascent.
That sounds very abstract, and appropriately so. Power in a mature civilization is very abstract, and the further you ascend the social ladder, the more abstract it becomes. Conspiracy theorists of a certain stripe have invested vast amounts of time and effort in quarrels over which specific group of people it is that runs everything in today’s America. All of it was wasted, because the nature of power in a mature civilization precludes the emergence of any one center of power that dominates all others.
Look at the world through the eyes of an elite class and it’s easy to see how this works. Members of an elite class compete against one another to increase their own wealth and influence, and form alliances to pool resources and counter the depredations of their rivals. The result, in every human society complex enough to have an elite class in the first place, is an elite composed of squabbling factions that jealously resist any attempt at further centralization of power. In times of crisis, that resistance can be overcome, but in less troubled times, any attempt by an individual or faction to seize control of the whole system faces the united opposition of the rest of the elite class.
One result of the constant defensive stance of elite factions against each other is that as a society matures, power tends to pass from individuals to institutions. Bureaucratic systems take over more and more of the management of political, economic, and cultural affairs, and the policies that guide the bureaucrats in their work slowly harden until they are no more subject to change than the law of gravity.  Among its other benefits to the existing order of society, this habit—we may as well call it policy mummification—limits the likelihood that an ambitious individual can parlay control over a single bureaucracy into a weapon against his rivals.
Our civilization is no exception to any of this.  In the modern industrial world, some bureaucracies are overtly part of the political sphere; others—we call them corporations—are supposedly apart from government, and still others like to call themselves “non-governmental organizations” as a form of protective camouflage. They are all part of the institutional structure of power, and thus function in practice as arms of government.  They have more in common than this; most of them have the same hierarchical structure and organizational culture; those that are large enough to matter have executives who went to the same schools, share the same values, and crave the same handouts from higher up the ladder. No matter how revolutionary their rhetoric, for that matter, upsetting the system that provides them with their status and its substantial benefits is the last thing any of them want to do.
All these arrangements make for a great deal of stability, which the elite classes of mature civilizations generally crave. The downside is that it’s not easy for a society that’s proceeded along this path to change its ways to respond to new circumstances. Getting an entrenched bureaucracy to set aside its mummified policies in the face of changing conditions is generally so difficult that it’s often easier to leave the old system in place while redirecting all its important functions to another, newly founded bureaucracy oriented toward the new policies. If conditions change again, the same procedure repeats, producing a layer cake of bureaucratic organizations that all supposedly exist to do the same thing.
Consider, as one example out of many, the shifting of responsibility for US foreign policy over the years. Officially, the State Department has charge of foreign affairs; in practice, its key responsibilities passed many decades ago to the staff of the National Security Council, and more recently have shifted again to coteries of advisers assigned to the Office of the President.  In each case, what drove the shift was the attachment of the older institution to a set of policies and procedures that stopped being relevant to the world of foreign policy—in the case of the State Department, the customary notions of old-fashioned diplomacy; in the case of the National Security Council, the bipolar power politics of the Cold War era—but could not be dislodged from the bureaucracy in question due to the immense inertia of policy mummification in institutional frameworks.
The layered systems that result are not without their practical advantages to the existing order. Multiple bureaucracies provide even more stability than a single one, since it’s often necessary for the people who actually have day-to-day responsibility for this or that government function to get formal approval from the top officials of the agency or agencies that used to have that responsibility. Even when those officials no longer have any formal way to block a policy they don’t like, the personal and contextual nature of elite politics means that informal options usually exist. Furthermore, since the titular headship of some formerly important body such as the US State Department confers prestige but not power, it makes a good consolation prize to be handed out to also-rans in major political contests, a place to park well-connected incompetents, or what have you.
Those of my readers who recall the discussion of catabolic collapse three weeks ago will already have figured out one of the problems with the sort of system that results from the processes just sketched out: the maintenance bill for so baroque a form of capital is not small. In a mature civilization, a large fraction of available resources and economic production ends up being consumed by institutions that no longer have any real function beyond perpetuating their own existence and the salaries and prestige of their upper-level functionaries. It’s not unusual for the maintenance costs of unproductive capital of this kind to become so great a burden on society that the burden in itself forces a crisis—that was one of the major forces that brought about the French Revolution, for instance. Still, I’d like to focus for a moment on a different issue, which is the effect that the institutionalization of power and the multiplication of bureaucracies have on the elites who allegedly run the system from which they so richly benefit.
France in the years leading up to the Revolution makes a superb example, one that John Kenneth Galbraith discussed with his trademark sardonic humor in his useful book The Culture of Contentment. The role of ruling elite in pre-1789 France was occupied by close equivalents of the people who fill that same position in America today: the “nobility of the sword,” the old feudal aristocracy, who had roughly the same role as the holders of inherited wealth in today’s America, and the “nobility of the robe,” who owed their position to education, political office, and a talent for social climbing, and thus had roughly the same role as successful Ivy League graduates do here and now. These two elite classes sparred constantly against each other, and just as constantly competed against their own peers for wealth, influence, and position.
One of the most notable features of both sides of the French elite in those days was just how little either group actually had to do with the day-to-day management of public affairs, or for that matter of their own considerable wealth. The great aristocratic estates of the time were bureaucratic societies in miniature, ruled by hierarchies of feudal servitors and middle-class managers, while the hot new financial innovation of the time, the stock market, allowed those who wanted their wealth in a less tradition-infested form to neglect every part of business ownership but the profits. Those members of the upper classes who held offices in government, the church, and the other venues of power presided decorously over institutions that were perfectly capable of functioning without them.
The elite classes of mature civilizations almost always seek to establish arrangements of this sort, and understandably so. It’s easy to recognize the attractiveness of a state of affairs in which the holders of wealth and influence get all the advantages of their positions and have to put up with as few as possible of the inconveniences thereof. That said, this attraction is also a death wish, because it rarely takes the people who actually do the work long to figure out that a ruling class in this situation has become entirely parasitic, and that society would continue to function perfectly well were something suitably terminal to happen to the titular holders of power.
This is why most of the revolutions in modern history have taken place in nations in which the ruling elite has followed its predilections and handed over all its duties to subordinates. In the case of the American revolution, the English nobility had been directly involved in colonial affairs in the first century or so after Jamestown. Once it left the colonists to manage their own affairs, the latter needed very little time to realize that the only thing they had to lose by seeking independence was the steady hemorrhage of wealth from the colonies to England. In the case of the French and Russian revolutions, much the same thing happened without the benefit of an ocean in the way: the middle classes who actually ran both societies recognized that the monarchy and aristocracy had become disposable, and promptly disposed of them once a crisis made it possible to do so.
The crisis just mentioned is a significant factor in the process. Under normal conditions, a society with a purely decorative ruling elite can keep on stumbling along indefinitely on sheer momentum. It usually takes a crisis—Britain’s military response to colonial protests in 1775, the effective bankruptcy of the French government in 1789, the total military failure of the Russian government in 1917, or what have you—to convince the people who actually handle the levers of power that their best interests no longer lie with their erstwhile masters. Once the crisis hits, the unraveling of the institutional structures of authority can happen with blinding speed, and the former ruling elite is rarely in a position to do anything about it. All they have ever had to do, and all they know how to do, is issue orders to deferential subordinates. When there are none of these latter to be found, or (as more often happens) when the people to whom the deferential subordinates are supposed to pass the orders are no longer interested in listening, the elite has no options left.
The key point to be grasped here is that power is always contextual. A powerful person is a person able to exert particular kinds of power, using particular means, on some particular group of other people, and someone thus can be immensely powerful in one setting and completely powerless in another. What renders the elite classes of a mature society vulnerable to a total collapse of power is that they almost always lose track of this unwelcome fact. Hereditary elites are particularly prone to fall into the trap of thinking of their position in society as an accurate measure of their own personal qualifications to rule, but it’s also quite common for those who are brought into the elite from the classes immediately below to think of their elevation as proof of their innate superiority. That kind of thinking is natural for elites, but once they embrace it, they’re doomed.
It’s dangerous enough for elites to lose track of the contextual and contingent nature of their power when the mechanisms through which power is enforced can be expected to remain in place—as it was in the American colonies in 1776, France in 1789, and Russia in 1917. It’s far more dangerous if the mechanisms of power themselves are in flux. That can happen for any number of reasons, but the one that’s of central importance to the theme of this series of posts is the catabolic collapse of a declining civilization, in which the existing mechanisms of power come apart because their maintenance costs can no longer be met.
That poses at least two challenges to the ruling elite, one obvious and the other less so. The obvious one is that any deterioration in the mechanisms of power limits the ability of the elite to keep the remaining mechanisms of power funded, since a great deal of power is always expended in paying the maintenance costs of power. Thus in the declining years of Rome, for example, the crucial problem the empire faced was precisely that the sprawling system of imperial political and military administration cost more than the imperial revenues could support, but the weakening of that system made it even harder to collect the revenues on which the rest of the system depended, and forced more of what money there was to go for crisis management. Year after year, as a result, roads, fortresses, and the rest of the infrastructure of Roman power sank under a burden of deferred maintenance and malign neglect, and the consequences of each collapse became more and more severe because there was less and less in the treasury to pay for rebuilding when the crisis was over.
That’s the obvious issue. More subtle is the change in the nature of power that accompanies the decay in the mechanisms by which it’s traditionally been used. Power in a mature civilization, as already noted, is very abstract, and the people who are responsible for administering it at the top of the social ladder rise to those positions precisely because of their ability to manage abstract power through the complex machinery that a mature civilization provides them. As the mechanisms collapse, though, power stops being abstract in a hurry, and the skills that allow the manipulation of abstract power have almost nothing in common with the skills that allow concrete power to be wielded.
Late imperial Rome, again, is a fine example. There, as in other mature civilizations, the ruling elite had a firm grip on the intricate mechanisms of social control at their uppermost and least tangible end. The inner circle of each imperial administration—which sometimes included the emperor himself, and sometimes treated him as a sock puppet—could rely on sprawling many-layered civil and military bureaucracies to put their orders into effect. They were by and large subtle, ruthless, well-educated men, schooled in the intricacies of imperial administration, oriented toward the big picture, and completely dependent on the obedience of their underlings and the survival of the Roman system itself.
The people who replaced them, once the empire actually fell, shared none of these characteristics except the ruthlessness. The barbarian warlords who carved up the corpse of Roman power had a completely different set of skills and characteristics: raw physical courage, a high degree of competence in the warrior’s trade, and the kind of charisma that attracts cooperation and obedience from those who have many other options. Their power was concrete, personal, and astonishingly independent of institutional forms. That’s why Odoacer, whose remarkable career was mentioned in an earlier post in this sequence, could turn up alone in a border province, patch together an army out of a random mix of barbarian warriors, and promptly lead them to the conquest of Italy.
There were a very few members of the late Roman elite who could exercise power in the same way as Odoacer and his equivalents, and they’re the exceptions that prove the rule. The greatest of them, Flavius Aetius, spent many years of his youth as a hostage in the royal courts of the Visigoths and the Huns and got his practical education there, rather than in Roman schools. He was for all practical purposes a barbarian warlord who happened to be Roman by birth, and played the game as well as any of the other warlords of his age. His vulnerabilities were all on the Roman side of the frontier, where the institutions of Roman society still retained a fingernail grip on power, and so—having defeated the Visigoths, the Franks, the Burgundians, and the massed armies of Attila the Hun, all for the sake of Rome’s survival—he was assassinated by the emperor he served.
Fast forward close to two thousand years and it’s far from difficult to see how the same pattern of elite extinction through the collapse of political complexity will likely work out here in North America. The ruling elites of our society, like those of the late Roman Empire, are superbly skilled at manipulating and parasitizing a fantastically elaborate bureaucratic machine which includes governments, business firms, universities, and many other institutions among its components. That’s what they do, that’s what they know how to do, and that’s what all their training and experience has prepared them to do.  Thus their position is exactly equivalent to that of French aristocrats before 1789, but they’re facing the added difficulty that the vast mechanism on which their power depends has maintenance costs that their civilization can no longer meet. As the machine fails, so does their power.
Nor are they particularly well prepared to make the transition to a radically different way of exercising power. Imagine for a moment that one of the current US elite—an executive from a too-big-to-fail investment bank, a top bureaucrat from inside the DC beltway, a trust-fund multimillionaire with a pro forma job at the family corporation, or what have you—were to turn up in some chaotic failed state on the fringes of the industrial world, with no money, no resources, no help from abroad, and no ticket home. What’s the likelihood that, without anything other than whatever courage, charisma, and bare-knuckle fighting skills he might happen to have, some such person could equal Odoacer’s feat, win the loyalty and obedience of thousands of gang members and unemployed mercenaries, and lead them in a successful invasion of a neighboring country?
There are people in North America who could probably carry off a feat of that kind, but you won’t find them in the current ruling elite. That in itself defines part of the path to dark age America: the replacement of a ruling class that specializes in managing abstract power through institutions with a ruling class that specializes in expressing power up close and in person, using the business end of the nearest available weapon. The process by which the new elite emerges and elbows its predecessors out of the way, in turn, is among the most reliable dimensions of decline and fall; we’ll talk about it next week.

The Buffalo Wind

Wed, 2014-10-01 17:50
I've talked more than once in these essays about the challenge of discussing the fall of civilizations when the current example is picking up speed right outside the window.  In a calmer time, it might be possible to treat the theory of catabolic collapse as a pure abstraction, and contemplate the relationship between the maintenance costs of capital and the resources available to meet those costs without having to think about the ghastly human consequences of shortfall. As it is, when I sketch out this or that detail of the trajectory of a civilization’s fall, the commotions of our time often bring an example of that detail to the surface, and sometimes—as now—those lead in directions I hadn’t planned to address.
This is admittedly a time when harbingers of disaster are not in short supply. I was amused a few days back to see yet another denunciation of economic heresy in the media. This time the author was one Matt Egan, the venue was CNN/Money, and the target was Zero Hedge, one of the more popular sites on the doomward end of the blogosphere. The burden of the CNN/Money piece was that Zero Hedge must be wrong in questioning the giddy optimism of the stock market—after all, stock values have risen to record heights, so what could possibly go wrong?
Zero Hedge’s pseudonymous factotum Tyler Durden had nothing to say to CNN/Money, and quite reasonably so.  He knows as well as I do that in due time, Egan will join that long list of pundits who insisted that the bubble du jour would keep on inflating forever, and got to eat crow until the end of their days as a result. He's going to have plenty of company; the chorus of essays and blog posts denouncing peak oil in increasingly strident tones has built steadily in recent months. I expect that chorus to rise to a deafening shriek right about the time the bottom drops out of the fracking bubble.
Meanwhile the Ebola epidemic has apparently taken another large step toward fulfilling its potential as the Black Death of the 21st century. A month ago, after reports surfaced of Ebola in a southwestern province, Sudan slapped a media blackout on reports of Ebola cases in the country. Maybe there’s an innocent reason for this policy, but I confess I can’t think of one. Sudan is a long way from the West African hotspots of the epidemic, and unless a local outbreak has coincidentally taken place—which is of course possible—this suggests the disease has already spread along the ancient east-west trade routes of the Sahel. If the epidemic gets a foothold in Sudan, the next stops are the teeming cities of Egypt and the busy ports of East Africa, full of shipping from the Gulf States, the Indian subcontinent, and eastern Asia.
I’ve taken a wry amusement in the way that so many people have reacted to the spread of the epidemic by insisting that Ebola can’t possibly be a problem outside the West African countries it’s currently devastating. Here in the US, the media’s full of confident-sounding claims that our high-tech health care system will surely keep Ebola at bay. It all looks very encouraging, unless you happen to know that diseases spread by inadequate handwashing are common in US hospitals, only a small minority of facilities have the high-end gear necessary to isolate an Ebola patient, and the Ebola patient just found in Dallas got misdiagnosed and sent home with a prescription for antibiotics, exposing plenty of people to the virus.
More realistically, Laurie Garrett, a respected figure in the public health field, warns that “you are not nearly scared enough about Ebola.” In the peak oil community, Mary Odum, whose credentials as ecologist and nurse make her eminently qualified to discuss the matter, has tried to get the same message across. Few people are listening.
Like the frantic claims that peak oil has been disproven and the economy isn’t on the verge of another ugly slump, the insistence that Ebola can’t possibly break out of its current hot zones is what scholars of the magical arts call an apotropaic charm—that is, an attempt to turn away an unwanted reality by means of incantation. In the case of Ebola, the incantation usually claims that the West African countries currently at ground zero of the epidemic are somehow utterly unlike all the other troubled and impoverished Third World nations it hasn’t yet reached, and that the few thousand deaths racked up so far by the epidemic is a safe measure of its potential.
Those of my readers who have been thinking along these lines are invited to join me in a little thought experiment. According to the World Health Organization, the number of cases of Ebola in the current epidemic is doubling every twenty days, and could reach 1.4 million by the beginning of 2015. Let’s round down, and say that there are one million cases on January 1, 2015.  Let’s also assume for the sake of the experiment that the doubling time stays the same. Assuming that nothing interrupts the continued spread of the virus, and cases continue to double every twenty days, in what month of what year will the total number of cases equal the human population of this planet? Go ahead and do the math for yourself.  If you’re not used to exponential functions, it’s particularly useful to take a 2015 calendar, count out the 20-day intervals, and see exactly how the figure increases over time.
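For readers who would rather let a machine count out the intervals, here’s a minimal sketch in Python. The starting case count and the twenty-day doubling time come straight from the paragraph above; the world population figure, roughly 7.2 billion as of 2014, is my own assumption rather than anything in the WHO projection:

    from datetime import date, timedelta

    cases = 1_000_000                  # assumed case count on January 1, 2015
    population = 7_200_000_000         # rough 2014 world population (an assumption)
    doubling_time = timedelta(days=20) # the WHO doubling estimate cited above

    day = date(2015, 1, 1)
    while cases < population:
        day += doubling_time           # count out one 20-day interval
        cases *= 2                     # and let the case count double
    print(day.isoformat(), format(cases, ","))

Run under those assumptions, the loop gets through its doublings in a disconcertingly small number of intervals; I’ll leave the date it prints for you to discover, since discovering it is the point of the exercise.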
Now of course this is a thought experiment, not a realistic projection. In the real world, the spread of an epidemic disease is a complex process shaped by modes of human contact and transport.  There are bottlenecks that slow propagation across geographical and political barriers, and different cultural practices that can help or hinder the transmission of the Ebola virus. It’s also very likely that some nations, especially in the developed world, will be able to mobilize the sanitation and public-health infrastructure to stop a self-sustaining epidemic from getting under way on their territory before a vaccine can be developed and manufactured in sufficient quantity to matter.
Most members of our species, though, live in societies that don’t have those resources, and the steps that could keep Ebola from spreading to the rest of the Third World are not being taken. Unless massive resources are committed to that task soon—as in before the end of this year—the possibility exists that when the pandemic finally winds down a few years from now, two to three billion people could be dead. We need to consider the possibility that the peak of global population is no longer an abstraction set comfortably off somewhere in the future. It may be knocking at the future’s door right now, shaking with fever and dripping blood from its gums.
That ghastly possibility is still just that, a possibility. It can still be averted, though the window of opportunity in which that could be done  is narrowing with each passing day. Epizootic disease is one of the standard ways by which an animal species in overshoot has its population cut down to levels that the carrying capacity of the environment can support, and the same thing has happened often enough with human beings. It’s not the only way for human numbers to decline; I’ve discussed here at some length the possibility that that could happen by way of ordinary demographic contraction—but we’re now facing a force that could make the first wave of population decline happen in a much faster and more brutal way.
Is that the end of the world? Of course not. Any of my readers who have read a good history of the Black Death—not a bad idea just now, all things considered—know that human societies can take a massive population loss from pandemic disease and still remain viable. That said, any such event is a shattering experience, shaking political, economic, cultural, and spiritual institutions and beliefs down to their core. In the present case, the implosion of the global economy and the demise of the tourism and air travel industries are only the most obvious and immediate impacts. There are also broader and deeper impacts, cascading down from the visible realms of economics and politics into the too rarely noticed substructure of ecological relationships that sustain human existence.
And this, in turn, has me thinking of buffalo.
In there among all the other news stories of the last week, by turns savage and silly, is a report from Montana, where representatives of Native American peoples from the prairies of the United States and Canada signed a treaty pledging their tribes to cooperate in reintroducing wild buffalo to the Great Plains. I doubt most people in either country heard of it, and fewer still gave it a second thought. There have been herds of domesticated buffalo in North America for a good many decades now, but only a few very small herds, on reservations or private nature sanctuaries, have been let loose to wander freely as their ancestors did.
A great many of the white residents of the Great Plains are furiously opposed to the project. It’s hard to find any rational reason for that opposition—the Native peoples have merely launched a slow process of putting wild buffalo herds on their own tribal property, not encroaching on anyone or anything else—but rational reasons are rarely that important in human motivation, and the nonrational dimension here, as so often, is the determining factor. The entire regional culture of the Great Plains centers on the pioneer experience, the migration that swept millions of people westward onto the prairies on the quest to turn some of North America’s bleakest land into a cozy patchwork of farms and towns, nature replaced by culture across thousands of miles where the buffalo once roamed.
The annihilation of the buffalo was central to that mythic quest, as central as the dispossession of the Native peoples and the replacement of the tallgrass prairie by farm crops. A land with wild buffalo herds upon it is not a domesticated land. Those who saw the prairies in their wild state brought back accounts that sound like something out of mythology: grass so tall a horseman could ride off into it and never be seen again, horizons as level and distant as those of the open ocean, and the buffalo: up to sixty million of them, streaming across the landscape in herds that sometimes reached from horizon to horizon.  The buffalo were the keystone of the prairie ecosystem, and their extermination was an essential step in shattering that ecosystem and extracting the richness of its topsoil for temporary profit.
A little while back I happened to see a video online about the ecological effects of reintroducing wolves to Yellowstone Park. It’s an interesting story:  the return of wolves, most of a century after their extermination, caused deer to stay away from areas of the park where they were vulnerable to attack.  Once those areas were no longer being browsed by deer, their vegetation changed sharply, making the entire park more ecologically diverse; species that had been rare or absent in the park reappeared to take advantage of the new, richer habitat.  Even the behavior of the park’s rivers changed, as vegetation shifts slowed riverine erosion.
All this was narrated by George Monbiot in a tone of gosh-wow wonderment that irritated me at first hearing. Surely it would be obvious, I thought, that changing one part of an ecosystem would change everything else, and that removing or reintroducing one of the key species in the ecosystem would have particularly dramatic effects! Of course I stopped then and laughed, since for most people it’s anything but obvious. Our entire culture is oriented toward machines, not living systems, and what defines a machine is precisely that it’s meant to do exactly what it’s told and nothing else. Push this button, and that happens; turn this switch, and something else happens; pull this trigger, and the buffalo falls dead.  We’re taught to think of the world as though that same logic controlled its responses to our actions, and then get blindsided when it acts like a whole system instead.
I’d be surprised to hear any of the opponents of reintroducing wild buffalo talk in so many words about the buffalo as a keystone species of the prairie ecosystem, or suggest that its return to the prairies might set off a trophic cascade—that’s the technical term for the avalanche of changes, spreading down the food web to its base, that the Yellowstone wolves set in motion once they sniffed the wind, caught the tasty scent of venison, and went to look. Still, it’s one of the basic axioms of the Druid teachings that undergird these posts that people know more than they think they know, and a gut-level sense of the cascade of changes that would be kickstarted by wild buffalo may be helping drive their opposition.
That said, there’s a further dimension. It’s not just in an ecological sense that a land with wild buffalo herds upon it is not a domesticated land. To the descendants of the pioneers, the prairie, the buffalo, and the Indian are what their ancestors came West to destroy. Behind that identification lies the whole weight of the mythology of progress, the conviction that it’s the destiny of the West to be transformed from wilderness to civilization. The return of wild buffalo is unthinkable from within the pioneer worldview, because it means that “the winning of the West” was not a permanent triumph but a temporary condition, which may yet be followed in due time by the losing of the West.
Of course there were already good reasons to think along those unthinkable lines, long before the Native tribes started drafting their treaty.  The economics of dryland farming on the Great Plains never really made that much sense. Homestead acts and other government subsidies in the 19th century, and the economic impacts of two world wars in the 20th, made farming the Plains look viable, in much the same way that huge government subsidies make nuclear power look viable today. In either case, take away the subsidies and you’ve got an arrangement without a future. That’s the subtext behind the vacant and half-vacant towns you’ll find all over the West these days. That the fields and farms and towns may be replaced in turn by prairie grazed by herds of wild buffalo is unthinkable from within the pioneer worldview, too—but across the West, the unthinkable is increasingly the inescapable.
Equally, it’s unthinkable to most people in the industrial world today that a global pandemic could brush aside the world’s terminally underfunded public health systems and snuff out millions or billions of lives in a few years. It’s just as unthinkable to most people in the industrial world that the increasingly frantic efforts of wealthy elites to prop up the global economy and get it to start generating prosperity again will fail, plunging the world into irrevocable economic contraction. It’s among the articles of faith of the industrial world that the future must lead onward and upward, that the sort of crackpot optimism that draws big crowds at TED Talks counts as realistic thinking about the future, and that the limits to growth can’t possibly get in the way of our craving for limitlessness. Here again, though, the unthinkable is becoming the inescapable.
In each of these cases, and many others, the unthinkable can be described neatly as the possibility that a set of changes that we happen to have decked out with the sanctified label of “progress” might turn out instead to be a temporary and reversible condition. The agricultural settlement of the Great Plains, the relatively brief period when humanity was not troubled by lethal pandemics, and the creation of a global economy powered by extravagant burning of fossil fuels were all supposed to be permanent changes, signs of progress and Man’s Conquest of Nature. No one seriously contemplated the chance that each of those changes would turn out to be transient, that they would shift into reverse under the pressure of their own unintended consequences, and that the final state of each whole system would have more in common with its original condition than with the state it briefly attained in between.
There are plenty of ways to talk about the implications of that great reversal, but the one that speaks to me now comes from the writings of Ernest Thompson Seton, whose nature books were a fixture of my childhood and who would probably be the patron saint of this blog if Druidry had patron saints. He spent the whole of his adult career as naturalist, artist, writer, storyteller, and founder of a youth organization—Woodcraft, which taught wilderness lore, practical skills, and democratic self-government to boys and girls alike, and might be well worth reviving now—fighting for a world in which there would still be a place for wild buffalo roaming the prairies: fought, and lost. (It would be one of his qualifications for Druid sainthood that he knew he would lose, and kept fighting anyway. The English warriors at the battle of Maldon spoke that same language: “Will shall be sterner, heart the stronger, mood shall be more as our might falters.”)
He had no shortage of sound rational reasons for his lifelong struggle, but now and again, in his writings or when talking around the campfire, he would set those aside and talk about deeper issues. He spoke of the “Buffalo Wind,” the wind off the open prairies that tingles with life and wonder, calling humanity back to its roots in the natural order, back to harmony with the living world: not rejecting the distinctive human gifts of culture and knowledge, but holding them in balance with the biological realities of our existence and the needs of the biosphere. I’ve felt that wind; so, I think, have most Druids, and so have plenty of other people who couldn’t tell a Druid from a dormouse but who feel in their bones that industrial humanity’s attempted war against nature is as senseless as a plant trying to gain its freedom by pulling itself up by the roots.
One of the crucial lessons of the Buffalo Wind, though, is that it’s not always gentle. It can also rise to a shrieking gale, tear the roofs off houses, and leave carnage in its wake. We can embrace the lessons that the natural world is patiently and pitilessly teaching us, in other words, or we can close our eyes and stop our ears until sheer pain forces the lessons through our barriers, but one way or another, we’re going to learn those lessons. It’s possible, given massively funded interventions and a good helping of plain dumb luck, that the current Ebola epidemic might be stopped before it spreads around the world. It’s possible that the global economy might keep staggering onward for another season, and that wild buffalo might be kept from roaming the Great Plains for a while yet. Those are details; the underlying issue—the inescapable collision between the futile fantasy of limitless economic expansion on a finite planet and the hard realities of ecology, geology, and thermodynamics—is not going away.
The details also matter, though; in a very old way of speaking, the current shudderings of the economy, the imminent risk of pandemic, and the distant sound of buffalo bellowing in the Montana wind are omens. The Buffalo Wind is rising now, keening in the tall grass, whispering in the branches and setting fallen leaves aswirl. I could be mistaken, but I think that not too far in the future it will become a storm that will shake the industrial world right down to its foundations.

Dark Age America: The Senility of the Elites

Wed, 2014-09-24 17:50
Regular readers of this blog will no doubt recall that, toward the beginning of last month, I commented on a hostile review of one of my books that had just appeared in the financial blogosphere. At the time, I noted that the mainstream media normally ignore the critics of business as usual, and suggested that my readers might want to watch for similar attacks by more popular pundits, in more mainstream publications, on those critics who have more of a claim to conventional respectability than, say, archdruids. Such attacks, as I pointed out then, normally happen in the weeks immediately before business as usual slams face first into a brick wall of its own making.
Well, it’s happened. Brace yourself for the impact.
The pundit in question was no less a figure than Paul Krugman, who chose the opinion pages of the New York Times for a shrill and nearly fact-free diatribe lumping Post Carbon Institute together with the Koch brothers as purveyors of “climate despair.” PCI’s crime, in Krugman’s eyes, consists of noticing that the pursuit of limitless economic growth on a finite planet, with or without your choice of green spraypaint, is a recipe for disaster.  Instead of paying attention to such notions, he insists, we ought to believe the IMF and a panel of economists when they claim that replacing trillions of dollars of fossil fuel-specific infrastructure with some unnamed set of sustainable replacements will somehow cost nothing, and that we can have all the economic growth we want because, well, because we can, just you wait and see!
PCI’s Richard Heinberg responded with a crisp and tautly reasoned rebuttal pointing out the gaping logical and factual holes in Krugman’s screed, so there’s no need for me to cover the same ground here. Mind you, Heinberg was too gentlemanly to point out that the authorities Krugman cites aren’t exactly known for their predictive accuracy—the IMF in particular has become notorious in recent decades for insisting that austerity policies that have brought ruin to every country that has ever tried them are the one sure ticket to prosperity—but we can let that pass, too. What I want to talk about here is what Krugman’s diatribe implies for the immediate future.
Under normal circumstances, dissident groups such as Post Carbon Institute and dissident intellectuals such as Richard Heinberg never, but never, get air time in the mainstream media. At most, a cheap shot or two might be aimed at unnamed straw men while passing from one bit of conventional wisdom to the next. It’s been one of the most interesting details of the last few years that peak oil has actually been mentioned by name repeatedly by mainstream pundits: always, to be sure, in tones of contempt, and always in the context of one more supposed proof that a finite planet can too cough up infinite quantities of oil, but it’s been named. The kind of total suppression that happened between the mid-1980s and the turn of the millennium, when the entire subject vanished from the collective conversation of our society, somehow didn’t happen this time.
That says to me that a great many of those who were busy denouncing peak oil and the limits to growth were far less confident than they wanted to appear. You don’t keep on trying to disprove something that nobody believes, and of course the mere fact that oil prices and other quantitative measures kept on behaving the way peak oil theory said they would behave, rather than trotting obediently wherever peak oil critics such as Bjorn Lomborg and Daniel Yergin told them to go, didn’t help matters much. The cognitive dissonance between the ongoing proclamations of coming prosperity via fracking and the soaring debt load and grim financial figures of the fracking industry has added to the burden.
Even so, it’s only in extremis that denunciations of this kind shift from attacks on ideas to attacks on individuals. As I noted in the earlier post, one swallow does not a summer make, and one ineptly written book review by an obscure blogger on an obscure website denouncing an archdruid, of all people, might indicate nothing more than a bout of dyspepsia or a disappointing evening at the local singles bar.  When a significant media figure uses one of the world’s major newspapers of record to lash out at a particular band of economic heretics by name, on the other hand, we’ve reached the kind of behavior that only happens, historically speaking, when crunch time is very, very close. Given that we’ve also got a wildly overvalued stock market, falling commodity prices, and a great many other symptoms of drastic economic trouble bearing down on us right now, not to mention the inevitable unraveling of the fracking bubble, there’s a definite chance that the next month or two could see the start of a really spectacular financial crash.
While we wait for financiers to start raining down on Wall Street sidewalks, though, it’s far from inappropriate to continue with the current sequence of posts about the end of industrial civilization—especially as the next topic in line is the way that the elites of a falling civilization destroy themselves.
One of the persistent tropes in current speculations on the future of our civilization revolves around the notion that the current holders of wealth and influence will entrench themselves even more firmly in their positions as things fall apart. A post here back in 2007 criticized what was then a popular form of that trope, the claim that the elites planned to impose a “feudal-fascist” regime on the deindustrial world. That critique still applies; that said, it’s worth discussing what tends to happen to elite classes in the decline and fall of a civilization, and seeing what that has to say about the probable fate of the industrial world’s elite class as our civilization follows the familiar path.
It’s probably necessary to say up front that we’re not talking about the evil space lizards that haunt David Icke’s paranoid delusions, or for that matter the faux-Nietzschean supermen who play a parallel role in Ayn Rand’s dreary novels and even drearier pseudophilosophical rants. What we’re talking about, rather, is something far simpler, which all of my readers will have experienced in their own lives.  Every group of social primates has an inner core of members who have more access to the resources controlled by the group, and more influence over the decisions made by the group, than other members.  How individuals enter that core and maintain themselves there against their rivals varies from one set of social primates to another—baboons settle such matters with threat displays backed up with violence, church ladies do the same thing with social maneuvering and gossip, and so on—but the effect is the same: a few enter the inner core, the rest are excluded from it. That process, many times amplified, gives rise to the ruling elite of a civilization.
I don’t happen to know much about the changing patterns of leadership in baboon troops, but among human beings, there’s a predictable shift over time in the way that individuals gain access to the elite. When institutions are new and relatively fragile, it’s fairly easy for a gifted and ambitious outsider to bluff and bully his way into the elite. As any given institution becomes older and more firmly settled in its role, that possibility fades. What happens instead in a mature institution is that the existing members of the elite group select, from the pool of available candidates, those individuals who will be allowed to advance into the elite.  The church ladies just mentioned are a good example of this process in action; if any of my readers are doctoral candidates in sociology looking for a dissertation topic, I encourage them to consider joining a local church, and tracking the way the elderly women who run most of its social functions groom their own replacements and exclude those they consider unfit for that role.
That process is a miniature version of the way the ruling elite of the world’s industrial nations select new additions to their number. There, as among church ladies, there are basically two routes in. You can be born into the family of a member of the inner circle, and if you don’t run off the rails too drastically, you can count on a place in the inner circle yourself in due time. Alternatively, you can work your way in from outside by being suitably deferential and supportive to the inner circle, meeting all of its expectations and conforming to its opinions and decisions, until the senior members of the elite start treating you as a junior member and the junior members have to deal with you as an equal. You can watch that at work, as already mentioned, in your local church—and you can also watch it at work in the innermost circles of power and privilege in American life.
Here in America, the top universities are the places where the latter version of the process stands out in all its dubious splendor. To these universities, every autumn, come the children of rich and influential families to begin the traditional four-year rite of passage. It would require something close to a superhuman effort on their part to fail. If they don’t fancy attending lectures, they can hire impecunious classmates as “note takers” to do that for them.  If they don’t wish to write papers, the same principle applies, and the classmates are more than ready to help out, since that can be the first step to a career as an executive assistant, speechwriter, or the like. The other requirements of college life can be met in the same manner as needed, and the university inevitably looks the other way, knowing that they can count on a generous donation from the parents as a reward for putting up with Junior’s antics.
Those of my readers who’ve read the novels of Thomas Mann, and recall the satiric portrait of central European minor royalty in Royal Highness, already know their way around the sort of life I’m discussing here. Those who don’t may want to recall everything they learned about the education and business career of George W. Bush. All the formal requirements are met, every gracious gesture is in place:  the diploma, the prestigious positions in business or politics or the stateside military, maybe a book written by one of those impecunious classmates turned ghostwriter and published to bland and favorable reviews in the newspapers of record:  it’s all there, and the only detail that nobody sees fit to mention is that the whole thing could be done just as well by a well-trained cockatiel, and much of it is well within the capacities of a department store mannequin—provided, of course, that one of those impecunious classmates stands close by, pulling the strings that make the hand wave and the head nod.
The impecunious classmates, for their part, are aspirants to the second category mentioned above, those who work their way into the elite from outside. They also come to the same top universities every autumn, but they don’t get there because of who their parents happen to be. They get there by devoting every spare second to that goal from middle school on. They take the right classes, get the right grades, play the right sports, pursue the right extracurricular activities, and rehearse for their entrance interviews by the hour; they are bright, earnest, amusing, pleasant, because they know that that’s what they need to be in order to get where they want to go. Scratch that glossy surface and you’ll find an anxious conformist terrified of failing to measure up to expectations, and it’s a reasonable terror—most of them will in fact fail to do that, and never know how or why.
Once in an Ivy League university or the equivalent, they’re pretty much guaranteed passing grades and a diploma unless they go out of their way to avoid them. Most of them, though, will be shunted off to midlevel posts in business, government, or one of the professions. Only the lucky few will catch the eye of someone with elite connections, and be gently nudged out of their usual orbit into a place from which further advancement is possible. Whether the rich kid whose exam papers you ghostwrote takes a liking to you, and arranges to have you hired as his executive assistant when he gets his first job out of school, or the father of a friend of a friend meets you on some social occasion, chats with you, and later on has the friend of a friend mention in passing that you might consider a job with this senator or that congressman, or what have you, it’s not what you know, it’s who you know, not to mention how precisely you conform to the social and intellectual expectations of the people who have the power to give or withhold the prize you crave so desperately.
That’s how the governing elite of today’s America recruits new members. Mutatis mutandis, it’s how the governing elite of every stable, long-established society recruits new members. That procedure has significant advantages, and not just for the elites. Above all else, it provides stability. Over time, any elite self-selected in this fashion converges asymptotically on the standard model of a mature aristocracy, with an inner core of genial duffers surrounded by an outer circle of rigid conformists—the last people on the planet who are likely to disturb the settled calm of the social order. Like the lead-weighted keel of a deepwater sailboat, their inertia becomes a stabilizing force that only the harshest of tempests can overturn.
Inevitably, though, this advantage comes with certain disadvantages, two of which are of particular importance for our subject. The first is that stability and inertia are not necessarily a good thing in a time of crisis. In particular, if the society governed by an elite of the sort just described happens to depend for its survival on some unsustainable relationship with surrounding societies, the world of nature, or both, the leaden weight of a mature elite can make necessary change impossible until it’s too late for any change at all to matter. One of the most consistent results of the sort of selection process I’ve sketched out is the elimination of any tendency toward original thinking on the part of those selected; “creativity” may be lauded, but what counts as creativity in such a system consists solely of taking some piece of accepted conventional wisdom one very carefully measured step further than anyone else has quite gotten around to going yet.
In a time of drastic change, that sort of limitation is lethal. More deadly still is the other disadvantage I have in mind, which is the curious and consistent habit such elites have of blind faith in their own invincibility. The longer a given elite has been in power, and the more august and formal and well-aged the institutions of its power and wealth become, the easier it seems to be for the very rich to forget that their forefathers established themselves in that position by some form of more or less blatant piracy, and that they themselves could be deprived of it by that same means. Thus elites tend to, shall we say, “misunderestimate” exactly those crises and sources of conflict that pose an existential threat to the survival of their class and its institutions, precisely because they can’t imagine that an existential threat to these things could be posed by anything at all.
The irony, and it’s a rich one, is that the same conviction tends to become just as widespread outside elite circles as within it. The illusion of invincibility, the conviction that the existing order of things is impervious to any but the most cosmetic changes, tends to be pervasive in any mature society, and remains fixed in place right up to the moment that everything changes and the existing order of things is swept away forever. The intensity of the illusion very often has nothing to do with the real condition of the social order to which it applies; France in 1789 and Russia in 1917 were both brittle, crumbling, jerry-rigged hulks waiting for the push that would send them tumbling into oblivion, which they each received shortly thereafter—but next to no one saw the gaping vulnerabilities at the time. In both cases, even the urban rioters that applied the push were left standing there slack-jawed when they saw how readily the whole thing came crashing down.
The illusion of invincibility is far and away the most important asset a mature ruling elite has, because it discourages deliberate attempts at regime change from within. Everyone in the society, in the elite or outside it, assumes that the existing order is so firmly bolted into place that only the most apocalyptic events would be able to shake its grip. In such a context, most activists either beg for scraps from the tables of the rich or content themselves with futile gestures of hostility at a system they don’t seriously expect to be able to harm, while the members of the elite go their genial way, stumbling from one preventable disaster to another, convinced of the inevitability of their positions, and blissfully unconcerned with the possibility—which normally becomes a reality sooner or later—that their own actions might be sawing away at the old and brittle branch on which they’re seated.
If this doesn’t sound familiar to you, dear reader, you definitely need to get out more. The behavior of the holders of wealth and power in contemporary America, as already suggested, is a textbook example of the way that a mature elite turns senile. Consider the fact that the merry pranksters in the banking industry, having delivered a body blow to the global economy in 2008 and 2009 with worthless mortgage-backed securities, are now busy hawking equally worthless securities backed by income from rental properties. Each round of freewheeling financial fraud, each preventable economic slump, increases the odds that an already brittle, crumbling, and jerry-rigged system will crack under the strain, opening a window of opportunity that hostile foreign powers and domestic demagogues alike will not be slow to exploit. Do such considerations move the supposed defenders of the status quo to rein in the manufacture of worthless financial paper? Surely you jest.
It deserves to be said that at least one corner of the current American ruling elite has recently shown some faint echo of the hard common sense once possessed by its piratical forebears. Now of course the recent announcement that one of the Rockefeller charities is about to move some of its investment funds out of fossil fuel industries doesn’t actually justify the rapturous language lavished on it by activists; the amount of money being moved amounts to one tiny droplet in the overflowing bucket of Rockefeller wealth, after all.  For that matter, as the fracking industry founders under a soaring debt load and slumping petroleum prices warn of troubles ahead, pulling investment funds out of fossil fuel companies and putting them in industries that will likely see panic buying when the fracking bubble pops may be motivated by something other than a sudden outburst of environmental sensibility. Even so, it’s worth noting that the Rockefellers, at least, still remember that it’s crucial for elites to play to the audience, to convince those outside elite circles that the holders of wealth and power still have some vague sense of concern for the survival of the society they claim the right to lead.
Most members of America’s elite have apparently lost track of that. Even such modest gestures as the Rockefellers have just made seem to be outside the repertory of most of the wealthy and privileged these days.  Secure in their sense of their own invulnerability, they amble down the familiar road that led so many of their equivalents in past societies to dispossession or annihilation. How that pattern typically plays out will be the subject of next week’s post.

Dark Age America: The End of the Old Order

Wed, 2014-09-17 18:30
Lately I’ve been rereading some of the tales of H.P. Lovecraft. He’s nearly unique among the writers of American horror stories, in that his sense of the terrible was founded squarely on the worldview of modern science. He was a steadfast atheist and materialist, but unlike so many believers in that creed, his attitude toward the cosmos revealed by science was not smug satisfaction but shuddering horror. The first paragraph of his most famous story, “The Call of Cthulhu,” is typical:
“The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.”
It’s entirely possible that this insight of Lovecraft’s will turn out to be prophetic, and that a passionate popular revolt against the implications—and even more, the applications—of contemporary science will be one of the forces that propel us into the dark age ahead. Still, that’s a subject for a later post in this series. The point I want to make here is that Lovecraft’s image of people eagerly seeking such peace and safety as a dark age may provide them is not as ironic as it sounds. Outside the elites, which have a different and considerably more gruesome destiny than the other inhabitants of a falling civilization, it’s surprisingly rare for people to have to be forced to trade civilization for barbarism, either by human action or by the pressure of events.  By and large, by the time that choice arrives, the great majority are more than ready to make the exchange, and for good reason.
Let’s start by reviewing some basics. As I pointed out in a paper published online back in 2005—a PDF is available here—the process that drives the collapse of civilizations has a surprisingly simple basis: the mismatch between the maintenance costs of capital and the resources that are available to meet those costs. Capital here is meant in the broadest sense of the word, and includes everything in which a civilization invests its wealth: buildings, roads, imperial expansion, urban infrastructure, information resources, trained personnel, or what have you. Capital of every kind has to be maintained, and as a civilization adds to its stock of capital, the costs of maintenance rise steadily, until the burden they place on the civilization’s available resources can’t be supported any longer.
The only way to resolve that conflict is to allow some of the capital to be converted to waste, so that its maintenance costs drop to zero and any useful resources locked up in the capital can be put to other uses. Human beings being what they are, the conversion of capital to waste generally isn’t carried out in a calm, rational manner; instead, kingdoms fall, cities get sacked, ruling elites are torn to pieces by howling mobs, and the like. If a civilization depends on renewable resources, each round of capital destruction is followed by a return to relative stability and the cycle begins all over again; the history of imperial China is a good example of how that works out in practice.
If a civilization depends on nonrenewable resources for essential functions, though, destroying some of its capital yields only a brief reprieve from the crisis of maintenance costs. Once the nonrenewable resource base tips over into depletion, there’s less and less available each year thereafter to meet the remaining maintenance costs, and the result is the stairstep pattern of decline and fall so familiar from history:  each crisis leads to a round of capital destruction, which leads to renewed stability, which gives way to crisis as the resource base drops further. Here again, human beings being what they are, this process isn’t carried out in a calm, rational manner; the difference here is simply that kingdoms keep falling, cities keep getting sacked, ruling elites are slaughtered one after another in ever more inventive and colorful ways, until finally contraction has proceeded far enough that the remaining capital can be supported on the available stock of renewable resources.
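For readers who want to see the bones of that model laid bare, here is a deliberately crude sketch in Python. Every number in it is invented for illustration—none comes from the published paper—but it shows the basic mechanism: capital grows while surplus resources last, and once the nonrenewable base depletes, maintenance costs outrun resources and capital gets converted to waste.

    # Toy model of catabolic collapse. All parameters are illustrative
    # assumptions, not values drawn from the published theory.
    def simulate(years=200):
        capital = 100.0        # accumulated capital stock, arbitrary units
        nonrenewable = 500.0   # finite resource base
        renewable = 8.0        # renewable resource flow per year
        maint_rate = 0.10      # fraction of capital needing upkeep each year
        for year in range(years):
            extraction = min(10.0, nonrenewable)  # annual draw on the finite base
            nonrenewable -= extraction
            resources = renewable + extraction
            maintenance = capital * maint_rate
            if resources >= maintenance:
                capital += 0.5 * (resources - maintenance)  # surplus builds new capital
            else:
                capital -= maintenance - resources  # shortfall converts capital to waste
            yield year, capital

    for year, capital in simulate():
        if year % 25 == 0:
            print(f"year {year:3d}: capital = {capital:7.2f}")

Run it and the trajectory rises, stalls, and slides steadily downward once the nonrenewable base gives out—a smooth caricature of the jagged stairstep that real societies, with their sacked cities and falling kingdoms, trace on the way down.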
That’s a thumbnail sketch of the theory of catabolic collapse, the basic model of the decline and fall of civilizations that underlies the overall project of this blog. I’d encourage those who have questions about the details of the theory to go ahead and read the published version linked above; down the road a ways, I hope to publish a much more thoroughly developed version of the theory, but that project is still in the earliest stages just now. What I want to do here is to go a little more deeply into the social implications of the theory.
It’s common these days to hear people insist that our society is divided into two and only two classes, an elite class that receives all the benefits of the system, and everyone else, who bears all the burdens. The reality, in ours as in every other human society, is a great deal more nuanced. It’s true, of course, that the benefits move toward the top of the ladder of wealth and privilege and the burdens get shoved toward the bottom, but in most cases—ours very much included—you have to go a good long way down the ladder before you find people who receive no benefits at all.
There have admittedly been a few human societies in which most people receive only such benefits from the system as will enable them to keep working until they drop. The early days of plantation slavery in the United States and the Caribbean islands, when the average lifespan of a slave from purchase to death was under ten years, fell into that category, and so do a few others—for example, Cambodia under the Khmer Rouge. These are exceptional cases; they emerge when the cost of unskilled labor drops close to zero and either abundant profits or ideological considerations make the fate of the laborers a matter of complete indifference to their masters.
Under any other set of conditions, such arrangements are uneconomical. It’s more profitable, by and large, to allow such additional benefits to the laboring class as will permit them to survive and raise families, and to motivate them to do more than the bare minimum that will evade the overseer’s lash. That’s what generates the standard peasant economy, for example, in which the rural poor pay landowners in labor and a share of agricultural production for access to arable land.
There are any number of similar arrangements, in which the laboring classes do the work, the ruling classes allow them access to productive capital, and the results are divided between the two classes in a proportion that allows the ruling classes to get rich and the laboring classes to get by. If that sounds familiar, it should.  In terms of the distribution of labor, capital, and production, the latest offerings of today’s job market are indistinguishable from the arrangements between an ancient Egyptian landowner and the peasants who planted and harvested his fields.
The more complex a society becomes, the more intricate the caste system that divides it, and the more diverse the changes that are played on this basic scheme. A relatively simple medieval society might get by with four castes—the feudal Japanese model, which divided society into aristocrats, warriors, farmers, and a catchall category of traders, craftspeople, entertainers, and the like, is as good an example as any. A stable society near the end of a long age of expansion, by contrast, might have hundreds or even thousands of distinct castes, each with its own niche in the social and economic ecology of that society. In every case, each caste represents a particular balance between benefits received and burdens exacted, and given a stable economy entirely dependent on renewable resources, such a system can continue intact for a very long time.
Factor in the process of catabolic collapse, though, and an otherwise stable system turns into a fount of cascading instabilities. The point that needs to be grasped here is that social hierarchies are a form of capital, in the broad sense mentioned above. Like the other forms of capital included in the catabolic collapse model, social hierarchies facilitate the production and distribution of goods and services, and they have maintenance costs that have to be met. If the maintenance costs aren’t met, as with any other form of capital, social hierarchies are converted to waste; they stop fulfilling their economic function, and become available for salvage.
That sounds very straightforward. Here as so often, though, it’s the human factor that transforms it from a simple equation to the raw material of history.  As the maintenance costs of a civilization’s capital begin to mount up toward the point of crisis, corners get cut and malign neglect becomes the order of the day. Among the various forms of capital, though, some benefit people at one point on the ladder of social hierarchy more than people at other levels. As the maintenance budget runs short, people normally try to shield the forms of capital that benefit them directly, and push the cutbacks off onto forms of capital that benefit others instead. Since the ability of any given person to influence where resources go corresponds very precisely to that person’s position in the social hierarchy, this means that the forms of capital that benefit the people at the bottom of the ladder get cut first.
Now of course this isn’t what you hear from Americans today, and it’s not what you hear from people in any society approaching catabolic collapse. When contraction sets in, as I noted here in a post two weeks ago, people tend to pay much more attention to whatever they’re losing than to the even greater losses suffered by others. The middle-class Americans who denounce welfare for the poor at the top of their lungs while demanding that funding for Medicare and Social Security remain intact are par for the course; so, for that matter, are the other middle-class Americans who denounce the admittedly absurd excesses of the so-called 1% while carefully neglecting to note the immense differentials of wealth and privilege that separate them from those still further down the ladder.
This sort of thing is inevitable in a fight over slices of a shrinking pie. Set aside the inevitable partisan rhetoric, though, and a society moving into the penumbra of catabolic collapse is a society in which more and more people are receiving less and less benefit from the existing order of society, while being expected to shoulder an ever-increasing share of the costs of a faltering system. To those who receive few or no benefits in return, the maintenance costs of social capital rapidly become an intolerable burden, and as the supply of benefits still available from a faltering system becomes more and more a perquisite of the upper reaches of the social hierarchy, that burden becomes an explosive political fact.
Every society depends for its survival on the passive acquiescence of the majority of the population and the active support of a large minority. That minority—call them the overseer class—are the people who operate the mechanisms of social hierarchy: the bureaucrats, media personnel, police, soldiers, and other functionaries who are responsible for maintaining social order. They are not drawn from the ruling elite; by and large, they come from the same classes they are expected to control; and if their share of the benefits of the existing order falters, if their share of the burdens increases too noticeably, or if they find other reasons to make common cause with those outside the overseer class against the ruling elite, then the ruling elite can expect to face the brutal choice between flight into exile and a messy death. The mismatch between maintenance costs and available resources, in turn, makes some such turn of events extremely difficult to avoid.
A ruling elite facing a crisis of this kind has at least three available options. The first, and by far the easiest, is to ignore the situation. In the short term, this is actually the most economical option; it requires the least investment of scarce resources and doesn’t require potentially dangerous tinkering with fragile social and political systems. The only drawback is that once the short term runs out, it pretty much guarantees a horrific fate for the members of the ruling elite, and in many cases, this is a less convincing argument than one might think. It’s always easy to find an ideology that insists that things will turn out otherwise, and since members of a ruling elite are generally well insulated from the unpleasant realities of life in the society over which they preside, it’s usually just as easy for them to convince themselves of the validity of whatever ideology they happen to choose. The behavior of the French aristocracy in the years leading up to the French Revolution is worth consulting in this context.
The second option is to try to remedy the situation by increased repression. This is the most expensive option, and it’s generally even less effective than the first, but ruling elites with a taste for jackboots tend to fall into the repression trap fairly often. What makes repression a bad choice is that it does nothing to address the sources of the problems it attempts to suppress. Furthermore, it increases the maintenance costs of social hierarchy drastically—secret police, surveillance gear, prison camps, and the like don’t come cheap—and it enforces the lowest common denominator of passive obedience while doing much to discourage active engagement of people outside the elite in the project of saving the society.  A survey of the fate of the Communist dictatorships of Eastern Europe is a good antidote to the delusion that an elite with enough spies and soldiers can stay in power indefinitely.
That leaves the third option, which requires the ruling elite to sacrifice some of its privileges and perquisites so that those further down the social ladder still have good reason to support the existing order of society. That isn’t common, but it does happen; it happened in the United States as recently as the 1930s, when Franklin Roosevelt spearheaded changes that spared the United States the sort of fascist takeover or civil war that occurred in so many other failed democracies in the same era. Roosevelt and his allies among the very rich realized that fairly modest reforms would be enough to convince most Americans that they had more to gain from supporting the system than they would gain by overthrowing it.  A few job-creation projects and debt-relief measures, a few welfare programs, and a few perp walks by the most blatant of the con artists of the preceding era of high finance, were enough to stop the unraveling of the social hierarchy, and restore a sense of collective unity strong enough to see the United States through a global war in the following decade.
Now of course Roosevelt and his allies had huge advantages that any comparable project would not be able to duplicate today. In 1933, though it was hamstrung by a collapsed financial system and a steep decline in international trade, the economy of the United States still had the world’s largest and most productive industrial plant and some of the world’s richest deposits of petroleum, coal, and many other natural resources. Eighty years later, the industrial plant has been gutted by decades of offshoring motivated by short-term profit-seeking, and nearly every resource the American land once offered in abundance has been mined and pumped right down to the dregs. That means that an attempt to imitate Roosevelt’s feat under current conditions would face much steeper obstacles, and it would also require the ruling elite to relinquish a much greater share of its current perquisites and privileges than they did in Roosevelt’s day.
I could be mistaken, but I don’t think it will even be tried this time around. Just at the moment, the squabbling coterie of competing power centers that constitutes the ruling elite of the United States seems committed to an approach halfway between the first two options I’ve outlined. The militarization of US domestic police forces and the rising spiral of civil rights violations carried out with equal enthusiasm by both mainstream political parties fall on the repressive side of the scale.  At the same time, for all these gestures in the direction of repression, the overall attitude of American politicians and financiers seems to be that nothing really that bad can actually happen to them or the system that provides them with their power and their wealth.
They’re wrong, and at this point it’s probably a safe bet that a great many of them will die because of that mistake. Already, a large fraction of Americans—probably a majority—accept the continuation of the existing order of society in the United States only because a viable alternative has yet to emerge. As the United States moves closer to catabolic collapse, and the burden of propping up an increasingly dysfunctional status quo bears down ever more intolerably on more and more people outside the narrowing circle of wealth and privilege, the bar that any alternative has to leap will be set lower and lower. Sooner or later, something will make that leap and convince enough people that there’s a workable alternative to the status quo, and the passive acquiescence on which the system depends for its survival will no longer be something that can be taken for granted.
It’s not necessary for such an alternative to be more democratic or more humane than the order that it attempts to replace. It can be considerably less so, so long as it imposes fewer costs on the majority of people and distributes benefits more widely than the existing order does. That’s why, in the last years of Rome, so many people of the collapsing empire so readily accepted the rule of barbarian warlords in place of the imperial government. That government had become hopelessly dysfunctional by the time of the barbarian invasions, centralizing authority in distant bureaucratic centers out of touch with current realities and imposing tax burdens on the poor so crushing that many people were forced to sell themselves into slavery or flee to depopulated regions of the countryside to take up the uncertain life of Bacaudae, half guerrilla and half bandit, hunted by imperial troops whenever those had time to spare from the defense of the frontiers.
By contrast, the local barbarian warlord might be brutal and capricious, but he was there on the scene, and thus unlikely to exhibit the serene detachment from reality so common in centralized bureaucratic states at the end of their lives. What’s more, the warlord had good reason to protect the peasants who put bread and meat on his table, and the cost of supporting him and his retinue in the relatively modest style of barbarian kingship was considerably lighter than the burden of helping to prop up the baroque complexities of the late Roman imperial bureaucracy. That’s why the peasants and agricultural slaves of the late Roman world acquiesced so calmly in the implosion of Rome and its replacement by a patchwork of petty kingdoms. It wasn’t merely a change of masters; in a great many cases, the new masters were considerably less of a burden than the old ones had been.
We can expect much the same process to unfold in North America as the United States passes through its own trajectory of decline and fall. Before tracing the ways that process might work out, though, it’s going to be necessary to sort through some common misconceptions, and that requires us to examine the ways that ruling elites destroy themselves. We’ll cover that next week.

Technological Superstitions

Wed, 2014-09-10 17:33
I'd meant to go straight on from last week’s post about völkerwanderung and the dissolution and birth of ethnic identities in dark age societies, and start talking about the mechanisms by which societies destroy themselves—with an eye, of course, to the present example. Still, as I’ve noted here more than once, there are certain complexities involved in the project of discussing the decline and fall of civilizations in a civilization that’s hard at work on its own decline and fall, and one of those complexities is the way that tempting examples of the process keep popping up as we go.
The last week or so has been unusually full of those. The Ebola epidemic in West Africa has continued to spread at an exponential rate as hopelessly underfunded attempts to contain it crumple, while the leaders of the world’s industrial nations distract themselves playing geopolitics in blithe disregard of the very real possibility that their inattention may be helping to launch the next great global pandemic.  In other news—tip of the archdruidical hat here to The Daily Impact—companies and investors who have been involved in the fracking bubble are quietly bailing out. If things continue on their current trajectory, as I’ve noted before, this autumn could very well see the fracking boom go bust; it’s anyone’s guess how heavily that will hit the global economy, but fracking-related loans and investments have accounted for a sufficiently large fraction of Wall Street profits in recent years that the crater left by a fracking bust will likely be large and deep. 
Regular readers of this blog already know, though, that it’s most often the little things that catch my attention, and the subject of this week’s post is no exception. Thus I’m pleased to announce that a coterie of scientists and science fiction writers has figured out what’s wrong with the world today: there are, ahem, too many negative portrayals of the future in popular media. To counter this deluge of unwarranted pessimism, they’ve organized a group called Project Hieroglyph, and published an anthology of new, cheery, upbeat SF stories about marvelous new technologies that could become realities within the next fifty years. That certainly ought to do the trick!
Now of course I’m hardly in a position to discourage anyone from putting together a science fiction anthology around an unpopular theme. After Oil: SF Visions of a Post-Petroleum Future, the anthology that resulted from the first Space Bats challenge here in 2011, is exactly that, and two similar anthologies from this blog’s second Space Bats challenge are going through the editing and publishing process as I write these words. That said, I’d question the claim that those three anthologies will somehow cause the planet’s oil reserves to run dry any faster than they otherwise will.
The same sort of skepticism, I suggest, may be worth applying to Project Hieroglyph and its anthology.  The contemporary  crisis of industrial society isn’t being caused by a lack of optimism; its roots go deep into the tough subsoil of geological and thermodynamic reality, to the lethal mismatch between fantasies of endless economic growth and the hard limits of a finite planet, and to the less immediately deadly but even more pervasive mismatch between fantasies of perpetual technological progress and that nemesis of all linear thinking, the law of diminishing returns.  The failure of optimism that these writers are bemoaning is a symptom rather than a cause, and insisting that the way to solve our problems is to push optimistic notions about the future at people is more than a little like deciding that the best way to deal with flashing red warning lights on the control panel of an airplane is to put little pieces of opaque green tape over them so everything looks fine again.
It’s not as though there’s been a shortage of giddily optimistic visions of a gizmocentric future in recent years, after all. I grant that the most colorful works of imaginative fiction we’ve seen of late have come from those economists and politicians who keep insisting that the only way out of our current economic and social malaise is to do even more of the same things that got us into it. That said, any of my readers who step into a bookstore or a video store and look for something that features interstellar travel or any of the other shibboleths of the contemporary cult of progress won’t have to work hard to find one. What’s happened, rather, is that such things are no longer as popular as they once were, because people find that stories about bleaker futures hedged in with harsh limits are more to their taste.
The question that needs to be asked, then, is why this should be the case. As I see it, there are at least three very good reasons.
First, those bleaker futures and harsh limits reflect the realities of life in contemporary America. Set aside the top twenty per cent of the population by income, and Americans have on average seen their standard of living slide steadily downhill for more than four decades. In 1970, to note just one measure of how far things have gone, an American family with one working class salary could afford to buy a house, pay all their bills on time, put three square meals on the table every day, and still have enough left over for the occasional vacation or high-ticket luxury item. Now? In much of today’s America, a single working class salary isn’t enough to keep a family off the streets.
That history of relentless economic decline has had a massive impact on attitudes toward the future, toward science, and toward technological progress. In 1969, it was only in the ghettos where America confined its urban poor that any significant number of people responded to the Apollo moon landing with the sort of disgusted alienation that Gil Scott-Heron expressed memorably in his furious ballad “Whitey on the Moon.”  Nowadays, a much greater number of Americans—quite possibly a majority—see the latest ballyhooed achievements of science and technology as just one more round of pointless stunts from which they won’t benefit in the least.
It’s easy but inaccurate to insist that they’re mistaken in that assessment. Outside the narrowing circle of the well-to-do, many Americans these days spend more time coping with the problems caused by technologies than they do enjoying the benefits thereof. Most of the jobs eliminated by automation, after all, used to provide gainful employment for the poor; most of the localities that are dumping grounds for toxic waste, similarly, are inhabited by people toward the bottom of the socioeconomic pyramid, and so on down the list of unintended consequences and technological blowback. By and large, the benefits of new technology trickle up the social ladder, while the costs and burdens trickle down; this has a lot to do with the fact that the grandchildren of people who enjoyed The Jetsons now find The Hunger Games more to their taste.
That’s the first reason. The second is that for decades now, the great majority of the claims made about wonderful new technologies that would inevitably become part of our lives in the next few decades have turned out to be dead wrong. From jetpacks and flying cars to domed cities and vacations on the Moon, from the nuclear power plants that would make electricity too cheap to meter to the conquest of poverty, disease, and death itself, most of the promises offered by the propagandists and publicists of technological progress haven’t happened. That has understandably made people noticeably less impressed by further rounds of promises that likely won’t come true either.
When I was a child, if I may insert a personal reflection here, one of my favorite books was titled You Will Go To The Moon. I suspect most Americans of my generation remember that book, however dimly, with its glossy portrayal of what space travel would be like in the near future: the great conical rocket with its winged upper stage, the white doughnut-shaped space station turning in orbit, and the rest of it. I honestly expected to make that trip someday, and I was encouraged in that belief by a chorus of authoritative voices for whom permanent space stations, bases on the Moon, and a manned landing on Mars were a done deal by the year 2000.
Now of course in those days the United States still had a manned space program capable of putting bootprints on the Moon. We don’t have one of those any more. It’s worth talking about why that is, because the same logic applies equally well to most of the other grand technological projects that were proclaimed not so long ago as the inescapable path to a shiny new future.
We don’t have a manned space program any more, to begin with, because the United States is effectively bankrupt, having committed itself in the usual manner to the sort of imperial overstretch chronicled by Paul Kennedy in The Rise and Fall of the Great Powers, and cashed in its future for a temporary supremacy over most of the planet. That’s the unmentionable subtext behind the disintegration of America’s infrastructure and built environment, the gutting of its once-mighty industrial plant, and a good deal of the steady decline in standards of living mentioned earlier in this post. Britain dreamed about expansion into space when it still had an empire—the British Interplanetary Society was a major presence in space-travel advocacy in the first half of the twentieth century—and shelved those dreams when its empire went away; the United States is in the process of the same retreat. Still, there’s more going on here than this.
Another reason we don’t have a manned space program any more is that all those decades of giddy rhetoric about New Worlds For Man never got around to discussing the difference between technical feasibility and economic viability. The promoters of space travel fell into the common trap of believing their own hype, and convinced themselves that orbital factories, mines on the Moon, and the like would surely turn out to be paying propositions. What they forgot, of course, is what I’ve called the biosphere dividend:  the vast array of goods and services that the Earth’s natural cycles provide for human beings free of charge, which have to be paid for anywhere else. The best current estimate for the value of that dividend, from a 1997 paper in Nature written by a team headed by Robert Costanza, is that it’s something like three times the total value of all goods and services produced by human beings.
As a very rough estimate, in other words, economic activity anywhere in the solar system other than Earth will cost around four times what it costs on Earth, even apart from transportation costs, because the services provided here for free by the biosphere have to be paid for in space or on the solar system’s other worlds. That’s why all the talk about space as a new economic frontier went nowhere; orbital manufacturing was tried—the Skylab program of the 1970s, the Space Shuttle, and the International Space Station in its early days all featured experiments along those lines—and the modest advantages of freefall and ready access to hard vacuum didn’t make enough of a difference to offset the costs. Thus manned space travel, like commercial supersonic aircraft, nuclear power plants, and plenty of other erstwhile waves of the future, turned into a gargantuan white elephant that could only be supported so long as massive and continuing government subsidies were forthcoming.
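The arithmetic behind that rough estimate is simple enough to spell out. Here is a back-of-the-envelope version in Python; the three-to-one ratio is this essay's reading of the 1997 figure, and everything is normalized rather than measured.

    # Back-of-the-envelope version of the biosphere dividend argument.
    # The 3:1 ratio of ecosystem services to human production is an
    # assumption taken from the essay's reading of the 1997 estimate.
    human_production = 1.0    # cost of an activity on Earth, normalized to 1
    biosphere_services = 3.0  # services the biosphere provides free on Earth

    # Off Earth, the same activity must pay for both components.
    off_earth_cost = human_production + biosphere_services
    print(off_earth_cost / human_production)  # -> 4.0, roughly four times Earth cost

The point of the exercise is simply that the four-to-one figure isn't an extra claim; it falls straight out of the three-to-one ratio once you remember that off-world activity has to buy what Earth gives away.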
Those are two of the reasons why we don’t have a manned space program any more. The third is less tangible but, I suspect, far and away the most important. It can be tracked by picking up any illustrated book about the solar system that was written before we got there, and comparing what outer space and other worlds were supposed to look like with what was actually waiting for our landers and probes.
I have in front of me right now, for example, a painting of a scene on the Moon in a book published the year I was born. It’s a gorgeous, romantic view. Blue earthlight splashes over a crater in the foreground; further off, needle-sharp mountains catch the sunlight; the sky is full of brilliant stars. Too bad that’s not what the Apollo astronauts found when they got there. Nobody told the Moon it was supposed to cater to human notions of scenic grandeur, and so it presented its visitors with vistas of dull gray hillocks and empty plains beneath a flat black sky. To anybody but a selenologist, the one thing worth attention in that dreary scene was the glowing blue sphere of Earth 240,000 miles away.
For an even stronger contrast, consider the pictures beamed back by the first Viking probe from the surface of Mars in 1976, and compare that to the gaudy images of the Sun’s fourth planet that were in circulation in popular culture up to that time. I remember the event tolerably well, and one of the things I remember most clearly is the pervasive sense of disappointment—of “is that all?”—shared by everyone in the room.  The images from the lander didn’t look like Barsoom, or the arid but gorgeous setting of Ray Bradbury’s The Martian Chronicles, or any of the other visions of Mars everyone in 1970s America had tucked away in their brains; they looked for all the world like an unusually dull corner of Nevada that had somehow been denuded of air, water, and life.
Here again, the proponents of space travel fell into the trap of believing their own hype, and forgot that science fiction is no more about real futures than romance novels are about real relationships. That isn’t a criticism of science fiction, by the way, though I suspect the members of Project Hieroglyph will take it as one. Science fiction is a literature of ideas, not of crass realities, and it evokes the sense of wonder that is its distinctive literary effect by dissolving the barrier between the realistic and the fantastic. What too often got forgotten, though, is that literary effects don’t guarantee the validity of prophecies—they’re far more likely to hide the flaws of improbable claims behind a haze of emotion.
Romance writers don’t seem to have much trouble recognizing that their novels are not about the real world. Science fiction, by contrast, has suffered from an overdeveloped sense of its own importance for many years now. I’m thinking just now of a typical essay by Isaac Asimov that described science fiction writers as scouts for the onward march of humanity. (Note the presuppositions that humanity is going somewhere, that all of it’s going in a single direction, and that this direction just happens to be defined by the literary tastes of an eccentric subcategory of 20th century popular fiction.) That sort of thinking led too many people in the midst of the postwar boom to forget that the universe is under no obligation to conform to our wholly anthropocentric notions of human destiny and provide us with New Worlds for Man just because we happen to want some.
Mutatis mutandis, that’s what happened to most of the other grand visions of transformative technological progress that were proclaimed so enthusiastically over the last century or so. Most of them never happened, and those that did turned out to be far less thrilling and far more problematic than the advance billing insisted they would be. Faced with that repeated realization, a great many Americans decided—and not without reason—that more of the same gosh-wow claims were not of interest. That shifted public taste away from cozy optimism toward a harsher take on our future.
The third factor driving that shift in taste, though, may be the most important of all, and it’s also one of the most comprehensively tabooed subjects in contemporary life. Most human phenomena are subject to the law of diminishing returns; the lesson that contemporary industrial societies are trying their level best not to learn just now is that technological progress is one of the phenomena to which this law applies. Thus there can be such a thing as too much technology, and a very strong case can be made that in the world’s industrial nations, we’ve already gotten well past that point.
In a typically cogent article, economist Herman Daly sorts out the law of diminishing returns into three interacting processes. The first is diminishing marginal utility—that is, the more of anything you have, the less any additional increment of that thing contributes to your wellbeing. If you’re hungry, one sandwich is a very good thing; two is pleasant; three is a luxury; and somewhere beyond that, when you’ve given sandwiches to all your coworkers, the local street people, and anyone else you can find, more sandwiches stop being any use to you. When more of anything no longer brings any additional benefit, you’ve reached the point of futility, at which further increments are a waste of time and resources.
Well before that happens, though, two other factors come into play. First, it costs you almost nothing to cope with one sandwich, and very little more to cope with two or three. After that you start having to invest time, and quite possibly resources, in dealing with all those sandwiches, and each additional sandwich adds to the total burden. Economists call that increasing marginal disutility—that is, the more of anything you have, the more any additional increment of that thing is going to cost you, in one way or another. Somewhere in there, too, there’s the impact that dealing with those sandwiches has on your ability to deal with other things you need to do; that’s increasing risk of whole-system disruption—the more of anything you have, the more likely it is that an additional increment of that thing is going to disrupt the wider system in which you exist.
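Those three processes are easy to caricature in a few lines of code. The functional forms below are pure invention—straight lines standing in for whatever shape the real curves take—but they show how falling marginal utility and rising marginal cost combine to put the peak of net benefit well short of the point of futility.

    # Toy illustration of Daly's decomposition, with made-up linear curves.
    def marginal_utility(n):
        return max(0.0, 10.0 - 2.0 * n)   # each extra unit helps less, then not at all

    def marginal_cost(n):
        return 0.5 * n                    # each extra unit costs more to cope with

    def net_benefit(units):
        return sum(marginal_utility(n) - marginal_cost(n) for n in range(units))

    for units in range(9):
        print(units, round(net_benefit(units), 1))
    # Net benefit rises, peaks around four or five units, then declines.
    # The point of futility (marginal utility hits zero) arrives only
    # later, by which time each extra unit is already making you worse off.

In this toy version, the optimum number of sandwiches is reached while more sandwiches would still taste good; it's the rising cost of coping with them, not the loss of appetite, that turns further increments into a net loss.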
Next to nobody wants to talk about the way that technological progress has already passed the point of diminishing returns: that the marginal utility of each new round of technology is dropping fast, the marginal disutility is rising at least as fast, and whole-system disruptions driven by technology are becoming an inescapable presence in everyday life. Still, I’ve come to think that an uncomfortable awareness of that fact is becoming increasingly common these days, however subliminal that awareness may be, and beginning to have an effect on popular culture, among many other things. If you’re in a hole, as the saying goes, the first thing to do is stop digging; if a large and growing fraction of your society’s problems are being caused by too much technology applied with too little caution, similarly, it’s not exactly helpful to insist that applying even more technology with even less skepticism about its consequences is the only possible answer to those problems.
There’s a useful word for something that remains stuck in a culture after the conditions that once made it relevant have passed away, and that word is “superstition.” I’d like to suggest that the faith-based claims that more technology is always better than less, that every problem must have a technological solution, and that technology always solves more problems than it creates, are among the prevailing superstitions of our time. I’d also like to suggest that, comforting and soothing as those superstitions may be, it’s high time we outgrow them and deal with the world as it actually is—a world in which yet another helping of faith-based optimism is far from useful.

Dark Age America: The Cauldron of Nations

Wed, 2014-09-03 15:57
It's one thing to suggest, as I did in last week’s post here, that North America a few centuries from now might have something like five per cent of its current population. It’s quite another thing to talk about exactly whose descendants will comprise that five per cent. That’s what I intend to do this week, and yes, I know that raising that issue is normally a very good way to spark a shouting match in which who-did-what-to-whom rhetoric plays its usual role in drowning out everything else.
Now of course there’s a point to talking about, and learning from, the abuses inflicted by groups of people on other groups of people over the last five centuries or so of North American history.  Such discussions, though, have very little to offer the topic of the current series of posts here on The Archdruid Report.  History may be a source of moral lessons but it’s not a moral phenomenon; a glance back over our past shows clearly enough that who won, who lost, who ended up ruling a society, and who ended up enslaved or exterminated by that same society, was not determined by moral virtue or by the justice of one or another cause, but by the crassly pragmatic factors of military, political, and economic power. No doubt most of us would rather live in a world that didn’t work that way, but here we are, and morality remains a matter of individual choices—yours and mine—in the face of a cosmos that seems sublimely unconcerned with our moral beliefs.
Thus we can take it for granted that just as the borders that currently divide North America were put there by force or the threat of force, the dissolution of those borders and their replacement with new lines of division will happen the same way. For that matter, it’s a safe bet that the social divisions—ethnic and otherwise—of the successor cultures that emerge in the aftermath of our downfall will be established and enforced by means no more just or fair than the ones that currently distribute wealth and privilege to the different social and ethnic strata in today’s North American nations. Again, it would be pleasant to live in a world where that isn’t true, but we don’t.
I apologize to any of my readers who are offended or upset by these points. In order to make any kind of sense of the way that civilizations fall—and more to the point, the way that ours is currently falling—it’s essential to get past the belief that history is under any obligation to hand out rewards for good behavior and punishments for the opposite, or for that matter the other way around. Over the years and decades and centuries ahead of us, as industrial civilization crumbles, a great many people who believe with all their hearts that their cause is right and just are going to die anyway, and there will be no shortage of brutal, hateful, vile individuals who claw their way to the top—for a while, at least. One of the reliable features of dark ages is that while they last, the top of the heap is a very unsafe place to be.
North America being what it is today, a great many people considering the sort of future I’ve just sketched out immediately start thinking about the potential for ethnic conflict, especially but not only in the United States. It’s an issue worth discussing, and not only for the currently obvious reasons. Conflict between ethnic groups is quite often a major issue in the twilight years of a civilization, for reasons we’ll discuss shortly, but it’s also self-terminating, for an interesting reason: traditional ethnic divisions don’t survive dark ages. In an age of political dissolution, economic implosion, social chaos, demographic collapse, and mass migration, the factors that maintain established ethnic divisions in place don’t last long. In their place, new ethnicities emerge.  It’s a commonplace of history that dark ages are the cauldron from which nations are born.
So we have three stages, which overlap to a greater or lesser degree: a stage of ethnic conflict, a stage of ethnic dissolution, and a stage of ethnogenesis. Let’s take them one at a time.
The stage of ethnic conflict is one effect of the economic contraction that’s inseparable from the decline of a civilization.  If a rising tide lifts all boats, as economists of the trickle-down school used to insist, a falling tide has a much more differentiated effect, since each group in a declining society does its best to see to it that as much as possible of the costs of decline land on someone else.  Since each group’s access to wealth and privilege determines fairly exactly how much influence it has on the process, it’s one of the constants of decline and fall that the costs and burdens of decline trickle down, landing with most force on those at the bottom of the pyramid.
That heats up animosities across the board: between ethnic groups, between regions, between political and religious divisions, you name it. Since everyone below the uppermost levels of wealth and power loses some of what they’ve come to expect, and since it’s human nature to pay more attention to what you’ve lost than to the difference between what you’ve retained and what someone worse off than you has to make do with, everyone’s aggrieved, and everyone sees any attempt by someone else to better their condition as a threat. That’s by no means entirely inaccurate—if the pie’s shrinking, any attempt to get a wider slice has to come at somebody else’s expense—but it fans the flames of conflict even further, helping to drive the situation toward the inevitable explosions.
One very common and very interesting feature of this process is that the increase in ethnic tensions tends to parallel a process of ethnic consolidation. In the United States a century ago, for example, the division of society by ethnicity wasn’t anything like as simple as it is today. The uppermost caste in most of the country wasn’t simply white, it was white male Episcopalians whose ancestors got here from northwestern Europe before the Revolutionary War. Irish ranked below Germans but above Italians, who looked down on Jews, and so on down the ladder to the very bottom, which was occupied by either African-Americans or Native Americans depending on locality. Within any given ethnicity, furthermore, steep social divisions existed, microcosms of a hierarchically ordered macrocosm; gender distinctions and a great many other lines of fracture combined with the ethnic divisions just noted to make American society in 1914 as intricately caste-ridden as any culture on the planet.
The partial dissolution of many of these divisions has resulted inevitably in the hardening of those that remain. That’s a common pattern, too: consider the way that the rights of Roman citizenship expanded step by step from the inhabitants of the city of Rome itself, to larger and larger fractions of the people it dominated, until finally every free adult male in the Empire was a Roman citizen by definition. Parallel to that process came a hardening of the major divisions, between free persons and slaves on the one hand, and between citizens of the Empire and the barbarians outside its borders on the other. The result was the same in that case as it is in ours: traditional, parochial jealousies and prejudices focused on people one step higher or lower on the ladder of caste give way to new loyalties and hatreds uniting ever greater fractions of the population into increasingly large and explosive masses.
The way that this interlocks with the standard mechanisms of decline and fall will be a central theme of future posts. The crucial detail, though, is that a society riven by increasingly bitter divisions of the sort just sketched out is very poorly positioned to deal with external pressure or serious crisis. “Divide and conquer,” the Romans liked to say during the centuries of their power:  splitting up their enemies and crushing them one at a time was the fundamental strategy they used to build their empire. On the way down, though, it was the body of Roman society that did the dividing, tearing itself apart along every available line of schism, and Rome was accordingly conquered in its turn. That’s usual for falling civilizations, and we’re well along the same route in the United States today.
Ethnic divisions thus routinely play a significant role in the crash of civilizations. Still, as noted above, the resulting chaos quickly shreds the institutional arrangements that make ethnic divisions endure in a settled society. Charismatic leaders emerge out of the chaos, and those that are capable of envisioning and forming alliances across ethnic lines succeed where their rivals fail; the reliable result is a chaotic melting pot of armed bands and temporary communities drawn from all available sources. When the Huns first came west from the Eurasian steppes around 370 CE, for example, they were apparently a federation of related Central Asian tribes; by the time of Attila, rather less than a century later, his vast armies included warriors from most of the ethnic groups of eastern Europe. We don’t even know what their leader’s actual name was; “Attila” was a nickname—“Daddy”—in Visigothic, the lingua franca among the eastern barbarians at that time.
The same chaotic reshuffling was just as common on the other side of the collapsing Roman frontiers. The province of Britannia, for instance, had long been divided into ethnic groups with their own distinct religious and cultural traditions. In the wake of the Roman collapse and the Saxon invasions, the survivors who took refuge in the mountains of the west forgot the old divisions, and took to calling themselves by a new name:  Combrogi, “fellow-countrymen” in old Brythonic. Nowadays that’s Cymry, the name the Welsh use for themselves.  Not everyone who ended up as Combrogi was British by ancestry—one of the famous Welsh chieftains in the wars against the Saxons was a Visigoth named Theodoric—nor were all the people on the other side Saxons—one of the leaders of the invaders was a Briton named Caradoc ap Cunorix,  the “Cerdic son of Cynric” of the Anglo-Saxon Chronicle.
It’s almost impossible to overstate the efficiency of the blender into which every political, economic, social, and ethnic manifestation got tossed in the last years of Rome. My favorite example of the raw confusion of that time is the remarkable career of another Saxon leader named Odoacer. He was the son of one of Attila the Hun’s generals, but got involved in Saxon raids on Britain after Attila’s death. Sometime in the 460s, when the struggle between the Britons and the Saxons was more or less stuck in deadlock, Odoacer decided to look for better pickings elsewhere, and led a Saxon fleet that landed at the mouth of the Loire in western France. For the next decade or so, more or less in alliance with Childeric, king of the Franks, he fought the Romans, the Goths, and the Bretons there.
When the Saxon hold on the Loire was finally broken, Odoacer took the remains of his force and joined Childeric in an assault on Italy. No records survive of the fate of that expedition, but it apparently didn’t go well. Odoacer next turned up, without an army, in what’s now Austria and was then the province of Noricum. It took him only a short time to scrape together a following from the random mix of barbarian warriors to be found there, and in 476 he marched on Italy again, and overthrew the equally random mix of barbarians who had recently seized control of the peninsula. 
The Emperor of the West just then, the heir of the Caesars and titular lord of half the world, was a boy named Romulus Augustulus. In a fine bit of irony, he also happened to be the son of Attila the Hun’s Greek secretary, a sometime ally of Odoacer’s father. This may be why, instead of doing the usual thing and having the boy killed, Odoacer basically told the last Emperor of Rome to run along and play. That sort of clemency was unusual, and it wasn’t repeated by the next barbarian warlord in line; seventeen years later Odoacer was murdered by order of Theodoric, king of the Ostrogoths, who proceeded to take his place as temporary master of the corpse of imperial Rome.
Soldiers of fortune, or of misfortune for that matter, weren’t the only people engaged in this sort of heavily armed tour of the post-Roman world during those same years. Entire nations were doing the same thing. Those of my readers who have been watching North America’s climate come unhinged may be interested to know that severe droughts in Central Asia may have been the trigger that kickstarted the process, pushing nomadic tribes out of their traditional territories in a desperate quest for survival. Whether or not that’s what pushed the Huns into motion, the westward migration of the Huns forced other barbarian peoples further west to flee for their lives, and the chain of dominoes thus set in motion played a massive role in creating the chaos in which figures like Odoacer rose and fell. It’s a measure of the sheer scale of these migrations that before Rome started to topple, many of the ancestors of today’s Spaniards lived in what’s now the Ukraine.
And afterwards? The migrations slowed and finally stopped, the warlords became kings, and the people who found themselves in some more or less stable kingdom began the slow process by which a random assortment of refugees and military veterans from the far corners of the Roman world became the first draft of a nation. The former province of Britannia, for example, became seven Saxon kingdoms and a varying number of Celtic ones, and then began the slow process of war and coalescence out of which England, Scotland, Wales, and Cornwall gradually emerged. Elsewhere, the same process moved at varying rates; new nations, languages, ethnic groups came into being. The cauldron of nations had come off the boil, and the history of Europe settled down to a somewhat less frenetic rhythm.
I’ve used post-Roman Europe as a convenient and solidly documented example, but transformations of the same kind are commonplace whenever a civilization goes down. The smaller and more isolated the geographical area of the civilization that falls, the less likely mass migrations are—ancient China, Mesopotamia, and central Mexico had plenty of them, while the collapse of the classic Maya and Heian Japan featured a shortage of wandering hordes—but the rest of the story is among the standard features you get with societal collapse. North America is neither small nor isolated, and so it’s a safe bet that we’ll get a tolerably complete version of the usual process right here in the centuries ahead.
What does that mean in practice? It means, to begin with, that a rising spiral of conflict along ethnic, cultural, religious, political, regional, and social lines will play an ever larger role in North American life for decades to come. Those of my readers who have been paying attention to events, especially but not only in the United States, will have already seen that spiral getting under way. As the first few rounds of economic contraction have begun to bite, the standard response of every group you care to name has been to try to get the bite taken out of someone else. Listen to the insults being flung around in the political controversies of the present day—the thieving rich, the shiftless poor, and the rest of it—and notice how many of them amount to claims that wealth that ought to belong to one group of people is being unfairly held by another. In those claims, you can hear the first whispers of the battle-cries that will be shouted as the usual internecine wars begin to tear our civilization apart.
As those get under way, for reasons we’ll discuss at length in future posts, governments and the other institutions of civil society will come apart at the seams, and the charismatic leaders already mentioned will rise to fill their place. In response, existing loyalties will begin to dissolve as the normal process of warband formation kicks into overdrive. In such times a strong and gifted leader like Attila the Hun can unite any number of contending factions into a single overwhelming force, but at this stage such things have no permanence; once the warlord dies, ages, or runs out of luck, the forces so briefly united will turn on each other and plunge the continent back into chaos.
There will also be mass migrations, and far more likely than not these will be on a scale that would have impressed Attila himself. That’s one of the ways that the climate change our civilization has unleashed on the planet is a gift that just keeps on giving; until the climate settles back down to some semblance of stability, and sea levels have risen as far as they’re going to rise, people in vulnerable areas are going to be forced out of their homes by one form of unnatural catastrophe or another, and the same desperate quest for survival that may have sent the Huns crashing into Eastern Europe will send new hordes of refugees streaming across the landscape. Some of those hordes will have starting points within the United States—I expect mass migrations from Florida as the seas rise, and from the Southwest as drought finishes tightening its fingers around the Sun Belt’s throat—while others will come from further afield.
Five centuries from now, as a result, it’s entirely possible that most people in the upper Mississippi valley will be of Brazilian ancestry, that the inhabitants of the Hudson’s Bay region will sing songs about their long-lost homes in drowned Florida, and that languages descended from English may be spoken only in a region extending from New England to the isles of deglaciated Greenland. Nor will these people think of themselves in any of the national and ethnic terms that come so readily to our minds today. It’s by no means impossible that somebody may claim to be Presden of Meriga, Meer of Kanda, or what have you, just as Charlemagne and his successors claimed to be the emperors of Rome. Just as the Holy Roman Empire was proverbially neither holy, nor Roman, nor an empire, neither the office nor the nation at that future time is likely to have much of anything to do with its nominal equivalent today—and there will certainly be nations and ethnic groups in that time that have no parallel today.
One implication of these points may be worth noting here, as we move deeper into the stage of ethnic conflict. No matter what your ethnic group, dear reader, no matter how privileged or underprivileged it may happen to be in today’s world, it will almost certainly no longer exist as such when industrial civilization on this continent finishes the arc of the Long Descent. Such of your genes as make it through centuries of dieoff and ruthless Darwinian selection will be mixed with genes from many other nationalities and corners of the world, and it’s probably a safe bet that the people who carry those genes won’t call themselves by whatever label you call yourself. When a civilization falls the way ours is falling, that’s how things generally go.
*****************
In other news, I’m delighted to announce that my latest book, Twilight’s Last Gleaming, a novel based on the five-part scenario of US imperial collapse and dissolution posted here in 2012, will be hitting the bookshelves on October 31 of this year. Those posts still hold this blog’s all-time record for page views, and the novel’s just as stark and fast-paced; those of my readers who enjoy a good political-military thriller might want to check it out. Those who are interested may also like to know that the publisher, Karnac Books, is offering a discount and free worldwide shipping on preorders.

Dark Age America: The Population Implosion

Wed, 2014-08-27 17:31
The three environmental shifts discussed in earlier posts in this sequence—the ecological impacts of a sharply warmer and dryer climate, the flooding of coastal regions due to rising sea levels, and the long-term consequences of industrial America’s frankly brainless dumping of persistent radiological and chemical poisons—all involve changes to the North American continent that will endure straight through the deindustrial dark age ahead, and will help shape the history of the successor cultures that will rise amid our ruins. For millennia to come, the peoples of North America will have to contend with drastically expanded deserts, coastlines that in some regions will be many miles further inland than they are today, and the presence of dead zones where nuclear or chemical wastes in the soil and water make human settlement impossible.
All these factors mean, among other things, that deindustrial North America will support many fewer people than it did in 1880 or so, before new agricultural technologies dependent on fossil fuels launched the population boom that is peaking in our time. Now of course this also implies that deindustrial North America will support many, many fewer people than it does today. For obvious reasons, it’s worth talking about the processes by which today’s seriously overpopulated North America will become the sparsely populated continent of the coming dark age—but that’s going to involve a confrontation with a certain kind of petrified irrelevancy all too common in our time.
Every few weeks, the comments page of this blog fields something insisting that I’m ignoring the role of overpopulation in the crisis of our time, and demanding that I say or do something about that. In point of fact, I’ve said quite a bit about overpopulation on this blog over the years, dating back to this post from 2007. What I’ve said about it, though, doesn’t follow either one of the two officially sanctioned scripts into which discussions of overpopulation are inevitably shoehorned in today’s industrial world; the comments I get are thus basically objecting to the fact that I’m not toeing the party line.
Like most cultural phenomena in today’s industrial world, the scripts just mentioned hew closely to the faux-liberal and faux-conservative narratives that dominate so much of contemporary thought. (I insist on the prefix, as what passes for political thought these days has essentially nothing to do with either liberalism or conservatism as these were understood as little as a few decades ago.) The scripts differ along the usual lines: that is to say, the faux-liberal script is well-meaning and ineffectual, while the faux-conservative script is practicable and evil.
Thus the faux-liberal script insists that overpopulation is a terrible problem, and we ought to do something about it, and the things we should do about it are all things that don’t work, won’t work, and have been tried over and over again for decades without having the slightest effect on the situation. The faux-conservative script insists that overpopulation is a terrible problem, but only because it’s people of, ahem, the wrong skin color who are overpopulating, ahem, our country: that is, overpopulation means immigration, and immigration means let’s throw buckets of gasoline onto the flames of ethnic conflict, so it can play its standard role in ripping apart a dying civilization with even more verve than usual.
Overpopulation and immigration policy are not the same thing; neither are depopulation and the mass migrations of whole peoples for which German historians of the post-Roman dark ages coined the neat term Völkerwanderung, which are the corresponding phenomena in eras of decline and fall. For that reason, the faux-conservative side of the debate, along with the usually unmentioned realities of immigration policy in today’s America and the far greater and more troubling realities of mass migration and ethnogenesis that will follow in due time, will be left for next week’s post. For now I want to talk about overpopulation as such, and therefore about the faux-liberal side of the debate and the stark realities of depopulation that are waiting in the future.
All this needs to be put in its proper context. In 1962, the year I was born, there were a little over three billion human beings on this planet. Today, there are more than seven billion of us. That staggering increase in human numbers has played an immense and disastrous role in backing today’s industrial world into the corner where it now finds itself. Among all the forces driving us toward an ugly future, the raw pressure of human overpopulation, with the huge and rising resource requirements it entails, is among the most important.
That much is clear. What to do about it is something else again. You’ll still hear people insisting that campaigns to convince people to limit their reproduction voluntarily ought to do the trick, but such campaigns have been ongoing since well before I was born, and human numbers more than doubled anyway. It bears repeating that if a strategy has failed every time it’s been tried, insisting that we ought to do it again isn’t a useful suggestion. That applies not only to the campaigns just noted, but to all the other proposals to slow or stop population growth that have been tried repeatedly and failed just as repeatedly over the decades just past.
These days, a great deal of the hopeful talk around the subject of limits to overpopulation has refocused on what’s called the demographic transition: the process, visible in the population history of most of today’s industrial nations, whereby people start voluntarily reducing their reproduction when their income and access to resources rise above a certain level. It’s a real effect, though its causes are far from clear. The problem here is simply that the resource base that would make it possible for enough of the world’s population to have the income and access to resources necessary to trigger a worldwide demographic transition simply doesn’t exist.
As fossil fuels and a galaxy of other nonrenewable resources slide down the slope of depletion at varying rates, for that matter, it’s becoming increasingly hard for people in the industrial nations to maintain their familiar standards of living. It may be worth noting that this hasn’t caused a sudden upward spike in population growth in those countries where downward mobility has become most visible. The demographic transition, in other words, doesn’t work in reverse, and this points to a crucial fact that hasn’t necessarily been given the weight it deserves in conversations about overpopulation.
The vast surge in human numbers that dominates the demographic history of modern times is wholly a phenomenon of the industrial age. Other historical periods have seen modest population increases, but nothing on the same scale, and those have reversed themselves promptly when ecological limits came into play. Whatever the specific factors and forces that drove the population boom, then, it’s a pretty safe bet that the underlying cause was the one factor present in industrial civilization that hasn’t played a significant role in any other human society: the exploitation of vast quantities of extrasomatic energy—that is, energy that doesn’t come into play by means of human or animal muscle. Place the curve of increasing energy per capita worldwide next to the curve of human population worldwide, and the two move very nearly in lockstep: thus it’s fair to say that human beings, like yeast, respond to increased access to energy with increased reproduction.
Does that mean that we’re going to have to deal with soaring population worldwide for the foreseeable future? No, and hard planetary limits to resource extraction are the reasons why. Without the huge energy subsidy to agriculture contributed by fossil fuels, producing enough food to support seven billion people won’t be possible. We saw a preview of the consequences in 2008 and 2009, when the spike in petroleum prices caused a corresponding spike in food prices and a great many people around the world found themselves scrambling to get enough to eat on any terms at all. The riots and revolutions that followed grabbed the headlines, but another shift that happened around the same time deserves more attention: birth rates in many Third World countries decreased noticeably, and have continued to trend downward since then.
The same phenomenon can be seen elsewhere. Since the collapse of the Soviet Union, most of the formerly Soviet republics have seen steep declines in rates of live birth, life expectancy, and most other measures of public health, while death rates have climbed well above birth rates and stayed there. For that matter, since 2008, birth rates in the United States have dropped even further below the rate of replacement than they were before that time; immigration is the only reason the population of the United States doesn’t register declines year after year.
This is the wave of the future.  As fossil fuel and other resources continue to deplete, and economies dependent on those resources become less and less able to provide people with the necessities of life, the population boom will turn into a population bust. The base scenario in 1972’s The Limits to Growth, still the most accurate (and thus inevitably the most vilified) model of the future into which we’re stumbling blindly just now, put the peak of global population somewhere around 2030: that is, sixteen years from now. Recent declines in birth rates in areas that were once hotbeds of population growth, such as Latin America and the Middle East, can be seen as the leveling off that always occurs in a population curve before decline sets in.
That decline is likely to go very far indeed. That’s partly a matter of straightforward logic: since global population has been artificially inflated by pouring extrasomatic energy into boosting the food supply and providing other necessary resources to human beings, the exhaustion of economically extractable reserves of the fossil fuels that made that process possible will knock the props out from under global population figures. Still, historical parallels also have quite a bit to offer here: extreme depopulation is a common feature of the decline and fall of civilizations, with up to 95% population loss over the one to three centuries that the fall of a civilization usually takes.
Suggest that to people nowadays and, once you get past the usual reactions of denial and disbelief, the standard assumption is that population declines so severe could only happen if there were catastrophes on a truly gargantuan scale. That’s an easy assumption to make, but it doesn’t happen to be true. Just as it didn’t take vast public orgies of copulation and childbirth to double the planet’s population over the last half-century, it wouldn’t take equivalent exercises in mass death to halve the planet’s population over the same time frame. The ordinary processes of demography can do the trick all by themselves.
Let’s explore that by way of a thought experiment. Between family, friends, coworkers, and the others that you meet in the course of your daily activities, you probably know something close to a hundred people. Every so often, in the ordinary course of events, one of them dies—depending on the age and social status of the people you know, that might happen once a year, once every two years, or what have you. Take a moment to recall the most recent death in your social circle, and the one before that, to help put the rest of the thought experiment in context.
Now imagine that from this day onward, among the hundred people you know, one additional person—one person more than you would otherwise expect to die—dies every year, while the rate of birth remains the same as it is now. Imagine that modest increase in the death rate affecting the people you know. One year, an elderly relative of yours doesn’t wake up one morning; the next, a barista at the place where you get coffee on the way to work dies of cancer; the year after that, a coworker’s child comes down with an infection the doctors can’t treat, and so on.  A noticeable shift? Granted, but it’s not Armageddon; you attend a few more funerals than you’re used to, make friends with the new barista, and go about your life until one of those additional deaths is yours.
Now take that process and extrapolate it out. (Those of my readers who have the necessary math skills should take the time to crunch the numbers themselves.) Over the course of three centuries, an increase in the crude death rate of one per cent per annum, given an unchanged birth rate, is sufficient to reduce a population to five per cent of its original level. Vast catastrophes need not apply; of the traditional four horsemen, War, Famine, and Pestilence can sit around drinking beer and playing poker. The fourth horseman, in the shape of a modest change in crude death rates, can do the job all by himself.
Now imagine the same scenario, except that there are two additional deaths each year in your social circle, rather than one. That would be considerably more noticeable, but it still doesn’t look like the end of the world—at least until you do the math. An increase in the crude death rate of two per cent per annum, given an unchanged birth rate, is enough to reduce a population to roughly an eighth of its original level within a single century, and to five per cent of it in about a century and a half. In global terms, if world population peaks around 8 billion in 2030, a decline on that scale would leave a little over a billion people on the planet by 2130, and something like four hundred million a few decades after that.
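For those who would rather let a computer crunch the numbers, here’s a minimal sketch of the arithmetic in Python. The function name is mine, and the fixed net rate of decline is a simplifying assumption; as the next paragraph points out, the real world is never so smooth.
```python
# Minimal sketch: compound population decline at a fixed net annual rate.
# Assumes a constant net decline and nothing else; real declines are lumpier.
import math

def remaining_fraction(net_decline_per_year: float, years: int) -> float:
    """Fraction of the original population left after the given number of years."""
    return (1.0 - net_decline_per_year) ** years

# One extra death per hundred people per year, over three centuries:
print(remaining_fraction(0.01, 300))        # ~0.049, i.e. about five per cent

# Two extra deaths per hundred per year:
print(remaining_fraction(0.02, 100))        # ~0.133, about an eighth after a century
print(remaining_fraction(0.02, 150))        # ~0.048, about five per cent

# Applied to a hypothetical peak of 8 billion in 2030:
print(8e9 * remaining_fraction(0.02, 100))  # ~1.06 billion by 2130

# Working backwards: the net annual decline implied by 95% population loss
# over three centuries, and over a single century:
print(1 - math.exp(math.log(0.05) / 300))   # ~0.01, one per cent per year
print(1 - math.exp(math.log(0.05) / 100))   # ~0.03, three per cent per year
```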
In the real world, of course, things are not as simple or smooth as they are in the thought experiment just offered. Birth rates are subject to complex pressures and vary up and down depending on the specific pressures a population faces, and even small increases in infant and child mortality have a disproportionate effect by removing potential breeding pairs from the population before they can reproduce. Meanwhile, population declines are rarely anything like so even as  the thought experiment suggests; those other three horsemen, in particular, tend to get bored of their poker game at intervals and go riding out to give the guy with the scythe some help with the harvest. War, famine, and pestilence are common events in the decline and fall of a civilization, and the twilight of the industrial world is likely to get its fair share of them.
Thus it probably won’t be a matter of two more deaths a year, every year. Instead, one year, war breaks out, most of the young men in town get drafted, and half of them come back in body bags.  Another year, after a string of bad harvests, the flu comes through, and a lot of people who would have shaken it off under better conditions are just that little bit too malnourished to survive.  Yet another year, a virus shaken out of its tropical home by climate change and ecosystem disruption goes through town, and fifteen per cent of the population dies in eight ghastly months. That’s the way population declines happen in history.
In the twilight years of the Roman world, for example, a steady demographic contraction was overlaid by civil wars, barbarian invasions, economic crises, famines, and epidemics; the total population decline varied significantly from one region to another, but even the relatively stable parts of the Eastern Empire seem to have had around a 50% loss of population, while some areas of the Western Empire suffered far more drastic losses; Britain in particular was transformed from a rich, populous, and largely urbanized province to a land of silent urban ruins and small, scattered villages of subsistence farmers where even so simple a technology as wheel-thrown pottery became a lost art.
The classic lowland Maya are another good example along the same lines.  Hammered by climate change and topsoil loss, the Maya heartland went through a rolling collapse a century and a half in length that ended with population levels maybe five per cent of what they’d been at the start of the Terminal Classic period, and most of the great Maya cities became empty ruins rapidly covered by the encroaching jungle. Those of my readers who have seen pictures of tropical foliage burying the pyramids of Tikal and Copan might want to imagine scenes of the same kind in the ruins of Atlanta and Austin a few centuries from now. That’s the kind of thing that happens when an urbanized society suffers severe population loss during the decline and fall of a civilization.
That, in turn, is what has to be factored into any realistic forecast of dark age America: there will be many, many fewer people inhabiting North America a few centuries from now than there are today.  Between the depletion of the fossil fuel resources necessary to maintain today’s hugely inflated numbers and the degradation of North America’s human carrying capacity by climate change, sea level rise, and persistent radiological and chemical pollution, the continent simply won’t be able to support that many people. The current total is about 470 million—35 million in Canada, 314 million in the US, and 121 million in Mexico, according to the latest figures I was able to find—and something close to five per cent of that—say, 20 to 25 million—might be a reasonable midrange estimate for the human population of the North American continent when the population implosion finally bottoms out a few centuries from now.
Now of course those 20 to 25 million people won’t be scattered evenly across the continent. There will be very large regions—for example, the nearly lifeless, sun-blasted wastelands that climate change will make of the southern Great Plains and the Sonoran desert—where human settlement will be as sparse as it is today in the bleakest parts of the Sahara or the Rub’ al Khali of southern Arabia. There will be other areas—for example, the Great Lakes region and the southern half of Mexico’s great central valley—where population will be relatively dense by Dark Age standards, and towns of modest size may even thrive if they happen to be in defensible locations.
The nomadic herding folk of the midwestern prairies, the villages of the Gulf Coast jungles, and the other human ecologies that will spring up in the varying ecosystems of deindustrial North America will all gradually settle into a more or less stable population level, at which births and deaths balance each other and the consumption of resources stays at or below sustainable levels of production. That’s what happens in human societies that don’t have the dubious advantage of a torrent of nonrenewable energy reserves to distract them temporarily from the hard necessities of survival.
It’s getting to that level that’s going to be a bear. The mechanisms of population contraction are simple enough, and as suggested above, they can have a dramatic impact on historical time scales without cataclysmic impact on the scale of individual lives. No, the difficult part of population contraction is its impact on economic patterns geared to continuous population growth. That’s part of a more general pattern, of course—the brutal impact of the end of growth on an economy that depends on growth to function at all—which has been discussed on this blog several times already, and will require close study in the present sequence of posts.
That examination will begin after we’ve considered the second half of the demography of dark age America: the role of mass migration and ethnogenesis in the birth of the cultures that will emerge on this continent when industrial civilization is a fading memory. That very challenging discussion will occupy next week’s post.

Heading Toward The Sidewalk

Wed, 2014-08-20 18:25
Talking about historical change is one thing when the changes under discussion are at some convenient remove in the past or the future. It’s quite another when the changes are already taking place. That’s one of the things that adds complexity to the project of this blog, because the decline and fall of modern industrial civilization isn’t something that might take place someday, if X or Y or Z happens or doesn’t happen; it’s under way now, all around us, and a good many of the tumults of our time are being driven by the unmentionable but inescapable fact that the process of decline is beginning to pick up speed.

Those tumults are at least as relevant to this blog’s project as the comparable events in the latter years of dead civilizations, and so it’s going to be necessary now and then to pause the current sequence of posts, set aside considerations of the far future for a bit, and take a look at what’s happening here and now. This is going to be one of those weeks, because a signal I’ve been expecting for a couple of years now has finally showed up, and its appearance means that real trouble may be imminent.

This has admittedly happened in a week when the sky is black with birds coming home to roost. I suspect that most of my readers have been paying at least some attention to the Ebola epidemic now spreading across West Africa. Over the last week, the World Health Organization has revealed that official statistics on the epidemic’s toll are significantly understated, the main nongovernmental organization fighting Ebola has admitted that the situation is out of anyone’s control, and a series of events neatly poised between absurdity and horror—a riot in one of Monrovia’s poorest slums directed at an emergency quarantine facility, in which looters made off with linens and bedding contaminated with the Ebola virus, and quarantined patients vanished into the crowd—may shortly plunge Liberia into scenes of a kind not witnessed since the heyday of the Black Death. The possibility that this outbreak may become a global pandemic, while still small, can no longer be dismissed out of hand.

Meanwhile, closer to home, what has become a routine event in today’s America—the casual killing of an unarmed African-American man by the police—has blown up in a decidedly nonroutine fashion, with imagery reminiscent of Cairo’s Tahrir Square being enacted night after night in the St. Louis suburb of Ferguson, Missouri. The culture of militarization and unaccountability that’s entrenched in urban police forces in the United States has been displayed in a highly unflattering light, as police officers dressed for all the world like storm troopers on the set of a bad science fiction movie did their best to act the part, tear-gassing and beating protesters, reporters, and random passersby in an orgy of jackbooted enthusiasm blatant enough that Tea Party Republicans have started to make worried speeches about just how closely this resembles the behavior of a police state.

If the police keep it up, the Arab Spring of a few years back may just be paralleled by an American Autumn. Even if some lingering spark of common sense on the part of state and local authorities heads off that possibility, the next time a white police officer guns down an African-American man for no particular reason—and there will be a next time; such events, as noted above, are routine in the United States these days—the explosion that follows will be even more severe, and the risk that such an explosion may end up driving the emergence of a domestic insurgency is not small. I noted in a post a couple of years back that the American way of war pretty much guarantees that any country conquered by our military will pup an insurgency in short order thereafter; there’s a great deal of irony in the thought that the importation of the same model of warfare into police practice in the US may have exactly the same effect here.

It may come as a surprise to some of my readers that the sign I noted is neither of these things. No, it’s not the big volcano in Iceland that’s showing worrying signs of blowing its top, either. It’s an absurdly little thing—a minor book review in an otherwise undistinguished financial-advice blog—and it matters only because it’s a harbinger of something considerably more important.

A glance at the past may be useful here. On September 9, 1929, no less a financial periodical than Barron’s took time off from its usual cheerleading of the stock market’s grand upward movement to denounce an investment analyst named Roger Babson in heated terms. Babson’s crime? Suggesting that the grand upward movement just mentioned was part of a classic speculative bubble, and the bubble’s inevitable bust would cause an economic depression. Babson had been saying this sort of thing all through the stock market boom of the late 1920s, and until that summer, the mainstream financial media simply ignored him, as they ignored everyone else whose sense of economic reality hadn’t gone out to lunch and forgotten to come back.

For those who followed the media, in fact, the summer and fall of 1929 were notable mostly for the fact that a set of beliefs that most people took for granted—above all else, the claim that the stock market could keep on rising indefinitely—were suddenly being loudly defended all over the place, even though next to nobody was attacking them. The June issue of The American Magazine featured an interview with financier Bernard Baruch, insisting that “the economic condition of the world seems on the verge of a great forward movement.” In the July 8 issue of Barron’s, similarly, an article insisted that people who worried about how much debt was propping up the market didn’t understand the role of brokers’ loans as a major new investment outlet for corporate money.

As late as October 15, when the great crash was only days away, Professor Irving Fisher of Yale’s economics department made his famous announcement to the media: “Stock prices have reached what looks like a permanently high plateau.” That sort of puffery was business as usual, then as now. Assaulting the critics of the bubble in print, by name, was not. It was only when the market was sliding toward the abyss of the 1929 crash that financial columnists publicly trained their rhetorical guns on the handful of people who had been saying all along that the boom would inevitably bust.

That’s a remarkably common feature of speculative bubbles, and could be traced in any number of historical examples, starting with the tulip bubble in the 17th century Netherlands and going on from there. Some of my readers may well have experienced the same thing for themselves in the not too distant past, during the last stages of the gargantuan real estate bubble that popped so messily in 2008. I certainly did, and a glance back at that experience will help clarify the implications of the signal I noticed in the week just past.

Back when the real estate bubble was soaring to vertiginous and hopelessly unsustainable heights, I used to track its progress on a couple of news aggregator sites, especially Keith Brand’s lively HousingPanic blog. Now and then, as the bubble peaked and began losing air, I would sit down with a glass of scotch, a series of links to the latest absurd comments by real estate promoters, and my copy of John Kenneth Galbraith’s The Great Crash 1929—the source, by the way, of the anecdotes cited above—and enjoy watching the rhetoric used to insist that the 2008 bubble wasn’t a bubble duplicate, in some cases word for word, the rhetoric used for the same purpose in 1929.

All the anti-bubble blogs fielded a steady stream of hostile comments from real estate investors who apparently couldn’t handle the thought that anyone might question their guaranteed ticket to unearned wealth, and Brand’s in particular saw no shortage of bare-knuckle verbal brawls. It was only in the last few months before the bubble burst, though, that pro-bubble blogs started posting personal attacks on Brand and his fellow critics, denouncing them by name in heated and usually inaccurate terms. At the time, I noted the parallel with the Barron’s attack on Roger Babson, and wondered if it meant the same thing; the events that followed showed pretty clearly that it did.

That same point may just have arrived in the fracking bubble—unsurprisingly, since that has followed the standard trajectory of speculative booms in all other respects so far. For some time now, the media has been full of proclamations about America’s allegedly limitless petroleum supply, which resemble nothing so much as the airy claims about stocks made by Bernard Baruch and Irving Fisher back in 1929. Week after week, bloggers and commentators have belabored the concept of peak oil, finding new and ingenious ways to insist that it must somehow be possible to extract infinite amounts of oil from a finite planet; oddly enough, though it’s rare for anyone to speak up for peak oil on these forums, the arguments leveled against it have been getting louder and more shrill as time passes. Until recently, though, I hadn’t encountered the personal attacks that announce the imminence of the bust.

That was before this week. On August 11th, a financial-advice website hosted a fine example of the species, and rather to my surprise—I’m hardly the most influential or widely read critic of the fracking bubble, after all—it was directed at me.

Mind you, I have no objection to hostile reviews of my writing. A number of books by other people have come in for various kinds of rough treatment on this blog, and turnabout here as elsewhere is fair play. I do prefer reviewers, hostile or otherwise, to take the time to read a book of mine before they review it, but that’s not something any writer can count on; reviewers who clearly haven’t so much as opened the cover of the book on which they pass judgment have been the target of barbed remarks in literary circles since at least the 18th century. Still, a review of a book the reviewer hasn’t read is one thing, and a review of a book the author hasn’t written and the publisher hasn’t published is something else again.

That’s basically the case here. The reviewer, a stock market blogger named Andrew McKillop, set out to critique a newly re-edited version of my 2008 book The Long Descent. That came as quite a surprise to me, as well as to New Society Publishers, the publisher of the earlier book, since no such reissue exists. The Long Descent remains in print in its original edition, and my six other books on peak oil and the future of industrial society are, ahem, different books.

My best guess is that McKillop spotted my new title Decline and Fall: The End of Empire and the Future of Democracy in 21st Century America in a bookshop window, and simply jumped to the conclusion that it must be a new release of the earlier book. I’m still not sure whether the result counts as a brilliant bit of surrealist performance art or a new low in what we still jokingly call journalistic ethics; in either case, it’s definitely broken new ground. Still, I hope that McKillop does better research for the people who count on him for stock advice.

Given that starting point, the rest of the review is about what you would expect. I gather that McKillop read a couple of online reviews of The Long Descent and a couple more of Decline and Fall, skimmed over a few randomly chosen posts on this blog, tossed the results together all anyhow, and jumped to the conclusion that the resulting mess was what the book was about. The result is quite a lively little bricolage of misunderstandings, non sequiturs, and straightforward fabrications—I invite anyone who cares to make the attempt to point out the place in my writings, for example, where I contrast catabolic collapse with “anabolic collapse,” whatever on earth that latter might be.

There’s a certain wry amusement to be had from going through the review and trying to figure out exactly how McKillop might have gotten this or that bit of misinformation wedged into his brain, but I’ll leave that as a party game for my readers. The point I’d like to make here is that the appearance of this attempted counterblast in a mainstream financial blog is a warning sign. It suggests that the fracking boom, like previous bubbles when they reached the shoot-the-messenger stage, may well be teetering on the brink of a really spectacular crash—and it’s not the only such sign, either.

The same questions about debt that were asked about the stock market in 1929 and the housing market in 2008 are being asked now, with increasing urgency, about the immense volume of junk bonds that are currently propping up the shale boom. Meanwhile gas and oil companies are having to drill ever more frantically and invest ever more money to keep production rates from dropping like a rock. Get past the vacuous handwaving about “Saudi America,” and it’s embarrassingly clear that the fracking boom is simply one more debt-fueled speculative orgy destined for one more messy bust. It’s disguised as an energy revolution in exactly the same way that the real estate bubble was disguised as a housing revolution, the tech-stock bubble as a technological revolution, and so on back through the annals of financial delusion as far as you care to go.

Sooner or later—and much more likely sooner than later—the fracking bubble is going to pop. Just how and when that will happen is impossible to know in advance. Even making an intelligent guess at this point would require a detailed knowledge of which banks and investment firms have gotten furthest over their heads in shale leases and the like, which petroleum and natural gas firms have gone out furthest on a financial limb, and so on. That’s the kind of information that the companies in question like to hide from one another, not to mention the general public; it’s thus effectively inaccessible to archdruids, which means that we’ll just have to wait for the bankruptcies, the panic selling, and the wet thud of financiers hitting Wall Street sidewalks to find out which firms won the fiscal irresponsibility sweepstakes this time around.

One way or another, the collapse of the fracking boom bids fair to deliver a body blow to the US economy, at a time when most sectors of that economy have yet to recover from the bruising they received at the hands of the real estate bubble and bust. Depending on how heavily and cluelessly foreign banks and investors have been sucked into the boom—again, hard to say without inside access to closely guarded financial information—the popping of the bubble could sucker-punch national economies elsewhere in the world as well. Either way, it’s going to be messy, and the consequences will likely include a second helping of the same unsavory stew of bailouts for the rich, austerity for the poor, bullying of weaker countries by their stronger neighbors, and the like, that was dished up with such reckless abandon in the aftermath of the 2008 real estate bust. Nor is any of this going to make it easier to deal with potential pandemics, simmering proto-insurgencies in the American heartland, or any of the other entertaining consequences of our headfirst collision with the sidewalks of reality.

The consequences may go further than this. The one detail that sets the fracking bubble apart from the real estate bubble, the tech stock bubble, and their kin further back in economic history is that fracking wasn’t just sold to investors as a way to get rich quick; it was also sold to them, and to the wider public as well, as a way to evade the otherwise inexorable reality of peak oil. 2008, it bears remembering, was not just the year that the real estate bubble crashed, and dragged much of the global economy down with it; it was also the year when all those prophets of perpetual business as usual who insisted that petroleum would never break $60 a barrel or so got to eat crow, deep-fried in light sweet crude, when prices spiked upwards of $140 a barrel. All of a sudden, all those warnings about peak oil that experts had been issuing since the 1950s became a great deal harder to dismiss out of hand.

The fracking bubble thus had mixed parentage; its father may have been the same merciless passion for fleecing the innocent that always sets the cold sick heart of Wall Street aflutter, but its mother was the uneasy dawn of recognition that by ignoring decades of warnings and recklessly burning through the Earth’s finite reserves of fossil fuels just as fast as they could be extracted, the industrial world has backed itself into a corner from which the only way out leads straight down. White’s Law, one of the core concepts of human ecology, points out that economic development is directly correlated with energy per capita; as depletion overtakes production and energy per capita begins to decline, the inevitable result is a long era of economic contraction, in which a galaxy of economic and cultural institutions predicated on continued growth will stop working, and those whose wealth and influence depend on those institutions will be left with few choices short of jumping out a Wall Street window.
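
For reference, Leslie White’s original formulation is usually summarized as C = E × T: the degree of cultural development (C) varies with the energy harnessed per capita per year (E) multiplied by the efficiency of the tools that put that energy to work (T). That shorthand is a common paraphrase rather than White’s exact words, but it makes the logic of the preceding paragraph concrete: hold T roughly steady while E declines, and C, the whole superstructure of economic and cultural complexity, has nowhere to go but down.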

The last few years of meretricious handwaving about fracking as the salvation of our fossil-fueled society may thus mark something rather more significant than another round of the pervasive financial fraud that’s become the lifeblood of the US economy in these latter days. It’s one of the latest—and maybe, just maybe, one of the last—of the mental evasions that people in the industrial world have used in the futile but fateful attempt to pretend that pursuing limitless economic growth on a finite and fragile planet is anything other than a guaranteed recipe for disaster. When the fracking bubble goes to its inevitable fate, and most of a decade of babbling about limitless shale oil takes its proper place in the annals of human idiocy, it’s just possible that some significant number of people will realize that the universe is under no obligation to provide us with all the energy and other resources we want, just because we happen to want them. I wouldn’t bet the farm on that, but I think the possibility is there.

One swallow does not a summer make, mind you, and one fumbled attempt at a hostile book review on one website doesn’t prove that the same stage in the speculative bubble cycle that saw frantic denunciations flung at Roger Babson and Keith Brand—the stage that comes immediately before the crash—has arrived this time around. I would encourage my readers to watch for similar denunciations aimed at more influential and respectable fracking-bubble critics such as Richard Heinberg or Kurt Cobb. Once those start showing up, hang onto your hat; it’s going to be a wild ride.

Dark Age America: A Bitter Legacy

Wed, 2014-08-13 16:45
Civilizations normally leave a damaged environment behind them when they fall, and ours shows every sign of following that wearily familiar pattern. The nature and severity of the ecological damage a civilization leaves behind, though, depend on two factors, one obvious, the other less so. The obvious factor derives from the nature of the technologies the civilization deployed in its heyday; the less obvious one depends on how many times those same technologies had been through the same cycle of rise and fall before the civilization under discussion got to them.

There’s an important lesson in this latter factor. Human technologies almost always start off their trajectory through time as environmental disasters looking for a spot marked X, which they inevitably find, and then have the rough edges knocked off them by centuries or millennia of bitter experience. When our species first developed the technologies that enabled hunting bands to take down big game animals, the result was mass slaughter and the extinction of entire species of megafauna, followed by famine and misery; rinse and repeat, and you get the exquisite ecological balance that most hunter-gatherer societies maintained in historic times. In much the same way, early field agriculture yielded bumper crops of topsoil loss and subsistence failure to go along with its less reliable yields of edible grain, and the hard lessons from that experience have driven the rise of more sustainable agricultural systems—a process completed in our time with the emergence of organic agricultural methods that build soil rather than depleting it.

Any brand new mode of human subsistence is thus normally cruising for a bruising, and will get it in due time at the hands of the biosphere. That’s not precisely good news for modern industrial civilization, because ours is a brand new mode of human subsistence; it’s the first human society ever to depend almost entirely on extrasomatic energy—energy, that is, that doesn’t come from human or animal muscles fueled by food crops. In my book The Ecotechnic Future, I’ve suggested that industrial civilization is simply the first and most wasteful example of a new mode of human society, the technic society. Eventually, I proposed, technic societies will achieve the same precise accommodation to ecological reality that hunter-gatherer societies worked out long ago, and that agricultural societies have spent the last eight thousand years or so pursuing. Unfortunately, that doesn’t help us much just now.

Modern industrial civilization, in point of fact, has been stunningly clueless in its relationship with the planetary cycles that keep us all alive. Like those early bands of roving hunters who slaughtered every mammoth they could find and then looked around blankly for something to eat, we’ve drawn down the finite stocks of fossil fuels on this planet without the least concern about what the future would bring—well, other than the occasional pious utterance of thoughtstopping mantras of the “Oh, I’m sure they’ll think of something” variety. That’s not the only thing we’ve drawn down recklessly, of course, and the impact of our idiotically short-term thinking on our long-term prospects will be among the most important forces shaping the next five centuries of North America’s future.

Let’s start with one of the most obvious: topsoil, the biologically active layer of soil that can support food crops. On average, as a result of today’s standard agricultural methods, North America’s arable land loses almost three tons of topsoil from each cultivated acre every single year. Most of the topsoil that made North America the breadbasket of the 20th century world is already gone, and at the current rate of loss, all of it will be gone by 2075. That would be bad enough if we could rely on artificial fertilizer to make up for the losses, but by 2075 that won’t be an option: the entire range of chemical fertilizers is made from nonrenewable resources—natural gas is the main feedstock for nitrogen fertilizers, rock phosphate for phosphate fertilizers, and so on—and all of these are depleting fast.
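
It’s worth pausing over what that 2075 figure implies. Here’s a back-of-envelope sketch; the 150-tons-per-acre-inch figure is a common rule of thumb for soil at typical bulk density, an assumption of mine rather than a number taken from the erosion statistics just cited:

```python
# Back-of-envelope sketch: what "gone by 2075" implies about remaining topsoil.
# TONS_PER_ACRE_INCH is a rule-of-thumb assumption (soil at roughly 1.3 g/cm^3),
# not a figure from the erosion statistics cited above.

TONS_PER_ACRE_INCH = 150
LOSS_TONS_PER_YEAR = 3      # average annual erosion per cultivated acre
YEARS_LEFT = 2075 - 2014    # 61 years from the time of writing

implied_tons = LOSS_TONS_PER_YEAR * YEARS_LEFT
implied_inches = implied_tons / TONS_PER_ACRE_INCH
print(f"{implied_tons} tons per acre, about {implied_inches:.1f} inches of topsoil")
# -> 183 tons per acre, about 1.2 inches of topsoil
```

In other words, the forecast assumes that little more than an inch of biologically active soil remains on the average cultivated acre; recall that accounts of the original prairie soils measured their depth in feet.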

Topsoil loss driven by bad agricultural practices is actually quite a common factor in the collapse of civilizations. Sea-floor cores in the waters around Greece, for example, show a spike in sediment deposition from rapidly eroding topsoil right around the end of the Mycenaean civilization, and another from the latter years of the Roman Empire. If archeologists thousands of years from now try the same test, they’ll find yet another eroded topsoil layer at the bottom of the Gulf of Mexico, the legacy of an agricultural system that put quarterly profits ahead of the relatively modest changes that might have preserved the soil for future generations.

The methods of organic agriculture mentioned earlier could help very significantly with this problem, since those include techniques for preserving existing topsoil, and rebuilding depleted soil at a rate considerably faster than nature’s pace. To make any kind of difference, though, those methods would have to be deployed on a very broad scale, and then passed down through the difficult years ahead. Lacking that, even where desertification driven by climate change doesn’t make farming impossible, a very large part of today’s North American farm belt will likely be unable to support crops for centuries or millennia to come. Eventually, the same slow processes that replenished the soil on land scraped bare by the ice age glaciers will do the same thing to land stripped of topsoil by industrial farming, but “eventually” will not come quickly enough to spare our descendants many hungry days.

The same tune in a different key is currently being played across the world’s oceans, and as a result my readers can look forward, in the not too distant future, to tasting the last piece of seafood they will ever eat. Conservatively managed, the world’s fish stocks could have produced large yields indefinitely, but they were not conservatively managed; where regulation was attempted, political and economic pressure consistently drove catch limits above sustainable levels, and of course cheating was pervasive and the penalties for being caught were merely another cost of doing business. Fishery after fishery has accordingly collapsed, and the increasingly frantic struggle to feed seven billion hungry mouths is unlikely to leave any of those that remain intact for long.

Worse, all of this is happening in oceans that are being hammered by other aspects of our collective ecological stupidity. Global climate change, by boosting the carbon dioxide content of the atmosphere, is acidifying the oceans and causing sweeping shifts in oceanic food chains. Those shifts involve winners as well as losers; where corals and calcium-shelled plankton such as coccolithophores are suffering population declines, seaweeds and algae, which are not so sensitive to changes in the acid-alkaline balance, are thriving on the increased CO2 in the water—but the fish that feed on seaweeds and algae are not the same as those that feed on calcareous plankton and corals, and the resulting changes are whipsawing ocean ecologies.

Close to shore, toxic effluents from human industry and agriculture are also adding to the trouble. The deep oceans, all things considered, offer sparse pickings for most saltwater creatures; the vast majority of ocean life thrives within a few hundred miles of land, where rivers, upwelling zones, and the like provide nutrients in relative abundance. We’re already seeing serious problems with toxic substances concentrating up through oceanic food chains; unless communities close to the water’s edge respond to rising sea levels with consummate care, hauling every source of toxic chemicals out of reach of the waters, that problem is only going to grow worse. Different species react differently to this or that toxin; some kind of aquatic ecosystem will emerge and thrive even in the most toxic estuaries of deindustrial North America, but it’s unlikely that those ecosystems will produce anything fit for human beings to eat, and making the attempt may not be particularly good for one’s health.

Over the long run, that, too, will right itself. Bioaccumulated toxins will end up entombed in the muck on the ocean’s floor, providing yet another interesting data point for the archeologists of the far future; food chains and ecosystems will reorganize, quite possibly in very different forms from the ones they have now. Changes in water temperature, and potentially in the patterns of ocean currents, will bring unfamiliar species into contact with one another, and living things that survive the deindustrial years in isolated refugia will expand into their former range. These are normal stages in the adaptation of ecosystems to large-scale shocks. Still, those processes of renewal take time, and the deindustrial dark ages ahead of us will be long gone before the seas are restored to biological abundance.

Barren lands and empty seas aren’t the only bitter legacies we’re leaving our descendants, of course. One of the others has received quite a bit of attention on the apocalyptic end of the peak oil blogosphere for several years now—since March 11, 2011, to be precise, when the Fukushima Daiichi nuclear disaster got under way. Nuclear power exerts a curious magnetism on the modern mind, drawing it toward extremes in one direction or the other; the wildly unrealistic claims about its limitless potential to power the future made by its supporters on the one hand are neatly balanced by the wildly unrealistic claims about its limitless potential as a source of human extinction made by its critics on the other. Negotiating a path between those extremes is not always an easy matter.

In both cases, though, it’s easy enough to clear away at least some of the confusion by turning to documented facts. It so happens, for instance, that no nation on Earth has ever been able to launch or maintain a nuclear power program without huge and continuing subsidies. Nuclear power never pays for itself; absent a steady stream of government handouts, it doesn’t make enough economic sense to attract enough private investment to cover its costs, much less meet the huge and so far unmet expenses of nuclear waste storage; and in the great majority of cases, the motive behind the program, and the subsidies, is pretty clearly the desire of the local government to arm itself with nuclear weapons at any cost. Thus the tired fantasy of cheap, abundant nuclear power needs to be buried alongside the Eisenhower-era propagandists who dreamed it up in the first place.

It also happens, of course, that there have been quite a few catastrophic nuclear accidents since the dawn of the atomic age just over seventy years ago, especially but not only in the former Soviet Union. Thus it’s no secret what the consequences are when a reactor melts down, or when mismanaged nuclear waste storage facilities catch fire and spew radioactive smoke across the countryside. What results is an unusually dangerous industrial accident, on a par with the sudden collapse of a hydroelectric dam or a chemical plant explosion that sends toxic gases drifting into a populated area; it differs from these mostly in that the contamination left behind by certain nuclear accidents remains dangerous for many years after it comes drifting down from the sky.

There are currently 69 operational nuclear power plants scattered unevenly across the face of North America, with 127 reactors among them; there are also 48 research reactors, most of them much smaller and less vulnerable to meltdown than the power plant reactors. Most North American nuclear power plants store spent fuel rods in pools of cooling water onsite, since the spent rods continue to give off heat and radiation and there’s no long-term storage for high-level nuclear waste. Neither a reactor nor a fuel rod storage pool can be left untended for long without serious trouble, and a great many things—including natural disasters and human stupidity—can push them over into meltdown, in the case of reactors, or conflagration, in the case of spent fuel rods. In either case, or both, you’ll get a plume of toxic, highly radioactive smoke drifting in the wind, and a great many people immediately downwind will die quickly or slowly, depending on the details and the dose.

It’s entirely reasonable to predict that this is going to happen to some of those 175 reactors—the 127 power reactors and 48 research reactors just mentioned. In a world racked by climate change, resource depletion, economic disintegration, political and social chaos, mass movements of populations, and the other normal features of the decline and fall of a civilization and the coming of a dark age, the short straw is going to be drawn sooner or later, and serious nuclear disasters are going to happen. That doesn’t justify the claim that every one of those reactors is going to melt down catastrophically, every one of the spent-fuel storage facilities is going to catch fire, and so on—though of course that claim does make for more colorful rhetoric.

In the real world, for reasons I’ll be discussing further in this series of posts, we don’t face the kind of sudden collapse that could make all the lights go out at once. Some nations, regions, and local areas within regions will slide faster than others, or be deliberately sacrificed so that resources of one kind or another can be used somewhere else. As long as governments retain any kind of power at all, keeping nuclear facilities from adding to the ongoing list of disasters will be high on their agendas; shutting down reactors that are no longer safe to operate is one step they can certainly take, and so is hauling spent fuel rods out of the pools and putting them somewhere less immediately vulnerable.

It’s probably a safe bet that the further we go along the arc of decline and fall, the further these decommissioning exercises will stray from the optimum. I can all too easily imagine fuel rods being hauled out of their pools by condemned criminals or political prisoners, loaded on flatbed rail cars, taken to some desolate corner of the expanding western deserts, and tipped one at a time into trenches dug in the desert soil, then covered over with a few meters of dirt and left to the elements. Sooner or later the radionuclides will leak out, and that desolate place will become even more desolate, a place of rumors and legends where those who go don’t come back.

Meanwhile, the reactors and spent-fuel pools that don’t get shut down even in so cavalier a fashion will become the focal points of dead zones of a slightly different kind. The facilities themselves will be off limits for some thousands of years, and the invisible footprints left behind by the plumes of smoke and dust will be dangerous for centuries. The vagaries of deposition and erosion are impossible to predict; in areas downwind from Chernobyl or some of the less famous Soviet nuclear accidents, one piece of overgrown former farmland may be relatively safe while another a quarter hour’s walk away may still set a Geiger counter clicking at way-beyond-safe rates. Here I imagine cow skulls on poles, or some such traditional marker, warning the unwary that they stand on the edge of accursed ground.

It’s important to keep in mind that not all the accursed ground in deindustrial North America will be the result of nuclear accidents. There are already areas on the continent so heavily contaminated with toxic pollutants of less glow-in-the-dark varieties that anyone who attempts to grow food or drink the water there can count on a short life and a wretched death. As the industrial system spirals toward its end, and those environmental protections that haven’t been gutted already get flung aside in the frantic quest to keep the system going just a little bit longer, spills and other industrial accidents are very likely to become a good deal more common than they are already.

There are methods of soil and ecosystem bioremediation that can be done with very simple technologies—for example, plants that concentrate toxic metals in their tissues, so that the contaminated biomass can be hauled away to a less dangerous site, and fungi that break down organic toxins—but if they’re to do any good at all, these will have to be preserved and deployed in the teeth of massive social changes and equally massive hardships. Lacking that, and it’s a considerable gamble at this point, the North America of the future will be spotted with areas where birth defects are a common cause of infant mortality and it will be rare to see anyone over the age of forty or so without the telltale signs of cancer.

There’s a bitter irony in the fact that cancer, a relatively rare disease a century and a half ago—most childhood cancers in particular were so rare that individual cases were written up in medical journals—has become the signature disease of industrial society, expanding its occurrence and death toll in lockstep with our mindless dumping of chemical toxins and radioactive waste into the environment. What, after all, is cancer? A disease of uncontrolled growth.

I sometimes wonder if our descendants in the deindustrial world will appreciate that irony. One way or another, I have no doubt that they’ll have their own opinions about the bitter legacy we’re leaving them. Late at night, when sleep is far away, I sometimes remember Ernest Thompson Seton’s heartrending 1927 prose poem “A Lament,” in which he recalled the beauty of the wild West he had known and the desolation of barbed wire and bleached bones he had seen it become. He projected the same curve of devastation forward until it rebounded on its perpetrators—yes, that would be us—and imagined the voyagers of some other nation landing centuries from now at the ruins of Manhattan, and slowly piecing together the story of a vanished people:

Their chiefs and wiser ones shall know
That here was a wastrel race, cruel and sordid,
Weighed and found wanting,
Once mighty but forgotten now.
And on our last remembrance stone,
These wiser ones will write of us:
They desolated their heritage,
They wrote their own doom.

I suspect, though, that our descendants will put things in language a good deal sharper than this. As they think back on the people of the 20th and early 21st centuries who gave them the barren soil and ravaged fisheries, the chaotic weather and rising oceans, the poisoned land and water, the birth defects and cancers that embitter their lives, how will they remember us? I think I know. I think we will be the orcs and Nazgûl of their legends, the collective Satan of their mythology, the ancient race who ravaged the earth and everything on it so they could enjoy lives of wretched excess at the future’s expense. They will remember us as evil incarnate—and from their perspective, it’s by no means easy to dispute that judgment.

Dark Age America: The Rising Oceans

Wed, 2014-08-06 17:02
The vagaries of global climate set in motion by our species’ frankly brainless maltreatment of the only atmosphere we’ve got, the subject of last week’s post here, have another dimension that bears close watching. History, as I suggested last week, can be seen as human ecology in its transformations over time, and every ecosystem depends in the final analysis on the available habitat. For human beings, the habitat that matters is dry land with adequate rainfall and moderate temperatures; we’ve talked about the way that anthropogenic climate change is interfering with the latter two, but it promises to have significant impacts on the first of those requirements as well.
It’s helpful to put all this in the context of deep time. For most of the last billion years or so, the Earth has been a swampy jungle planet where ice and snow were theoretical possibilities only. Four times in that vast span, though, something—scientists are still arguing about what—turned the planet’s thermostat down sharply, resulting in ice ages millions of years in length. The most recent of these downturns began cooling the planet maybe ten million years ago, in the Miocene epoch; a little less than two million years ago, at the beginning of the Pleistocene epoch, the first of the great continental ice sheets began to spread across the Northern Hemisphere, and the ice age was on.
We’re still in it. During an ice age, a complex interplay of the Earth’s rotational and orbital wobbles drives the Milankovitch cycle, a cyclical warming and cooling of the planet that takes around 100,000 years to complete, with long glaciations broken by much shorter interglacials. We’re approaching the end of the current interglacial, and it’s estimated that the current ice age has maybe another ten million years to go; one consequence is that at some point a few millennia in the future, we can pretty much count on the arrival of a new glaciation. In the meantime, we’ve still got continental ice sheets covering Antarctica and Greenland, and a significant amount of year-round ice in mountains in various corners of the world. That’s normal for an interglacial, though not for most of the planet’s history.
The back-and-forth flipflop between glaciations and interglacials has a galaxy of impacts on the climate and ecology of the planet, but one of the most obvious comes from the simple fact that all the frozen water needed to form a continental ice sheet has to come from somewhere, and the only available “somewhere” on this planet is the oceans. As glaciers spread, sea level drops accordingly; 18,000 years ago, when the most recent glaciation hit its final peak, sea level was more than 400 feet lower than today, and roaming tribal hunters could walk all the way from Holland to Ireland and keep going, following reindeer herds a good distance into what’s now the northeast Atlantic.
What followed has plenty of lessons on offer for our future. It used to be part of the received wisdom that ice ages began and ended with, ahem, glacial slowness, and there still seems to be good reason to think that the beginnings are fairly gradual, but the ending of the most recent ice age involved periods of very sudden change. 18,000 years ago, as already mentioned, the ice sheets were at their peak; about 16,000 years ago, the planetary climate began to warm, pushing the ice into a slow retreat. Around 14,700 years ago, the warm Bølling phase arrived, and the ice sheets retreated hundreds of miles; according to several studies, the West Antarctic ice sheet collapsed completely at this time.
The Bølling gave way after around 600 years to the Older Dryas cold period, putting the retreat of the ice on hold. After another six centuries or so, the Older Dryas gave way to a new warm period, the Allerød, which sent the ice sheets reeling back and raised sea levels hundreds of feet worldwide. Then came a new cold phase, the frigid Younger Dryas, which brought temperatures back for a few centuries to their ice age lows, cold enough to allow the West Antarctic ice sheet to reestablish itself and to restore tundra conditions over large sections of the Northern Hemisphere. Ice core measurements suggest that the temperature drop hit fast, in a few decades or less—a useful reminder that rapid climate change can come from natural sources as well as from our smokestacks and tailpipes.
Just over a millennium later, right around 9600 BCE, the Boreal phase arrived, and brought even more spectacular change. According to oxygen isotope measurements from Greenland ice cores—I get challenged on this point fairly often, so I’ll mention that the figure I’m citing is from Steven Mithen’s After The Ice, a widely respected 2003 survey of human prehistory—global temperatures spiked 7°C in less than a decade, pushing the remaining ice sheets into rapid collapse and sending sea levels soaring. Over the next few thousand years, the planet’s ice cover shrank to a little less than its current level, and sea level rose a bit above what it is today; a gradual cooling trend beginning around 6000 BCE brought both back to the levels they had at the beginning of the industrial era.
Scientists still aren’t sure what caused the stunning temperature spike at the beginning of the Boreal phase, but one widely held theory is that it was driven by large-scale methane releases from the warming oceans and thawing permafrost. The ocean floor contains huge amounts of methane trapped in unstable methane hydrates; permafrost contains equally huge amounts of dead vegetation that’s kept from rotting by subfreezing temperatures, and when the permafrost thaws, that vegetation rots and releases more methane. Methane is a far more powerful greenhouse gas than carbon dioxide, but it’s also much more transient—once released into the atmosphere, methane breaks down into carbon dioxide and water relatively quickly, with an estimated average lifespan of ten years or so—and so it’s quite a plausible driver for the sort of sudden shock that can be traced in the Greenland ice cores.
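Since that transience carries a fair amount of weight in the argument, here is a minimal sketch of it in Python, assuming a one-time methane pulse that decays exponentially and taking the ten-year average lifetime cited above at face value; the time horizons chosen are arbitrary illustrations, not data:

```python
# Toy illustration of methane's transience in the atmosphere.
# Assumes simple exponential decay with the roughly ten-year
# average lifetime cited in the text; real atmospheric chemistry
# is messier, so treat this as a sketch, not a model.
import math

LIFETIME_YEARS = 10.0  # estimated average atmospheric lifetime of methane

def fraction_remaining(years, lifetime=LIFETIME_YEARS):
    """Fraction of an instantaneous methane pulse still airborne after `years`."""
    return math.exp(-years / lifetime)

for t in (10, 20, 50):
    print(f"after {t} years: {fraction_remaining(t):.1%} of the pulse remains")
```

Run as written, it shows roughly a third of a pulse surviving its first decade and well under one percent surviving fifty years, which is why a methane release makes a plausible trigger for a sudden shock rather than a long plateau.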
If that’s what did it, of course, we’re arguably well on our way there. In a previous post here I discussed credible reports that large sections of the Arctic Ocean are fizzing with methane, and I suspect many of my readers have heard of the recently discovered craters in Siberia that appear to have been caused by methane blowouts from thawing permafrost. On top of the current carbon dioxide spike, a methane spike would do a fine job of producing the kind of climate chaos I discussed in last week’s post. That doesn’t equal the kind of runaway feedback loop beloved of a certain sect of contemporary apocalypse-mongers, because there are massive sources of negative feedback that such claims always ignore, but it seems quite likely that the decades ahead of us will be enlivened by a period of extreme climate turbulence driven by significant methane releases.
Meanwhile, two of the world’s three remaining ice sheets—the West Antarctic and Greenland sheets—have already been destabilized by rising temperatures. Between them, these two ice sheets contain enough water to raise sea level around 50 feet globally, and the estimate I’m using for anthropogenic carbon dioxide emissions over the next century provides enough warming to cause the collapse and total melting of both of them. All that water isn’t going to hit the world’s oceans overnight, of course, and a great deal depends on just how fast the melting happens.
The predictions for sea level rise included in the last few IPCC reports assume a slow, linear process of glacial melting. That’s appropriate as a baseline, but the evidence from paleoclimatology shows that ice sheets collapse in relatively sudden bursts of melting, producing what are termed “global meltwater pulses” that can be tracked worldwide by a variety of proxy measurements. Mind you, “relatively sudden” in geological terms is slow by the standards of a human lifetime; the complete collapse of a midsized ice sheet like Greenland’s or West Antarctica’s can take five or six centuries, and that in turn involves periods of relatively fast melting and sea level rise, interspersed with slack periods when sea level creeps up much more slowly. Spread over five or six centuries, fifty feet of rise works out to an average of roughly a foot per decade, but the pulse pattern means some decades will see several times that while others see almost none.
So far, at least, the vast East Antarctic ice sheet has shown only very modest changes, and most current estimates suggest that it would take something far more drastic than the carbon output of our remaining economically accessible fossil fuel reserves to tip it over into instability; this is a good thing, as East Antarctica’s ice fields contain enough water to drive sea level up by something like 175 feet. Thus a reasonable estimate for sea level change over the next five hundred years involves the collapse of the Greenland and West Antarctic sheets and some modest melting on the edges of the East Antarctic sheet, raising sea level by something over 50 feet, delivered in a series of unpredictable bursts divided by long periods of relative stability or slow change.
The result will be what paleogeographers call “marine transgression”—the invasion of dry land and fresh water by the sea. Fifty feet of sea level change adds up to quite a bit of marine transgression in some areas, much less in others, depending always on local topography. Where the ground is low and flat, the rising seas can penetrate a very long way; in California, for example, the state capital at Sacramento is many miles from the ocean, but since it’s only 30 feet above sea level and connected to the sea by a river, its skyscrapers will be rising out of a brackish estuary long before Greenland and West Antarctica are bare of ice. The port cities of the Gulf coast are also on the front lines—New Orleans is actually below sea level, and will likely be an early casualty, but every other Gulf port from Brownsville, Texas (elevation 43 feet) to Tampa, Florida (elevation 15 feet) faces the same fate, and most East and West Coast ports face substantial flooding of economically important districts.
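To make the role of topography concrete, here is a toy “bathtub” sketch in Python using the elevations cited in this paragraph; it simply asks which places a given rise would reach, ignores levees, subsidence, tides, and storm surge, and uses a stand-in figure for New Orleans, which the text puts below sea level:

```python
# Toy "bathtub" model of marine transgression, using the elevations
# cited in the text. It ignores levees, local subsidence, tides, and
# storm surge, so it shows only which cities a given rise would reach,
# not when or how the flooding would actually unfold.
CITY_ELEVATIONS_FT = {
    "New Orleans, LA": -7,   # stand-in figure: the text says "below sea level"
    "Tampa, FL": 15,
    "Sacramento, CA": 30,
    "Brownsville, TX": 43,
}

def flooded(sea_level_rise_ft):
    """Cities whose cited elevation falls below the new sea level."""
    return [city for city, elevation in CITY_ELEVATIONS_FT.items()
            if elevation < sea_level_rise_ft]

for rise_ft in (10, 30, 50):
    reached = ", ".join(flooded(rise_ft)) or "none"
    print(f"{rise_ft} ft of rise reaches: {reached}")
```

Even so crude a sketch makes the point: the fifty-odd feet of rise discussed above is enough to reach every city on the list, and the real question is not whether but when, and how fast.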
The flooding of Sacramento isn’t the end of the world, and there may even be some among my readers who would consider it to be a good thing. What I’d like to point out, though, is the economic impact of the rising waters. Faced with an unpredictable but continuing rise in sea level, communities and societies face one of two extremely expensive choices. They can abandon billions of dollars of infrastructure to the sea and rebuild further inland, or they can invest billions of dollars in flood control. Because the rate of sea level change can’t be anticipated, furthermore, there’s no way to know in advance how far to relocate or how high to build the barriers at any given time, and there are often hard limits to how much can be done in advance: port cities, for example, can’t just move away from the sea and still maintain a functioning economy.
This is a pattern we’ll be seeing over and over again in this series of posts. Societies descending into dark ages reliably get caught on the horns of a brutal dilemma. For any of a galaxy of reasons, crucial elements of infrastructure no longer do the job they once did, but reworking or replacing them runs up against two critical difficulties that are hardwired into the process of decline itself. The first is that, as time passes, the resources needed to do the necessary work become increasingly scarce; the second is that, as time passes, the uncertainties about what needs to be done become increasingly large.
The result can be tracked in the decline of every civilization. At first, failing systems are replaced with some success, but the economic impact of the replacement process becomes an ever-increasing burden, and the new systems never do quite manage to work as well as the older ones did in their heyday. As the process continues, the costs keep mounting and the benefits become less reliable; more and more often, scarce resources end up being wasted or put to counterproductive uses because the situation is too uncertain to allow for their optimum allocation. With each passing year, decision makers have to figure out how much of the dwindling stock of resources can be put to productive uses and how much has to be set aside for crisis management, and the raw uncertainty of the times guarantees that these decisions will very often turn out wrong. Eventually, the declining curve in available resources and the rising curve of uncertainty intersect to produce a crisis that spins out of control, and what’s left of a community, an economic sector, or a whole civilization goes to pieces under the impact.
It’s not too hard to anticipate how that will play out in the century or so immediately ahead of us. If, as I’ve suggested, we can expect the onset of a global meltwater pulse from the breakup of the Greenland and West Antarctic ice sheets at some point in the years ahead, the first upward jolt in sea level will doubtless be met with grand plans for flood-control measures in some areas, and relocation of housing and economic activities in others. Some of those plans may even be carried out, though the raw economic impact of worldwide coastal flooding on a global economy already under severe strain from a chaotic climate and a variety of other factors won’t make that easy. Some coastal cities will hunker down behind hurriedly built or enlarged levees, others will abandon low-lying districts and try to rebuild further upslope, still others will simply founder and be partly or wholly abandoned—and all these choices impose costs on society as a whole.
Thereafter, in years and decades when sea level rises only slowly, the costs of maintaining flood control measures and replacing vulnerable infrastructure with new facilities on higher ground will become an unpopular burden, and the same logic that drives climate change denialism today will doubtless find plenty of hearers then as well. In years and decades when sea level surges upwards, the flood control measures and relocation projects will face increasingly severe tests, which some of them will inevitably fail. The twin spirals of rising costs and rising uncertainty will have their usual effect, shredding the ability of a failing society to cope with the challenges that beset it.
It’s even possible in one specific case to make an educated guess as to the nature of the pressures that will finally push the situation over the edge into collapse and abandonment. It so happens that three different processes that follow in the wake of rapid glacial melting all have the same disastrous consequence for the eastern shores of North America.
The first of these is isostatic rebound. When you pile billions of tons of ice on a piece of land, the land sinks, pressing down hundreds or thousands of feet into the Earth’s mantle; melt the ice, and the land rises again. If the melting happens over a brief time, geologically speaking, the rebound is generally fast enough to place severe stress on geological faults all through the region, and thus sharply increases the occurrence of earthquakes. The Greenland ice sheet is by no means exempt from this process, and many of the earthquakes in the area around a rising Greenland will inevitably happen offshore. The likely result? Tsunamis.
The second process is the destabilization of undersea sediments that build up around an ice sheet that ends in the ocean. As the ice goes away, torrents of meltwater pour into the surrounding seas, isostatic rebound changes the slope of the underlying rock, and masses of sediment break free and plunge down the continental slope into the deep ocean. Some of the sediment slides that followed the end of the last ice age were of impressive scale—the Storegga Slide off the coast of Norway around 6220 BCE, which was caused by exactly this process, sent 840 cubic miles of sediment careening down the continental slope. The likely result? More tsunamis.
The third process, which is somewhat more speculative than the first two, is the sudden blowout of large volumes of undersea methane hydrates. Several oceanographers and paleoclimatologists have argued that the traces of very large underwater slides in the Atlantic, dating from the waning days of the last ice age, may well be the traces of such blowouts. As the climate warmed, they suggest, methane hydrates on the continental shelves were destabilized by rising temperatures, and a sudden shock—perhaps delivered by an earthquake, perhaps by something else—triggered the explosive release of thousands or millions of tons of methane all at once. The likely result? Still more tsunamis.
It’s crucial to realize the role that uncertainty plays here, as in so many dimensions of our predicament. No one knows whether tsunamis driven by glacial melting will hammer the shores of the northern Atlantic basin some time in the next week, or some time in the next millennium. Even if tsunamis driven by the collapse of the Greenland ice sheet become statistically inevitable, there’s no way for anyone to know in advance the timing, scale, and direction of any such event. Efficient allocation of resources to East Coast ports becomes a nightmarish challenge when you literally have no way of knowing how soon any given investment might suddenly end up on the bottom of the Atlantic.
If human beings behave as they usually do, what will most likely happen is that the port cities of the US East Coast will keep on trying to maintain business as usual until well after that stops making any kind of economic sense. The faster the seas rise and the sooner the first tsunamis show up, the sooner that response will tip over into its opposite, and people will begin to flee in large numbers from the coasts in search of safety for themselves and their families. My working guess is that the eastern seaboard of dark age America will be sparsely populated, with communities concentrated in those areas where land well above tsunami range lies close to the sea. The Pacific and Gulf coasts will be at much less risk from tsunamis, and so may be more thickly settled; that said, during periods of rapid marine transgression, the mostly flat and vulnerable Gulf Coast may lose a great deal of land, and those who live there will need to be ready to move inland in a hurry.
All these factors make for a shift in the economic and political geography of the continent that will be of quite some importance at a later point in this series of posts. In times of rapid sea level change, maintaining the infrastructure for maritime trade in seacoast ports is a losing struggle; maritime trade is still possible without port infrastructure, but it’s rarely economically viable; and that means that inland waterways with good navigable connections to the sea will take on an even greater importance than they have today. In North America, the most crucial of those are the St. Lawrence Seaway, the Hudson River-Erie Canal linkage to the Great Lakes, and whatever port further inland replaces New Orleans—Baton Rouge is a likely candidate, due to its location and elevation above sea level—once the current Mississippi delta drowns beneath the rising seas.
Even in dark ages, as I’ll demonstrate later on, maritime trade is a normal part of life, and that means that the waterways just listed will become the economic, political, and strategic keys to most of the North American continent. The implications of that geographical reality will be the focus of a number of posts as we proceed.