AODA Blog

Dark Age America: The Hour of the Knife

Wed, 2014-10-15 20:15
It was definitely the sort of week that could benefit from a little comic relief. The Ebola epidemic marked another week of rising death tolls and inadequate international response. Bombs rained down ineffectually on various corners of Iraq and Syria as the United States and an assortment of putative allies launched air strikes at the Islamic State insurgents; since air strikes by themselves don’t win wars, and none of the combatants except Islamic State and the people they’re attacking have shown any inclination to put boots on the ground, that high-tech tantrum also counts in every practical sense as an admission of defeat, a point which is doubtless not lost on Islamic State. Meanwhile stock markets worldwide plunged on an assortment of ghastly economic news, with most indexes giving up their 2014 gains and then some, and oil prices dropped on weakening demand, reaching levels that put a good many fracking firms in imminent danger of bankruptcy.
In the teeth of all this bad news, I’m pleased to say, Paul Krugman rose to the occasion and gave all of us in the peak oil scene something to laugh about.  My regular readers will recall that Krugman assailed Post Carbon Institute a couple of weeks ago for having the temerity to point out that transitioning away from fossil fuels was, ahem, actually going to cost money. His piece was rebutted at once by Post Carbon’s Richard Heinberg and others, who challenged Krugman’s crackpot optimism and pointed out that the laws of physics and geology really do trump those of economics.
Krugman’s response—it really is a comic masterpiece, better than anything I’ve seen since the heyday of Francis Fukuyama—involved, among other non sequiturs and dubious claims, assailing mere scientists for thinking that they know more than economists. Er, let’s see: which of these two groups of people is expected to test their predictions against hard facts and discard a theory that produces inaccurate predictions? That’s what scientists do every working day, while economists apparently have something else to occupy their time. This may be why, when it comes to predicting macroeconomic conditions, economists these days are rarely as accurate as a tossed coin: consider the IMF’s continued advocacy of austerity programs as the road to prosperity when no country that has ever implemented them has ever achieved prosperity thereby, or for that matter the huge majority of economists who insisted the housing bubble wasn’t a bubble and wouldn’t crash, right up until the bottom dropped out.
Like so much great comedy, though, Krugman’s jest has its serious side. He sees a permanent condition of economic growth as the normal, indeed the inevitable state of affairs; it has doubtless never occurred to him that it might merely be a temporary anomaly, made possible only by the reckless extraction and consumption of half a billion years of fossil sunlight in a few short centuries. That the needle on the world’s fossil fuel gauge is swinging inexorably over toward E, to him, thus can only mean that some other source of cheap, abundant, highly concentrated energy will have to be found to keep the engines of economic growth roaring on at full throttle. That there may be no such replacement for fossil fuels ready and waiting in Nature’s cookie jar, and that economic growth can thus give way to an economic contraction extending over decades and centuries to come, has never entered his darkest dream.
That is to say, Krugman is still thinking the thoughts of a bygone era when the assumptions guiding those thoughts are long past their pull date and a very different era is taking shape around him. That’s a common source of confusion in times of rapid change, and never more so than in the decline and fall of civilizations—the theme of the current series of posts here. One specific form of that confusion very often becomes the mechanism by which the governing elite of a society in decline removes itself from power, and that mechanism is what I want to discuss this week.
To make sense of that process, it’s going to be necessary to take a step back and revisit some of the points made in an earlier post in this series. I discussed there the way that the complex social hierarchies common to mature civilizations break down into larger and less stable masses in which new loyalties and hatreds more easily build to explosive intensity. America’s as good an example of that as any.  A century ago, for example, racists in this country were at great pains to distinguish various classes of whiteness, with people of Anglo-Saxon ancestry at the pinnacle of whiteness and everybody else fitted into an intricate scheme of less-white categories below. Over the course of the twentieth century, those categories collapsed into a handful of abstract ethnicities—white, black, Hispanic, Asian—and can be counted on to collapse further as we proceed, until there are just two categories left, which are not determined by ethnicity but purely by access to the machinery of power.
Arnold Toynbee, whose immensely detailed exploration of this process remains the best account for our purposes, called those two the dominant minority and the internal proletariat. The dominant minority is the governing elite of a civilization in its last phases, a group of people united not by ethnic, cultural, religious, or ideological ties, but purely by their success in either clawing their way up the social ladder to a position of power, or hanging on to a position inherited from their forebears. Toynbee draws a sharp division between a dominant minority and the governing elite of a civilization that hasn’t yet begun to decline, which he calls a creative minority. The difference is that a creative minority hasn’t yet gone through the descent into senility that afflicts elites, and still recalls its dependence on the loyalty of those further down the social ladder; a dominant minority or, in my terms, a senile elite has lost track of that, and has to demand and enforce obedience because it can no longer inspire respect.
Everyone else in a declining civilization belongs to the second category, the internal proletariat. Like the dominant minority, the internal proletariat has nothing to unite it but its relationship to political power: it consists of all those people who have none. In the face of that fact, other social divisions gradually evaporate.  Social hierarchies are a form of capital, and like any form of capital, they have maintenance costs, which are paid out in the form of influence and wealth.   The higher someone stands in the social hierarchy, the more access to influence and wealth they have; that’s their payoff for cooperating with the system and enforcing its norms on those further down.
As resources run short and a civilization in decline has to start cutting its maintenance costs, though, the payoffs get cut. For obvious reasons, the higher someone is on the ladder to begin with, the more influence they have over whose payoffs get cut, and that reliably works out to “not mine.” The further down you go, by contrast, the more likely people are to get the short end of the stick. That said, until the civilization actually comes apart, there’s normally a floor to the process, somewhere around the minimum necessary to actually sustain life; an unlucky few get pushed below this, but normally it’s easier to maintain social order when the very poor get just enough to survive. Thus social hierarchies disintegrate from the bottom up, as more and more people on the lower rungs of the ladder are pushed down to the bottom, erasing the social distinctions that once differentiated them from the lowest rung.
That happens in society as a whole; it also happens in each of the broad divisions of the caste system—in the United States, those would be the major ethnic divisions. The many shades of relative whiteness that used to divide white Americans into an intricate array of castes, for instance, have almost entirely gone by the boards; you have to go pretty far up the ladder to find white Americans who differentiate themselves from other white Americans on the basis of whose descendants they are. Further down the ladder, Americans of Italian, Irish, and Polish descent—once strictly defined castes with their own churches, neighborhoods, and institutions—now as often as not think of themselves as white without further qualification.
The same process has gotten under way to one extent or another in the other major ethnic divisions of American society, and it’s also started to dissolve even those divisions among the growing masses of the very poor.  I have something of a front-row seat on that last process; I live on the edge of the low-rent district in an old mill town in the Appalachians, and shopping and other errands take me through the neighborhood on foot quite often. I walk past couples pushing baby carriages, kids playing in backyards or vacant lots, neighbors hanging out together on porches, and as often as not these days the people in these groups don’t all have the same skin color. Head into the expensive part of town and you won’t see that; the dissolution of the caste system hasn’t extended that far up the ladder—yet.
This is business as usual in a collapsing civilization.  Sooner or later, no matter how intricate the caste system you start with, you end up with a society divided along the lines sketched out by Toynbee, with a dominant minority defined solely by its access to power and wealth and an internal proletariat defined solely by its exclusion from these things. We’re not there yet, not in the United States; there are still an assortment of intermediate castes between the two final divisions of society—but as Bob Dylan said a long time ago, you don’t have to be a weatherman to know which way the wind is blowing.
The political implications of this shift are worth watching. As I’ve noted here more than once, ruling elites in mature civilizations don’t actually exercise power themselves; they issue general directives to their immediate subordinates, who hand them further down the pyramid; along the way the general directives are turned into specific orders, which finally go to the ordinary working Joes and Janes who actually do the work of maintaining the status quo against potential rivals, rebels, and dissidents. A governing elite that hasn’t yet gone senile knows that it has to keep the members of its overseer class happy, and provides them with appropriate perks and privileges toward this end. As the caste system starts to disintegrate due to a shortage of resources to meet maintenance costs, though, the salaries and benefits at the bottom of the overseer class get cut, and more and more of the work of maintaining the system is assigned to poorly paid, poorly trained, and poorly motivated temp workers whose loyalties don’t necessarily lie with their putative masters.
You might think that even an elite gone senile would have enough basic common sense left to notice that losing the loyalty of the people who keep the elite in power is a fatal error.  In practice, though, the disconnection between the world of the dominant elite and the world of the internal proletariat quickly becomes total, and the former can be completely convinced that everything is fine when the latter know otherwise. As I write this, there’s a timely example unfolding at Texas Health Presbyterian Hospital in Dallas, where hospital administrators have been insisting at the top of their lungs that every possible precaution was taken when the late Thomas Duncan was being treated there for Ebola. According to the nursing staff—two of whom have now come down with the disease—“every possible precaution” amounted to no training, inadequate protective gear, and work schedules that had nurses who treated Duncan go on to tend other patients immediately thereafter.
A few weeks ago, the US media was full of confident bluster about how our high-tech medical industry would swing into action and stop the disease in its tracks; the gap between those easy assurances and the Keystone Kops response currently under way in Dallas is the same, mutatis mutandis, as the gap between the august edicts proclaimed in the capital during the last years of every civilization and the chaos in the streets and on the borders. You can see the same gap at work every time the US government trots out the latest round of heavily massaged economic statistics claiming that prosperity is just around the corner, or—well, I could go on listing examples for any number of pages.
So the gap that opens up between the dominant minority and the internal proletariat is much easier to see from below than from above. Left to itself, that gap would probably keep widening until the dominant minority toppled into it. It’s an interesting regularity of history, though, that this process is almost never left to run its full length. Instead, another series of events overtakes it, with the same harsh consequences for the dominant minority.
To understand this it’s necessary to include another aspect of Toynbee’s analysis, and look at what’s going on just outside the borders of a civilization in decline. Civilizations prosper by preying on their neighbors; the mechanism may be invasion and outright pillage, demands for tribute backed up by the threat of armed force, unbalanced systems of exchange that concentrate wealth in an imperial center at the expense of the periphery, or what have you, but the process is the same in every case, and so are the results. One way or another, the heartland of every civilization ends up surrounded by an impoverished borderland, scaled according to the transport technologies of the era.  In the case of the ancient Maya, the borderland extended only a modest distance in any direction; in the case of ancient Rome, it extended north to the Baltic Sea and east up to the borders of Parthia; in the case of modern industrial society, the borderland includes the entire Third World.
However large the borderland may be, its inhabitants fill a distinctive role in the decline and fall of a civilization. Toynbee calls them the external proletariat; as a civilization matures, their labor provides a steadily increasing share of the wealth that keeps the civilization and its dominant elite afloat, but they receive essentially nothing in return, and they’re keenly aware of this. Civilizations in their prime keep their external proletariats under control by finding and funding compliant despots to rule over the borderlands and, not incidentally, distract the rage of the external proletariat to some target more expendable than the civilization’s dominant minority. Here again, though, maintenance costs are the critical issue. When a dominant minority can no longer afford the subsidies and regular military expeditions needed to keep its puppet despots on their thrones, and tries to maintain peace along the borders on the cheap, it invariably catalyzes the birth of the social form that brings it down.
Historians call it the warband: a group of young men whose sole trade is violence, gathered around a charismatic leader.  Warbands spring up in the borderlands of a civilization as the dominant minority or its pet despots lose their grip, and go through a brutally Darwinian process of evolution thereafter in constant struggle with each other and with every other present or potential rival in range. Once they start forming, there seems to be little that a declining civilization can do to derail that evolutionary process; warbands are born of chaos, their activities add to the chaos, and every attempt to pacify the borderlands by force simply adds to the chaos that feeds them. In their early days, warbands cover their expenses by whatever form of violent activity will pay the bills, from armed robbery to smuggling to mercenary service; as they grow, raids across the border are the next step; as the civilization falls apart and the age of migrations begins, warbands are the cutting edge of the process that shreds nations and scatters their people across the map.
The process of warband formation itself can quite readily bring a civilization down. Very often, though, the dominant minority of the declining civilization gives the process a good hard shove. As the chasm between the dominant minority and the internal proletariat becomes wider, remember, the overseer class that used to take care of crowd control and the like for the dominant minority becomes less and less reliable, as their morale and effectiveness are hammered by ongoing budget cuts, and the social barriers that once divided them from the people they are supposed to control will have begun to dissolve if they haven’t entirely given way yet. What’s the obvious option for a dominant minority that is worried about its ability to control the internal proletariat, can no longer rely on its own overseer class, and also has a desperate need to find something to distract the warbands on its borders?
They hire the warbands, of course.
That’s what inspired the Roman-British despot Vortigern to hire the Saxon warlord Hengist and three shiploads of his heavily armed friends to help keep the peace in Britannia after the legions departed. That’s what led the Fujiwara family, the uncrowned rulers of Japan, to hire uncouth samurai from the distant, half-barbarous Kanto plain to maintain peace in the twilight years of the Heian period. That’s why scores of other ruling elites have made the obvious, logical, and lethal choice to hire their own replacements and hand over the actual administration of power to them.
That is the moment toward which all the political trends examined in the last four posts in this sequence converge. The disintegration of social hierarchies, the senility of ruling elites, and the fossilization of institutions all lead to the hour of the knife, the point at which those who think they still rule a civilization discover the hard way—sometimes the very hard way—that effective power has transferred to new and more muscular hands. Those of the elites that attempt to resist this transfer rarely survive the experience.  Those who accommodate themselves to the new state of affairs may be able to prosper for a time, but only so long as their ability to manipulate what’s left of the old system makes them useful to its new overlords. As what was once a complex society governed by bureaucratic institutions dissolves into a much simpler society governed by the personal rule of warlords, that skill set does not necessarily wear well.
In some cases—Hengist is an example—the warlords allow the old institutions to fall to pieces all at once, and the transition from an urban civilization to a protofeudal rural society takes place in a few generations at most. In others—the samurai of the Minamoto clan, who came out on top in the furious struggles that surrounded the end of the Heian period, are an example here—the warlords try to maintain the existing order of society as best they can, and get dragged down by the same catabolic trap that overwhelmed their predecessors. In an unusually complex case—for example, post-Roman Italy—one warlord after another can seize what’s left of the institutional structure of a dead empire, try to run it for a while, and then get replaced by someone else with the same agenda, each change driving one more step down the long stair that turned the Forum into a sheep pasture.
Exactly how this process will play out in the present case is impossible to predict in advance. We’ve got warband formation well under way in quite a few corners of industrial civilization’s borderlands, the southern border of the United States among them; we’ve got a dominant minority far advanced in the state of senility described in an earlier post; we’ve got a society equally well advanced in the dissolution of castes into dominant minority and internal proletariat. Where we are now in the process is clear enough; what will come out the other side, which will be discussed in a future post, is equally clear; the exact series of steps between them is of less importance—except, of course, to those who have the most to fear when the hour of the knife arrives.

****************
In other news, I'm pleased to announce that my latest book from New Society Publishers, After Progress: Reason, Religion, and the End of the Industrial Age is now available for preorder, with a 20% discount off the cover price as an additional temptation. Those readers who enjoyed last year's series of posts on religion and the end of progress will find this very much to their taste. 

Dark Age America: The Collapse of Political Complexity

Wed, 2014-10-08 16:10
The senility that afflicts ruling elites in their last years, the theme of the previous post in this sequence, is far from the only factor leading the rich and influential members of a failing civilization to their eventual destiny as lamppost decorations or some close equivalent. Another factor, at least as important, is a lethal mismatch between the realities of power in an age of decline and the institutional frameworks inherited from a previous age of ascent.
That sounds very abstract, and appropriately so. Power in a mature civilization is very abstract, and the further you ascend the social ladder, the more abstract it becomes. Conspiracy theorists of a certain stripe have invested vast amounts of time and effort in quarrels over which specific group of people it is that runs everything in today’s America. All of it was wasted, because the nature of power in a mature civilization precludes the emergence of any one center of power that dominates all others.
Look at the world through the eyes of an elite class and it’s easy to see how this works. Members of an elite class compete against one another to increase their own wealth and influence, and form alliances to pool resources and counter the depredations of their rivals. The result, in every human society complex enough to have an elite class in the first place, is an elite composed of squabbling factions that jealously resist any attempt at further centralization of power. In times of crisis, that resistance can be overcome, but in less troubled times, any attempt by an individual or faction to seize control of the whole system faces the united opposition of the rest of the elite class.
One result of the constant defensive stance of elite factions against each other is that as a society matures, power tends to pass from individuals to institutions. Bureaucratic systems take over more and more of the management of political, economic, and cultural affairs, and the policies that guide the bureaucrats in their work slowly harden until they are no more subject to change than the law of gravity.  Among its other benefits to the existing order of society, this habit—we may as well call it policy mummification—limits the likelihood that an ambitious individual can parlay control over a single bureaucracy into a weapon against his rivals.
Our civilization is no exception to any of this.  In the modern industrial world, some bureaucracies are overtly part of the political sphere; others—we call them corporations—are supposedly apart from government, and still others like to call themselves “non-governmental organizations” as a form of protective camouflage. They are all part of the institutional structure of power, and thus function in practice as arms of government.  They have more in common than this; most of them have the same hierarchical structure and organizational culture; those that are large enough to matter have executives who went to the same schools, share the same values, and crave the same handouts from higher up the ladder. No matter how revolutionary their rhetoric, for that matter, upsetting the system that provides them with their status and its substantial benefits is the last thing any of them want to do.
All these arrangements make for a great deal of stability, which the elite classes of mature civilizations generally crave. The downside is that it’s not easy for a society that’s proceeded along this path to change its ways to respond to new circumstances. Getting an entrenched bureaucracy to set aside its mummified policies in the face of changing conditions is generally so difficult that it’s often easier to leave the old system in place while redirecting all its important functions to another, newly founded bureaucracy oriented toward the new policies. If conditions change again, the same procedure repeats, producing a layer cake of bureaucratic organizations that all supposedly exist to do the same thing.
Consider, as one example out of many, the shifting of responsibility for US foreign policy over the years. Officially, the State Department has charge of foreign affairs; in practice, its key responsibilities passed many decades ago to the staff of the National Security Council, and more recently have shifted again to coteries of advisers assigned to the Office of the President.  In each case, what drove the shift was the attachment of the older institution to a set of policies and procedures that stopped being relevant to the world of foreign policy—in the case of the State Department, the customary notions of old-fashioned diplomacy; in the case of the National Security Council, the bipolar power politics of the Cold War era—but could not be dislodged from the bureaucracy in question due to the immense inertia of policy mummification in institutional frameworks.
The layered systems that result are not without their practical advantages to the existing order. Multiple bureaucracies provide even more stability than a single bureaucracy, since it’s often necessary for the people who actually have day to day responsibility for this or that government function to get formal approval from the top officials of the agency or agencies that used to have that responsibility. Even when those officials no longer have any formal way to block a policy they don’t like, the personal and contextual nature of elite politics means that informal options usually exist. Furthermore, since the titular headship of some formerly important body such as the US State Department confers prestige but not power, it makes a good consolation prize to be handed out to also-rans in major political contests, a place to park well-connected incompetents, or what have you.
Those of my readers who recall the discussion of catabolic collapse three weeks ago will already have figured out one of the problems with the sort of system that results from the processes just sketched out:  the maintenance bill for so baroque a form of capital is not small. In a mature civilization, a large fraction of available resources and economic production ends up being consumed by institutions that no longer have any real function beyond perpetuating their own existence and the salaries and prestige of their upper-level functionaries. It’s not unusual for the maintenance costs of unproductive capital of this kind to become so great a burden on society that the burden in itself forces a crisis—that was one of the major forces that brought about the French Revolution, for instance. Still, I’d like to focus for a moment on a different issue, which is the effect that the institutionalization of power and the multiplication of bureaucracy has on the elites who allegedly run the system from which they so richly benefit.
France in the years leading up to the Revolution makes a superb example, one that John Kenneth Galbraith discussed with his trademark sardonic humor in his useful book The Culture of Contentment. The role of the ruling elite in pre-1789 France was occupied by close equivalents of the people who fill that same position in America today: the “nobility of the sword,” the old feudal aristocracy, who had roughly the same role as the holders of inherited wealth in today’s America, and the “nobility of the robe,” who owed their position to education, political office, and a talent for social climbing, and thus had roughly the same role as successful Ivy League graduates do here and now. These two elite classes sparred constantly against each other, and just as constantly competed against their own peers for wealth, influence, and position.
One of the most notable features of both sides of the French elite in those days was just how little either group actually had to do with the day-to-day management of public affairs, or for that matter of their own considerable wealth. The great aristocratic estates of the time were bureaucratic societies in miniature, ruled by hierarchies of feudal servitors and middle-class managers, while the hot new financial innovation of the time, the stock market, allowed those who wanted their wealth in a less tradition-infested form to neglect every part of business ownership but the profits. Those members of the upper classes who held offices in government, the church, and the other venues of power presided decorously over institutions that were perfectly capable of functioning without them.
The elite classes of mature civilizations almost always seek to establish arrangements of this sort, and understandably so. It’s easy to recognize the attractiveness of a state of affairs in which the holders of wealth and influence get all the advantages of their positions and have to put up with as few as possible of the inconveniences thereof. That said, this attraction is also a death wish, because it rarely takes the people who actually do the work long to figure out that a ruling class in this situation has become entirely parasitic, and that society would continue to function perfectly well were something suitably terminal to happen to the titular holders of power.
This is why most of the revolutions in modern history have taken place in nations in which the ruling elite has followed its predilections and handed over all its duties to subordinates. In the case of the American revolution, the English nobility had been directly involved in colonial affairs in the first century or so after Jamestown. Once it left the colonists to manage their own affairs, the latter needed very little time to realize that the only thing they had to lose by seeking independence was the steady hemorrhage of wealth from the colonies to England. In the case of the French and Russian revolutions, much the same thing happened without the benefit of an ocean in the way: the middle classes who actually ran both societies recognized that the monarchy and aristocracy had become disposable, and promptly disposed of them once a crisis made it possible to do so.
The crisis just mentioned is a significant factor in the process. Under normal conditions, a society with a purely decorative ruling elite can keep on stumbling along indefinitely on sheer momentum. It usually takes a crisis—Britain’s military response to colonial protests in 1775, the effective bankruptcy of the French government in 1789, the total military failure of the Russian government in 1917, or what have you—to convince the people who actually handle the levers of power that their best interests no longer lie with their erstwhile masters. Once the crisis hits, the unraveling of the institutional structures of authority can happen with blinding speed, and the former ruling elite is rarely in a position to do anything about it. All they have ever had to do, and all they know how to do, is issue orders to deferential subordinates. When there are none of these latter to be found, or (as more often happens) when the people to whom the deferential subordinates are supposed to pass the orders are no longer interested in listening, the elite has no options left.
The key point to be grasped here is that power is always contextual. A powerful person is a person able to exert particular kinds of power, using particular means, on some particular group of other people, and someone thus can be immensely powerful in one setting and completely powerless in another. What renders the elite classes of a mature society vulnerable to a total collapse of power is that they almost always lose track of this unwelcome fact. Hereditary elites are particularly prone to fall into the trap of thinking of their position in society as an accurate measure of their own personal qualifications to rule, but it’s also quite common for those who are brought into the elite from the classes immediately below to think of their elevation as proof of their innate superiority. That kind of thinking is natural for elites, but once they embrace it, they’re doomed.
It’s dangerous enough for elites to lose track of the contextual and contingent nature of their power when the mechanisms through which power is enforced can be expected to remain in place—as it was in the American colonies in 1776, France in 1789, and Russia in 1917. It’s far more dangerous if the mechanisms of power themselves are in flux. That can happen for any number of reasons, but the one that’s of central importance to the theme of this series of posts is the catabolic collapse of a declining civilization, in which the existing mechanisms of power come apart because their maintenance costs can no longer be met.
That poses at least two challenges to the ruling elite, one obvious and the other less so. The obvious one is that any deterioration in the mechanisms of power limits the ability of the elite to keep the remaining mechanisms of power funded, since a great deal of power is always expended in paying the maintenance costs of power. Thus in the declining years of Rome, for example, the crucial problem the empire faced was precisely that the sprawling system of imperial political and military administration cost more than the imperial revenues could support, but the weakening of that system made it even harder to collect the revenues on which the rest of the system depended, and forced more of what money there was to go for crisis management. Year after year, as a result, roads, fortresses, and the rest of the infrastructure of Roman power sank under a burden of deferred maintenance and malign neglect, and the consequences of each collapse became more and more severe because there was less and less in the treasury to pay for rebuilding when the crisis was over.
That’s the obvious issue. More subtle is the change in the nature of power that accompanies the decay in the mechanisms by which it’s traditionally been used. Power in a mature civilization, as already noted, is very abstract, and the people who are responsible for administering it at the top of the social ladder rise to those positions precisely because of their ability to manage abstract power through the complex machinery that a mature civilization provides them. As the mechanisms collapse, though, power stops being abstract in a hurry, and the skills that allow the manipulation of abstract power have almost nothing in common with the skills that allow concrete power to be wielded.
Late imperial Rome, again, is a fine example. There, as in other mature civilizations, the ruling elite had a firm grip on the intricate mechanisms of social control at their uppermost and least tangible end. The inner circle of each imperial administration—which sometimes included the emperor himself, and sometimes treated him as a sock puppet—could rely on sprawling many-layered civil and military bureaucracies to put their orders into effect. They were by and large subtle, ruthless, well-educated men, schooled in the intricacies of imperial administration, oriented toward the big picture, and completely dependent on the obedience of their underlings and the survival of the Roman system itself.
The people who replaced them, once the empire actually fell, shared none of these characteristics except the ruthlessness. The barbarian warlords who carved up the corpse of Roman power had a completely different set of skills and characteristics: raw physical courage, a high degree of competence in the warrior’s trade, and the kind of charisma that attracts cooperation and obedience from those who have many other options. Their power was concrete, personal, and astonishingly independent of institutional forms. That’s why Odoacer, whose remarkable career was mentioned in an earlier post in this sequence, could turn up alone in a border province, patch together an army out of a random mix of barbarian warriors, and promptly lead them to the conquest of Italy.
There were a very few members of the late Roman elite who could exercise power in the same way as Odoacer and his equivalents, and they’re the exceptions that prove the rule. The greatest of them, Flavius Aetius, spent many years in his youth as a hostage in the royal courts of the Visigoths and the Huns and got his practical education there, rather than in Roman schools. He was for all practical purposes a barbarian warlord who happened to be Roman by birth, and played the game as well as any of the other warlords of his age. His vulnerabilities were all on the Roman side of the frontier, where the institutions of Roman society still retained a fingernail grip on power, and so—having defeated the Visigoths, the Franks, the Burgundians, and the massed armies of Attila the Hun, all for the sake of Rome’s survival—he was assassinated by the emperor he served.
Fast forward close to two thousand years and it’s far from difficult to see how the same pattern of elite extinction through the collapse of political complexity will likely work out here in North America. The ruling elites of our society, like those of the late Roman Empire, are superbly skilled at manipulating and parasitizing a fantastically elaborate bureaucratic machine which includes governments, business firms, universities, and many other institutions among its components. That’s what they do, that’s what they know how to do, and that’s what all their training and experience has prepared them to do.  Thus their position is exactly equivalent to that of French aristocrats before 1789, but they’re facing the added difficulty that the vast mechanism on which their power depends has maintenance costs that their civilization can no longer meet. As the machine fails, so does their power.
Nor are they particularly well prepared to make the transition to a radically different way of exercising power. Imagine for a moment that one of the current US elite—an executive from a too-big-to-fail investment bank, a top bureaucrat from inside the DC beltway, a trust-fund multimillionaire with a pro forma job at the family corporation, or what have you—were to turn up in some chaotic failed state on the fringes of the industrial world, with no money, no resources, no help from abroad, and no ticket home. What’s the likelihood that, without anything other than whatever courage, charisma, and bare-knuckle fighting skills he might happen to have, some such person could equal Odoacer’s feat, win the loyalty and obedience of thousands of gang members and unemployed mercenaries, and lead them in a successful invasion of a neighboring country?
There are people in North America who could probably carry off a feat of that kind, but you won’t find them in the current ruling elite. That in itself defines part of the path to dark age America: the replacement of a ruling class that specializes in managing abstract power through institutions with a ruling class that specializes in expressing power up close and in person, using the business end of the nearest available weapon. The process by which the new elite emerges and elbows its predecessors out of the way, in turn, is among the most reliable dimensions of decline and fall; we’ll talk about it next week.

The Buffalo Wind

Wed, 2014-10-01 17:50
I've talked more than once in these essays about the challenge of discussing the fall of civilizations when the current example is picking up speed right outside the window.  In a calmer time, it might be possible to treat the theory of catabolic collapse as a pure abstraction, and contemplate the relationship between the maintenance costs of capital and the resources available to meet those costs without having to think about the ghastly human consequences of shortfall. As it is, when I sketch out this or that detail of the trajectory of a civilization’s fall, the commotions of our time often bring an example of that detail to the surface, and sometimes—as now—those lead in directions I hadn’t planned to address.
This is admittedly a time when harbingers of disaster are not in short supply. I was amused a few days back to see yet another denunciation of economic heresy in the media. This time the author was one Matt Egan, the venue was CNN/Money, and the target was Zero Hedge, one of the more popular sites on the doomward end of the blogosphere. The burden of the CNN/Money piece was that Zero Hedge must be wrong in questioning the giddy optimism of the stock market—after all, stock values have risen to record heights, so what could possibly go wrong?
Zero Hedge’s pseudonymous factotum Tyler Durden had nothing to say to CNN/Money, and quite reasonably so.  He knows as well as I do that in due time, Egan will join that long list of pundits who insisted that the bubble du jour would keep on inflating forever, and got to eat crow until the end of their days as a result. He's going to have plenty of company; the chorus of essays and blog posts denouncing peak oil in increasingly strident tones has built steadily in recent months. I expect that chorus to rise to a deafening shriek right about the time the bottom drops out of the fracking bubble.
Meanwhile the Ebola epidemic has apparently taken another large step toward fulfilling its potential as the Black Death of the 21st century. A month ago, after reports surfaced of Ebola in a southwestern province, Sudan slapped a media blackout on reports of Ebola cases in the country. Maybe there’s an innocent reason for this policy, but I confess I can’t think of one. Sudan is a long way from the West African hotspots of the epidemic, and unless a local outbreak has coincidentally taken place—which is of course possible—this suggests the disease has already spread along the ancient east-west trade routes of the Sahel. If the epidemic gets a foothold in Sudan, the next stops are the teeming cities of Egypt and the busy ports of East Africa, full of shipping from the Gulf States, the Indian subcontinent, and eastern Asia.
I’ve taken a wry amusement in the way that so many people have reacted to the spread of the epidemic by insisting that Ebola can’t possibly be a problem outside the West African countries it’s currently devastating. Here in the US, the media’s full of confident-sounding claims that our high-tech health care system will surely keep Ebola at bay. It all looks very encouraging, unless you happen to know that diseases spread by inadequate handwashing are common in US hospitals, only a small minority of facilities have the high-end gear necessary to isolate an Ebola patient, and the Ebola patient just found in Dallas got misdiagnosed and sent home with a prescription for antibiotics, exposing plenty of people to the virus.
More realistically, Laurie Garrett, a respected figure in the public health field, warns that “you are not nearly scared enough about Ebola.” In the peak oil community, Mary Odum, whose credentials as ecologist and nurse make her eminently qualified to discuss the matter, has tried to get the same message across. Few people are listening.
Like the frantic claims that peak oil has been disproven and the economy isn’t on the verge of another ugly slump, the insistence that Ebola can’t possibly break out of its current hot zones is what scholars of the magical arts call an apotropaic charm—that is, an attempt to turn away an unwanted reality by means of incantation. In the case of Ebola, the incantation usually claims that the West African countries currently at ground zero of the epidemic are somehow utterly unlike all the other troubled and impoverished Third World nations it hasn’t yet reached, and that the few thousand deaths racked up so far by the epidemic is a safe measure of its potential.
Those of my readers who have been thinking along these lines are invited to join me in a little thought experiment. According to the World Health Organization, the number of cases of Ebola in the current epidemic is doubling every twenty days, and could reach 1.4 million by the beginning of 2015. Let’s round down, and say that there are one million cases on January 1, 2015.  Let’s also assume for the sake of the experiment that the doubling time stays the same. Assuming that nothing interrupts the continued spread of the virus, and cases continue to double every twenty days, in what month of what year will the total number of cases equal the human population of this planet? Go ahead and do the math for yourself.  If you’re not used to exponential functions, it’s particularly useful to take a 2015 calendar, count out the 20-day intervals, and see exactly how the figure increases over time.
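For those who would rather let a computer count out the twenty-day intervals, here is a minimal sketch of the arithmetic in Python. The one-million starting figure and the constant twenty-day doubling time come straight from the thought experiment above; the 7.2 billion world population figure is my own round assumption for the mid-2010s:

```python
from datetime import date, timedelta

cases = 1_000_000          # assumed case count on January 1, 2015
world_pop = 7_200_000_000  # assumed world population, mid-2010s
doubling_days = 20         # WHO-reported doubling time, held constant

day = date(2015, 1, 1)
while cases < world_pop:
    cases *= 2             # one doubling of the case count
    day += timedelta(days=doubling_days)

print(day)  # → 2015-09-18
```

Thirteen doublings suffice, which lands the crossover in September of 2015—less than a year out. That, of course, is the point of the exercise.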
Now of course this is a thought experiment, not a realistic projection. In the real world, the spread of an epidemic disease is a complex process shaped by modes of human contact and transport.  There are bottlenecks that slow propagation across geographical and political barriers, and different cultural practices that can help or hinder the transmission of the Ebola virus. It’s also very likely that some nations, especially in the developed world, will be able to mobilize the sanitation and public-health infrastructure to stop a self-sustaining epidemic from getting under way on their territory before a vaccine can be developed and manufactured in sufficient quantity to matter.
Most members of our species, though, live in societies that don’t have those resources, and the steps that could keep Ebola from spreading to the rest of the Third World are not being taken. Unless massive resources are committed to that task soon—as in before the end of this year—the possibility exists that when the pandemic finally winds down a few years from now, two to three billion people could be dead. We need to consider the possibility that the peak of global population is no longer an abstraction set comfortably off somewhere in the future. It may be knocking at the future’s door right now, shaking with fever and dripping blood from its gums.
That ghastly possibility is still just that, a possibility. It can still be averted, though the window of opportunity in which that could be done  is narrowing with each passing day. Epizootic disease is one of the standard ways by which an animal species in overshoot has its population cut down to levels that the carrying capacity of the environment can support, and the same thing has happened often enough with human beings. It’s not the only way for human numbers to decline; I’ve discussed here at some length the possibility that that could happen by way of ordinary demographic contraction—but we’re now facing a force that could make the first wave of population decline happen in a much faster and more brutal way.
Is that the end of the world? Of course not. Any of my readers who have read a good history of the Black Death—not a bad idea just now, all things considered—know that human societies can take a massive population loss from pandemic disease and still remain viable. That said, any such event is a shattering experience, shaking political, economic, cultural, and spiritual institutions and beliefs down to their core. In the present case, the implosion of the global economy and the demise of the tourism and air travel industries are only the most obvious and immediate impacts. There are also broader and deeper impacts, cascading down from the visible realms of economics and politics into the too rarely noticed substructure of ecological relationships that sustain human existence.
And this, in turn, has me thinking of buffalo.
In there among all the other news stories of the last week, by turns savage and silly, is a report from Montana, where representatives of Native American peoples from the prairies of the United States and Canada signed a treaty pledging their tribes to cooperate in reintroducing wild buffalo to the Great Plains. I doubt most people in either country heard of it, and fewer gave it a second thought. There have been herds of domesticated buffalo in North America for a good many decades now, but only a few very small herds, on reservations or private nature sanctuaries, have been let loose to wander freely as their ancestors did.
A great many of the white residents of the Great Plains are furiously opposed to the project. It’s hard to find any rational reason for that opposition—the Native peoples have merely launched a slow process of putting wild buffalo herds on their own tribal property, not encroaching on anyone or anything else—but rational reasons are rarely that important in human motivation, and the nonrational dimension here, as so often, is the determining factor. The entire regional culture of the Great Plains centers on the pioneer experience, the migration that swept millions of people westward onto the prairies on the quest to turn some of North America’s bleakest land into a cozy patchwork of farms and towns, nature replaced by culture across thousands of miles where the buffalo once roamed.
The annihilation of the buffalo was central to that mythic quest, as central as the dispossession of the Native peoples and the replacement of the tallgrass prairie by farm crops. A land with wild buffalo herds upon it is not a domesticated land. Those who saw the prairies in their wild state brought back accounts that sound like something out of mythology: grass so tall a horseman could ride off into it and never be seen again, horizons as level and distant as those of the open ocean, and the buffalo: up to sixty million of them, streaming across the landscape in herds that sometimes reached from horizon to horizon.  The buffalo were the keystone of the prairie ecosystem, and their extermination was an essential step in shattering that ecosystem and extracting the richness of its topsoil for temporary profit.
A little while back I happened to see a video online about the ecological effects of reintroducing wolves to Yellowstone Park. It’s an interesting story:  the return of wolves, most of a century after their extermination, caused deer to stay away from areas of the park where they were vulnerable to attack.  Once those areas were no longer being browsed by deer, their vegetation changed sharply, making the entire park more ecologically diverse; species that had been rare or absent in the park reappeared to take advantage of the new, richer habitat.  Even the behavior of the park’s rivers changed, as vegetation shifts slowed riverine erosion.
All this was narrated by George Monbiot in a tone of gosh-wow wonderment that irritated me at first hearing. Surely it would be obvious, I thought, that changing one part of an ecosystem would change everything else, and that removing or reintroducing one of the key species in the ecosystem would have particularly dramatic effects! Of course I stopped then and laughed, since for most people it’s anything but obvious. Our entire culture is oriented toward machines, not living systems, and what defines a machine is precisely that it’s meant to do exactly what it’s told and nothing else. Push this button, and that happens; turn this switch, and something else happens; pull this trigger, and the buffalo falls dead.  We’re taught to think of the world as though that same logic controlled its responses to our actions, and then get blindsided when it acts like a whole system instead.
I’d be surprised to hear any of the opponents of reintroducing wild buffalo talk in so many words about the buffalo as a keystone species of the prairie ecosystem, or suggest that its return to the prairies might set off a trophic cascade—that’s the technical term for the avalanche of changes, spreading down the food web to its base, that the Yellowstone wolves set in motion once they sniffed the wind, caught the tasty scent of venison, and went to look. Still, it’s one of the basic axioms of the Druid teachings that undergird these posts that people know more than they think they know, and a gut-level sense of the cascade of changes that would be kickstarted by wild buffalo may be helping drive their opposition.
That said, there’s a further dimension. It’s not just in an ecological sense that a land with wild buffalo herds upon it is not a domesticated land. To the descendants of the pioneers, the prairie, the buffalo, and the Indian are what their ancestors came West to destroy. Behind that identification lies the whole weight of the mythology of progress, the conviction that it’s the destiny of the West to be transformed from wilderness to civilization. The return of wild buffalo is unthinkable from within the pioneer worldview, because it means that “the winning of the West” was not a permanent triumph but a temporary condition, which may yet be followed in due time by the losing of the West.
Of course there were already good reasons to think along those unthinkable lines, long before the Native tribes started drafting their treaty.  The economics of dryland farming on the Great Plains never really made that much sense. Homestead acts and other government subsidies in the 19th century, and the economic impacts of two world wars in the 20th, made farming the Plains look viable, in much the same way that huge government subsidies make nuclear power look viable today. In either case, take away the subsidies and you’ve got an arrangement without a future. That’s the subtext behind the vacant and half-vacant towns you’ll find all over the West these days. That the fields and farms and towns may be replaced in turn by prairie grazed by herds of wild buffalo is unthinkable from within the pioneer worldview, too—but across the West, the unthinkable is increasingly the inescapable.
Equally, it’s unthinkable to most people in the industrial world today that a global pandemic could brush aside the world’s terminally underfunded public health systems and snuff out millions or billions of lives in a few years. It’s just as unthinkable to most people in the industrial world that the increasingly frantic efforts of wealthy elites to prop up the global economy and get it to start generating prosperity again will fail, plunging the world into irrevocable economic contraction. It’s among the articles of faith of the industrial world that the future must lead onward and upward, that the sort of crackpot optimism that draws big crowds at TED Talks counts as realistic thinking about the future, and that the limits to growth can’t possibly get in the way of our craving for limitlessness. Here again, though, the unthinkable is becoming the inescapable.
In each of these cases, and many others, the unthinkable can be described neatly as the possibility that a set of changes that we happen to have decked out with the sanctified label of “progress” might turn out instead to be a temporary and reversible condition. The agricultural settlement of the Great Plains, the relatively brief period when humanity was not troubled by lethal pandemics, and the creation of a global economy powered by extravagant burning of fossil fuels were all supposed to be permanent changes, signs of progress and Man’s Conquest of Nature. No one seriously contemplated the chance that each of those changes would turn out to be transient, that they would shift into reverse under the pressure of their own unintended consequences, and that the final state of each whole system would have more in common with its original condition than with the state it briefly attained in between.
There are plenty of ways to talk about the implications of that great reversal, but the one that speaks to me now comes from the writings of Ernest Thompson Seton, whose nature books were a fixture of my childhood and who would probably be the patron saint of this blog if Druidry had patron saints. He spent the whole of his adult career as naturalist, artist, writer, storyteller, and founder of a youth organization—Woodcraft, which taught wilderness lore, practical skills, and democratic self-government to boys and girls alike, and might be well worth reviving now—fighting for a world in which there would still be a place for wild buffalo roaming the prairies. He fought, and lost. (It would be one of his qualifications for Druid sainthood that he knew he would lose, and kept fighting anyway. The English warriors at the battle of Maldon spoke that same language: “Will shall be sterner, heart the stronger, mood shall be more as our might falters.”)
He had no shortage of sound rational reasons for his lifelong struggle, but now and again, in his writings or when talking around the campfire, he would set those aside and talk about deeper issues. He spoke of the “Buffalo Wind,” the wind off the open prairies that tingles with life and wonder, calling humanity back to its roots in the natural order, back to harmony with the living world: not rejecting the distinctive human gifts of culture and knowledge, but holding them in balance with the biological realities of our existence and the needs of the biosphere. I’ve felt that wind; so, I think, have most Druids, and so have plenty of other people who couldn’t tell a Druid from a dormouse but who feel in their bones that industrial humanity’s attempted war against nature is as senseless as a plant trying to gain its freedom by pulling itself up by the roots.
One of the crucial lessons of the Buffalo Wind, though, is that it’s not always gentle. It can also rise to a shrieking gale, tear the roofs off houses, and leave carnage in its wake. We can embrace the lessons that the natural world is patiently and pitilessly teaching us, in other words, or we can close our eyes and stop our ears until sheer pain forces the lessons through our barriers, but one way or another, we’re going to learn those lessons. It’s possible, given massively funded interventions and a good helping of plain dumb luck, that the current Ebola epidemic might be stopped before it spreads around the world. It’s possible that the global economy might keep staggering onward for another season, and that wild buffalo might be kept from roaming the Great Plains for a while yet. Those are details; the underlying issue—the inescapable collision between the futile fantasy of limitless economic expansion on a finite planet and the hard realities of ecology, geology, and thermodynamics—is not going away.
The details also matter, though; in a very old way of speaking, the current shudderings of the economy, the imminent risk of pandemic, and the distant sound of buffalo bellowing in the Montana wind are omens. The Buffalo Wind is rising now, keening in the tall grass, whispering in the branches and setting fallen leaves aswirl. I could be mistaken, but I think that not too far in the future it will become a storm that will shake the industrial world right down to its foundations.

Dark Age America: The Senility of the Elites

Wed, 2014-09-24 17:50
Regular readers of this blog will no doubt recall that, toward the beginning of last month, I commented on a hostile review of one of my books that had just appeared in the financial blogosphere. At the time, I noted that the mainstream media normally ignore the critics of business as usual, and suggested that my readers might want to watch for similar attacks by more popular pundits, in more mainstream publications, on those critics who have more of a claim to conventional respectability than, say, archdruids. Such attacks, as I pointed out then, normally happen in the weeks immediately before business as usual slams face first into a brick wall of its own making.
Well, it’s happened. Brace yourself for the impact.
The pundit in question was no less a figure than Paul Krugman, who chose the opinion pages of the New York Times for a shrill and nearly fact-free diatribe lumping Post Carbon Institute together with the Koch brothers as purveyors of “climate despair.” PCI’s crime, in Krugman’s eyes, consists of noticing that the pursuit of limitless economic growth on a finite planet, with or without your choice of green spraypaint, is a recipe for disaster.  Instead of paying attention to such notions, he insists, we ought to believe the IMF and a panel of economists when they claim that replacing trillions of dollars of fossil fuel-specific infrastructure with some unnamed set of sustainable replacements will somehow cost nothing, and that we can have all the economic growth we want because, well, because we can, just you wait and see!
PCI’s Richard Heinberg responded with a crisp and tautly reasoned rebuttal pointing out the gaping logical and factual holes in Krugman’s screed, so there’s no need for me to cover the same ground here. Mind you, Heinberg was too gentlemanly to point out that the authorities Krugman cites aren’t exactly known for their predictive accuracy—the IMF in particular has become notorious in recent decades for insisting that austerity policies that have brought ruin to every country that has ever tried them are the one sure ticket to prosperity—but we can let that pass, too. What I want to talk about here is what Krugman’s diatribe implies for the immediate future.
Under normal circumstances, dissident groups such as Post Carbon Institute and dissident intellectuals such as Richard Heinberg never, but never, get air time in the mainstream media. At most, a cheap shot or two might be aimed at unnamed straw men while passing from one bit of conventional wisdom to the next. It’s been one of the most interesting details of the last few years that peak oil has actually been mentioned by name repeatedly by mainstream pundits: always, to be sure, in tones of contempt, and always in the context of one more supposed proof that a finite planet can too cough up infinite quantities of oil, but it’s been named. The kind of total suppression that happened between the mid-1980s and the turn of the millennium, when the entire subject vanished from the collective conversation of our society, somehow didn’t happen this time.
That says to me that a great many of those who were busy denouncing peak oil and the limits to growth were far less confident than they wanted to appear. You don’t keep on trying to disprove something that nobody believes, and of course the mere fact that oil prices and other quantitative measures kept on behaving the way peak oil theory said they would behave, rather than trotting obediently the way peak oil critics such as Bjorn Lomborg and Daniel Yergin told them to go, didn’t help matters much. The cognitive dissonance between the ongoing proclamations of coming prosperity via fracking and the soaring debt load and grim financial figures of the fracking industry has added to the burden.
Even so, it’s only in extremis that denunciations of this kind shift from attacks on ideas to attacks on individuals. As I noted in the earlier post, one swallow does not a summer make, and one ineptly written book review by an obscure blogger on an obscure website denouncing an archdruid, of all people, might indicate nothing more than a bout of dyspepsia or a disappointing evening at the local singles bar.  When a significant media figure uses one of the world’s major newspapers of record to lash out at a particular band of economic heretics by name, on the other hand, we’ve reached the kind of behavior that only happens, historically speaking, when crunch time is very, very close. Given that we’ve also got a wildly overvalued stock market, falling commodity prices, and a great many other symptoms of drastic economic trouble bearing down on us right now, not to mention the inevitable unraveling of the fracking bubble, there’s a definite chance that the next month or two could see the start of a really spectacular financial crash.
While we wait for financiers to start raining down on Wall Street sidewalks, though, it’s far from inappropriate to continue with the current sequence of posts about the end of industrial civilization—especially as the next topic in line is the way that the elites of a falling civilization destroy themselves.
One of the persistent tropes in current speculations on the future of our civilization revolves around the notion that the current holders of wealth and influence will entrench themselves even more firmly in their positions as things fall apart. A post here back in 2007 criticized what was then a popular form of that trope, the claim that the elites planned to impose a “feudal-fascist” regime on the deindustrial world. That critique still applies; that said, it’s worth discussing what tends to happen to elite classes in the decline and fall of a civilization, and seeing what that has to say about the probable fate of the industrial world’s elite class as our civilization follows the familiar path.
It’s probably necessary to say up front that we’re not talking about the evil space lizards that haunt David Icke’s paranoid delusions, or for that matter the faux-Nietzschean supermen who play a parallel role in Ayn Rand’s dreary novels and even drearier pseudophilosophical rants. What we’re talking about, rather, is something far simpler, which all of my readers will have experienced in their own lives.  Every group of social primates has an inner core of members who have more access to the resources controlled by the group, and more influence over the decisions made by the group, than other members.  How individuals enter that core and maintain themselves there against their rivals varies from one set of social primates to another—baboons settle such matters with threat displays backed up with violence, church ladies do the same thing with social maneuvering and gossip, and so on—but the effect is the same: a few enter the inner core, the rest are excluded from it. That process, many times amplified, gives rise to the ruling elite of a civilization.
I don’t happen to know much about the changing patterns of leadership in baboon troops, but among human beings, there’s a predictable shift over time in the way that individuals gain access to the elite. When institutions are new and relatively fragile, it’s fairly easy for a gifted and ambitious outsider to bluff and bully his way into the elite. As any given institution becomes older and more firmly settled in its role, that possibility fades. What happens instead in a mature institution is that the existing members of the elite group select, from the pool of available candidates, those individuals who will be allowed to advance into the elite.  The church ladies just mentioned are a good example of this process in action; if any of my readers are doctoral candidates in sociology looking for a dissertation topic, I encourage them to consider joining a local church, and tracking the way the elderly women who run most of its social functions groom their own replacements and exclude those they consider unfit for that role.
That process is a miniature version of the way the ruling elite of the world’s industrial nations select new additions to their number. There, as among church ladies, there are basically two routes in. You can be born into the family of a member of the inner circle, and if you don’t run off the rails too drastically, you can count on a place in the inner circle yourself in due time. Alternatively, you can work your way in from outside by being suitably deferential and supportive to the inner circle, meeting all of its expectations and conforming to its opinions and decisions, until the senior members of the elite start treating you as a junior member and the junior members have to deal with you as an equal. You can watch that at work, as already mentioned, in your local church—and you can also watch it at work in the innermost circles of power and privilege in American life.
Here in America, the top universities are the places where the latter version of the process stands out in all its dubious splendor. To these universities, every autumn, come the children of rich and influential families to begin the traditional four-year rite of passage. It would require something close to a superhuman effort on their part to fail. If they don’t fancy attending lectures, they can hire impecunious classmates as “note takers” to do that for them.  If they don’t wish to write papers, the same principle applies, and the classmates are more than ready to help out, since that can be the first step to a career as an executive assistant, speechwriter, or the like. The other requirements of college life can be met in the same manner as needed, and the university inevitably looks the other way, knowing that it can count on a generous donation from the parents as a reward for putting up with Junior’s antics.
Those of my readers who’ve read the novels of Thomas Mann, and recall the satiric portrait of central European minor royalty in Royal Highness, already know their way around the sort of life I’m discussing here. Those who don’t may want to recall everything they learned about the education and business career of George W. Bush. All the formal requirements are met, every gracious gesture is in place:  the diploma, the prestigious positions in business or politics or the stateside military, maybe a book written by one of those impecunious classmates turned ghostwriter and published to bland and favorable reviews in the newspapers of record:  it’s all there, and the only detail that nobody sees fit to mention is that the whole thing could be done just as well by a well-trained cockatiel, and much of it is well within the capacities of a department store mannequin—provided, of course, that one of those impecunious classmates stands close by, pulling the strings that make the hand wave and the head nod.
The impecunious classmates, for their part, are aspirants to the second category mentioned above, those who work their way into the elite from outside. They also come to the same top universities every autumn, but they don’t get there because of who their parents happen to be. They get there by devoting every spare second to that goal from middle school on. They take the right classes, get the right grades, play the right sports, pursue the right extracurricular activities, and rehearse for their entrance interviews by the hour; they are bright, earnest, amusing, pleasant, because they know that that’s what they need to be in order to get where they want to go. Scratch that glossy surface and you’ll find an anxious conformist terrified of failing to measure up to expectations, and it’s a reasonable terror—most of them will in fact fail to do that, and never know how or why.
Once in an Ivy League university or the equivalent, they’re pretty much guaranteed passing grades and a diploma unless they go out of their way to avoid them. Most of them, though, will be shunted off to midlevel posts in business, government, or one of the professions. Only the lucky few will catch the eye of someone with elite connections, and be gently nudged out of their usual orbit into a place from which further advancement is possible. Whether the rich kid whose exam papers you ghostwrote takes a liking to you, and arranges to have you hired as his executive assistant when he gets his first job out of school, or the father of a friend of a friend meets you on some social occasion, chats with you, and later on has the friend of a friend mention in passing that you might consider a job with this senator or that congressman, or what have you, it’s not what you know, it’s who you know, not to mention how precisely you conform to the social and intellectual expectations of the people who have the power to give or withhold the prize you crave so desperately.
That’s how the governing elite of today’s America recruits new members. Mutatis mutandis, it’s how the governing elite of every stable, long-established society recruits new members. That procedure has significant advantages, and not just for the elites. Above all else, it provides stability. Over time, any elite self-selected in this fashion converges asymptotically on the standard model of a mature aristocracy, with an inner core of genial duffers surrounded by an outer circle of rigid conformists—the last people on the planet who are likely to disturb the settled calm of the social order. Like the lead-weighted keel of a deepwater sailboat, their inertia becomes a stabilizing force that only the harshest of tempests can overturn.
Inevitably, though, this advantage comes with certain disadvantages, two of which are of particular importance for our subject. The first is that stability and inertia are not necessarily a good thing in a time of crisis. In particular, if the society governed by an elite of the sort just described happens to depend for its survival on some unsustainable relationship with surrounding societies, the world of nature, or both, the leaden weight of a mature elite can make necessary change impossible until it’s too late for any change at all to matter. One of the most consistent results of the sort of selection process I’ve sketched out is the elimination of any tendency toward original thinking on the part of those selected; “creativity” may be lauded, but what counts as creativity in such a system consists solely of taking some piece of accepted conventional wisdom one very carefully measured step further than anyone else has quite gotten around to going yet.
In a time of drastic change, that sort of limitation is lethal. More deadly still is the other disadvantage I have in mind, which is the curious and consistent habit such elites have of blind faith in their own invincibility. The longer a given elite has been in power, and the more august and formal and well-aged the institutions of its power and wealth become, the easier it seems to be for the very rich to forget that their forefathers established themselves in that position by some form of more or less blatant piracy, and that they themselves could be deprived of it by that same means. Thus elites tend to, shall we say, “misunderestimate” exactly those crises and sources of conflict that pose an existential threat to the survival of their class and its institutions, precisely because they can’t imagine that an existential threat to these things could be posed by anything at all.
The irony, and it’s a rich one, is that the same conviction tends to become just as widespread outside elite circles as within it. The illusion of invincibility, the conviction that the existing order of things is impervious to any but the most cosmetic changes, tends to be pervasive in any mature society, and remains fixed in place right up to the moment that everything changes and the existing order of things is swept away forever. The intensity of the illusion very often has nothing to do with the real condition of the social order to which it applies; France in 1789 and Russia in 1917 were both brittle, crumbling, jerry-rigged hulks waiting for the push that would send them tumbling into oblivion, which they each received shortly thereafter—but next to no one saw the gaping vulnerabilities at the time. In both cases, even the urban rioters that applied the push were left standing there slack-jawed when they saw how readily the whole thing came crashing down.
The illusion of invincibility is far and away the most important asset a mature ruling elite has, because it discourages deliberate attempts at regime change from within. Everyone in the society, in the elite or outside it, assumes that the existing order is so firmly bolted into place that only the most apocalyptic events would be able to shake its grip. In such a context, most activists either beg for scraps from the tables of the rich or content themselves with futile gestures of hostility at a system they don’t seriously expect to be able to harm, while the members of the elite go their genial way, stumbling from one preventable disaster to another, convinced of the inevitability of their positions, and blissfully unconcerned with the possibility—which normally becomes a reality sooner or later—that their own actions might be sawing away at the old and brittle branch on which they’re seated.
If this doesn’t sound familiar to you, dear reader, you definitely need to get out more. The behavior of the holders of wealth and power in contemporary America, as already suggested, is a textbook example of the way that a mature elite turns senile. Consider the fact that the merry pranksters in the banking industry, having delivered a body blow to the global economy in 2008 and 2009 with worthless mortgage-backed securities, are now busy hawking equally worthless securities backed by income from rental properties. Each round of freewheeling financial fraud, each preventable economic slump, increases the odds that an already brittle, crumbling, and jerry-rigged system will crack under the strain, opening a window of opportunity that hostile foreign powers and domestic demagogues alike will not be slow to exploit. Do such considerations move the supposed defenders of the status quo to rein in the manufacture of worthless financial paper? Surely you jest.
It deserves to be said that at least one corner of the current American ruling elite has recently shown some faint echo of the hard common sense once possessed by its piratical forebears. Now of course the recent announcement that one of the Rockefeller charities is about to move some of its investment funds out of fossil fuel industries doesn’t actually justify the rapturous language lavished on it by activists; the amount of money being moved amounts to one tiny droplet in the overflowing bucket of Rockefeller wealth, after all.  For that matter, as the fracking industry founders under a soaring debt load and slumping petroleum prices warn of troubles ahead, pulling investment funds out of fossil fuel companies and putting them in industries that will likely see panic buying when the fracking bubble pops may be motivated by something other than a sudden outburst of environmental sensibility. Even so, it’s worth noting that the Rockefellers, at least, still remember that it’s crucial for elites to play to the audience, to convince those outside elite circles that the holders of wealth and power still have some vague sense of concern for the survival of the society they claim the right to lead.
Most members of America’s elite have apparently lost track of that. Even such modest gestures as the Rockefellers have just made seem to be outside the repertory of most of the wealthy and privileged these days.  Secure in their sense of their own invulnerability, they amble down the familiar road that led so many of their equivalents in past societies to dispossession or annihilation. How that pattern typically plays out will be the subject of next week’s post.

Dark Age America: The End of the Old Order

Wed, 2014-09-17 18:30
Lately I’ve been rereading some of the tales of H.P. Lovecraft. He’s nearly unique among the writers of American horror stories, in that his sense of the terrible was founded squarely on the worldview of modern science. He was a steadfast atheist and materialist, but unlike so many believers in that creed, his attitude toward the cosmos revealed by science was not smug satisfaction but shuddering horror. The first paragraph of his most famous story, “The Call of Cthulhu,” is typical:
“The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.”
It’s entirely possible that this insight of Lovecraft’s will turn out to be prophetic, and that a passionate popular revolt against the implications—and even more, the applications—of contemporary science will be one of the forces that propel us into the dark age ahead. Still, that’s a subject for a later post in this series. The point I want to make here is that Lovecraft’s image of people eagerly seeking such peace and safety as a dark age may provide them is not as ironic as it sounds. Outside the elites, which have a different and considerably more gruesome destiny than the other inhabitants of a falling civilization, it’s surprisingly rare for people to have to be forced to trade civilization for barbarism, either by human action or by the pressure of events.  By and large, by the time that choice arrives, the great majority are more than ready to make the exchange, and for good reason.
Let’s start by reviewing some basics. As I pointed out in a paper published online back in 2005—a PDF is available here—the process that drives the collapse of civilizations has a surprisingly simple basis: the mismatch between the maintenance costs of capital and the resources that are available to meet those costs. Capital here is meant in the broadest sense of the word, and includes everything in which a civilization invests its wealth: buildings, roads, imperial expansion, urban infrastructure, information resources, trained personnel, or what have you. Capital of every kind has to be maintained, and as a civilization adds to its stock of capital, the costs of maintenance rise steadily, until the burden they place on the civilization’s available resources can’t be supported any longer.
The only way to resolve that conflict is to allow some of the capital to be converted to waste, so that its maintenance costs drop to zero and any useful resources locked up in the capital can be put to other uses. Human beings being what they are, the conversion of capital to waste generally isn’t carried out in a calm, rational manner; instead, kingdoms fall, cities get sacked, ruling elites are torn to pieces by howling mobs, and the like. If a civilization depends on renewable resources, each round of capital destruction is followed by a return to relative stability and the cycle begins all over again; the history of imperial China is a good example of how that works out in practice.
If a civilization depends on nonrenewable resources for essential functions, though, destroying some of its capital yields only a brief reprieve from the crisis of maintenance costs. Once the nonrenewable resource base tips over into depletion, there’s less and less available each year thereafter to meet the remaining maintenance costs, and the result is the stairstep pattern of decline and fall so familiar from history:  each crisis leads to a round of capital destruction, which leads to renewed stability, which gives way to crisis as the resource base drops further. Here again, human beings being what they are, this process isn’t carried out in a calm, rational manner; the difference here is simply that kingdoms keep falling, cities keep getting sacked, ruling elites are slaughtered one after another in ever more inventive and colorful ways, until finally contraction has proceeded far enough that the remaining capital can be supported on the available stock of renewable resources.
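The dynamic sketched in the last two paragraphs lends itself to a toy numerical illustration. The sketch below is purely hypothetical: every name and parameter in it (the depletion rate, the maintenance rate, the fraction of capital destroyed in a crisis, and so on) is an assumption chosen for demonstration, not a figure drawn from the theory itself or from any historical data. All it shows is that the basic mechanism—capital accumulating until its upkeep outruns available resources, followed by rounds of capital destruction against a shrinking nonrenewable base—does in fact produce the stairstep trajectory of decline described above.

```python
def simulate_catabolic(years=300):
    """Toy model of catabolic collapse; all parameters are illustrative."""
    capital = 50.0            # accumulated capital of every kind (arbitrary units)
    nonrenewable = 400.0      # finite resource stock
    renewable_flow = 2.0      # yearly flow still available after depletion
    depletion_rate = 0.05     # fraction of remaining nonrenewable stock drawn each year
    maint_rate = 0.10         # yearly upkeep required per unit of capital

    history, crises = [], 0
    for _ in range(years):
        extraction = depletion_rate * nonrenewable
        nonrenewable -= extraction
        resources = renewable_flow + extraction
        maintenance = maint_rate * capital
        if maintenance <= resources:
            # surplus gets reinvested: the capital stock keeps growing
            capital += 0.5 * (resources - maintenance)
        else:
            # crisis: a chunk of capital is converted to waste, overshooting
            # the sustainable level, after which rebuilding resumes
            capital *= 0.7
            crises += 1
        history.append(capital)
    return history, crises
```

Run with these assumptions, the model climbs, crashes, partially recovers, and crashes again—several distinct rounds of capital destruction rather than one smooth slide—before settling near the much lower level that the renewable flow alone can support, which is the essay’s point in miniature.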
That’s a thumbnail sketch of the theory of catabolic collapse, the basic model of the decline and fall of civilizations that underlies the overall project of this blog. I’d encourage those who have questions about the details of the theory to go ahead and read the published version linked above; down the road a ways, I hope to publish a much more thoroughly developed version of the theory, but that project is still in the earliest stages just now. What I want to do here is to go a little more deeply into the social implications of the theory.
It’s common these days to hear people insist that our society is divided into two and only two classes, an elite class that receives all the benefits of the system, and everyone else, who bears all the burdens. The reality, in ours as in every other human society, is a great deal more nuanced. It’s true, of course, that the benefits move toward the top of the ladder of wealth and privilege and the burdens get shoved toward the bottom, but in most cases—ours very much included—you have to go a good long way down the ladder before you find people who receive no benefits at all.
There have admittedly been a few human societies in which most people receive only such benefits from the system as will enable them to keep working until they drop. The early days of plantation slavery in the United States and the Caribbean islands, when the average lifespan of a slave from purchase to death was under ten years, fell into that category, and so did a few others—for example, Cambodia under the Khmer Rouge. These are exceptional cases; they emerge when the cost of unskilled labor drops close to zero and either abundant profits or ideological considerations make the fate of the laborers a matter of complete indifference to their masters.
Under any other set of conditions, such arrangements are uneconomical. It’s more profitable, by and large, to allow such additional benefits to the laboring class as will permit them to survive and raise families, and to motivate them to do more than the bare minimum that will evade the overseer’s lash. That’s what generates the standard peasant economy, for example, in which the rural poor pay landowners in labor and a share of agricultural production for access to arable land.
There are any number of similar arrangements, in which the laboring classes do the work, the ruling classes allow them access to productive capital, and the results are divided between the two classes in a proportion that allows the ruling classes to get rich and the laboring classes to get by. If that sounds familiar, it should.  In terms of the distribution of labor, capital, and production, the latest offerings of today’s job market are indistinguishable from the arrangements between an ancient Egyptian landowner and the peasants who planted and harvested his fields.
The more complex a society becomes, the more intricate the caste system that divides it, and the more diverse the changes that are played on this basic scheme. A relatively simple medieval society might get by with four castes—the feudal Japanese model, which divided society into aristocrats, warriors, farmers, and a catchall category of traders, craftspeople, entertainers, and the like, is as good an example as any. A stable society near the end of a long age of expansion, by contrast, might have hundreds or even thousands of distinct castes, each with its own niche in the social and economic ecology of that society. In every case, each caste represents a particular balance between benefits received and burdens exacted, and given a stable economy entirely dependent on renewable resources, such a system can continue intact for a very long time.
Factor in the process of catabolic collapse, though, and an otherwise stable system turns into a fount of cascading instabilities. The point that needs to be grasped here is that social hierarchies are a form of capital, in the broad sense mentioned above. Like the other forms of capital included in the catabolic collapse model, social hierarchies facilitate the production and distribution of goods and services, and they have maintenance costs that have to be met. If the maintenance costs aren’t met, as with any other form of capital, social hierarchies are converted to waste; they stop fulfilling their economic function, and become available for salvage.
That sounds very straightforward. Here as so often, though, it’s the human factor that transforms it from a simple equation to the raw material of history.  As the maintenance costs of a civilization’s capital begin to mount up toward the point of crisis, corners get cut and malign neglect becomes the order of the day. Among the various forms of capital, though, some benefit people at one point on the ladder of social hierarchy more than people at other levels. As the maintenance budget runs short, people normally try to shield the forms of capital that benefit them directly, and push the cutbacks off onto forms of capital that benefit others instead. Since the ability of any given person to influence where resources go corresponds very precisely to that person’s position in the social hierarchy, this means that the forms of capital that benefit the people at the bottom of the ladder get cut first.
Now of course this isn’t what you hear from Americans today, and it’s not what you hear from people in any society approaching catabolic collapse. When contraction sets in, as I noted here in a post two weeks ago, people tend to pay much more attention to whatever they’re losing than to the even greater losses suffered by others. The middle-class Americans who denounce welfare for the poor at the top of their lungs while demanding that funding for Medicare and Social Security remain intact are par for the course; so, for that matter, are the other middle-class Americans who denounce the admittedly absurd excesses of the so-called 1% while carefully neglecting to note the immense differentials of wealth and privilege that separate them from those still further down the ladder.
This sort of thing is inevitable in a fight over slices of a shrinking pie. Set aside the inevitable partisan rhetoric, though, and a society moving into the penumbra of catabolic collapse is a society in which more and more people are receiving less and less benefit from the existing order of society, while being expected to shoulder an ever-increasing share of the costs of a faltering system. To those who receive little or no benefit in return, the maintenance costs of social capital rapidly become an intolerable burden, and as the supply of benefits still available from a faltering system becomes more and more a perquisite of the upper reaches of the social hierarchy, that burden becomes an explosive political fact.
Every society depends for its survival on the passive acquiescence of the majority of the population and the active support of a large minority. That minority—call them the overseer class—are the people who operate the mechanisms of social hierarchy: the bureaucrats, media personnel, police, soldiers, and other functionaries who are responsible for maintaining social order. They are not drawn from the ruling elite; by and large, they come from the same classes they are expected to control; and if their share of the benefits of the existing order falters, if their share of the burdens increases too noticeably, or if they find other reasons to make common cause with those outside the overseer class against the ruling elite, then the ruling elite can expect to face the brutal choice between flight into exile and a messy death. The mismatch between maintenance costs and available resources, in turn, makes some such turn of events extremely difficult to avoid.
A ruling elite facing a crisis of this kind has at least three available options. The first, and by far the easiest, is to ignore the situation. In the short term, this is actually the most economical option; it requires the least investment of scarce resources and doesn’t require potentially dangerous tinkering with fragile social and political systems. The only drawback is that once the short term runs out, it pretty much guarantees a horrific fate for the members of the ruling elite, and in many cases, this is a less convincing argument than one might think. It’s always easy to find an ideology that insists that things will turn out otherwise, and since members of a ruling elite are generally well insulated from the unpleasant realities of life in the society over which they preside, it’s usually just as easy for them to convince themselves of the validity of whatever ideology they happen to choose. The behavior of the French aristocracy in the years leading up to the French Revolution is worth consulting in this context.
The second option is to try to remedy the situation by increased repression. This is the most expensive option, and it’s generally even less effective than the first, but ruling elites with a taste for jackboots tend to fall into the repression trap fairly often. What makes repression a bad choice is that it does nothing to address the sources of the problems it attempts to suppress. Furthermore, it increases the maintenance costs of social hierarchy drastically—secret police, surveillance gear, prison camps, and the like don’t come cheap—and it enforces the lowest common denominator of passive obedience while doing much to discourage active engagement of people outside the elite in the project of saving the society.  A survey of the fate of the Communist dictatorships of Eastern Europe is a good antidote to the delusion that an elite with enough spies and soldiers can stay in power indefinitely.
That leaves the third option, which requires the ruling elite to sacrifice some of its privileges and perquisites so that those further down the social ladder still have good reason to support the existing order of society. That isn’t common, but it does happen; it happened in the United States as recently as the 1930s, when Franklin Roosevelt spearheaded changes that spared the United States the sort of fascist takeover or civil war that occurred in so many other failed democracies in the same era. Roosevelt and his allies among the very rich realized that fairly modest reforms would be enough to convince most Americans that they had more to gain from supporting the system than they would gain by overthrowing it.  A few job-creation projects and debt-relief measures, a few welfare programs, and a few perp walks by the most blatant of the con artists of the preceding era of high finance, were enough to stop the unraveling of the social hierarchy, and restore a sense of collective unity strong enough to see the United States through a global war in the following decade.
Now of course Roosevelt and his allies had huge advantages that no comparable project could duplicate today. In 1933, though it was hamstrung by a collapsed financial system and a steep decline in international trade, the economy of the United States still had the world’s largest and most productive industrial plant and some of the world’s richest deposits of petroleum, coal, and many other natural resources. Eighty years later, that industrial plant has long since been abandoned in an orgy of offshoring motivated by short-term profit-seeking, and nearly every resource the American land once offered in abundance has been mined and pumped right down to the dregs. That means that an attempt to imitate Roosevelt’s feat under current conditions would face much steeper obstacles, and it would also require the ruling elite to relinquish a much greater share of its current perquisites and privileges than its counterpart did in Roosevelt’s day.
I could be mistaken, but I don’t think it will even be tried this time around. Just at the moment, the squabbling coterie of competing power centers that constitutes the ruling elite of the United States seems committed to an approach halfway between the first two options I’ve outlined. The militarization of US domestic police forces and the rising spiral of civil rights violations carried out with equal enthusiasm by both mainstream political parties fall on the repressive side of the scale.  At the same time, for all these gestures in the direction of repression, the overall attitude of American politicians and financiers seems to be that nothing really that bad can actually happen to them or the system that provides them with their power and their wealth.
They’re wrong, and at this point it’s probably a safe bet that a great many of them will die because of that mistake. Already, a large fraction of Americans—probably a majority—accept the continuation of the existing order of society in the United States only because a viable alternative has yet to emerge. As the United States moves closer to catabolic collapse, and the burden of propping up an increasingly dysfunctional status quo bears down ever more intolerably on more and more people outside the narrowing circle of wealth and privilege, the bar that any alternative has to leap will be set lower and lower. Sooner or later, something will make that leap and convince enough people that there’s a workable alternative to the status quo, and the passive acquiescence on which the system depends for its survival will no longer be something that can be taken for granted.
It’s not necessary for such an alternative to be more democratic or more humane than the order that it attempts to replace. It can be considerably less so, so long as it imposes fewer costs on the majority of people and distributes benefits more widely than the existing order does. That’s why, in the last years of Rome, so many people in the collapsing empire so readily accepted the rule of barbarian warlords in place of the imperial government. That government had become hopelessly dysfunctional by the time of the barbarian invasions, centralizing authority in distant bureaucratic centers out of touch with current realities and imposing tax burdens on the poor so crushing that many people were forced to sell themselves into slavery or flee to depopulated regions of the countryside to take up the uncertain life of Bacaudae, half guerrilla and half bandit, hunted by imperial troops whenever those had time to spare from the defense of the frontiers.
By contrast, the local barbarian warlord might be brutal and capricious, but he was there on the scene, and thus unlikely to exhibit the serene detachment from reality so common in centralized bureaucratic states at the end of their lives. What’s more, the warlord had good reason to protect the peasants who put bread and meat on his table, and the cost of supporting him and his retinue in the relatively modest style of barbarian kingship was considerably lighter than the burden of helping to prop up the baroque complexities of the late Roman imperial bureaucracy. That’s why the peasants and agricultural slaves of the late Roman world acquiesced so calmly in the implosion of Rome and its replacement by a patchwork of petty kingdoms. It wasn’t merely a change of masters; in a great many cases, the new masters were considerably less of a burden than the old ones had been.
We can expect much the same process to unfold in North America as the United States passes through its own trajectory of decline and fall. Before tracing the ways that process might work out, though, it’s going to be necessary to sort through some common misconceptions, and that requires us to examine the ways that ruling elites destroy themselves. We’ll cover that next week.

Technological Superstitions

Wed, 2014-09-10 17:33
I'd meant to go straight on from last week’s post about völkerwanderung and the dissolution and birth of ethnic identities in dark age societies, and start talking about the mechanisms by which societies destroy themselves—with an eye, of course, to the present example. Still, as I’ve noted here more than once, there are certain complexities involved in the project of discussing the decline and fall of civilizations in a civilization that’s hard at work on its own decline and fall, and one of those complexities is the way that tempting examples of the process keep popping up as we go.
The last week or so has been unusually full of those. The Ebola epidemic in West Africa has continued to spread at an exponential rate as hopelessly underfunded attempts to contain it crumple, while the leaders of the world’s industrial nations distract themselves playing geopolitics in blithe disregard of the very real possibility that their inattention may be helping to launch the next great global pandemic.  In other news—tip of the archdruidical hat here to The Daily Impact—companies and investors who have been involved in the fracking bubble are quietly bailing out. If things continue on their current trajectory, as I’ve noted before, this autumn could very well see the fracking boom go bust; it’s anyone’s guess how heavily that will hit the global economy, but fracking-related loans and investments have accounted for a sufficiently large fraction of Wall Street profits in recent years that the crater left by a fracking bust will likely be large and deep. 
Regular readers of this blog already know, though, that it’s most often the little things that catch my attention, and the subject of this week’s post is no exception. Thus I’m pleased to announce that a coterie of scientists and science fiction writers has figured out what’s wrong with the world today: there are, ahem, too many negative portrayals of the future in popular media. To counter this deluge of unwarranted pessimism, they’ve organized a group called Project Hieroglyph, and published an anthology of new, cheery, upbeat SF stories about marvelous new technologies that could become realities within the next fifty years. That certainly ought to do the trick!
Now of course I’m hardly in a position to discourage anyone from putting together a science fiction anthology around an unpopular theme. After Oil: SF Visions of a Post-Petroleum Future, the anthology that resulted from the first Space Bats challenge here in 2011, is exactly that, and two similar anthologies from this blog’s second Space Bats challenge are going through the editing and publishing process as I write these words. That said, I’d question the claim that those three anthologies will somehow cause the planet’s oil reserves to run dry any faster than they otherwise will.
The same sort of skepticism, I suggest, may be worth applying to Project Hieroglyph and its anthology.  The contemporary  crisis of industrial society isn’t being caused by a lack of optimism; its roots go deep into the tough subsoil of geological and thermodynamic reality, to the lethal mismatch between fantasies of endless economic growth and the hard limits of a finite planet, and to the less immediately deadly but even more pervasive mismatch between fantasies of perpetual technological progress and that nemesis of all linear thinking, the law of diminishing returns.  The failure of optimism that these writers are bemoaning is a symptom rather than a cause, and insisting that the way to solve our problems is to push optimistic notions about the future at people is more than a little like deciding that the best way to deal with flashing red warning lights on the control panel of an airplane is to put little pieces of opaque green tape over them so everything looks fine again.
It’s not as though there’s been a shortage of giddily optimistic visions of a gizmocentric future in recent years, after all. I grant that the most colorful works of imaginative fiction we’ve seen of late have come from those economists and politicians who keep insisting that the only way out of our current economic and social malaise is to do even more of the same things that got us into it. That said, any of my readers who step into a bookstore or a video store and look for something that features interstellar travel or any of the other shibboleths of the contemporary cult of progress won’t have to work hard to find one. What’s happened, rather, is that such things are no longer as popular as they once were, because people find that stories about bleaker futures hedged in with harsh limits are more to their taste.
The question that needs to be asked, then, is why this should be the case. As I see it, there are at least three very good reasons.
First, those bleaker futures and harsh limits reflect the realities of life in contemporary America. Set aside the top twenty per cent of the population by income, and Americans have on average seen their standard of living slide steadily downhill for more than four decades. In 1970, to note just one measure of how far things have gone, an American family with one working class salary could afford to buy a house, pay all their bills on time, put three square meals on the table every day, and still have enough left over for the occasional vacation or high-ticket luxury item. Now? In much of today’s America, a single working class salary isn’t enough to keep a family off the streets.
That history of relentless economic decline has had a massive impact on attitudes toward the future, toward science, and toward technological progress. In 1969, it was only in the ghettos where America confined its urban poor that any significant number of people responded to the Apollo moon landing with the sort of disgusted alienation that Gil Scott-Heron expressed memorably in his furious ballad “Whitey on the Moon.”  Nowadays, a much greater number of Americans—quite possibly a majority—see the latest ballyhooed achievements of science and technology as just one more round of pointless stunts from which they won’t benefit in the least.
It’s easy but inaccurate to insist that they’re mistaken in that assessment. Outside the narrowing circle of the well-to-do, many Americans these days spend more time coping with the problems caused by technologies than they do enjoying the benefits thereof. Most of the jobs eliminated by automation, after all, used to provide gainful employment for the poor; most of the localities that are dumping grounds for toxic waste, similarly, are inhabited by people toward the bottom of the socioeconomic pyramid, and so on down the list of unintended consequences and technological blowback. By and large, the benefits of new technology trickle up the social ladder, while the costs and burdens trickle down; this has a lot to do with the fact that the grandchildren of people who enjoyed The Jetsons now find The Hunger Games more to their taste.
That’s the first reason. The second is that for decades now, the great majority of the claims made about wonderful new technologies that would inevitably become part of our lives in the next few decades have turned out to be dead wrong. From jetpacks and flying cars to domed cities and vacations on the Moon, from the nuclear power plants that would make electricity too cheap to meter to the conquest of poverty, disease, and death itself, most of the promises offered by the propagandists and publicists of technological progress haven’t happened. That has understandably made people noticeably less impressed by further rounds of promises that likely won’t come true either.
When I was a child, if I may insert a personal reflection here, one of my favorite books was titled You Will Go To The Moon. I suspect most Americans of my generation remember that book, however dimly, with its glossy portrayal of what space travel would be like in the near future: the great conical rocket with its winged upper stage, the white doughnut-shaped space station turning in orbit, and the rest of it. I honestly expected to make that trip someday, and I was encouraged in that belief by a chorus of authoritative voices for whom permanent space stations, bases on the Moon, and a manned landing on Mars were a done deal by the year 2000.
Now of course in those days the United States still had a manned space program capable of putting bootprints on the Moon. We don’t have one of those any more. It’s worth talking about why that is, because the same logic applies equally well to most of the other grand technological projects that were proclaimed not so long ago as the inescapable path to a shiny new future.
We don’t have a manned space program any more, to begin with, because the United States is effectively bankrupt, having committed itself in the usual manner to the sort of imperial overstretch chronicled by Paul Kennedy in The Rise and Fall of the Great Powers, and cashed in its future for a temporary supremacy over most of the planet. That’s the unmentionable subtext behind the disintegration of America’s infrastructure and built environment, the gutting of its once-mighty industrial plant, and a good deal of the steady decline in standards of living mentioned earlier in this post. Britain dreamed about expansion into space when it still had an empire—the British Interplanetary Society was a major presence in space-travel advocacy in the first half of the twentieth century—and shelved those dreams when its empire went away; the United States is in the process of the same retreat. Still, there’s more going on here than this.
Another reason we don’t have a manned space program any more is that all those decades of giddy rhetoric about New Worlds For Man never got around to discussing the difference between technical feasibility and economic viability. The promoters of space travel fell into the common trap of believing their own hype, and convinced themselves that orbital factories, mines on the Moon, and the like would surely turn out to be paying propositions. What they forgot, of course, is what I’ve called the biosphere dividend:  the vast array of goods and services that the Earth’s natural cycles provide for human beings free of charge, which have to be paid for anywhere else. The best current estimate for the value of that dividend, from a 1997 paper in Nature written by a team headed by Robert Costanza, is that it’s something like three times the total value of all goods and services produced by human beings.
As a very rough estimate, in other words, economic activity anywhere in the solar system other than Earth will cost around four times what it costs on Earth, even apart from transportation costs, because the services provided here for free by the biosphere have to be paid for in space or on the solar system’s other worlds. That’s why all the talk about space as a new economic frontier went nowhere; orbital manufacturing was tried—the Skylab program of the 1970s, the Space Shuttle, and the International Space Station in its early days all featured experiments along those lines—and the modest advantages of freefall and ready access to hard vacuum didn’t make enough of a difference to offset the costs. Thus manned space travel, like commercial supersonic aircraft, nuclear power plants, and plenty of other erstwhile waves of the future, turned into a gargantuan white elephant that could only be supported so long as massive and continuing government subsidies were forthcoming.
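For those who like to see the numbers spelled out, the back-of-the-envelope arithmetic runs like this; the 3:1 ratio is the Costanza team’s estimate cited above, and everything else is purely illustrative:

```python
# Sketch of the "biosphere dividend" multiplier described above.
# Assumption (from the 1997 Costanza et al. estimate): Earth's free
# ecosystem services are worth roughly 3x the total value of all
# human-produced goods and services.
biosphere_to_gdp_ratio = 3.0

# On Earth, economic activity pays only for the human-produced share;
# the biosphere's contribution -- air, water, waste recycling,
# radiation shielding -- arrives free of charge.
cost_on_earth = 1.0

# Off Earth, every one of those free services must be engineered and
# paid for, so the rough cost multiplier (before transport costs) is:
cost_off_earth = cost_on_earth + biosphere_to_gdp_ratio

print(cost_off_earth)  # -> 4.0
```

A crude model, to be sure, but it makes the point: the multiplier doesn’t depend on the technology used, only on how much the biosphere quietly does for us here at home.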
Those are two of the reasons why we don’t have a manned space program any more. The third is less tangible but, I suspect, far and away the most important. It can be tracked by picking up any illustrated book about the solar system that was written before we got there, and comparing what outer space and other worlds were supposed to look like with what was actually waiting for our landers and probes.
I have in front of me right now, for example, a painting of a scene on the Moon in a book published the year I was born. It’s a gorgeous, romantic view. Blue earthlight splashes over a crater in the foreground; further off, needle-sharp mountains catch the sunlight; the sky is full of brilliant stars. Too bad that’s not what the Apollo astronauts found when they got there. Nobody told the Moon it was supposed to cater to human notions of scenic grandeur, and so it presented its visitors with vistas of dull gray hillocks and empty plains beneath a flat black sky. To anybody but a selenologist, the one thing worth attention in that dreary scene was the glowing blue sphere of Earth 240,000 miles away.
For an even stronger contrast, consider the pictures beamed back by the first Viking probe from the surface of Mars in 1976, and compare that to the gaudy images of the Sun’s fourth planet that were in circulation in popular culture up to that time. I remember the event tolerably well, and one of the things I remember most clearly is the pervasive sense of disappointment—of “is that all?”—shared by everyone in the room.  The images from the lander didn’t look like Barsoom, or the arid but gorgeous setting of Ray Bradbury’s The Martian Chronicles, or any of the other visions of Mars everyone in 1970s America had tucked away in their brains; they looked for all the world like an unusually dull corner of Nevada that had somehow been denuded of air, water, and life.
Here again, the proponents of space travel fell into the trap of believing their own hype, and forgot that science fiction is no more about real futures than romance novels are about real relationships. That isn’t a criticism of science fiction, by the way, though I suspect the members of Project Hieroglyph will take it as one. Science fiction is a literature of ideas, not of crass realities, and it evokes the sense of wonder that is its distinctive literary effect by dissolving the barrier between the realistic and the fantastic. What too often got forgotten, though, is that literary effects don’t guarantee the validity of prophecies—they’re far more likely to hide the flaws of improbable claims behind a haze of emotion.
Romance writers don’t seem to have much trouble recognizing that their novels are not about the real world. Science fiction, by contrast, has suffered from an overdeveloped sense of its own importance for many years now. I’m thinking just now of a typical essay by Isaac Asimov that described science fiction writers as scouts for the onward march of humanity. (Note the presuppositions that humanity is going somewhere, that all of it’s going in a single direction, and that this direction just happens to be defined by the literary tastes of an eccentric subcategory of 20th century popular fiction.) That sort of thinking led too many people in the midst of the postwar boom to forget that the universe is under no obligation to conform to our wholly anthropocentric notions of human destiny and provide us with New Worlds for Man just because we happen to want some.
Mutatis mutandis, that’s what happened to most of the other grand visions of transformative technological progress that were proclaimed so enthusiastically over the last century or so. Most of them never happened, and those that did turned out to be far less thrilling and far more problematic than the advance billing insisted they would be. Faced with that repeated realization, a great many Americans decided—and not without reason—that more of the same gosh-wow claims were not of interest. That shifted public taste away from cozy optimism toward a harsher take on our future.
The third factor driving that shift in taste, though, may be the most important of all, and it’s also one of the most comprehensively tabooed subjects in contemporary life. Most human phenomena are subject to the law of diminishing returns; the lesson that contemporary industrial societies are trying their level best not to learn just now is that technological progress is one of the phenomena to which this law applies. Thus there can be such a thing as too much technology, and a very strong case can be made that in the world’s industrial nations, we’ve already gotten well past that point.
In a typically cogent article, economist Herman Daly sorts out the law of diminishing returns into three interacting processes. The first is diminishing marginal utility—that is, the more of anything you have, the less any additional increment of that thing contributes to your wellbeing. If you’re hungry, one sandwich is a very good thing; two is pleasant; three is a luxury; and somewhere beyond that, when you’ve given sandwiches to all your coworkers, the local street people, and anyone else you can find, more sandwiches stop being any use to you. When more of anything no longer brings any additional benefit, you’ve reached the point of futility, at which further increments are a waste of time and resources.
Well before that happens, though, two other factors come into play. First, it costs you almost nothing to cope with one sandwich, and very little more to cope with two or three. After that you start having to invest time, and quite possibly resources, in dealing with all those sandwiches, and each additional sandwich adds to the total burden. Economists call that increasing marginal disutility—that is, the more of anything you have, the more any additional increment of that thing is going to cost you, in one way or another. Somewhere in there, too, there’s the impact that dealing with those sandwiches has on your ability to deal with other things you need to do; that’s increasing risk of whole-system disruption—the more of anything you have, the more likely it is that an additional increment of that thing is going to disrupt the wider system in which you exist.
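Daly’s three processes can be caricatured with a toy model of the sandwich example; the particular utility and cost curves here are made up out of whole cloth, but the shape of the result is what matters:

```python
# Toy model of diminishing returns, using the sandwich example above.
# The specific numbers are illustrative, not empirical.

def marginal_utility(n):
    # Each additional sandwich is worth less than the last
    # (diminishing marginal utility); past a point, nothing at all.
    return max(0.0, 10.0 - 3.0 * n)

def marginal_disutility(n):
    # Each additional sandwich costs a bit more to deal with
    # (increasing marginal disutility).
    return 1.0 + 1.5 * n

def net_benefit(count):
    # Total net benefit of acquiring `count` sandwiches.
    return sum(marginal_utility(n) - marginal_disutility(n)
               for n in range(count))

benefits = [net_benefit(c) for c in range(7)]
best = max(range(7), key=lambda c: benefits[c])
print(best, benefits)
# Net benefit rises, peaks, then turns sharply negative: more is not
# always better, and eventually more is actively worse.
```

Run it, and the net benefit climbs to a peak at two sandwiches and then falls below zero by the sixth: past the peak, each additional increment subtracts from wellbeing rather than adding to it. That inverted-U curve, not any particular set of numbers, is the heart of Daly’s argument.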
Next to nobody wants to talk about the way that technological progress has already passed the point of diminishing returns: that the marginal utility of each new round of technology is dropping fast, the marginal disutility is rising at least as fast, and whole-system disruptions driven by technology are becoming an inescapable presence in everyday life. Still, I’ve come to think that an uncomfortable awareness of that fact is becoming increasingly common these days, however subliminal that awareness may be, and beginning to have an impact on popular culture, among many other things. If you’re in a hole, as the saying goes, the first thing to do is stop digging; if a large and growing fraction of your society’s problems are being caused by too much technology applied with too little caution, similarly, it’s not exactly helpful to insist that applying even more technology with even less skepticism about its consequences is the only possible answer to those problems.
There’s a useful word for something that remains stuck in a culture after the conditions that once made it relevant have passed away, and that word is “superstition.” I’d like to suggest that the faith-based claims that more technology is always better than less, that every problem must have a technological solution, and that technology always solves more problems than it creates, are among the prevailing superstitions of our time. I’d also like to suggest that, comforting and soothing as those superstitions may be, it’s high time we outgrow them and deal with the world as it actually is—a world in which yet another helping of faith-based optimism is far from useful.

Dark Age America: The Cauldron of Nations

Wed, 2014-09-03 15:57
It's one thing to suggest, as I did in last week’s post here, that North America a few centuries from now might have something like five per cent of its current population. It’s quite another thing to talk about exactly whose descendants will comprise that five per cent. That’s what I intend to do this week, and yes, I know that raising that issue is normally a very good way to spark a shouting match in which who-did-what-to-whom rhetoric plays its usual role in drowning out everything else.
Now of course there’s a point to talking about, and learning from, the abuses inflicted by groups of people on other groups of people over the last five centuries or so of North American history.  Such discussions, though, have very little to offer the topic of the current series of posts here on The Archdruid Report.  History may be a source of moral lessons but it’s not a moral phenomenon; a glance back over our past shows clearly enough that who won, who lost, who ended up ruling a society, and who ended up enslaved or exterminated by that same society, was not determined by moral virtue or by the justice of one or another cause, but by the crassly pragmatic factors of military, political, and economic power. No doubt most of us would rather live in a world that didn’t work that way, but here we are, and morality remains a matter of individual choices—yours and mine—in the face of a cosmos that seems sublimely unconcerned with our moral beliefs.
Thus we can take it for granted that just as the borders that currently divide North America were put there by force or the threat of force, the dissolution of those borders and their replacement with new lines of division will happen the same way. For that matter, it’s a safe bet that the social divisions—ethnic and otherwise—of the successor cultures that emerge in the aftermath of our downfall will be established and enforced by means no more just or fair than the ones that currently distribute wealth and privilege to the different social and ethnic strata in today’s North American nations. Again, it would be pleasant to live in a world where that isn’t true, but we don’t.
I apologize to any of my readers who are offended or upset by these points. In order to make any kind of sense of the way that civilizations fall—and more to the point, the way that ours is currently falling—it’s essential to get past the belief that history is under any obligation to hand out rewards for good behavior and punishments for the opposite, or for that matter the other way around. Over the years and decades and centuries ahead of us, as industrial civilization crumbles, a great many people who believe with all their hearts that their cause is right and just are going to die anyway, and there will be no shortage of brutal, hateful, vile individuals who claw their way to the top—for a while, at least. One of the reliable features of dark ages is that while they last, the top of the heap is a very unsafe place to be.
North America being what it is today, a great many people considering the sort of future I’ve just sketched out immediately start thinking about the potential for ethnic conflict, especially but not only in the United States. It’s an issue worth discussing, and not only for the currently obvious reasons. Conflict between ethnic groups is quite often a major issue in the twilight years of a civilization, for reasons we’ll discuss shortly, but it’s also self-terminating, for an interesting reason: traditional ethnic divisions don’t survive dark ages. In an age of political dissolution, economic implosion, social chaos, demographic collapse, and mass migration, the factors that maintain established ethnic divisions in place don’t last long. In their place, new ethnicities emerge.  It’s a commonplace of history that dark ages are the cauldron from which nations are born.
So we have three stages, which overlap to a greater or lesser degree: a stage of ethnic conflict, a stage of ethnic dissolution, and a stage of ethnogenesis. Let’s take them one at a time.
The stage of ethnic conflict is one effect of the economic contraction that’s inseparable from the decline of a civilization.  If a rising tide lifts all boats, as economists of the trickle-down school used to insist, a falling tide has a much more differentiated effect, since each group in a declining society does its best to see to it that as much as possible of the costs of decline land on someone else.  Since each group’s access to wealth and privilege determines fairly exactly how much influence it has on the process, it’s one of the constants of decline and fall that the costs and burdens of decline trickle down, landing with most force on those at the bottom of the pyramid.
That heats up animosities across the board: between ethnic groups, between regions, between political and religious divisions, you name it. Since everyone below the uppermost levels of wealth and power loses some of what they’ve come to expect, and since it’s human nature to pay more attention to what you’ve lost than to the difference between what you’ve retained and what someone worse off than you has to make do with, everyone’s aggrieved, and everyone sees any attempt by someone else to better their condition as a threat. That’s by no means entirely inaccurate—if the pie’s shrinking, any attempt to get a wider slice has to come at somebody else’s expense—but it fans the flames of conflict even further, helping to drive the situation toward the inevitable explosions.
One very common and very interesting feature of this process is that the increase in ethnic tensions tends to parallel a process of ethnic consolidation. In the United States a century ago, for example, the division of society by ethnicity wasn’t anything like as simple as it is today. The uppermost caste in most of the country wasn’t simply white, it was white male Episcopalians whose ancestors got here from northwestern Europe before the Revolutionary War. Irish ranked below Germans but above Italians, who looked down on Jews, and so on down the ladder to the very bottom, which was occupied by either African-Americans or Native Americans depending on locality. Within any given ethnicity, furthermore, steep social divisions existed, microcosms of a hierarchically ordered macrocosm; gender distinctions and a great many other lines of fracture combined with the ethnic divisions just noted to make American society in 1914 as intricately caste-ridden as any culture on the planet.
The partial dissolution of many of these divisions has resulted inevitably in the hardening of those that remain. That’s a common pattern, too: consider the way that the rights of Roman citizenship expanded step by step from the inhabitants of the city of Rome itself, to larger and larger fractions of the people it dominated, until finally every free adult male in the Empire was a Roman citizen by definition. Parallel to that process came a hardening of the major divisions, between free persons and slaves on the one hand, and between citizens of the Empire and the barbarians outside its borders on the other. The result was the same in that case as it is in ours: traditional, parochial jealousies and prejudices focused on people one step higher or lower on the ladder of caste give way to new loyalties and hatreds uniting ever greater fractions of the population into increasingly large and explosive masses.
The way that this interlocks with the standard mechanisms of decline and fall will be a central theme of future posts. The crucial detail, though, is that a society riven by increasingly bitter divisions of the sort just sketched out is very poorly positioned to deal with external pressure or serious crisis. “Divide and conquer,” the Romans liked to say during the centuries of their power:  splitting up their enemies and crushing them one at a time was the fundamental strategy they used to build their empire. On the way down, though, it was the body of Roman society that did the dividing, tearing itself apart along every available line of schism, and Rome was accordingly conquered in its turn. That’s usual for falling civilizations, and we’re well along the same route in the United States today.
Ethnic divisions thus routinely play a significant role in the crash of civilizations. Still, as noted above, the resulting chaos quickly shreds the institutional arrangements that make ethnic divisions endure in a settled society. Charismatic leaders emerge out of the chaos, and those that are capable of envisioning and forming alliances across ethnic lines succeed where their rivals fail; the reliable result is a chaotic melting pot of armed bands and temporary communities drawn from all available sources. When the Huns first came west from the Eurasian steppes around 370 CE, for example, they were apparently a federation of related Central Asian tribes; by the time of Attila, rather less than a century later, his vast armies included warriors from most of the ethnic groups of eastern Europe. We don’t even know what their leader’s actual name was; “Attila” was a nickname—“Daddy”—in Visigothic, the lingua franca among the eastern barbarians at that time.
The same chaotic reshuffling was just as common on the other side of the collapsing Roman frontiers. The province of Britannia, for instance, had long been divided into ethnic groups with their own distinct religious and cultural traditions. In the wake of the Roman collapse and the Saxon invasions, the survivors who took refuge in the mountains of the west forgot the old divisions, and took to calling themselves by a new name:  Combrogi, “fellow-countrymen” in old Brythonic. Nowadays that’s Cymry, the name the Welsh use for themselves.  Not everyone who ended up as Combrogi was British by ancestry—one of the famous Welsh chieftains in the wars against the Saxons was a Visigoth named Theodoric—nor were all the people on the other side Saxons—one of the leaders of the invaders was a Briton named Caradoc ap Cunorix,  the “Cerdic son of Cynric” of the Anglo-Saxon Chronicle.
It’s almost impossible to overstate the efficiency of the blender into which every political, economic, social, and ethnic manifestation got tossed in the last years of Rome. My favorite example of the raw confusion of that time is the remarkable career of another Saxon leader named Odoacer. He was the son of one of Attila the Hun’s generals, but got involved in Saxon raids on Britain after Attila’s death. Sometime in the 460s, when the struggle between the Britons and the Saxons was more or less stuck in deadlock, Odoacer decided to look for better pickings elsewhere, and led a Saxon fleet that landed at the mouth of the Loire in western France. For the next decade or so, more or less in alliance with Childeric, king of the Franks, he fought the Romans, the Goths, and the Bretons there.
When the Saxon hold on the Loire was finally broken, Odoacer took the remains of his force and joined Childeric in an assault on Italy. No records survive of the fate of that expedition, but it apparently didn’t go well. Odoacer next turned up, without an army, in what’s now Austria and was then the province of Noricum. It took him only a short time to scrape together a following from the random mix of barbarian warriors to be found there, and in 476 he marched on Italy again, and overthrew the equally random mix of barbarians who had recently seized control of the peninsula. 
The Emperor of the West just then, the heir of the Caesars and titular lord of half the world, was a boy named Romulus Augustulus. In a fine bit of irony, he also happened to be the son of Attila the Hun’s Roman secretary, a sometime ally of Odoacer’s father. This may be why, instead of doing the usual thing and having the boy killed, Odoacer basically told the last Emperor of Rome to run along and play. That sort of clemency was unusual, and it wasn’t repeated by the next barbarian warlord in line; seventeen years later Odoacer was murdered by order of Theodoric, king of the Ostrogoths, who proceeded to take his place as temporary master of the corpse of imperial Rome.
Soldiers of fortune, or of misfortune for that matter, weren’t the only people engaged in this sort of heavily armed tour of the post-Roman world during those same years. Entire nations were doing the same thing. Those of my readers who have been watching North America’s climate come unhinged may be interested to know that severe droughts in Central Asia may have been the trigger that kickstarted the process, pushing nomadic tribes out of their traditional territories in a desperate quest for survival. Whether or not that’s what pushed the Huns into motion, the westward migration of the Huns forced other barbarian peoples further west to flee for their lives, and the chain of dominoes thus set in motion played a massive role in creating the chaos in which figures like Odoacer rose and fell. It’s a measure of the sheer scale of these migrations that before Rome started to topple, many of the ancestors of today’s Spaniards lived in what’s now the Ukraine.
And afterwards? The migrations slowed and finally stopped, the warlords became kings, and the people who found themselves in some more or less stable kingdom began the slow process by which a random assortment of refugees and military veterans from the far corners of the Roman world became the first draft of a nation. The former province of Britannia, for example, became seven Saxon kingdoms and a varying number of Celtic ones, and then began the slow process of war and coalescence out of which England, Scotland, Wales, and Cornwall gradually emerged. Elsewhere, the same process moved at varying rates; new nations, languages, ethnic groups came into being. The cauldron of nations had come off the boil, and the history of Europe settled down to a somewhat less frenetic rhythm.
I’ve used post-Roman Europe as a convenient and solidly documented example, but transformations of the same kind are commonplace whenever a civilization goes down. The smaller and more isolated the geographical area of the civilization that falls, the less likely mass migrations are—ancient China, Mesopotamia, and central Mexico had plenty of them, while the collapse of the classic Maya and Heian Japan featured a shortage of wandering hordes—but the rest of the story is among the standard features you get with societal collapse. North America is neither small nor isolated, and so it’s a safe bet that we’ll get a tolerably complete version of the usual process right here in the centuries ahead.
What does that mean in practice? It means, to begin with, that a rising spiral of conflict along ethnic, cultural, religious, political, regional, and social lines will play an ever larger role in North American life for decades to come. Those of my readers who have been paying attention to events, especially but not only in the United States, will have already seen that spiral getting under way. As the first few rounds of economic contraction have begun to bite, the standard response of every group you care to name has been to try to get the bite taken out of someone else. Listen to the insults being flung around in the political controversies of the present day—the thieving rich, the shiftless poor, and the rest of it—and notice how many of them amount to claims that wealth that ought to belong to one group of people is being unfairly held by another. In those claims, you can hear the first whispers of the battle-cries that will be shouted as the usual internecine wars begin to tear our civilization apart.
As those get under way, for reasons we’ll discuss at length in future posts, governments and the other institutions of civil society will come apart at the seams, and the charismatic leaders already mentioned will rise to fill their place. In response, existing loyalties will begin to dissolve as the normal process of warband formation kicks into overdrive. In such times a strong and gifted leader like Attila the Hun can unite any number of contending factions into a single overwhelming force, but at this stage such things have no permanence; once the warlord dies, ages, or runs out of luck, the forces so briefly united will turn on each other and plunge the continent back into chaos.
There will also be mass migrations, and far more likely than not these will be on a scale that would have impressed Attila himself. That’s one of the ways that the climate change our civilization has unleashed on the planet is a gift that just keeps on giving; until the climate settles back down to some semblance of stability, and sea levels have risen as far as they’re going to rise, people in vulnerable areas are going to be forced out of their homes by one form of unnatural catastrophe or another, and the same desperate quest for survival that may have sent the Huns crashing into Eastern Europe will send new hordes of refugees streaming across the landscape. Some of those hordes will have starting points within the United States—I expect mass migrations from Florida as the seas rise, and from the Southwest as drought finishes tightening its fingers around the Sun Belt’s throat—while others will come from further afield.
Five centuries from now, as a result, it’s entirely possible that most people in the upper Mississippi valley will be of Brazilian ancestry, that the inhabitants of the Hudson’s Bay region will sing songs about their long-lost homes in drowned Florida, and that languages descended from English may be spoken only in a region extending from New England to the isles of deglaciated Greenland. Nor will these people think of themselves in any of the national and ethnic terms that come so readily to our minds today. It’s by no means impossible that somebody may claim to be Presden of Meriga, Meer of Kanda, or what have you, just as Charlemagne and his successors claimed to be the emperors of Rome. Just as the Holy Roman Empire was proverbially neither holy, nor Roman, nor an empire, neither the office nor the nation at that future time is likely to have much of anything to do with its nominal equivalent today—and there will certainly be nations and ethnic groups in that time that have no parallel today.
One implication of these points may be worth noting here, as we move deeper into the stage of ethnic conflict. No matter what your ethnic group, dear reader, no matter how privileged or underprivileged it may happen to be in today’s world, it will almost certainly no longer exist as such when industrial civilization on this continent finishes the arc of the Long Descent. Such of your genes as make it through centuries of dieoff and ruthless Darwinian selection will be mixed with genes from many other nationalities and corners of the world, and it’s probably a safe bet that the people who carry those genes won’t call themselves by whatever label you call yourself. When a civilization falls the way ours is falling, that’s how things generally go.
*****************
In other news, I’m delighted to announce that my latest book, Twilight’s Last Gleaming, a novel based on the five-part scenario of US imperial collapse and dissolution posted here in 2012, will be hitting the bookshelves on October 31 of this year. Those of my readers who are interested may like to know that the publisher, Karnac Books, is offering a discount and free worldwide shipping on preorders. Those posts still hold this blog’s all-time record for page views, and the novel’s just as stark and fast-paced as the original posts; those of my readers who enjoy a good political-military thriller might want to check it out.

Dark Age America: The Population Implosion

Wed, 2014-08-27 17:31
The three environmental shifts discussed in earlier posts in this sequence—the ecological impacts of a sharply warmer and dryer climate, the flooding of coastal regions due to rising sea levels, and the long-term consequences of industrial America’s frankly brainless dumping of persistent radiological and chemical poisons—all involve changes to the North American continent that will endure straight through the deindustrial dark age ahead, and will help shape the history of the successor cultures that will rise amid our ruins. For millennia to come, the peoples of North America will have to contend with drastically expanded deserts, coastlines that in some regions will be many miles further inland than they are today, and the presence of dead zones where nuclear or chemical wastes in the soil and water make human settlement impossible.
All these factors mean, among other things, that deindustrial North America will support many fewer people than it did in 1880 or so, before new agricultural technologies dependent on fossil fuels launched the population boom that is peaking in our time. Now of course this also implies that deindustrial North America will support many, many fewer people than it does today. For obvious reasons, it’s worth talking about the processes by which today’s seriously overpopulated North America will become the sparsely populated continent of the coming dark age—but that’s going to involve a confrontation with a certain kind of petrified irrelevancy all too common in our time.
Every few weeks, the comments page of this blog fields something insisting that I’m ignoring the role of overpopulation in the crisis of our time, and demanding that I say or do something about that. In point of fact, I’ve said quite a bit about overpopulation on this blog over the years, dating back to this post from 2007. What I’ve said about it, though, doesn’t follow either one of the two officially sanctioned scripts into which discussions of overpopulation are inevitably shoehorned in today’s industrial world; the comments I get are thus basically objecting to the fact that I’m not toeing the party line.
Like most cultural phenomena in today’s industrial world, the scripts just mentioned hew closely to the faux-liberal and faux-conservative narratives that dominate so much of contemporary thought. (I insist on the prefix, as what passes for political thought these days has essentially nothing to do with either liberalism or conservatism as these were understood as little as a few decades ago.) The scripts differ along the usual lines: that is to say, the faux-liberal script is well-meaning and ineffectual, while the faux-conservative script is practicable and evil.
Thus the faux-liberal script insists that overpopulation is a terrible problem, and we ought to do something about it, and the things we should do about it are all things that don’t work, won’t work, and have been being tried over and over again for decades without having the slightest effect on the situation. The faux-conservative script insists that overpopulation is a terrible problem, but only because it’s people of, ahem, the wrong skin color who are overpopulating, ahem, our country: that is, overpopulation means immigration, and immigration means let’s throw buckets of gasoline onto the flames of ethnic conflict, so it can play its standard role in ripping apart a dying civilization with even more verve than usual.
Overpopulation and immigration policy are not the same thing; neither are depopulation and the mass migrations of whole peoples for which German historians of the post-Roman dark ages coined the neat term Völkerwanderung, which are the corresponding phenomena in eras of decline and fall. For that reason, the faux-conservative side of the debate, along with the usually unmentioned realities of immigration policy in today’s America and the far greater and more troubling realities of mass migration and ethnogenesis that will follow in due time, will be left for next week’s post. For now I want to talk about overpopulation as such, and therefore about the faux-liberal side of the debate and the stark realities of depopulation that are waiting in the future.
All this needs to be put in its proper context. In 1962, the year I was born, there were about three billion human beings on this planet. Today, there are more than seven billion of us. That staggering increase in human numbers has played an immense and disastrous role in backing today’s industrial world into the corner where it now finds itself. Among all the forces driving us toward an ugly future, the raw pressure of human overpopulation, with the huge and rising resource requirements it entails, is among the most important.
That much is clear. What to do about it is something else again. You’ll still hear people insisting that campaigns to convince people to limit their reproduction voluntarily ought to do the trick, but such campaigns have been ongoing since well before I was born, and human numbers more than doubled anyway. It bears repeating that if a strategy has failed every time it’s been tried, insisting that we ought to do it again isn’t a useful suggestion. That applies not only to the campaigns just noted, but to all the other proposals to slow or stop population growth that have been tried repeatedly and failed just as repeatedly over the decades just past.
These days, a great deal of the hopeful talk around the subject of limits to overpopulation has refocused on what’s called the demographic transition: the process, visible in the population history of most of today’s industrial nations, whereby people start voluntarily reducing their reproduction when their income and access to resources rise above a certain level. It’s a real effect, though its causes are far from clear. The problem here is simply that the resource base that would make it possible for enough of the world’s population to have the income and access to resources necessary to trigger a worldwide demographic transition simply doesn’t exist.
As fossil fuels and a galaxy of other nonrenewable resources slide down the slope of depletion at varying rates, for that matter, it’s becoming increasingly hard for people in the industrial nations to maintain their familiar standards of living. It may be worth noting that this hasn’t caused a sudden upward spike in population growth in those countries where downward mobility has become most visible. The demographic transition, in other words, doesn’t work in reverse, and this points to a crucial fact that hasn’t necessarily been given the weight it deserves in conversations about overpopulation.
The vast surge in human numbers that dominates the demographic history of modern times is wholly a phenomenon of the industrial age. Other historical periods have seen modest population increases, but nothing on the same scale, and those have reversed themselves promptly when ecological limits came into play. Whatever the specific factors and forces that drove the population boom, then, it’s a pretty safe bet that the underlying cause was the one factor present in industrial civilization that hasn’t played a significant role in any other human society: the exploitation of vast quantities of extrasomatic energy—that is, energy that doesn’t come into play by means of human or animal muscle. Place the curve of increasing energy per capita worldwide next to the curve of human population worldwide, and the two move very nearly in lockstep: thus it’s fair to say that human beings, like yeast, respond to increased access to energy with increased reproduction.
Does that mean that we’re going to have to deal with soaring population worldwide for the foreseeable future? No, and hard planetary limits to resource extraction are the reasons why. Without the huge energy subsidy to agriculture contributed by fossil fuels, producing enough food to support seven billion people won’t be possible. We saw a preview of the consequences in 2008 and 2009, when the spike in petroleum prices caused a corresponding spike in food prices and a great many people around the world found themselves scrambling to get enough to eat on any terms at all. The riots and revolutions that followed grabbed the headlines, but another shift that happened around the same time deserves more attention: birth rates in many Third World countries decreased noticeably, and have continued to trend downward since then.
The same phenomenon can be seen elsewhere. Since the collapse of the Soviet Union, most of the formerly Soviet republics have seen steep declines in rates of live birth, life expectancy, and most other measures of public health, while death rates have climbed well above birth rates and stayed there. For that matter, since 2008, birth rates in the United States have dropped even further below the rate of replacement than they were before that time; immigration is the only reason the population of the United States doesn’t register declines year after year.
This is the wave of the future.  As fossil fuel and other resources continue to deplete, and economies dependent on those resources become less and less able to provide people with the necessities of life, the population boom will turn into a population bust. The base scenario in 1972’s The Limits to Growth, still the most accurate (and thus inevitably the most vilified) model of the future into which we’re stumbling blindly just now, put the peak of global population somewhere around 2030: that is, sixteen years from now. Recent declines in birth rates in areas that were once hotbeds of population growth, such as Latin America and the Middle East, can be seen as the leveling off that always occurs in a population curve before decline sets in.
That decline is likely to go very far indeed. That’s partly a matter of straightforward logic: since global population has been artificially inflated by pouring extrasomatic energy into boosting the food supply and providing other necessary resources to human beings, the exhaustion of economically extractable reserves of the fossil fuels that made that process possible will knock the props out from under global population figures. Still, historical parallels also have quite a bit to offer here: extreme depopulation is a common feature of the decline and fall of civilizations, with up to 95% population loss over the one to three centuries that the fall of a civilization usually takes.
Suggest that to people nowadays and, once you get past the usual reactions of denial and disbelief, the standard assumption is that population declines so severe could only happen if there were catastrophes on a truly gargantuan scale. That’s an easy assumption to make, but it doesn’t happen to be true. Just as it didn’t take vast public orgies of copulation and childbirth to double the planet’s population over the last half-century, it wouldn’t take equivalent exercises in mass death to halve the planet’s population over the same time frame. The ordinary processes of demography can do the trick all by themselves.
Let’s explore that by way of a thought experiment. Between family, friends, coworkers, and the others that you meet in the course of your daily activities, you probably know something close to a hundred people. Every so often, in the ordinary course of events, one of them dies—depending on the age and social status of the people you know, that might happen once a year, once every two years, or what have you. Take a moment to recall the most recent death in your social circle, and the one before that, to help put the rest of the thought experiment in context.
Now imagine that from this day onward, among the hundred people you know, one additional person—one person more than you would otherwise expect to die—dies every year, while the rate of birth remains the same as it is now. Imagine that modest increase in the death rate affecting the people you know. One year, an elderly relative of yours doesn’t wake up one morning; the next, a barista at the place where you get coffee on the way to work dies of cancer; the year after that, a coworker’s child comes down with an infection the doctors can’t treat, and so on.  A noticeable shift? Granted, but it’s not Armageddon; you attend a few more funerals than you’re used to, make friends with the new barista, and go about your life until one of those additional deaths is yours.
Now take that process and extrapolate it out. (Those of my readers who have the necessary math skills should take the time to crunch the numbers themselves.) Over the course of three centuries, an increase in the crude death rate of one per cent per annum, given an unchanged birth rate, is sufficient to reduce a population to five per cent of its original level. Vast catastrophes need not apply; of the traditional four horsemen, War, Famine, and Pestilence can sit around drinking beer and playing poker. The fourth horseman, in the shape of a modest change in crude death rates, can do the job all by himself.
Now imagine the same scenario, except that there are two additional deaths each year in your social circle, rather than one. That would be considerably more noticeable, but it still doesn’t look like the end of the world—at least until you do the math. An increase in the crude death rate of two per cent per annum, given an unchanged birth rate, is enough to reduce a population to five per cent of its original level within a century and a half. In global terms, if world population peaks around 8 billion in 2030, a decline on that scale would leave four hundred million people on the planet by 2180.
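The compounding arithmetic behind these figures is easy to verify. Here's a minimal sketch in Python—the function name is mine, purely illustrative—that treats the extra deaths as a steady net annual decline:

```python
# Check the compounding arithmetic of the thought experiment:
# an unchanged birth rate plus a small, constant bump in the
# crude death rate, applied year after year.

def remaining_fraction(extra_death_rate: float, years: int) -> float:
    """Fraction of the original population left after `years` of a
    steady net decline of `extra_death_rate` per year."""
    return (1.0 - extra_death_rate) ** years

# One extra death per hundred people per year, over three centuries:
print(round(remaining_fraction(0.01, 300), 3))  # ~0.049, i.e. about 5%

# Two extra deaths per hundred per year, over a century and a half:
print(round(remaining_fraction(0.02, 150), 3))  # ~0.048, again about 5%
```

No catastrophe appears anywhere in the calculation; the whole effect comes from ordinary compounding, which is the point of the thought experiment.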
In the real world, of course, things are not as simple or smooth as they are in the thought experiment just offered. Birth rates are subject to complex pressures and vary up and down depending on the specific pressures a population faces, and even small increases in infant and child mortality have a disproportionate effect by removing potential breeding pairs from the population before they can reproduce. Meanwhile, population declines are rarely anything like so even as  the thought experiment suggests; those other three horsemen, in particular, tend to get bored of their poker game at intervals and go riding out to give the guy with the scythe some help with the harvest. War, famine, and pestilence are common events in the decline and fall of a civilization, and the twilight of the industrial world is likely to get its fair share of them.
Thus it probably won’t be a matter of two more deaths a year, every year. Instead, one year, war breaks out, most of the young men in town get drafted, and half of them come back in body bags.  Another year, after a string of bad harvests, the flu comes through, and a lot of people who would have shaken it off under better conditions are just that little bit too malnourished to survive.  Yet another year, a virus shaken out of its tropical home by climate change and ecosystem disruption goes through town, and fifteen per cent of the population dies in eight ghastly months. That’s the way population declines happen in history.
In the twilight years of the Roman world, for example, a steady demographic contraction was overlaid by civil wars, barbarian invasions, economic crises, famines, and epidemics; the total population decline varied significantly from one region to another, but even the relatively stable parts of the Eastern Empire seem to have had around a 50% loss of population, while some areas of the Western Empire suffered far more drastic losses; Britain in particular was transformed from a rich, populous, and largely urbanized province to a land of silent urban ruins and small, scattered villages of subsistence farmers where even so simple a technology as wheel-thrown pottery became a lost art.
The classic lowland Maya are another good example along the same lines.  Hammered by climate change and topsoil loss, the Maya heartland went through a rolling collapse a century and a half in length that ended with population levels maybe five per cent of what they’d been at the start of the Terminal Classic period, and most of the great Maya cities became empty ruins rapidly covered by the encroaching jungle. Those of my readers who have seen pictures of tropical foliage burying the pyramids of Tikal and Copan might want to imagine scenes of the same kind in the ruins of Atlanta and Austin a few centuries from now. That’s the kind of thing that happens when an urbanized society suffers severe population loss during the decline and fall of a civilization.
That, in turn, is what has to be factored into any realistic forecast of dark age America: there will be many, many fewer people inhabiting North America a few centuries from now than there are today.  Between the depletion of the fossil fuel resources necessary to maintain today’s hugely inflated numbers and the degradation of North America’s human carrying capacity by climate change, sea level rise, and persistent radiological and chemical pollution, the continent simply won’t be able to support that many people. The current total is about 470 million—35 million in Canada, 314 million in the US, and 121 million in Mexico, according to the latest figures I was able to find—and something close to five per cent of that—say, 20 to 25 million—might be a reasonable midrange estimate for the human population of the North American continent when the population implosion finally bottoms out a few centuries from now.
Now of course those 20 to 25 million people won’t be scattered evenly across the continent. There will be very large regions—for example, the nearly lifeless, sun-blasted wastelands that climate change will make of the southern Great Plains and the Sonoran desert—where human settlement will be as sparse as it is today in the bleakest parts of the Sahara or the Rub‘ al Khali of southern Arabia. There will be other areas—for example, the Great Lakes region and the southern half of Mexico’s great central valley—where population will be relatively dense by Dark Age standards, and towns of modest size may even thrive if they happen to be in defensible locations.
The nomadic herding folk of the midwestern prairies, the villages of the Gulf Coast jungles, and the other human ecologies that will spring up in the varying ecosystems of deindustrial North America will all gradually settle into a more or less stable population level, at which births and deaths balance each other and the consumption of resources stays at or below sustainable levels of production. That’s what happens in human societies that don’t have the dubious advantage of a torrent of nonrenewable energy reserves to distract them temporarily from the hard necessities of survival.
It’s getting to that level that’s going to be a bear. The mechanisms of population contraction are simple enough, and as suggested above, they can have a dramatic impact on historical time scales without cataclysmic impact on the scale of individual lives. No, the difficult part of population contraction is its impact on economic patterns geared to continuous population growth. That’s part of a more general pattern, of course—the brutal impact of the end of growth on an economy that depends on growth to function at all—which has been discussed on this blog several times already, and will require close study in the present sequence of posts.
That examination will begin after we’ve considered the second half of the demography of dark age America: the role of mass migration and ethnogenesis in the birth of the cultures that will emerge on this continent when industrial civilization is a fading memory. That very challenging discussion will occupy next week’s post.

Heading Toward The Sidewalk

Wed, 2014-08-20 18:25
Talking about historical change is one thing when the changes under discussion are at some convenient remove in the past or the future. It’s quite another when the changes are already taking place. That’s one of the things that adds complexity to the project of this blog, because the decline and fall of modern industrial civilization isn’t something that might take place someday, if X or Y or Z happens or doesn’t happen; it’s under way now, all around us, and a good many of the tumults of our time are being driven by the unmentionable but inescapable fact that the process of decline is beginning to pick up speed.

Those tumults are at least as relevant to this blog’s project as the comparable events in the latter years of dead civilizations, and so it’s going to be necessary now and then to pause the current sequence of posts, set aside considerations of the far future for a bit, and take a look at what’s happening here and now. This is going to be one of those weeks, because a signal I’ve been expecting for a couple of years now has finally showed up, and its appearance means that real trouble may be imminent.

This has admittedly happened in a week when the sky is black with birds coming home to roost. I suspect that most of my readers have been paying at least some attention to the Ebola epidemic now spreading across West Africa. Over the last week, the World Health Organization has revealed that official statistics on the epidemic’s toll are significantly understated, the main nongovernmental organization fighting Ebola has admitted that the situation is out of anyone’s control, and a series of events neatly poised between absurdity and horror—a riot in one of Monrovia’s poorest slums directed at an emergency quarantine facility, in which looters made off with linens and bedding contaminated with the Ebola virus, and quarantined patients vanished into the crowd—may shortly plunge Liberia into scenes of a kind not witnessed since the heyday of the Black Death. The possibility that this outbreak may become a global pandemic, while still small, can no longer be dismissed out of hand.

Meanwhile, closer to home, what has become a routine event in today’s America—the casual killing of an unarmed African-American man by the police—has blown up in a decidedly nonroutine fashion, with imagery reminiscent of Cairo’s Tahrir Square being enacted night after night in the St. Louis suburb of Ferguson, Missouri. The culture of militarization and unaccountability that’s entrenched in urban police forces in the United States has been displayed in a highly unflattering light, as police officers dressed for all the world like storm troopers on the set of a bad science fiction movie did their best to act the part, tear-gassing and beating protesters, reporters, and random passersby in an orgy of jackbooted enthusiasm blatant enough that Tea Party Republicans have started to make worried speeches about just how closely this resembles the behavior of a police state.

If the police keep it up, the Arab Spring of a few years back may just be paralleled by an American Autumn. Even if some lingering spark of common sense on the part of state and local authorities heads off that possibility, the next time a white police officer guns down an African-American man for no particular reason—and there will be a next time; such events, as noted above, are routine in the United States these days—the explosion that follows will be even more severe, and the risk that such an explosion may end up driving the emergence of a domestic insurgency is not small. I noted in a post a couple of years back that the American way of war pretty much guarantees that any country conquered by our military will pup an insurgency in short order thereafter; there’s a great deal of irony in the thought that the importation of the same model of warfare into police practice in the US may have exactly the same effect here.

It may come as a surprise to some of my readers that the sign I noted is neither of these things. No, it’s not the big volcano in Iceland that’s showing worrying signs of blowing its top, either. It’s an absurdly little thing—a minor book review in an otherwise undistinguished financial-advice blog—and it matters only because it’s a harbinger of something considerably more important.

A glance at the past may be useful here. On September 9, 1929, no less a financial periodical than Barron’s took time off from its usual cheerleading of the stock market’s grand upward movement to denounce an investment analyst named Roger Babson in heated terms. Babson’s crime? Suggesting that the grand upward movement just mentioned was part of a classic speculative bubble, and the bubble’s inevitable bust would cause an economic depression. Babson had been saying this sort of thing all through the stock market boom of the late 1920s, and until that summer, the mainstream financial media simply ignored him, as they ignored everyone else whose sense of economic reality hadn’t gone out to lunch and forgotten to come back.

For those who followed the media, in fact, the summer and fall of 1929 were notable mostly for the fact that a set of beliefs that most people took for granted—above all else, the claim that the stock market could keep on rising indefinitely—suddenly were being loudly defended all over the place, even though next to nobody was attacking them. The June issue of The American Magazine featured an interview with financier Bernard Baruch, insisting that “the economic condition of the world seems on the verge of a great forward movement.” In the July 8 issue of Barron’s, similarly, an article insisted that people who worried about how much debt was propping up the market didn’t understand the role of brokers’ loans as a major new investment outlet for corporate money.

As late as October 15, when the great crash was only days away, Professor Irving Fisher of Yale’s economics department made his famous announcement to the media: “Stock prices have reached what looks like a permanently high plateau.” That sort of puffery was business as usual, then as now. Assaulting the critics of the bubble in print, by name, was not. It was only when the market was sliding toward the abyss of the 1929 crash that financial columnists publicly trained their rhetorical guns on the handful of people who had been saying all along that the boom would inevitably bust.

That’s a remarkably common feature of speculative bubbles, and could be traced in any number of historical examples, starting with the tulip bubble in the 17th century Netherlands and going on from there. Some of my readers may well have experienced the same thing for themselves in the not too distant past, during the last stages of the gargantuan real estate bubble that popped so messily in 2008. I certainly did, and a glance back at that experience will help clarify the implications of the signal I noticed in the week just past.

Back when the real estate bubble was soaring to vertiginous and hopelessly unsustainable heights, I used to track its progress on a couple of news aggregator sites, especially Keith Brand’s lively HousingPanic blog. Now and then, as the bubble peaked and began losing air, I would sit down with a glass of scotch, a series of links to the latest absurd comments by real estate promoters, and my copy of John Kenneth Galbraith’s The Great Crash 1929—the source, by the way, of the anecdotes cited above—and enjoy watching the rhetoric used to insist that the 2008 bubble wasn’t a bubble duplicate, in some cases word for word, the rhetoric used for the same purpose in 1929.

All the anti-bubble blogs fielded a steady stream of hostile comments from real estate investors who apparently couldn’t handle the thought that anyone might question their guaranteed ticket to unearned wealth, and Brand’s in particular saw no shortage of bare-knuckle verbal brawls. It was only in the last few months before the bubble burst, though, that pro-bubble blogs started posting personal attacks on Brand and his fellow critics, denouncing them by name in heated and usually inaccurate terms. At the time, I noted the parallel with the Barron’s attack on Roger Babson, and wondered if it meant the same thing; the events that followed showed pretty clearly that it did.

That same point may just have arrived in the fracking bubble—unsurprisingly, since that has followed the standard trajectory of speculative booms in all other respects so far. For some time now, the media has been full of proclamations about America’s allegedly limitless petroleum supply, which resemble nothing so much as the airy claims about stocks made by Bernard Baruch and Irving Fisher back in 1929. Week after week, bloggers and commentators have belabored the concept of peak oil, finding new and ingenious ways to insist that it must somehow be possible to extract infinite amounts of oil from a finite planet; oddly enough, though it’s rare for anyone to speak up for peak oil on these forums, the arguments leveled against it have been getting louder and more shrill as time passes. Until recently, though, I hadn’t encountered the personal attacks that announce the imminence of the bust.

That was before this week. On August 11th, a financial-advice website hosted a fine example of the species, and rather to my surprise—I’m hardly the most influential or widely read critic of the fracking bubble, after all—it was directed at me.

Mind you, I have no objection to hostile reviews of my writing. A number of books by other people have come in for various kinds of rough treatment on this blog, and turnabout here as elsewhere is fair play. I do prefer reviewers, hostile or otherwise, to take the time to read a book of mine before they review it, but that’s not something any writer can count on; reviewers who clearly haven’t so much as opened the cover of the book on which they pass judgment have been the target of barbed remarks in literary circles since at least the 18th century. Still, a review of a book the reviewer hasn’t read is one thing, and a review of a book the author hasn’t written and the publisher hasn’t published is something else again.

That’s basically the case here. The reviewer, a stock market blogger named Andrew McKillop, set out to critique a newly re-edited version of my 2008 book The Long Descent. That came as quite a surprise to me, as well as to New Society Publishers, the publisher of the earlier book, since no such reissue exists. The Long Descent remains in print in its original edition, and my six other books on peak oil and the future of industrial society are, ahem, different books.

My best guess is that McKillop spotted my new title Decline and Fall: The End of Empire and the Future of Democracy in 21st Century America in a bookshop window, and simply jumped to the conclusion that it must be a new release of the earlier book. I’m still not sure whether the result counts as a brilliant bit of surrealist performance art or a new low in what we still jokingly call journalistic ethics; in either case, it’s definitely broken new ground. Still, I hope that McKillop does better research for the people who count on him for stock advice.

Given that starting point, the rest of the review is about what you would expect. I gather that McKillop read a couple of online reviews of The Long Descent and a couple more of Decline and Fall, skimmed over a few randomly chosen posts on this blog, tossed the results together all anyhow, and jumped to the conclusion that the resulting mess was what the book was about. The result is quite a lively little bricolage of misunderstandings, non sequiturs, and straightforward fabrications—I invite anyone who cares to make the attempt to point out the place in my writings, for example, where I contrast catabolic collapse with “anabolic collapse,” whatever on earth that latter might be.

There’s a certain wry amusement to be had from going through the review and trying to figure out exactly how McKillop might have gotten this or that bit of misinformation wedged into his brain, but I’ll leave that as a party game for my readers. The point I’d like to make here is that the appearance of this attempted counterblast in a mainstream financial blog is a warning sign. It suggests that the fracking boom, like previous bubbles when they reached the shoot-the-messenger stage, may well be teetering on the brink of a really spectacular crash—and it’s not the only such sign, either.

The same questions about debt that were asked about the stock market in 1929 and the housing market in 2008 are being asked now, with increasing urgency, about the immense volume of junk bonds that are currently propping up the shale boom. Meanwhile gas and oil companies are having to drill ever more frantically and invest ever more money to keep production rates from dropping like a rock. Get past the vacuous handwaving about “Saudi America,” and it’s embarrassingly clear that the fracking boom is simply one more debt-fueled speculative orgy destined for one more messy bust. It’s disguised as an energy revolution in exactly the same way that the real estate bubble was disguised as a housing revolution, the tech-stock bubble as a technological revolution, and so on back through the annals of financial delusion as far as you care to go.

Sooner or later—and much more likely sooner than later—the fracking bubble is going to pop. Just how and when that will happen is impossible to know in advance. Even making an intelligent guess at this point would require a detailed knowledge of which banks and investment firms have gotten furthest over their heads in shale leases and the like, which petroleum and natural gas firms have gone out furthest on a financial limb, and so on. That’s the kind of information that the companies in question like to hide from one another, not to mention the general public; it’s thus effectively inaccessible to archdruids, which means that we’ll just have to wait for the bankruptcies, the panic selling, and the wet thud of financiers hitting Wall Street sidewalks to find out which firms won the fiscal irresponsibility sweepstakes this time around.

One way or another, the collapse of the fracking boom bids fair to deliver a body blow to the US economy, at a time when most sectors of that economy have yet to recover from the bruising they received at the hands of the real estate bubble and bust. Depending on how heavily and cluelessly foreign banks and investors have been sucked into the boom—again, hard to say without inside access to closely guarded financial information—the popping of the bubble could sucker-punch national economies elsewhere in the world as well. Either way, it’s going to be messy, and the consequences will likely include a second helping of the same unsavory stew of bailouts for the rich, austerity for the poor, bullying of weaker countries by their stronger neighbors, and the like, that was dished up with such reckless abandon in the aftermath of the 2008 real estate bust. Nor is any of this going to make it easier to deal with potential pandemics, simmering proto-insurgencies in the American heartland, or any of the other entertaining consequences of our headfirst collision with the sidewalks of reality.

The consequences may go further than this. The one detail that sets the fracking bubble apart from the real estate bubble, the tech stock bubble, and their kin further back in economic history is that fracking wasn’t just sold to investors as a way to get rich quick; it was also sold to them, and to the wider public as well, as a way to evade the otherwise inexorable reality of peak oil. 2008, it bears remembering, was not just the year that the real estate bubble crashed, and dragged much of the global economy down with it; it was also the year when all those prophets of perpetual business as usual who insisted that petroleum would never break $60 a barrel or so got to eat crow, deep-fried in light sweet crude, when prices spiked upwards of $140 a barrel. All of a sudden, all those warnings about peak oil that experts had been issuing since the 1950s became a great deal harder to dismiss out of hand.

The fracking bubble thus had mixed parentage; its father may have been the same merciless passion for fleecing the innocent that always sets the cold sick heart of Wall Street aflutter, but its mother was the uneasy dawn of recognition that by ignoring decades of warnings and recklessly burning through the Earth’s finite reserves of fossil fuels just as fast as they could be extracted, the industrial world has backed itself into a corner from which the only way out leads straight down. White’s Law, one of the core concepts of human ecology, points out that economic development is directly correlated with energy per capita; as depletion overtakes production and energy per capita begins to decline, the inevitable result is a long era of economic contraction, in which a galaxy of economic and cultural institutions predicated on continued growth will stop working, and those whose wealth and influence depend on those institutions will be left with few choices short of jumping out a Wall Street window.

The last few years of meretricious handwaving about fracking as the salvation of our fossil-fueled society may thus mark something rather more significant than another round of the pervasive financial fraud that’s become the lifeblood of the US economy in these latter days. It’s one of the latest—and maybe, just maybe, one of the last—of the mental evasions that people in the industrial world have used in the futile but fateful attempt to pretend that pursuing limitless economic growth on a finite and fragile planet is anything other than a guaranteed recipe for disaster. When the fracking bubble goes to its inevitable fate, and most of a decade of babbling about limitless shale oil takes its proper place in the annals of human idiocy, it’s just possible that some significant number of people will realize that the universe is under no obligation to provide us with all the energy and other resources we want, just because we happen to want them. I wouldn’t bet the farm on that, but I think the possibility is there.

One swallow does not a summer make, mind you, and one fumbled attempt at a hostile book review on one website doesn’t prove that the same stage in the speculative bubble cycle that saw frantic denunciations flung at Roger Babson and Keith Brand—the stage that comes immediately before the crash—has arrived this time around. I would encourage my readers to watch for similar denunciations aimed at more influential and respectable fracking-bubble critics such as Richard Heinberg or Kurt Cobb. Once those start showing up, hang onto your hat; it’s going to be a wild ride.

Dark Age America: A Bitter Legacy

Wed, 2014-08-13 16:45
Civilizations normally leave a damaged environment behind them when they fall, and ours shows every sign of following that wearily familiar pattern. The nature and severity of the ecological damage a civilization leaves behind, though, depend on two factors, one obvious, the other less so. The obvious factor derives from the nature of the technologies the civilization deployed in its heyday; the less obvious one depends on how many times those same technologies had been through the same cycle of rise and fall before the civilization under discussion got to them.

There’s an important lesson in this latter factor. Human technologies almost always start off their trajectory through time as environmental disasters looking for a spot marked X, which they inevitably find, and then have the rough edges knocked off them by centuries or millennia of bitter experience. When our species first developed the technologies that enabled hunting bands to take down big game animals, the result was mass slaughter and the extinction of entire species of megafauna, followed by famine and misery; rinse and repeat, and you get the exquisite ecological balance that most hunter-gatherer societies maintained in historic times. In much the same way, early field agriculture yielded bumper crops of topsoil loss and subsistence failure to go along with its less reliable yields of edible grain, and the hard lessons from that experience have driven the rise of more sustainable agricultural systems—a process completed in our time with the emergence of organic agricultural methods that build soil rather than depleting it.

Any brand new mode of human subsistence is thus normally cruising for a bruising, and will get it in due time at the hands of the biosphere. That’s not precisely good news for modern industrial civilization, because ours is a brand new mode of human subsistence; it’s the first human society ever to depend almost entirely on extrasomatic energy—energy, that is, that doesn’t come from human or animal muscles fueled by food crops. In my book The Ecotechnic Future, I’ve suggested that industrial civilization is simply the first and most wasteful form of a new mode of human society, the technic society. Eventually, I proposed, technic societies will achieve the same precise accommodation to ecological reality that hunter-gatherer societies worked out long ago, and agricultural societies have spent the last eight thousand years or so pursuing. Unfortunately, that doesn’t help us much just now.

Modern industrial civilization, in point of fact, has been stunningly clueless in its relationship with the planetary cycles that keep us all alive. Like those early bands of roving hunters who slaughtered every mammoth they could find and then looked around blankly for something to eat, we’ve drawn down the finite stocks of fossil fuels on this planet without the least concern about what the future would bring—well, other than the occasional pious utterance of thoughtstopping mantras of the “Oh, I’m sure they’ll think of something” variety. That’s not the only thing we’ve drawn down recklessly, of course, and the impact of our idiotically short-term thinking on our long-term prospects will be among the most important forces shaping the next five centuries of North America’s future.

Let’s start with one of the most obvious: topsoil, the biologically active layer of soil that can support food crops. On average, as a result of today’s standard agricultural methods, North America’s arable land loses almost three tons of topsoil from each cultivated acre every single year. Most of the topsoil that made North America the breadbasket of the 20th century world is already gone, and at the current rate of loss, all of it will be gone by 2075. That would be bad enough if we could rely on artificial fertilizer to make up for the losses, but by 2075 that won’t be an option: the entire range of chemical fertilizers is made from nonrenewable resources—natural gas is the main feedstock for nitrate fertilizers, rock phosphate for phosphate fertilizers, and so on—and all of these are depleting fast.
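For readers who like to see the arithmetic behind a claim like this, here is a minimal sketch. The loss rate of roughly three tons per cultivated acre per year and the 2075 endpoint are the figures cited above; the remaining topsoil stock per acre is simply back-calculated from those two numbers, not an independent estimate.

```python
# Back-of-the-envelope arithmetic for the topsoil timeline above.
# Inputs are the post's own figures; the remaining stock is derived.

LOSS_RATE = 3.0        # tons of topsoil lost per cultivated acre per year
START_YEAR = 2014      # when this post was written
DEPLETION_YEAR = 2075  # cited exhaustion date at current rates

# Implied topsoil remaining per cultivated acre as of 2014:
remaining_tons = LOSS_RATE * (DEPLETION_YEAR - START_YEAR)
print(f"Implied remaining topsoil: ~{remaining_tons:.0f} tons/acre")

# Even halving the loss rate only postpones the date; it doesn't
# cancel the depletion, which is the point about rebuilding soil:
slower_year = START_YEAR + remaining_tons / (LOSS_RATE / 2)
print(f"At half the loss rate, depletion arrives around {slower_year:.0f}")
```

Running the numbers this way makes the underlying point concrete: slowing erosion buys decades, but only methods that actively rebuild soil change the shape of the curve.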

Topsoil loss driven by bad agricultural practices is actually quite a common factor in the collapse of civilizations. Sea-floor cores in the waters around Greece, for example, show a spike in sediment deposition from rapidly eroding topsoil right around the end of the Mycenean civilization, and another from the latter years of the Roman Empire. If archeologists thousands of years from now try the same test, they’ll find yet another eroded topsoil layer at the bottom of the Gulf of Mexico, the legacy of an agricultural system that put quarterly profits ahead of the relatively modest changes that might have preserved the soil for future generations.

The methods of organic agriculture mentioned earlier could help very significantly with this problem, since those include techniques for preserving existing topsoil, and rebuilding depleted soil at a rate considerably faster than nature’s pace. To make any kind of difference, though, those methods would have to be deployed on a very broad scale, and then passed down through the difficult years ahead. Lacking that, even where desertification driven by climate change doesn’t make farming impossible, a very large part of today’s North American farm belt will likely be unable to support crops for centuries or millennia to come. Eventually, the same slow processes that replenished the soil on land scraped bare by the ice age glaciers will do the same thing to land stripped of topsoil by industrial farming, but “eventually” will not come quickly enough to spare our descendants many hungry days.

The same tune in a different key is currently being played across the world’s oceans, and as a result my readers can look forward, in the not too distant future, to tasting the last piece of seafood they will ever eat. Conservatively managed, the world’s fish stocks could have produced large yields indefinitely, but they were not conservatively managed; where regulation was attempted, political and economic pressure consistently drove catch limits above sustainable levels, and of course cheating was pervasive and the penalties for being caught were merely another cost of doing business. Fishery after fishery has accordingly collapsed, and the increasingly frantic struggle to feed seven billion hungry mouths is unlikely to leave any of those that remain intact for long.

Worse, all of this is happening in oceans that are being hammered by other aspects of our collective ecological stupidity. Global climate change, by boosting the carbon dioxide content of the atmosphere, is acidifying the oceans and causing sweeping shifts in oceanic food chains. Those shifts involve winners as well as losers; where calcium-shelled diatoms and corals are suffering population declines, seaweeds and algae, which are not so sensitive to changes in the acid-alkaline balance, are thriving on the increased CO2 in the water—but the fish that feed on seaweeds and algae are not the same as those that feed on diatoms and corals, and the resulting changes are whipsawing ocean ecologies.

Close to shore, toxic effluents from human industry and agriculture are also adding to the trouble. The deep oceans, all things considered, offer sparse pickings for most saltwater creatures; the vast majority of ocean life thrives within a few hundred miles of land, where rivers, upwelling zones, and the like provide nutrients in relative abundance. We’re already seeing serious problems with toxic substances concentrating up through oceanic food chains; unless communities close to the water’s edge respond to rising sea levels with consummate care, hauling every source of toxic chemicals out of reach of the waters, that problem is only going to grow worse. Different species react differently to this or that toxin; some kind of aquatic ecosystem will emerge and thrive even in the most toxic estuaries of deindustrial North America, but it’s unlikely that those ecosystems will produce anything fit for human beings to eat, and making the attempt may not be particularly good for one’s health.

Over the long run, that, too, will right itself. Bioaccumulated toxins will end up entombed in the muck on the ocean’s floor, providing yet another interesting data point for the archeologists of the far future; food chains and ecosystems will reorganize, quite possibly in very different forms from the ones they have now. Changes in water temperature, and potentially in the patterns of ocean currents, will bring unfamiliar species into contact with one another, and living things that survive the deindustrial years in isolated refugia will expand into their former range. These are normal stages in the adaptation of ecosystems to large-scale shocks. Still, those processes of renewal take time, and the deindustrial dark ages ahead of us will be long gone before the seas are restored to biological abundance.

Barren lands and empty seas aren’t the only bitter legacies we’re leaving our descendants, of course. One of the others has received quite a bit of attention on the apocalyptic end of the peak oil blogosphere for several years now—since March 11, 2011, to be precise, when the Fukushima Daiichi nuclear disaster got under way. Nuclear power exerts a curious magnetism on the modern mind, drawing it toward extremes in one direction or the other; the wildly unrealistic claims about its limitless potential to power the future that have been made by its supporters are neatly balanced by the equally unrealistic claims about its limitless potential as a source of human extinction made by its opponents. Negotiating a path between those extremes is not always an easy matter.

In both cases, though, it’s easy enough to clear away at least some of the confusion by turning to documented facts. It so happens, for instance, that no nation on Earth has ever been able to launch or maintain a nuclear power program without huge and continuing subsidies. Nuclear power never pays for itself; absent a steady stream of government handouts, it doesn’t make enough economic sense to attract enough private investment to cover its costs, much less meet the huge and so far unmet expenses of nuclear waste storage; and in the great majority of cases, the motive behind the program, and the subsidies, is pretty clearly the desire of the local government to arm itself with nuclear weapons at any cost. Thus the tired fantasy of cheap, abundant nuclear power needs to be buried alongside the Eisenhower-era propagandists who dreamed it up in the first place.

It also happens, of course, that there have been quite a few catastrophic nuclear accidents since the dawn of the atomic age just over seventy years ago, especially but not only in the former Soviet Union. Thus it’s no secret what the consequences are when a reactor melts down, or when mismanaged nuclear waste storage facilities catch fire and spew radioactive smoke across the countryside. What results is an unusually dangerous industrial accident, on a par with the sudden collapse of a hydroelectric dam or a chemical plant explosion that sends toxic gases drifting into a populated area; it differs from these mostly in that the contamination left behind by certain nuclear accidents remains dangerous for many years after it comes drifting down from the sky.

There are currently 69 operational nuclear power plants scattered unevenly across the face of North America, with 127 reactors among them; there are also 48 research reactors, most of them much smaller and less vulnerable to meltdown than the power plant reactors. Most North American nuclear power plants store spent fuel rods in pools of cooling water onsite, since the spent rods continue to give off heat and radiation and there’s no long term storage for high-level nuclear waste. Neither a reactor nor a fuel rod storage pool can be left untended for long without serious trouble, and a great many things—including natural disasters and human stupidity—can push them over into meltdown, in the case of reactors, or conflagration, in the case of spent fuel rods. In either case, or both, you’ll get a plume of toxic, highly radioactive smoke drifting in the wind, and a great many people immediately downwind will die quickly or slowly, depending on the details and the dose.

It’s entirely reasonable to predict that this is going to happen to some of those 175 reactors. In a world racked by climate change, resource depletion, economic disintegration, political and social chaos, mass movements of populations, and the other normal features of the decline and fall of a civilization and the coming of a dark age, the short straw is going to be drawn sooner or later, and serious nuclear disasters are going to happen. That doesn’t justify the claim that every one of those reactors is going to melt down catastrophically, every one of the spent-fuel storage facilities is going to catch fire, and so on—though of course that claim does make for more colorful rhetoric.

In the real world, for reasons I’ll be discussing further in this series of posts, we don’t face the kind of sudden collapse that could make all the lights go out at once. Some nations, regions, and local areas within regions will slide faster than others, or be deliberately sacrificed so that resources of one kind or another can be used somewhere else. As long as governments retain any kind of power at all, keeping nuclear facilities from adding to the ongoing list of disasters will be high on their agendas; shutting down reactors that are no longer safe to operate is one step they can certainly take, and so is hauling spent fuel rods out of the pools and putting them somewhere less immediately vulnerable.

It’s probably a safe bet that the further we go along the arc of decline and fall, the further these decommissioning exercises will stray from the optimum. I can all too easily imagine fuel rods being hauled out of their pools by condemned criminals or political prisoners, loaded on flatbed rail cars, taken to some desolate corner of the expanding western deserts, and tipped one at a time into trenches dug in the desert soil, then covered over with a few meters of dirt and left to the elements. Sooner or later the radionuclides will leak out, and that desolate place will become even more desolate, a place of rumors and legends where those who go don’t come back.

Meanwhile, the reactors and spent-fuel pools that don’t get shut down even in so cavalier a fashion will become the focal points of dead zones of a slightly different kind. The facilities themselves will be off limits for some thousands of years, and the invisible footprints left behind by the plumes of smoke and dust will be dangerous for centuries. The vagaries of deposition and erosion are impossible to predict; in areas downwind from Chernobyl or some of the less famous Soviet nuclear accidents, one piece of overgrown former farmland may be relatively safe while another a quarter hour’s walk away may still set a Geiger counter clicking at way-beyond-safe rates. Here I imagine cow skulls on poles, or some such traditional marker, warning the unwary that they stand on the edge of accursed ground.

It’s important to keep in mind that not all the accursed ground in deindustrial North America will be the result of nuclear accidents. There are already areas on the continent so heavily contaminated with toxic pollutants of less glow-in-the-dark varieties that anyone who attempts to grow food or drink the water there can count on a short life and a wretched death. As the industrial system spirals toward its end, and those environmental protections that haven’t been gutted already get flung aside in the frantic quest to keep the system going just a little bit longer, spills and other industrial accidents are very likely to become a good deal more common than they are already.

There are methods of soil and ecosystem bioremediation that can be done with very simple technologies—for example, plants that concentrate toxic metals in their tissues so they can be hauled away to a less dangerous site, and fungi that break down organic toxins—but if they’re to do any good at all, these will have to be preserved and deployed in the teeth of massive social changes and equally massive hardships. Lacking that, and it’s a considerable gamble at this point, the North America of the future will be spotted with areas where birth defects are a common cause of infant mortality and it will be rare to see anyone over the age of forty or so without the telltale signs of cancer.

There’s a bitter irony in the fact that cancer, a relatively rare disease a century and a half ago—most childhood cancers in particular were so rare that individual cases were written up in medical journals—has become the signature disease of industrial society, expanding its occurrence and death toll in lockstep with our mindless dumping of chemical toxins and radioactive waste into the environment. What, after all, is cancer? A disease of uncontrolled growth.

I sometimes wonder if our descendants in the deindustrial world will appreciate that irony. One way or another, I have no doubt that they’ll have their own opinions about the bitter legacy we’re leaving them. Late at night, when sleep is far away, I sometimes remember Ernest Thompson Seton’s heartrending 1927 prose poem “A Lament,” in which he recalled the beauty of the wild West he had known and the desolation of barbed wire and bleached bones he had seen it become. He projected the same curve of devastation forward until it rebounded on its perpetrators—yes, that would be us—and imagined the voyagers of some other nation landing centuries from now at the ruins of Manhattan, and slowly piecing together the story of a vanished people:

Their chiefs and wiser ones shall know
That here was a wastrel race, cruel and sordid,
Weighed and found wanting,
Once mighty but forgotten now.
And on our last remembrance stone,
These wiser ones will write of us:
They desolated their heritage,
They wrote their own doom.

I suspect, though, that our descendants will put things in language a good deal sharper than this. As they think back on the people of the 20th and early 21st centuries who gave them the barren soil and ravaged fisheries, the chaotic weather and rising oceans, the poisoned land and water, the birth defects and cancers that embitter their lives, how will they remember us? I think I know. I think we will be the orcs and Nazgûl of their legends, the collective Satan of their mythology, the ancient race who ravaged the earth and everything on it so they could enjoy lives of wretched excess at the future’s expense. They will remember us as evil incarnate—and from their perspective, it’s by no means easy to dispute that judgment.

Dark Age America: The Rising Oceans

Wed, 2014-08-06 17:02
The vagaries of global climate set in motion by our species’ frankly brainless maltreatment of the only atmosphere we’ve got, the subject of last week’s post here, have another dimension that bears close watching. History, as I suggested last week, can be seen as human ecology in its transformations over time, and every ecosystem depends in the final analysis on the available habitat. For human beings, the habitat that matters is dry land with adequate rainfall and moderate temperatures; we’ve talked about the way that anthropogenic climate change is interfering with the latter two, but it promises to have  significant impacts on the first of those requirements as well.
It’s helpful to put all this in the context of deep time. For most of the last billion years or so, the Earth has been a swampy jungle planet where ice and snow were theoretical possibilities only. Four times in that vast span, though, something—scientists are still arguing about what—turned the planet’s thermostat down sharply, resulting in ice ages millions of years in length. The most recent of these downturns began cooling the planet maybe ten million years ago, in the Miocene epoch; a little less than two million years ago, at the beginning of the Pleistocene epoch, the first of the great continental ice sheets began to spread across the Northern Hemisphere, and the ice age was on.
We’re still in it. During an ice age, a complex interplay of the Earth’s rotational and orbital wobbles drives the Milankovitch cycle, a cyclical warming and cooling of the planet that takes around 100,000 years to complete, with long glaciations broken by much shorter interglacials. We’re approaching the end of the current interglacial, and it’s estimated that the current ice age has maybe another ten million years to go; one consequence is that at some point a few millennia in the future, we can pretty much count on the arrival of a new glaciation. In the meantime, we’ve still got continental ice sheets covering Antarctica and Greenland, and a significant amount of year-round ice in mountains in various corners of the world. That’s normal for an interglacial, though not for most of the planet’s history.
The back-and-forth flipflop between glaciations and interglacials has a galaxy of impacts on the climate and ecology of the planet, but one of the most obvious comes from the simple fact that all the frozen water needed to form a continental ice sheet has to come from somewhere, and the only available “somewhere” on this planet is the oceans. As glaciers spread, sea level drops accordingly; 18,000 years ago, when the most recent glaciation hit its final peak, sea level was more than 400 feet lower than today, and roaming tribal hunters could walk all the way from Holland to Ireland and keep going, following reindeer herds a good distance into what’s now the northeast Atlantic.
What followed has plenty of lessons on offer for our future. It used to be part of the received wisdom that ice ages began and ended with, ahem, glacial slowness, and there still seems to be good reason to think that the beginnings are fairly gradual, but the ending of the most recent ice age involved periods of very sudden change. 18,000 years ago, as already mentioned, the ice sheets were at their peak; about 16,000 years ago, the planetary climate began to warm, pushing the ice into a slow retreat. Around 14,700 years ago, the warm Bölling phase arrived, and the ice sheets retreated hundreds of miles; according to several studies, the West Antarctic ice sheet collapsed completely at this time.
The Bölling gave way after around 600 years to the Older Dryas cold period, putting the retreat of the ice on hold. After another six centuries or so, the Older Dryas gave way to a new warm period, the Alleröd, which sent the ice sheets reeling back and raised sea levels hundreds of feet worldwide. Then came a new cold phase, the frigid Younger Dryas, which brought temperatures back for a few centuries to their ice age lows, cold enough to allow the West Antarctic ice sheet to reestablish itself and to restore tundra conditions over large sections of the Northern Hemisphere. Ice core measurements suggest that the temperature drop hit fast, in a few decades or less—a useful reminder that rapid climate change can come from natural sources as well as from our smokestacks and tailpipes.
Just over a millennium later, right around 9600 BCE, the Boreal phase arrived, and brought even more spectacular change. According to oxygen isotope measurements from Greenland ice cores—I get challenged on this point fairly often, so I’ll mention that the figure I’m citing is from Steven Mithen’s After the Ice, a widely respected 2003 survey of human prehistory—global temperatures spiked 7°C in less than a decade, pushing the remaining ice sheets into rapid collapse and sending sea levels soaring. Over the next few thousand years, the planet’s ice cover shrank to a little less than its current level, and sea level rose a bit above what it is today; a gradual cooling trend beginning around 6000 BCE brought both to the status they had at the beginning of the industrial era.
Scientists still aren’t sure what caused the stunning temperature spike at the beginning of the Boreal phase, but one widely held theory is that it was driven by large-scale methane releases from the warming oceans and thawing permafrost. The ocean floor contains huge amounts of methane trapped in unstable methane hydrates; permafrost contains equally huge amounts of dead vegetation that’s kept from rotting by subfreezing temperatures, and when the permafrost thaws, that vegetation rots and releases more methane. Methane is a far more powerful greenhouse gas than carbon dioxide, but it’s also much more transient—once released into the atmosphere, methane breaks down into carbon dioxide and water relatively quickly, with an estimated average lifespan of ten years or so—and so it’s quite a plausible driver for the sort of sudden shock that can be traced in the Greenland ice cores.
If that’s what did it, of course, we’re arguably well on our way there. I discussed in a previous post here credible reports that large sections of the Arctic Ocean are fizzing with methane, and I suspect many of my readers have heard of the recently discovered craters in Siberia that appear to have been caused by methane blowouts from thawing permafrost. On top of the current carbon dioxide spike, a methane spike would do a fine job of producing the kind of climate chaos I discussed in last week’s post. That doesn’t equal the kind of runaway feedback loop beloved of a certain sect of contemporary apocalypse-mongers, because there are massive sources of negative feedback that such claims always ignore, but it seems quite likely that the decades ahead of us will be enlivened by a period of extreme climate turbulence driven by significant methane releases.
Meanwhile, two of the world’s three remaining ice sheets—the West Antarctic and Greenland sheets—have already been destabilized by rising temperatures. Between them, these two ice sheets contain enough water to raise sea level around 50 feet globally, and the estimate I’m using for anthropogenic carbon dioxide emissions over the next century provides enough warming to cause the collapse and total melting of both of them. All that water isn’t going to hit the world’s oceans overnight, of course, and a great deal depends on just how fast the melting happens.
The predictions for sea level rise included in the last few IPCC reports assume a slow, linear process of glacial melting. That’s appropriate as a baseline, but the evidence from paleoclimatology shows that ice sheets collapse in relatively sudden bursts of melting, producing what are termed “global meltwater pulses” that can be tracked worldwide by a variety of proxy measurements. Mind you, “relatively sudden” in geological terms is slow by the standards of a human lifetime; the complete collapse of a midsized ice sheet like Greenland’s or West Antarctica’s can take five or six centuries, and that in turn involves periods of relatively fast melting and sea level rise, interspersed with slack periods when sea level creeps up much more slowly.
So far, at least, the vast East Antarctic ice sheet has shown only very modest changes, and most current estimates suggest that it would take something far more drastic than the carbon output of our remaining economically accessible fossil fuel reserves to tip it over into instability; this is a good thing, as East Antarctica’s ice fields contain enough water to drive sea level up 250 feet or so.  Thus a reasonable estimate for sea level change over the next five hundred years involves the collapse of the Greenland and West Antarctic sheets and some modest melting on the edges of the East Antarctic sheet, raising sea level by something over 50 feet, delivered in a series of unpredictable bursts divided by long periods of relative stability or slow change.
The result will be what paleogeographers call “marine transgression”—the invasion of dry land and fresh water by the sea. Fifty feet of sea level change adds up to quite a bit of marine transgression in some areas, much less in others, depending always on local topography. Where the ground is low and flat, the rising seas can penetrate a very long way; in California, for example, the state capital at Sacramento is many miles from the ocean, but since it’s only 30 feet above sea level and connected to the sea by a river, its skyscrapers will be rising out of a brackish estuary long before Greenland and West Antarctica are bare of ice. The port cities of the Gulf coast are also on the front lines—New Orleans is actually below sea level, and will likely be an early casualty, but every other Gulf port from Brownsville, Texas (elevation 43 feet) to Tampa, Florida (elevation 15 feet) faces the same fate, and most East and West Coast ports face substantial flooding of economically important districts.
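For readers who like to check this sort of arithmetic, here's a minimal sketch of the comparison just described. The elevation figures are the ones cited above (the value for New Orleans is an illustrative below-sea-level figure, since the text gives no exact number); a real assessment would need detailed topography, subsidence, and storm surge data, so this only illustrates the bare logic of marine transgression.

```python
# Which of the cities named above would sit below a given sea level rise,
# using the rough elevations cited in the text. Illustrative only.

PORT_ELEVATIONS_FT = {
    "New Orleans, LA": -7,   # below sea level; exact figure is illustrative
    "Tampa, FL": 15,
    "Sacramento, CA": 30,
    "Brownsville, TX": 43,
}

def inundated(ports, sea_level_rise_ft):
    """Return, alphabetically, the cities whose cited elevation falls
    below the new sea level."""
    return sorted(name for name, elev in ports.items()
                  if elev < sea_level_rise_ft)

# With the 50 feet of rise discussed above, every city on the list goes under;
# with only 20 feet, New Orleans and Tampa are already in trouble.
print(inundated(PORT_ELEVATIONS_FT, 50))
print(inundated(PORT_ELEVATIONS_FT, 20))
```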
The flooding of Sacramento isn’t the end of the world, and there may even be some among my readers who would consider it to be a good thing. What I’d like to point out, though, is the economic impact of the rising waters. Faced with an unpredictable but continuing rise in sea level, communities and societies face one of two extremely expensive choices. They can abandon billions of dollars of infrastructure to the sea and rebuild further inland, or they can invest billions of dollars in flood control. Because the rate of sea level change can’t be anticipated, furthermore, there’s no way to know in advance how far to relocate or how high to build the barriers at any given time, and there are often hard limits to how much change can be done in advance:  port cities, for example, can’t just move away from the sea and still maintain a functioning economy.
This is a pattern we’ll be seeing over and over again in this series of posts. Societies descending into dark ages reliably get caught on the horns of a brutal dilemma. For any of a galaxy of reasons, crucial elements of infrastructure no longer do the job they once did, but reworking or replacing them runs up against two critical difficulties that are hardwired into the process of decline itself. The first is that, as time passes, the resources needed to do the necessary work become increasingly scarce; the second is that, as time passes, the uncertainties about what needs to be done become increasingly large.
The result can be tracked in the decline of every civilization. At first, failing systems are replaced with some success, but the economic impact of the replacement process becomes an ever-increasing burden, and the new systems never do quite manage to work as well as the older ones did in their heyday. As the process continues, the costs keep mounting and the benefits become less reliable; more and more often, scarce resources end up being wasted or put to counterproductive uses because the situation is too uncertain to allow for their optimum allocation. With each passing year, decision makers have to figure out how much of the dwindling stock of resources can be put to productive uses and how much has to be set aside for crisis management, and the raw uncertainty of the times guarantees that these decisions will very often turn out wrong. Eventually, the declining curve in available resources and the rising curve of uncertainty intersect to produce a crisis that spins out of control, and what’s left of a community, an economic sector, or a whole civilization goes to pieces under the impact.
It’s not too hard to anticipate how that will play out in the century or so immediately ahead of us. If, as I’ve suggested, we can expect the onset of a global meltwater pulse from the breakup of the Greenland and West Antarctic ice sheets at some point in the years ahead, the first upward jolt in sea level will doubtless be met with grand plans for flood-control measures in some areas, and relocation of housing and economic activities in others. Some of those plans may even be carried out, though the raw economic impact of worldwide coastal flooding on a global economy already under severe strain from a chaotic climate and a variety of other factors won’t make that easy. Some coastal cities will hunker down behind hurriedly built or enlarged levees, others will abandon low-lying districts and try to rebuild further upslope, still others will simply founder and be partly or wholly abandoned—and all these choices impose costs on society as a whole.
Thereafter, in years and decades when sea level rises only slowly, the costs of maintaining flood control measures and replacing vulnerable infrastructure with new facilities on higher ground will become an unpopular burden, and the same logic that drives climate change denialism today will doubtless find plenty of hearers then as well. In years and decades when sea level surges upwards, the flood control measures and relocation projects will face increasingly severe tests, which some of them will inevitably fail. The twin spirals of rising costs and rising uncertainty will have their usual effect, shredding the ability of a failing society to cope with the challenges that beset it.
It’s even possible in one specific case to make an educated guess as to the nature of the pressures that will finally push the situation over the edge into collapse and abandonment. It so happens that three different processes that follow in the wake of rapid glacial melting all have the same disastrous consequence for the eastern shores of North America.
The first of these is isostatic rebound. When you pile billions of tons of ice on a piece of land, the land sinks, pressing down hundreds or thousands of feet into the Earth’s mantle; melt the ice, and the land rises again. If the melting happens over a brief time, geologically speaking, the rebound is generally fast enough to place severe stress on geological faults all through the region, and thus sharply increases the occurrence of earthquakes. The Greenland ice sheet is by no means exempt from this process, and many of the earthquakes in the area around a rising Greenland will inevitably happen offshore. The likely result? Tsunamis.
The second process is the destabilization of undersea sediments that build up around an ice sheet that ends in the ocean. As the ice goes away, torrents of meltwater pour into the surrounding seas and isostatic rebound changes the slope of the underlying rock; masses of sediment break free and plunge down the continental slope into the deep ocean. Some of the sediment slides that followed the end of the last ice age were of impressive scale—the Storegga Slide off the coast of Norway around 6220 BCE, which was caused by exactly this process, sent 840 cubic miles of sediment careening down the continental slope. The likely result? More tsunamis.
The third process, which is somewhat more speculative than the first two, is the sudden blowout of large volumes of undersea methane hydrates. Several oceanographers and paleoclimatologists have argued that the traces of very large underwater slides in the Atlantic, dating from the waning days of the last ice age, may well be the traces of such blowouts. As the climate warmed, they suggest, methane hydrates on the continental shelves were destabilized by rising temperatures, and a sudden shock—perhaps delivered by an earthquake, perhaps by something else—triggered the explosive release of thousands or millions of tons of methane all at once. The likely result? Still more tsunamis.
It’s crucial to realize the role that uncertainty plays here, as in so many dimensions of our predicament. No one knows whether tsunamis driven by glacial melting will hammer the shores of the northern Atlantic basin some time in the next week, or some time in the next millennium. Even if tsunamis driven by the collapse of the Greenland ice sheet become statistically inevitable, there’s no way for anyone to know in advance the timing, scale, and direction of any such event. Efficient allocation of resources to East Coast ports becomes a nightmarish challenge when you literally have no way of knowing how soon any given investment might suddenly end up on the bottom of the Atlantic.
If human beings behave as they usually do, what will most likely happen is that the port cities of the US East Coast will keep on trying to maintain business as usual until well after that stops making any kind of economic sense. The faster the seas rise and the sooner the first tsunamis show up, the sooner that response will tip over into its opposite, and people will begin to flee in large numbers from the coasts in search of safety for themselves and their families. My working guess is that the eastern seaboard of dark age America will be sparsely populated, with communities concentrated in those areas where land well above tsunami range lies close to the sea. The Pacific and Gulf coasts will be at much less risk from tsunamis, and so may be more thickly settled; that said, during periods of rapid marine transgression, the mostly flat and vulnerable Gulf Coast may lose a great deal of land, and those who live there will need to be ready to move inland in a hurry.
All these factors make for a shift in the economic and political geography of the continent that will be of quite some importance at a later point in this series of posts. In times of rapid sea level change, maintaining the infrastructure for maritime trade in seacoast ports is a losing struggle; maritime trade is still possible without port infrastructure, but it’s rarely economically viable; and that means that inland waterways with good navigable connections to the sea will take on an even greater importance than they have today. In North America, the most crucial of those are the St. Lawrence Seaway, the Hudson River-Erie Canal linkage to the Great Lakes, and whatever port further inland replaces New Orleans—Baton Rouge is a likely candidate, due to its location and elevation above sea level—once the current Mississippi delta drowns beneath the rising seas.
Even in dark ages, as I’ll demonstrate later on, maritime trade is a normal part of life, and that means that the waterways just listed will become the economic, political, and strategic keys to most of the North American continent. The implications of that geographical reality will be the focus of a number of posts as we proceed.

Dark Age America: Climate

Wed, 2014-07-30 17:24
Over the next year or so, as I’ve mentioned in recent posts, I plan on tracing out as much as possible of what can be known or reasonably guessed about the next five hundred years or so of North American history—the period of the decline and fall of the civilization that now occupies that continent, the dark age in which that familiar trajectory ends, and the first stirrings of the successor societies that will rise out of its ruins. That’s a challenging project, arguably more so than anything else I’ve attempted here, and it also involves some presuppositions that may be unfamiliar even to my regular readers.
To begin with, I’m approaching history—the history of the past as well as of the future—from a strictly ecological standpoint. I’d like to propose, in fact, that history might best be understood as the ecology of human communities, traced along the dimension of time. Like every other ecological process, in other words, it’s shaped partly by the pressures of the environment and partly by the way its own subsystems interact with one another, and with the subsystems of the other ecologies around it. That’s not a common view; most historical writing these days puts human beings at the center of the picture, with the natural world as a supposedly static background, while a minority view goes to the other extreme and fixates on natural catastrophes as the sole cause of this or that major historical change.
Neither of these approaches seems particularly useful to me. As our civilization has been trying its level best not to learn for the last couple of centuries, and thus will be learning the hard way in the years immediately ahead, the natural world is not a static background. It’s an active and constantly changing presence that responds in complex ways to human actions. Human societies, in turn, are equally active and equally changeable, and respond in complex ways to nature’s actions. The strange loops generated by a dance of action and interaction along these lines are difficult to track by the usual tools of linear thinking, but they’re the bread and butter of systems theory, and also of all those branches of ecology that treat the ecosystem rather than the individual organism as the basic unit.
The easiest way to show how this perspective works is to watch it in action, and it so happens that one of the most important factors that will shape the history of North America over the next five centuries is particularly amenable to a systems analysis. The factor I have in mind is climate.
Now of course that’s also a political hot potato just at the moment, due to the unwillingness of a great many people across the industrial world to deal with the hard fact that they can’t continue to enjoy their current lifestyles if they want a climatically and ecologically stable planet to live on. It doesn’t matter how often the planet sets new heat records, nor that the fabled Northwest Passage around the top end of Canada—which has been choked with ice since the beginning of recorded history—is open water every summer nowadays, and an increasingly important route for commercial shipping from Europe to the eastern shores of Asia; every time the planet’s increasingly chaotic weather spits out unseasonably cold days in a few places, you can count on hearing well-paid flacks and passionate amateurs alike insisting at the top of their lungs that this proves that anthropogenic climate change is nonsense.
To the extent that this reaction isn’t just propaganda, it shows a blindness to systems phenomena I’ve discussed here before: a learned inability to recognize that change in complex systems does not follow the sort of nice straight lines our current habits of thought prefer. A simple experiment can help show how complex systems respond in the real world, and in the process make it easier to make sense of the sort of climate phenomena we can count on seeing in the decades ahead.
The next time you fill a bathtub, once you’ve turned off the tap, wait until the water is still. Slip your hand into the water, slowly and gently, so that you make as little disturbance in the water as possible. Then move your hand through the water about as fast as a snail moves, and watch and feel how the water adapts to the movement, flowing gently around your hand.
Once you’ve gotten a clear sense of that, gradually increase the speed with which your hand is moving. After you pass a certain threshold of speed, the movements of the water will take the form of visible waves—a bow wave in front of your hand, a wake behind it in which water rises and falls rhythmically, and wave patterns extending out to the edges of the tub. The faster you move your hand, the larger the waves become, and the more visible the interference patterns as they collide with one another.
Keep on increasing the speed of your hand. You’ll pass a second threshold, and the rhythm of the waves will disintegrate into turbulence: the water will churn, splash, and spray around your hand, and chaotic surges of water will lurch up and down the sides of the tub. If you keep it up, you can get a fair fraction of the bathwater on your bathroom floor, but this isn’t required for the experiment! Once you’ve got a good sense of the difference between the turbulence above the second threshold and the oscillations below it, take your hand out of the water, and watch what happens: the turbulence subsides into wave patterns, the waves shrink, and finally—after some minutes—you have still water again.
This same sequence of responses can be traced in every complex system, governing its response to every kind of disturbance in its surroundings. So long as the change stays below a certain threshold of intensity and rapidity—a threshold that differs for every system and every kind of change—the system will respond smoothly, with the least adjustment that will maintain its own internal balance. Once that threshold is surpassed, oscillations of various kinds spread through the system, growing steadily more extreme as the disturbance becomes stronger, until it passes the second threshold and the system’s oscillations collapse into turbulence and chaos. When chaotic behavior begins to emerge in an oscillating system, in other words, that’s a sign that real trouble may be sitting on the doorstep.
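The same three regimes the bathtub experiment makes visible can be sketched numerically. Purely as an analogy, not a climate model, here is the logistic map, a classic toy system in which a single "forcing" knob r pushes the system from smooth adjustment through oscillation into chaos, in exactly the sequence described above; the specific r values and iteration counts are illustrative choices, not anything from the essay.

```python
# The logistic map x -> r*x*(1-x): one knob, three regimes.

def logistic_trajectory(r, x0=0.4, warmup=500, keep=8):
    """Iterate the map, discard transients, and return the values
    the system settles into (rounded for easy comparison)."""
    x = x0
    for _ in range(warmup):        # let transients die away
        x = r * x * (1 - x)
    out = []
    for _ in range(keep):          # record the settled behavior
        x = r * x * (1 - x)
        out.append(round(x, 3))
    return out

# Low forcing: the system absorbs the disturbance and sits at one value.
print(logistic_trajectory(2.8))
# More forcing: it oscillates, flipping between two values.
print(logistic_trajectory(3.2))
# Past the second threshold: turbulence, wandering without repeating.
print(logistic_trajectory(3.9))
```

Counting the distinct values in each trajectory shows the progression: one settled value, then two, then many.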
If global temperature were increasing in a nice even line, in other words, we wouldn’t have as much to worry about, because it would be clear from that fact that the resilience of the planet’s climate system was well able to handle the changes that were in process. Once things begin to oscillate, veering outside usual conditions in both directions, that’s a sign that the limits to resilience are coming into sight, with the possibility of chaotic variability in the planetary climate as a whole waiting not far beyond that. We can fine-tune the warning signals a good deal by remembering that every system is made up of subsystems, and those of sub-subsystems, and as a general rule of thumb, the smaller the system, the more readily it moves from local adjustment to oscillation to turbulence in response to rising levels of disturbance.
Local climate is sensitive enough, in fact, that ordinary seasonal changes can yield minor turbulence, which is why the weather is so hard to predict; regional climates are more stable, and normally cycle through an assortment of wavelike oscillations; the cycle of the seasons is one, but there are also multiyear and multidecade cycles of climate that can be tracked on a regional basis. It’s when those regional patterns start showing chaotic behavior—when, let’s say, the usually sizzling Texas summer is suddenly broken by a record cold snap in the middle of July, in a summer that’s shaping up globally to be among the hottest ever measured—that you know the whole system is coming under strain.
Ahem.
I’m not generally a fan of Thomas Friedman, but he scored a direct hit when he warned that what we have to worry about from anthropogenic climate change is not global warming but “global weirding”: in the terms I’ve used in this post, the emergence of chaotic shifts out of a global climate that’s been hit with too much disturbance too fast. A linear change in global temperatures would be harsh, but it would be possible to some extent to shift crop belts smoothly north in the northern hemisphere and south in the southern. If the crop belts disintegrate—if you don’t know whether the next season is going to be warm or cold, wet or dry, short or long—famines become hard to avoid, and cascading impacts on an already strained global economy add to the fun and games. At this point, for the reasons just shown, that’s the most likely shape of the century or two ahead of us.
In theory, some of that could be avoided if the world’s nations were to stop treating the skies as an aerial sewer in which to dump greenhouse gases. In practice—well, I’ve met far too many climate change activists who still insist that they have to have SUVs to take their kids to soccer practice, and I recall the embarrassed silence that spread a while back when an important British climate scientist pointed out that maybe jetting all over the place to climate conferences was communicating the wrong message at a time when climate scientists and everyone else needed to decrease their carbon footprint. Until the people who claim to be concerned about climate change start showing a willingness to burn much less carbon, it’s unlikely that anyone else will do so, and so I think it’s a pretty safe bet that fossil fuels will continue to be extracted and burnt as long as geological and economic realities permit.
The one bleak consolation here is that those realities are a good deal less flexible than worst-case scenarios generally assume. There are two factors in particular to track here, and both unfold from net energy—the difference between the energy content of fossil fuels as they reach the end consumer and the energy input needed to get them all the way there. The first factor is simply that if a deposit of fossil carbon takes more energy to extract, process, and transport to the end user than the end user can get by burning it, the fossil carbon will stay in the ground. The poster child here is kerogen shale, which has been the bane of four decades of enthusiastic energy projects in the American West and elsewhere. There’s an immense amount of energy locked up in the Green River shale and its equivalents, but every attempt to break into that cookie jar has come to grief on the hard fact that, all things considered, it takes more energy to extract kerogen from shale than you get from burning the kerogen.
The second factor is subtler and considerably more damaging. As fossil fuel deposits with abundant net energy are exhausted, and have to be replaced by deposits with lower net energy, a larger and larger fraction of the total energy supply available to an industrial society has to be diverted from all other economic uses to the process of keeping the energy flowing.  Thus it’s not enough to point to high total energy production and insist that all’s well; the logic of net energy has to be applied here as well, and the total energy input to energy production, processing, and distribution subtracted from total energy production, to get a realistic sense of how much energy is available to power the rest of the economy—and the rest of the economy, remember, is what produces the wealth that makes it possible for individuals, communities, and nations to afford fossil fuels in the first place.
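The net energy bookkeeping just described amounts to simple subtraction, and can be sketched in a few lines of code. All the figures below are hypothetical illustrations chosen to make the logic visible, not actual field data.

```python
# Illustrative sketch of net energy accounting (all numbers hypothetical).
# Net energy = energy delivered to the end user minus the energy spent
# extracting, processing, and distributing it.

def net_energy(gross_output, energy_input):
    """Energy left over to power the rest of the economy."""
    return gross_output - energy_input

# Two hypothetical deposits with identical gross output:
legacy_field = net_energy(gross_output=100.0, energy_input=5.0)     # abundant net energy
marginal_field = net_energy(gross_output=100.0, energy_input=60.0)  # most output consumed by production

print(legacy_field)    # 95.0 units left for the wider economy
print(marginal_field)  # 40.0 units left for the wider economy

# A deposit that takes more energy to produce than it yields stays in the ground:
kerogen_shale = net_energy(gross_output=100.0, energy_input=120.0)
print(kerogen_shale)   # -20.0: an energy sink, not an energy source
```

The point of the sketch is simply that total production figures say nothing by themselves; it’s the remainder after the energy costs of production are subtracted that the rest of the economy gets to use.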
 Long before the last physically extractable deposit of fossil fuel is exhausted, in other words, fossil fuel extraction will have to stop because it’s become an energy sink rather than an energy source. Well before that point is reached, furthermore, the ability of global and national economies to meet the energy costs of fossil fuel extraction will slam face first into hard limits. Demand destruction, which is what economists call the process by which people who can’t afford to buy a product stop using it, is as important here as raw physical depletion; as economies reel under the twin burdens of depleting reserves and rising energy costs for energy production, carbon footprints will shrink willy-nilly as rapid downward mobility becomes the order of the day for most people.
Combine these factors with the economic impacts of "global weirding" itself and you’ve got a good first approximation of the forces that are already massing to terminate the fossil fuel economy with extreme prejudice in the decades ahead. How those forces are likely to shape the future we’re facing will be discussed at length in several future posts. For the time being, I’ll just note that I expect global fossil fuel consumption and CO2 emissions to peak within a decade or so to either side of 2030, and then tip over into a ragged and accelerating decline, punctuated by economic and natural disasters, that will reach the zero point of the scale well before 2100.
What that means for the future climate of North America is difficult to predict in detail but not so hard to trace in outline. From now until the end of the 21st century, perhaps longer, we can expect climate chaos, accelerating in its geographical spread and collective impact until a couple of decades after CO2 emissions peak, due to the lag time between when greenhouse gases hit the atmosphere and when their effects finally peak. As the rate of emissions slows thereafter, the turbulence will gradually abate, and some time after that—exactly when is anybody’s guess, but 2300 or so is as good a guess as any—the global climate will have settled down into a "new normal" that won’t be normal by our standards at all. Barring further curveballs from humanity or nature, that "new normal" will remain until enough excess CO2 has been absorbed by natural cycles to matter—a process that will take several millennia at least, and therefore falls outside the range of the five centuries or so I want to consider here.
An educated guess at the shape of the "new normal" is possible, because for the last few million years or so, the paleoclimatology of North America has shown a fairly reliable pattern. The colder North America has been, by and large, the heavier the rainfall in the western half of the continent. During the last Ice Age, for example, rainfall in what’s now the desert Southwest was so heavy that it produced a chain of huge pluvial (that is, rain-fed) lakes and supported relatively abundant grassland and forest ecosystems across much of what’s now sagebrush and cactus country.  Some measure of the difference can be caught from the fact that 18,000 years ago, when the last Ice Age was at its height, Death Valley was a sparkling lake surrounded by pine forests. By contrast, the warmer North America becomes, the drier the western half of the continent gets, and the drying effect spreads east a very long way.
After the end of the last Ice Age, for example, the world entered what nowadays gets called the Holocene Climatic Optimum; that term’s a misnomer, at least for this continent, because conditions over a good bit of North America then were optimum only for sand fleas and Gila monsters. There’s been a running debate for several decades about whether the Hypsithermal, to use the so-called Optimum’s other name, was warmer than today all over the planet or just in some regions.  Current opinion tends to favor the latter, but the difference doesn’t actually have that much impact on the issue we’re considering:  the evidence from a broad range of sources shows that North America was significantly warmer in the Hypsithermal than it is today, and so that period makes a fairly good first approximation of the conditions this continent is likely to face in a warmer world.
To make sense of the long-term change to North American climates, it’s important to remember that rainfall is far more important than temperature as a determining factor for local ecosystems. If a given region gets more than about 40 inches of rain a year, no matter what the temperature, it’ll normally support some kind of forest; if it gets between 40 and 10 inches a year, you’ve got grassland or, in polar regions, mosses and lichens; if you get less than 10 inches a year, you’ve got desert, whether it’s as hot as the Sahara or as bitterly cold as the Takla Makan. In the Hypsithermal, as the west dried out, tallgrass prairie extended straight across the Midwest to western Pennsylvania, and much of the Great Plains was desert, complete with sand dunes.
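The rainfall rule of thumb just given can be written out as a crude classification. This is only a sketch of the rough thresholds named above, not a formal ecological model; real biome boundaries depend on evaporation, seasonality, and soil as well.

```python
# Crude biome classification by annual rainfall, per the rule of thumb above.
# Thresholds: >40 inches -> forest; 10-40 inches -> grassland; <10 inches -> desert.

def biome(annual_rainfall_inches):
    if annual_rainfall_inches > 40:
        return "forest"
    elif annual_rainfall_inches >= 10:
        return "grassland"  # mosses and lichens instead, in polar regions
    else:
        return "desert"

print(biome(50))  # forest
print(biome(25))  # grassland
print(biome(6))   # desert
```

By this rough measure, a region that dries out from 45 to 8 inches a year passes from forest through grassland to desert, which is exactly the Hypsithermal sequence described above for the American West.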
In a world with ample fossil fuel supplies, it’s been possible to ignore such concerns, to the extent of pumping billions of gallons of water a year from aquifers or distant catchment basins to grow crops in deserts and the driest of grasslands, but as fossil fuel supplies sunset out, the shape of human settlement will once again be a function of annual rainfall, as it was everywhere on the planet before 1900 or so. If the Hypsithermal’s a valid model, as seems most likely, most of North America from the Sierra Nevada and Cascade ranges east across the Great Basin and Rocky Mountains to the Great Plains will be desert, as inhospitable as any on Earth, and human settlement will be accordingly sparse: scattered towns in those few places where geology allows a permanent water supply, separated by vast desolate regions inhabited by few hardy nomads or by no one at all.
East of the Great Desert, grassland will extend for a thousand miles or more, east to the Allegheny foothills, north to a thinner and drier boreal forest belt shifted several hundred miles closer to the Arctic Ocean, and south to the tropical jungles of the Gulf coast. Further south, in what’s now Mexico, the tropical rain belt will move northwards with shifts in the global atmospheric circulation, and the Gulf coast east of the Sierra Madre Oriental will shift to tropical ecosystems all the way north to, and beyond, the current international border. Between the greatly expanded tropical zone in the south and east and the hyperarid deserts of the north, Mexico will be a land of sharp ecological contrasts.
Factor in sea level rise, on the one hand, and the long-term impacts of soil depletion and of toxic and radioactive wastes on the other—issues complicated enough in their causes, trajectory, and results that they’re going to require separate posts—and you’ve got a fairly limited set of regions in which agriculture will be possible in a post-fossil fuel environment: basically, the eastern seaboard from the new coast west to the Alleghenies and the Great Lakes, and river valleys in the eastern half of the Mississippi basin. The midwestern grasslands will support pastoral grazing, and the jungle belts around the new Gulf coast and across southern Mexico will be suitable for tropical crops once the soil has a chance to recover, but the overall human carrying capacity of the continent will be significantly smaller than it was before the industrial age began.
Climate isn’t the only force pushing in that direction, either. We’ll get to the others in the weeks ahead as we continue exploring the deindustrial landscapes of dark age America.

The Gray Light of Morning

Wed, 2014-07-23 16:18
I try to wear my archdruid’s hat lightly in these essays, but every so often I field questions that touch directly on the issues of ultimate meaning that our culture, however clumsily, classifies as “religious.” Two comments in response to the post here two weeks ago raised such issues, in a way that’s relevant enough to this series of posts and important enough to the broader project of this blog to demand a response.
One of them—tip of the aforementioned archdruid’s hat to Repent—asked, “As a Druid, what are your thoughts about divine purpose, reincarnation, and our purpose in the eyes of God? What do you think future ‘ecotechnic’ societies have yet to achieve that will be worthwhile to pursue, that our descendants should suffer through the dark age towards?” The other—tip of the hat to Yupped—asked, “What do you do if you see the big picture of what’s happening around you? How did those early adopters of decline in other collapsing societies maintain their sanity when they knew what was coming? I don’t think I have the mind or the temperament to tell myself stories about the transcendent meaning of suffering in an age of social collapse.”
Those are serious questions, and questions like them are being raised more and more often these days, on this blog and in a great many other places as well. People are beginning to come to grips with the fact that they can no longer count on faith in limitless technological progress to give them an easy answer to the enduring questions of human existence.  As they do that, they’re also having to confront those questions all over again, and finding out in the process that the solution that modern industrial civilization claimed to offer for those same questions was never actually a solution at all.
Psychologists have a concept they call “provisional living.” That’s the insistence, so often heard from people whose lives are stuck on a dysfunctional merry-go-round of self-inflicted crisis, that everything they don’t like about their lives will change just as soon as something else happens: as soon as they lose twenty pounds, get a divorce, quit their lousy job, or what have you. Of course the weight never goes away, the divorce papers never get filed, and so on, because the point of the exercise is to allow daydreams of an imaginary life in which they get everything they think they want to take the place of the hard work and hard choices inseparable from personal change in the real world. What provisional living offers the individual neurotic, in turn, faith in the inevitability and beneficence of progress offers industrial society as a whole—or, more precisely, faith in progress used to offer that, back when the promises made in its name didn’t yet look quite so threadbare as they do today.
There was always a massive political subtext in those promises.  The poor were encouraged to believe that technological progress would someday generate so much wealth that their children and grandchildren would be rich; the sick and dying, to dream about a future where medical progress would make every disease curable; the oppressed, to hope for a day when social progress would grant everyone the fair treatment they can’t reliably get here and now, and so on. Meanwhile, and crucially, members of the privileged classes who became uncomfortable with the mismatch between industrial civilization’s glittering rhetoric and its tawdry reality were encouraged to see that mismatch as a passing phase that would be swept away by progress at some undefined point in the future, and thus to limit their efforts to change the system to the sort of well-meaning gestures that don’t seriously inconvenience the status quo.
As real as the political subtext was, it’s a mistake to see the myth of progress purely as a matter of propaganda. During the heyday of industrialism, that myth was devoutly believed by a great many people, at all points along the social spectrum, many of whom saw it as the best chance they had for positive change. Faith in progress was a social fact of vast importance, one that shaped the lives of individuals, communities, and nations. The hope of upward mobility that inspired the poor to tolerate the often grueling conditions of their lives, the dream of better living through technology that kept the middle classes laboring at the treadmill, the visions of human destiny that channeled creative minds into the service of  existing institutions—these were real and powerful forces in their day, and drew on high hopes and noble ideals as well as less exalted motives.
The problem that we face now is precisely that those hopes and dreams and visions have passed their pull date. With each passing year, more people have noticed the widening gap between the future we were supposed to get and the one that’s actually been delivered to our doorstep; with each passing year, the voices raised in defense of the old rhetoric of perpetual progress get more defensive, and the once-sparkling imagery they offer for our contemplation looks more and more shopworn. One by one, we are waking up in a cold and unfamiliar place, and the gray light of morning does not bring us good news.
It would be hard enough to face the difficult future ahead of us if we came to the present moment out of an era of sober realism and close attention to the hard facts of the human condition. It’s far harder to find ourselves where we are when that forces us to own up to the hard fact that we’ve been lying to ourselves for three hundred years. Disillusionment is a bitter pill at the best of times.  When the illusion that’s just been shattered has been telling us that the future is obliged to conform to our fondest fantasies, whatever those happen to be, it’s no wonder that it’s as unwelcome as it is.
Bitter though the pill may be, though, it’s got to be choked down, and like the bitter medicines of an earlier day, it has a tonic effect. Come to terms with the fact that faith in progress was always destined to be disappointed, that the law of diminishing returns and the hard limits of thermodynamics made the dream of endless guaranteed betterment a delusion—an appealing delusion, but a delusion all the same—and after the shock wears off, you’ll find yourself standing on common ground shared with the rest of your species, asking questions that they asked and answered in their time.
Most of the people who have ever lived, it bears remembering, had no expectation that the future would be any better than the world that they saw around them. The majority of them assumed as a matter of course that the future would be much like the present, while quite a few of them believed instead that it would be worse.  Down through the generations, they faced the normal human condition of poverty, sickness, toil, grief, injustice, and the inevitability of their own deaths, and still found life sufficiently worth living to meet the challenges of making a living, raising families, and facing each day as it came.
That’s normal for our species.  Buying into a fantasy that insists that the universe is under an obligation to fulfill your daydreams is not. Get past that fantasy, and past the shock of disillusionment that follows its departure, and it’s not actually that difficult to make sense of a world that doesn’t progress and shows no interest in remaking itself to fit an overdeveloped sense of human entitlement. The downside is that you have to give up any attempt to smuggle the same fantasy back into your mind under some other name or form, and when some such belief system has been central to the worldview of your culture for the last three centuries or so, it’s always tempting to find some way to retrieve the fantasy. Still, falling in with that temptation  just lands you back where you were, waiting for a future the universe is serenely unwilling to provide.
It’s probably worth noting that you also have to give up the equal and opposite fantasy that claims that the universe is under an obligation to fulfill a different set of daydreams, the kind that involves the annihilation of everything you don’t like in the universe, whether or not that includes yourself. That’s simply another way of playing the game of provisional living: “I don’t have to do anything because X is supposed to happen (and it won’t)” amounts in practice to the same thing as “I won’t do anything until X happens (and it won’t)”—that is to say, it’s just one more comfortable evasion of responsibility.
There are more constructive ways to deal with the decidedly mixed bag that human existence hands us. If I may risk a significant oversimplification, there are broadly speaking three ways that work. It so happens that the ancient Greeks, who grappled just as incisively with these issues as they did with so much else, evolved three schools of philosophy, each of which took one of these three ways as its central theme. They weren’t the only ones to do that in a thoughtful fashion; those of my readers who know their way around the history of ideas will be able to name any number of examples from other societies and other ages.  I propose to use Greek examples here simply because they’re the schools with which I’m most familiar. As Charles Fort said, one traces a circle beginning anywhere.
The first of the three approaches I have in mind starts with the realization that for most of us, all things considered, being alive beats the stuffing out of the alternative. While life contains plenty of sources of misery, it also contains no shortage of delights, even when today’s absurdly complex technostructure isn’t there to provide them; furthermore, the mind that pays close attention to its own experiences will soon notice that a fairly large percentage of its miseries are self-inflicted, born of pointless worrying about future troubles or vain brooding over past regrets. Unlearn those habits, stop insisting that life is horrible because it isn’t perfect, and it’s generally not too hard to learn to enjoy the very real pleasures that life has to offer and to tolerate its less pleasant features with reasonable grace.
That’s the approach taught by Epicurus, the founder of the Epicurean school of philosophy in ancient Greece. It’s also the foundation of what William James called the healthy-minded way of thinking, the sort of calm realism you so often see in people who’ve been through hard times and come out the other side in one piece. Just now, it’s a very difficult philosophy for many people in the world’s industrial nations to take up, precisely because most of us haven’t been through hard times; we’ve been through an age of extravagance and excess, and like most people in that position, we’re finding the letdown at the party’s end far more difficult to deal with than any actual suffering we might be facing. Get past that common reaction, and the Epicurean way has much to offer.
If it has a weakness, it’s that attending to the good things in life can be very hard work when those good things are in short supply. That’s when the second approach comes into its own. It starts from the realization that whether life is good or not, here we are, and we each have to choose how we’re going to respond to that stark fact. The same unlearning that teaches the Epicurean to avoid self-inflicted misery is a first step, a clearing of the decks that makes room for the decisions that matter, but once this is taken care of, the next step is to face up to the fact that there are plenty of things in the world that could and should be changed, if only someone were willing to get up off the sofa and make the effort required. The second approach thus becomes a philosophy of action, and when action requires risking one’s life—and in really hard times, it very often does—those who embrace the second approach very often find themselves saying, “Well, what of it? I’m going to die sooner or later anyway.”
That’s the approach taught by Zeno, the founder of the Stoic school of philosophy in ancient Greece. It’s among the most common ways of thought in dark ages, sometimes worked out as a philosophy, sometimes expressed in pure action: the ethos of the Spartans and the samurai. That way of thinking about life is taken to its logical extreme in the literature of the pagan Teutonic peoples: you will die, says the Elder Edda, the world will die, even the gods will die, and none of that matters. All that matters is doing the right thing, because it’s the right thing, and because you’ve learned to embrace the certainty of your death and so don’t have to worry about anything but doing the right thing. 
Now of course the same choice can express itself in less stark forms. Every one of my readers who’s had the experience of doing something inconvenient or unpleasant just because it’s the right thing to do has some sense of how that works, and why.  In a civilization on the downward arc, there are many inconvenient or unpleasant things that very badly need to be done, and choosing one of them and doing it is a remarkably effective response to the feelings of meaninglessness and helplessness that afflict so many people just now.  Those who argue that you don’t know whether or not your actions will have any results in the long run are missing the point, because from the perspective I’ve just sketched out, the consequences don’t matter either.  Fiat iustitia, ruat caelum, as the Roman Stoics liked to say:  let justice be done, even if it brings the sky crashing down. 
So those, broadly speaking, are the first two ways that people have dealt constructively with the human condition: in simplest terms, either learn to live with what life brings you, or decide to do something about it. The first choice may seem a little simplistic and the second one may seem a little stark, but both work—that is, both are psychologically healthy responses that often yield good results, which is more than can be said for habits of thought that require the universe to either cater to our fantasies of entitlement or destroy itself to satisfy our pique. Both also mesh fairly well with the habitual material-mindedness of contemporary culture, the assumption that the only things that really matter are those you can hit with a stick, which is common to most civilizations toward the end of their history.
The third option I have in mind also works, but it doesn’t mesh at all with the assumption just noted. Current confusions about the alternatives to that assumption run deep enough that some care will be needed in explaining just what I mean.
The third option starts with the sense that the world as we normally perceive it is not quite real—not illusory, strictly speaking, but derivative. It depends on something else, something that stands outside the world of our ordinary experience and differs from that world not just in detail but in kind.  Since this “something else” is apart from the things we normally use language to describe, it’s remarkably difficult to define or describe in any straightforward way, though something of its nature can be shared with other people through the more roundabout means of metaphor and symbol. Elusive as it is, it can’t simply be ignored, because it shapes the world of our ordinary experience, not according to some human agenda but according to a pattern of its own.
I’d encourage my readers to notice with some care what’s not being said here. The reality that stands behind the world of our ordinary experience is not subject to human manipulation; it isn’t answerable to our fantasies or to our fears.  The viewpoint I’m suggesting is just about as far as you can get from the fashionable notion that human beings create their own reality—which, by the way, is just one more way our overdeveloped sense of entitlement shapes our habits of thinking.  As objects of our own and others’ perceptions, we belong to the world of the not quite real. Under certain circumstances, though, human beings can move into modes of nonordinary perception in which the presence of the underlying reality stops being a theory and becomes an experience, and when this happens a great many of the puzzles and perplexities of human existence suddenly start making sense.
There’s a certain irony in the fact that in ancient Greek culture, the philosophical movement that came to embody this approach to the world took its name from a man named Aristocles, whose very broad shoulders gave him the nickname Plato. That’s ironic because Plato was a transitional figure; behind him stood a long line of Orphic and Pythagorean mystics, whose insights he tried to put into rational form, not always successfully; after him came an even longer line of thinkers, the Neoplatonists, who completed the job he started and worked out a coherent philosophy that relates the world of reality to the world of appearance through the lens of human consciousness.
The Platonist answer isn’t limited to Platonism, of course, any more than the Stoic or Epicurean answer is found only in those two Greek philosophical schools. Implicitly or explicitly, it’s present in most religious traditions that grapple with philosophical issues and manage not to fall prey to the easy answers of apocalyptic fantasy. In the language of mainstream Western religion, we can say that there’s a divine reality, and then there’s a created world and created beings—for example, the author and readers of this blog—which depend for their existence on the divine reality, however this is described. Still, that’s far from the only language in which this way of thinking about the world can be framed.
The Epicurean and Stoic approaches to facing an imperfect and challenging world, as already discussed, take that world as it is, and propose ways to deal with it. That’s a wholly reasonable approach from within the sort of worldview that those traditions generally embrace. The Platonic approach, by contrast, proposes that the imperfect and challenging world we encounter is only part of the picture, and that certain disciplines of consciousness allow us to take the rest of the picture into account, not as a policy of blind trust, but as an object of personal experience.  As already suggested, it’s difficult to communicate in ordinary language just what that experience has to say about the reality behind such phrases as “divine purpose,” which is why those who pursue such experiences tend to focus on teaching other people how to do it, and let them make their own discoveries as they do the work.
Knowing the rest of the picture, for that matter, doesn’t make the imperfections and challenges go away.  There are many situations in which either an Epicurean or a Stoic tactic is the best bet even from within a Platonic view of the cosmos—it’s a matter of historical fact that much of the best of the Epicurean and Stoic traditions were absorbed into the classical Neoplatonic synthesis for exactly this reason. The difference is simply that to glimpse something of the whole picture, and to pursue those disciplines that bring such glimpses within reach, provide a perspective that makes sense of the texture of everyday experience as it is, without expecting it to act out human fears and fantasies. That approach isn’t for everyone, but it’s an option, and it’s the one that I tend to trust.
And with that, I’ll set aside my archdruid’s hat again and return to the ordinary business of chronicling the decline and fall of industrial civilization.

Smile For The Aliens

Wed, 2014-07-16 16:44
Last week’s post, with its uncompromising portrayal of what descent into a dark age looks like, fielded the usual quota of voices insisting that it’s different this time. It’s a familiar chorus, and I confess to a certain wry amusement in watching so many changes get rung on what, after all, is ultimately a non sequitur. Grant that it’s different this time: so?  It’s different every time, and it always has been, yet those differences have never stopped history’s remarkably diverse stable of civilizations from plodding down the self-same track toward their common destiny.
It may also have occurred to my readers, and it has certainly occurred to me, that the legions of bloggers and pundits who base their reasonings on the claim that history has nothing to teach us don’t have to face a constant barrage of comments insisting that it’s the same this time. “It’s different this time” isn’t simply one opinion among others, after all; it’s one of the basic articles of faith of the contemporary industrial world, and questioning it reliably elicits screams of outrage even from those who like to tell themselves that they’ve rejected the conventional wisdom of the present day.
Yet that raises another question, one that’s going to bear down with increasing force in the years ahead of us: just how will people cope when some of their most cherished beliefs have to face a cage match with reality, and come out second best?
Such issues are rather on my mind just at the moment. Regular readers may recall that a while back I published a book, The UFO Phenomenon, which managed the not inconsiderable feat of offending both sides of the UFO controversy. It did so by the simple expedient of setting aside the folk mythology that’s been heaped up with equal enthusiasm by true believers in extraterrestrial visitation and true believers in today’s fashionable pseudoskeptical debunkery. After getting past that and a few other sources of confusion, I concluded that the most likely explanation for the phenomenon was that US military and intelligence agencies invented it out of whole cloth after the Second World War, as protective camouflage for an assortment of then-secret aerospace technologies.
That wasn’t the conclusion I expected to reach when I began work on the project; I had several other hypotheses in mind, all of which had to be considerably modified as the research proceeded. It was just too hard not to notice the way that the typical UFO sightings reported in any given decade so closely mimicked whatever the US was testing in secret at any given time—silvery dots or spheres in the late 1940s, when high-altitude balloons were the latest thing in aerial reconnaissance; points or tiny blobs of light high in the air in the 1950s, when the U-2 was still top secret; a phantasmagoria of flying lights and things dropping from the sky in the 1960s, when the SR-71 and the first spy satellites entered service; black triangles in the 1980s, when the first stealth aircraft were being tested, and so on. An assortment of further evidence pointing the same way, not to mention the significant parallels between the UFO phenomenon and those inflatable tanks and nonexistent battalions that tricked the Germans into missing the real preparations for D-Day, were further icing on a saucer-shaped cake.
To call that an unpopular suggestion is to understate the case considerably, though I’m pleased to say it didn’t greatly hurt sales of the book.  In the years since The UFO Phenomenon saw print, though, there’s been a steady stream of declassified documents from US intelligence agencies admitting that, yes, a lot of so-called UFOs were perfectly identifiable if you happened to know what classified projects the US government had in the air just then. It turns out, for example, that roughly half the UFO sightings reported to the Air Force’s Project Blue Book between 1952 and 1969 were CIA spyplanes; the officers in charge of Blue Book used to call the CIA when sightings came in, and issue bogus “explanations” to provide cover for what was, at the time, a top secret intelligence project. I have no reason to think that the publication of The UFO Phenomenon had anything to do with the release of all this data, but it was certainly a welcome confirmation of my analysis.
The most recent bit of confirmation hit the media a few weeks back. Connoisseurs of UFO history know that the Scandinavian countries went through a series of major “flaps”—periods in which many UFO sightings occurred in a short time—in the 1950s and 1960s. The latest round of declassified data confirmed that these were sightings of US spyplanes snooping on the Soviet Union. The disclosures didn’t happen to mention whether CIA assets also spread lurid accounts of flying saucer sightings and alien visitations to help muddy the waters. My hypothesis is that that’s what was going on all the way through the history of the UFO phenomenon: fake stories and, where necessary, faked sightings kept public attention fixated on a manufactured mythology of flying saucers from outer space, so that the signal of what was actually happening never made it through the noise.
Many of my readers will already have guessed how the two sides of the UFO controversy responded to the disclosures just mentioned:  by and large, they haven’t responded to them at all. Believers in the extraterrestrial origin of UFOs are still insisting at the top of their lungs that some day very soon, the US government will be forced to ‘fess up to the reality of alien visitation—yes, I field emails from such people regularly. Believers in the null hypothesis, the claim that all UFO sightings result from hoaxes, illusions, or misidentification of ordinary phenomena, are still rehashing the same old arguments when they haven’t gone off to play at being skeptical about something else. That’s understandable, as both sides have ended up with substantial amounts of egg on their face.
Mind you, the believers in the extraterrestrial hypothesis were right about a great many more things than their rivals, and they deserve credit for that. They were right, for example, that people really were seeing unusual things in the skies; they were right that there was a coverup orchestrated by the US government, and that the Air Force was handing out explanations that it knew to be fake; they were even right in guessing that the Groom Lake airfield in Nevada, the legendary “Area 51,” was somehow central to the mystery—that was the main US spyplane testing and training base straight through the decades when the UFO mystery was at its peak. The one thing they got wrong was the real origin of the UFO phenomenon, but for them, unfortunately, that was the one thing that mattered.
The believers in the null hypothesis don’t have much reason to cheer, even though they turned out to be right about that one point. The disclosures have shown with uncomfortable clarity that a good many of the explanations offered by UFO skeptics were actually nonsense, just as their opponents had been pointing out all along. In 1981, for example, Philip Klass, James Oberg, and Robert Sheaffer claimed that they’d identified all the cases  that Project Blue Book labeled as “unknown.” As it happens, they did nothing of the kind; what they actually did was offer untested ad hoc hypotheses to explain away the unknowns, which is not exactly the same thing. It hardly needs to be said that CIA spyplanes played no part in those explanations, and if the “unknown” cases contained the same proportion of spyplanes as the whole collection, as seems likely, roughly half their explanations are wrong—a point that doesn’t exactly do much to inspire confidence in other claims made on behalf of the debunking crusade.
So it’s not surprising that neither side in the controversy has had the least interest in letting all this new data get in the way of keeping up the old argument. The usual human reaction to cognitive dissonance is to exclude the information that’s causing the dissonance, and that’s precisely what both sides, by and large, have done. As the dissonance builds, to be sure, people on the fringes of both scenes will quietly take their leave, new recruits will become few and far between, and eventually surviving communities of believers and debunkers alike will settle into a common pattern familiar to any of my readers acquainted with Spiritualist churches, Marxist parties, or the flotsam left behind by the receding tide of other once-influential movements in American society: little circles of true believers fixated on the disputes of an earlier day, hermetically sealed against the disdain and disinterest of the wider society.
They have the freedom to do that, because the presence or absence of alien saucers in Earth’s skies simply doesn’t have that much of an impact on everyday life. Like Spiritualists or Marxists, believers in alien contact and their debunking foes by and large can avoid paying more than the most cursory attention to the failure of their respective crusades. The believers can take comfort in the fact that even in the presence of overwhelming evidence, it’s notoriously hard to prove a negative; the debunkers can take comfort in the fact that, however embarrassing their logical lapses and rhetorical excesses, at least they were right about the origins of the phenomenon.
That freedom isn’t always available to those on the losing side of history. It’s not that hard to keep the faith if you aren’t having your nose rubbed in the reality of your defeat on a daily basis, but it’s quite another matter to cope with the ongoing, overwhelming disconfirmation of beliefs on which you’ve staked your pride, your values, and your sense of meaning and purpose in life. What would life be like these days for the vocal UFO debunkers of recent decades, say, if the flying saucers had turned out to be alien spacecraft after all, the mass saucer landing on the White House lawn so often and so vainly predicted had finally gotten around to happening, and Philip Klass and his fellow believers in the null hypothesis had to field polite requests on a daily basis to have their four-dimensional holopictures taken by giggling, gray-skinned tourists from Zeta Reticuli?
For a living example of the same process at work, consider the implosion of the New Age scene that’s well under way just now. In the years before the 2008 crash, as my readers will doubtless remember, tens of thousands of people plunged into real estate speculation with copies of Rhonda Byrne’s meretricious The Secret or similar works of New Age pseudophilosophy clutched in their sweaty hands, convinced that they knew how to make the universe make them rich. I knew a fair number of them—Ashland, Oregon, where I lived at the time, had a large and lucrative New Age scene—and so I had a ringside seat as their pride went before the real estate market’s fall. That was a huge blow to the New Age movement, and it was followed in short order by the self-inflicted humiliation of the grand nonevent of December 21, 2012.
Those of my readers who don’t happen to follow trends in the publishing industry may be interested to know that sales of New Age books peaked in 2007 and have been plunging since then; so has the take from New Age seminars, conferences, and a galaxy of other products hawked under the same label. There hadn’t been any shortage of disconfirmations in the previous history of the New Age scene, to be sure, but these two seem to have been just that little bit more than most of the movement’s adherents can gloss over. No doubt the New Age movement will spawn its share of little circles of true believers—the New Thought movement, which was basically the New Age’s previous incarnation, did exactly that when it imploded at the end of the 1920s, and many of those little circles ended up contributing to the rise of the New Age decades later—but as a major cultural phenomenon, it’s circling the drain.
One of the central themes of this blog, in turn, is that an embarrassment on much this same scale waits for all those who’ve staked their pride, their values, and their sense of meaning and purpose in life on the belief that it’s different this time, that our society somehow got an exemption from the common fate of civilizations. If industrial society ends up following the familiar arc of decline and fall into yet another dark age, if all the proud talk about man’s glorious destiny among the stars turns out to be empty wind, if we don’t even get the consolation prize of a downfall cataclysmic enough to drag the rest of the planet down with us—what then?
I’ve come to think that’s what lies behind the steady drumbeat of emails and comments I field week after week insisting that it’s different this time, that it has to be different this time, and clutching at the most remarkable assortment of straws in an attempt to get me to agree with them that it’s different this time. That increasingly frantic chorus has many sources, but much of it is, I believe, a response to a simple fact:  most of the promises made by authoritative voices in contemporary industrial society about the future we’re supposed to get have turned out to be dead wrong.
Given the number of people who like to insist that every technological wet dream will eventually be fulfilled, it’s worth taking the time to notice just how poorly earlier rounds of promises have measured up to the inflexible yardstick of reality. Of all the gaudy and glittering technological breakthroughs that have been promised with so much confidence over the last half dozen decades or so, from cities on the Moon and nuclear power too cheap to meter straight through to 120-year lifespans and cures for cancer and the common cold, how many have actually panned out? Precious few. Meanwhile most measures of American public health are slipping further into Third World territory with every year that passes, our national infrastructure is sinking into a morass of malign neglect, and the rising curve of prosperity that was supposed to give every American access to middle class amenities has vanished in a haze of financial fraud, economic sclerosis, and official statistics so blatantly faked that only the media pretends to believe them any more.
For many Americans these days, furthermore, those broken promises have precise personal equivalents. A great many of the people who were told by New Age authors that they could get rich easily and painlessly by visualizing abundance while investing in dubious real estate ventures found out the hard way that believing those promises amounted to being handed a one-way nonstop ticket to poverty. A great many of the people who were told by equally respected voices that they would attain financial security by mortgaging their futures for the benefit of a rapacious and corrupt academic industry and its allies in the banking sphere are finding out the same thing about the reassuring and seemingly authoritative claims that they took at face value.  For that matter, I wonder how many American voters feel they benefited noticeably from the hope and change that they were promised by the sock puppet they helped put into the White House in 2008 and 2012.
The promises that framed the housing bubble, the student loan bubble, and the breathtaking cynicism of Obama’s campaign, after all, drew on the same logic and the same assumptions that guided all that grand and vaporous talk about the inevitability of cities on the Moon and commuting by jetpack. They all assumed that history is a one-way street that leads from worse to better, to more, bigger, louder, gaudier, and insisted that of course things would turn out that way. Things haven’t turned out that way, they aren’t turning out that way, and it’s becoming increasingly clear that things aren’t going to turn out that way any time this side of the twelfth of Never. I’ve noted here several times now that if you want to predict the future, paying attention to the reality of ongoing decline pretty reliably gives you better results than trusting that the decline won’t continue in its current course.
 The difficulty with that realization, of course, is precisely that so many people have staked their pride, their values, and their sense of meaning and purpose in life on one or another version of the logic I’ve just sketched out. Admitting that the world is under no compulsion to change in the direction they think it’s supposed to change, that it’s currently changing in a direction that most people find acutely unwelcome, and that there are good reasons to think the much-ballyhooed gains of the recent past were the temporary products of the reckless overuse of irreplaceable energy resources, requires the surrender of a deeply and passionately held vision of time and human possibility. Worse, it lands those who do so in a situation uncomfortably close to the crestfallen former UFO debunkers I joked about earlier in this post, having to cope on an everyday basis with a world full of flying saucers and tourists from the stars.
Beneath the farcical dimensions of that image lies a sobering reality. Human beings can’t live for long without some source of values and some sense of meaning in their lives.  That’s why people respond to cognitive dissonance affecting their most cherished values by shoving away the unwelcome data so forcefully, even in the teeth of the evidence. Resistance to cognitive dissonance has its limits, though, and when people have their existing sources of meaning and value swept away by a sufficiently powerful flood of contradictions, they will seek new sources of meaning and value wherever they can find them—no matter how absurd, dysfunctional, or demonic those new meanings and values might look to an unsympathetic observer.  The mass suicide of the members of the Heaven’s Gate UFO cult in 1997 offers one measure of just how far astray those quests for new sources of meaning can go; so, on a much larger scale, does the metastatic nightmare of Nazi Germany.
I wrote in an earlier post this month about the implosion of the sense of political legitimacy that’s quietly sawing the props out from underneath the US federal government, and convincing more and more Americans that the people who claim to represent and govern them are a pack of liars and thieves. So broad and deep a loss of legitimacy is political dynamite, and normally results within a fairly short time in the collapse of the government in question. There are no guarantees, though, that whatever system replaces a delegitimized government will be any better.
That same principle applies with equal force to the collapse of the fundamental beliefs of a civilization. In next week’s post, with this in mind, I plan on talking about potential sources of meaning, purpose and value in a world on its way into a global dark age.

Bright Were The Halls Then

Wed, 2014-07-09 16:51
Arnold Toynbee, whose magisterial writings on history have been a recurring source of inspiration for this blog, has pointed out an intriguing difference between the way civilizations rise and the way they fall. On the way up, he noted, each civilization tends to diverge not merely from its neighbors but from all other civilizations throughout history.  Its political and religious institutions, its arts and architecture, and all the other details of its daily life take on distinctive forms, so that as it nears maturity, even the briefest glance at one of its creations is often enough to identify its source.
Once the peak is past and the long road down begins, though, that pattern of divergence shifts into reverse, slowly at first, and then with increasing speed. A curious sort of homogenization takes place: distinctive features are lost, and common patterns emerge in their place. That doesn’t happen all at once, and different cultural forms lose their distinctive outlines at different rates, but the further down the trajectory of decline and fall a civilization proceeds, the more it resembles every other civilization in decline. By the time that trajectory bottoms out, the resemblance is all but total; compare one postcollapse society to another—the societies of post-Roman Europe, let’s say, with those of post-Mycenaean Greece—and it can be hard to believe that dark age societies so similar could have emerged out of the wreckage of civilizations so different.
It’s interesting to speculate about why this reversion to the mean should be so regular a theme in the twilight and aftermath of so many civilizations. Still, the recurring patterns of decline and fall have another implication—or, if you will, another application. I’ve noted here and elsewhere that modern industrial society, especially but not only here in North America, is showing all the usual symptoms of a civilization on its way toward history’s compost bin. If we’ve started along the familiar track of decline and fall—and I think a very good case can be made for that hypothesis—it should be possible to map the standard features of the way down onto the details of our current situation, and come up with a fairly accurate sense of the shape of the future ahead of us.
All the caveats raised in last week’s Archdruid Report post deserve repetition here, of course. The part of history that can be guessed in advance is a matter of broad trends and overall patterns, not the sort of specific incidents that make up so much of history as it happens.  Exactly how the pressures bearing down on late industrial America will work out in the day-by-day realities of politics, economics, and society will be determined by the usual interplay of individual choices and pure dumb luck. That said, the broad trends and overall patterns are worth tracking in their own right, and some things that look as though they ought to belong to the realm of the unpredictable—for example, the political and military dynamics of border regions, or the relations among the imperial society’s political class, its increasingly disenfranchised lower classes, and the peoples outside its borders—follow predictable patterns in case after case in history, and show every sign of doing the same thing this time around too.
What I’m suggesting, in fact, is that in a very real sense, it’s possible to map out the history of North America over the next five centuries or so in advance. That’s a sweeping claim, and I’m well aware that the immediate response of at least some of my readers will be to reject the possibility out of hand. I’d like to encourage those who have this reaction to try to keep an open mind. In the posts to come, I plan on illustrating every significant point I make with historical examples from the twilight years of other civilizations, as well as evidence from the current example insofar as that’s available yet.  Thus it should be possible for my readers to follow the argument as it unfolds and see how it hangs together.
Now of course all this presupposes that the lessons of the past actually have some relevance to our future. I’m aware that that’s a controversial proposal these days, but to my mind the controversy says more about the popular idiocies of our time than it does about the facts on the ground. I’ve discussed in previous posts how people in today’s America have taken to using thoughtstoppers such as "but it’s different this time!" to protect themselves from learning anything from history—a habit that no doubt does wonders for their peace of mind today, though it pretty much guarantees them a face-first collision with a brick wall of misery and failure not much further down time’s road. Those who insist on clinging to that habit are not going to find the next year or so of posts here to their taste.
They won’t be the only ones. Among the resources I plan on using to trace out the history of the next five centuries is the current state of the art in the environmental sciences, and that includes the very substantial body of evidence and research on anthropogenic climate change. I’m aware that some people consider that controversial, and of course some very rich corporate interests have invested a lot of money into convincing people that it’s controversial, but I’ve read extensively on all sides of the subject, and the arguments against taking anthropogenic climate change seriously strike me as specious. I don’t propose to debate the matter here, either—there are plenty of forums for that. While I propose to leaven current model-based estimates on climate change and sea level rise with the evidence from paleoclimatology, those who insist that there’s nothing at all the matter with treating the atmosphere as an aerial sewer for greenhouse gases are not going to be happy with the posts ahead.
I also propose to discuss industrial civilization’s decline and fall without trying to sugarcoat the harsher dimensions of that process, and that’s going to ruffle yet another set of feathers. Regular readers will recall a post earlier this year discussing the desperate attempts to insist that it won’t be that bad, really it won’t, that were starting to show up in the flurry of criticism each of these weekly essays reliably fields.  That’s even more common now than it was then; nowadays, in fact, whenever one of my posts uses words such as "decline" or "dark age," I can count on being taken to task by critics who insist earnestly that such language is too negative, that of course we’re facing a shift to a different kind of society but I shouldn’t describe it in such disempowering terms, and so on through the whole vocabulary of the obligatory optimism that’s so fashionable among the privileged these days.
I’m pretty sure, as noted in the blog post just cited, that this marks the beginning of a shift by the peak oil community as a whole out of the second of Elisabeth Kübler-Ross’s famous five stages, the stage of anger, into the third stage of bargaining. That’s welcome, in that it brings us closer to the point at which people have finished dealing with their own psychological issues and can get to work coping with the predicament of our time, but it’s still as much an evasion of that predicament as denial and anger were. The fall of a civilization is not a pleasant prospect—and that’s what we’re talking about, of course: the decline and fall of industrial civilization, the long passage through a dark age, and the first stirrings of the successor societies that will build on our ruins. That’s how the life cycle of a civilization ends, and it’s the way that ours is ending right now.
What that means in practice is that most of the familiar assumptions people in the industrial world like to make about the future will be stood on their heads in the decades and centuries ahead. Most of the rhetoric being splashed about these days in support of this or that or the other Great Turning that will save us from the consequences of our own actions assumes, as a matter of course, that a majority of people in the United States—or, heaven help us, in the whole industrial world—can and will come together around some broadly accepted set of values and some agreed-upon plan of action to rescue industrial civilization from the rising spiral of crises that surrounds it. My readers may have noticed that things seem to be moving in the opposite direction, and history suggests that they’re quite correct.
Among the standard phenomena of decline and fall, in fact, is the shattering of the collective consensus that gives a growing society the capacity to act together to accomplish much of anything at all.  The schism between the political class and the rest of the population—you can certainly call these "the 1%" and "the 99%" if you wish—is simply the most visible of the fissures that spread through every declining civilization, breaking it into a crazy quilt of dissident fragments pursuing competing ideals and agendas. That process has a predictable endpoint, too:  as the increasingly grotesque misbehavior of the political class loses it whatever respect and loyalty it once received from the rest of society, and the masses abandon their trust in the political institutions of their society, charismatic leaders from outside the political class fill the vacuum, violence becomes the normal arbiter of power, and the rule of law becomes a polite fiction when it isn’t simply abandoned altogether.
The economic sphere of a society in decline undergoes a parallel fragmentation for different reasons. In ages of economic expansion, the labor of the working classes yields enough profit to cover the costs of a more or less complex superstructure, whether that superstructure consists of the pharaohs and priesthoods of ancient Egypt or the bureaucrats and investment bankers of late industrial America. As expansion gives way to contraction, the production of goods and services no longer yields the profit it once did, but the members of the political class, whose power and wealth depend on the superstructure, are predictably unwilling to lose their privileged status and have the power to keep themselves fed at everyone else’s expense. The reliable result is a squeeze on productive economic activity that drives a declining civilization into one convulsive financial crisis after another, and ends by shredding its capacity to produce even the most necessary goods and services.
In response, people begin dropping out of the economic mainstream altogether, because scrabbling for subsistence on the economic fringes is less futile than trying to get by in a system increasingly rigged against them. Rising taxes, declining government services, and systematic privatization of public goods by the rich compete to alienate more and more people from the established order, and the debasement of the money system in an attempt to make up for faltering tax revenues drives more and more economic activity into forms of exchange that don’t involve money at all. As the monetary system fails, in turn, economies of scale become impossible to exploit; the economy fragments and simplifies until bare economic subsistence on local resources, occasionally supplemented by plunder, becomes the sole surviving form of economic activity.
Taken together, these patterns of political fragmentation and economic unraveling send the political class of a failing civilization on a feet-first journey through the exit doors of history.  The only skills its members have, by and large, are those needed to manipulate the complex political and economic levers of their society, and their power depends entirely on the active loyalty of their subordinates, all the way down the chain of command, and the passive obedience of the rest of society.  The collapse of political institutions strips the political class of any claim to legitimacy, the breakdown of the economic system limits its ability to buy the loyalty of those that it can no longer inspire, the breakdown of the levers of control strips its members of the only actual power they’ve got, and that’s when they find themselves having to compete for followers with the charismatic leaders rising just then from the lower echelons of society. The endgame, far more often than not, comes when the political class tries to hire the rising leaders of the disenfranchised as a source of muscle to control the rest of the populace, and finds out the hard way that it’s the people who carry the weapons, not the ones who think they’re giving the orders, who actually exercise power.
The implosion of the political class has implications that go well beyond a simple change in personnel at the upper levels of society. The political and social fragmentation mentioned earlier applies just as forcefully to the less tangible dimensions of human life—its ideas and ideals, its beliefs and values and cultural practices. As a civilization tips over into decline, its educational and cultural institutions, its arts, literature, sciences, philosophies and religions all become identified with its political class; this isn’t an accident, as the political class generally goes out of its way to exploit all these things for the sake of its own faltering authority and influence. To those outside the political class, in turn, the high culture of the civilization becomes alien and hateful, and when the political class goes down, the cultural resources that it harnessed to its service go down with it.
Sometimes, some of those resources get salvaged by subcultures for their own purposes, as Christian monks and nuns salvaged portions of classical Greek and Roman philosophy and science for the greater glory of God. That’s not guaranteed, though, and even when it does happen, the salvage crew picks and chooses for its own reasons—the survival of classical Greek astronomy in the early medieval West, for example, happened simply because the Church needed to know how to calculate the date of Easter. Where no such motive exists, losses can be total: of the immense corpus of Roman music, the only thing that survives is a fragment of one tune that takes about 25 seconds to play, and there are historical examples in which even the simple trick of literacy got lost during the implosion of a civilization, and had to be imported centuries later from somewhere else.
All these transformations impact the human ecology of a falling civilization—that is, the basic relationships with the natural world on which every human society depends for day to day survival. Most civilizations know perfectly well what has to be done to keep topsoil in place, irrigation water flowing, harvests coming in, and all the other details of human interaction with the environment on a stable footing. The problem is always how to meet the required costs as economic growth ends, contraction sets in, and the ability of central governments to enforce their edicts begins to unravel. The habit of feeding the superstructure at the expense of everything else impacts the environment just as forcefully as it does the working classes:  just as wages drop to starvation levels and keep falling, funding for necessary investments in infrastructure, fallow periods needed for crop rotation, and the other inputs that keep an agricultural system going in a sustainable manner all get cut. 
As a result, topsoil washes away, agricultural hinterlands degrade into deserts or swamps, vital infrastructure collapses from malign neglect, and the ability of the land to support human life starts on the cascading descent that characterizes the end stage of decline—and so, in turn, does population, because human numbers in the last analysis are a dependent variable, not an independent one. Populations don’t grow or shrink because people just up and decide one day to have more or fewer babies; they’re constrained by ecological limits. In an expanding civilization, as its wealth and resource base increases, the population expands as well, since people can afford to have more children, and since more of the children born each year have access to the nutrition and basic health care that let them survive to breeding age themselves.  When growth gives way to decline, population typically keeps rising for another generation or so due to sheer demographic momentum, and then begins to fall.
The consequences can be traced in the history of every collapsing civilization.  As the rural economy implodes due to agricultural failure on top of the more general economic decline, a growing fraction of the population concentrates in urban slum districts, and as public health measures collapse, these turn into incubators for infectious disease. Epidemics are thus a common feature in the history of declining civilizations, and of course war and famine are also significant factors, but an even larger toll is taken by the constant upward pressure exerted on death rates by poverty, malnutrition, crowding, and stress. As deaths outnumber births, population goes into a decline that can easily continue for centuries. It’s far from uncommon for the population of an area in the wake of a civilization to equal less than 10% of the figure it reached at the precollapse peak.
Factor these patterns together, follow them out over the usual one to three centuries of spiraling decline, and you have the standard picture of a dark age society: a mostly deserted countryside of small and scattered villages where subsistence farmers, illiterate and impoverished, struggle to coax fertility back into the depleted topsoil. Their governments consist of the personal rule of local warlords, who take a share of each year’s harvest in exchange for protection from raiders and rough justice administered in the shade of any convenient tree. Their literature consists of poems, lovingly memorized and chanted to the sound of a simple stringed instrument, recalling the great deeds of the charismatic leaders of a vanished age, and these same poems also contain everything they know about their history. Their health care consists of herbs, a little rough surgery, and incantations cannily used to exploit the placebo effect. Their science—well, I’ll let you imagine that for yourself.
And the legacy of the past? Here’s some of what an anonymous poet in one dark age had to say about the previous civilization:
Bright were the halls then, many the bath-houses,
High the gables, loud the joyful clamor,
Many the meadhalls full of delights
Until mighty Fate overthrew it all.
Wide was the slaughter, the plague-time came,
Death took away all those brave men.
Broken their ramparts, fallen their halls,
The city decayed; those who built it
Fell to the earth. Thus these courts crumble,
And roof-tiles fall from this arch of stone.
Fans of Anglo-Saxon poetry will recognize that as a passage from "The Ruin." If the processes of history follow their normal pattern, they will be chanting poems like this about the ruins of our cities four or five centuries from now. How we’ll get there, and what is likely to happen en route, will be the subject of most of the posts here for the next year or so.

In a Handful of Dust

Wed, 2014-07-02 17:03
All things considered, it’s a good time to think about how much we can know about the future in advance. A hundred years ago last Saturday, as all my European readers know and a few of my American readers might have heard, a young Bosnian man named Gavrilo Princip lunged out of a crowd in Sarajevo and emptied a pistol into the Archduke Franz Ferdinand and his wife Sophie, who were touring that corner of the ramshackle Austro-Hungarian empire they were expected to inherit in due time. Over the summer months that followed, as a direct result of those gunshots, most of the nations of Europe went to war with one another, and the shockwaves set in motion by that war brought a global order centuries old crashing down.
In one sense, none of this was a surprise. Perceptive observers of the European scene had been aware for decades of the likelihood of a head-on crash between the rising power of Germany and the aging and increasingly fragile British Empire. The decade and a half before war actually broke out had seen an increasingly frantic scramble for military alliances that united longtime rivals Britain and France in a political marriage of convenience with the Russian Empire, in the hope of containing Germany’s growing economic and military might. Every major power poured much of its wealth into armaments, sparking an arms race so rapid that the most powerful warship on the planet in 1906, Britain’s mighty HMS Dreadnought, was hopelessly obsolete when war broke out eight years later.
Inquiring minds could read learned treatises by Halford Mackinder and many other scholars, explaining why conflict between Britain and Germany was inevitable; they could also take in serious fictional treatments of the subject such as George Chesney’s The Battle of Dorking and Saki’s When William Came, or comic versions such as P.G. Wodehouse’s The Swoop!. Though most military thinkers remained stuck in the Napoleonic mode of conflict chronicled in the pages of Carl von Clausewitz’s On War, those observers of the military scene who paid attention to the events of the American Civil War’s closing campaigns might even have been able to sense something of the trench warfare that would dominate the coming war on the western front.
It’s only fair to remember that a great many prophecies in circulation at that same time turned out to be utterly mistaken. Most of them, however, had a theme in common that regular readers of this blog will find quite familiar: the claim that because of some loudly ballyhooed factor or other, it really was different this time. Thus, for example, plenty of pundits insisted in the popular media that economic globalization had made the world’s economies so interdependent that war between the major powers was no longer possible. Equally, there was no shortage of claims that this or that or the other major technological advance had either rendered war impossible, or guaranteed that a war between the great powers would be over in weeks. Then as now, those who knew their history knew that any claim about the future that begins “It’s different this time” is almost certain to be wrong.
All things considered, it was not exactly difficult in the late spring of 1914, for those who were willing to do so, to peer into the future and see the shadow of a major war between Britain and Germany rising up to meet them. There were, in fact, many people who did just that. To go further and guess how it would happen, though, was quite another matter.  Some people came remarkably close; Bismarck, who was one of the keenest political minds of his time, is said to have commented wearily that the next great European war would probably be set off by some idiotic event in the Balkans.  Still, not even Bismarck could have anticipated the cascade of misjudgments and unintended consequences that sent this particular crisis spinning out of control in a way that half a dozen previous crises had not done.
What’s more, the events that followed the outbreak of war in the summer of 1914 quickly flung themselves off the tracks intended for them by the various political leaders and high commands, and carved out a trajectory of their own that nobody anywhere seems to have anticipated. That the Anglo-French alliance would squander its considerable military and economic superiority by refusing to abandon a bad strategy no matter how utterly it failed or how much it cost; that Russia’s immense armies would prove so feeble under pressure; that Germany would combine military genius and political stupidity in so stunningly self-defeating a fashion; that the United States would turn out to be the wild card in the game, coming down decisively on the Allied side just when the war had begun to turn in Germany’s favor—none of that was predicted, or could have been predicted, by anyone.
Nor were the consequences of the war any easier to foresee. On that bright summer day in 1914 when Gavrilo Princip burst from the crowd with a pistol in his hand, who could have anticipated the Soviet Union, the Great Depression, the blitzkrieg, or the Holocaust? Who would have guessed that the victor in the great struggle between Britain and Germany would turn out to be the United States?  The awareness that Britain and Germany were racing toward a head-on collision did not provide any certain knowledge about how the resulting crash would turn out, or what its consequences would be; all that could be known for sure was that an impact was imminent and the comfortable certainties of the prewar world would not survive the shock.
That dichotomy, between broad patterns that are knowable in advance and specific details that aren’t, is very common in history. It’s possible, for example, that an impartial observer who assessed the state of the Roman Empire in 400 or so could have predicted the collapse of Roman power outside the Eastern Mediterranean littoral. As far as I know, no one did so—the ideological basis of Roman society made the empire’s implosion just as unthinkable then as the end of progress is today—but the possibility was arguably there. Even if an observer had been able to anticipate the overall shape of the Roman and post-Roman future, though, that anticipation wouldn’t have reached as far as the specifics of the collapse, and let’s not even talk about whether our observer might have guessed that the last Emperor of Rome in the west would turn out to be the son of Attila the Hun’s secretary, as in fact he was.
Such reflections are on my mind rather more than usual just now, for reasons that will probably come as no surprise to regular readers of this blog. For a variety of reasons, a few of which I’ll summarize in the paragraphs ahead, I think it’s very possible that the United States and the industrial world in general are near the brink of a convulsive era of crisis at least as severe as the one that began in the summer of 1914. It seems very likely to me that in the years immediately ahead, a great many of the comfortable certainties of the last half century or so are going to be thrown overboard once and for all, as waves of drastic political, economic, military, social, and ecological change slam into societies that, despite decades of cogent warnings, have done precisely nothing to prepare for them.
I want to review here some of the reasons why I expect an era of crisis to arrive sooner rather than later. One of the most important of those reasons is the twilight of the late (and soon to be loudly lamented) fracking bubble. I’ve noted in previous posts here that the main product of the current fracking industry is neither oil nor gas, but the same sort of dubiously priced financial paper we all got to know and love in the aftermath of last decade’s real estate bubble. These days, the rickety fabric of American finance depends for its survival on a steady flow of hallucinatory wealth, since the production of mere goods and services no longer produces enough profit to support the Brobdingnagian superstructure of the financial industry and its swarm of attendant businesses. These days, too, an increasingly brittle global political order depends for its survival on the pretense that the United States is still the superpower it was decades ago, and all those strident and silly claims that the US is about to morph into a "Saudi America" flush with oil wealth are simply useful evasions that allow the day of reckoning, with its inevitable reshuffling of political and economic status, to be put off a little longer.
Unfortunately for all those involved, the geological realities on which the fracking bubble depends are not showing any particular willingness to cooperate. The downgrading of the Monterey Shale not long ago was just the latest piece of writing on the wall: one more sign that we’re scraping the bottom of the oil barrel under the delusion that this proves the barrel is still full. The fact that most of the companies in the fracking industry are paying their bills by running up debt, since their expenses are considerably greater than their earnings, is another sign of trouble that ought to be very familiar to those of us who watched the housing bubble go through its cycle of boom and bust.
Bubbles are like empires; if you watch one rise, you can be sure that it’s going to fall. What you don’t know, and can’t know, is when and how. That’s a trap that catches plenty of otherwise savvy investors. They see a bubble get under way, recognize it as a bubble, put money into it under the fond illusion that they can anticipate the bust and pull their money out right before the bottom drops out...and then, like everyone else, they get caught flatfooted by the end of the bubble and lose their shirts. That’s one of the great and usually unlearned lessons of finance: when a bubble gets going, it’s the pseudo-smart money that piles into it—the really smart money heads for the hills.
So it’s anyone’s guess when exactly the fracking bubble is going to pop, and even more uncertain how much damage it’s going to do to what remains of the US economy. A good midrange guess might be that it’ll have roughly the same impact that the popping of the housing bubble had in 2008 and 2009, but it could be well to either side of that estimate. Crucially, though, the damage that it does will be landing on an economy that has never really recovered from the 2008-2009 housing crash, in which actual joblessness (as distinct from heavily manipulated unemployment figures) is at historic levels and a very large number of people are scrambling for survival. At this point, another sharp downturn would make things much worse for a great many millions whose prospects aren’t that good to begin with, and that has implications that cross the border from economics into politics.
Meanwhile, the political scene in the United States is primed for an explosion. One of my regular readers—tip of the archdruid’s hat to Andy Brown—is a research anthropologist who recently spent ten weeks traveling around the United States asking people about their opinions and feelings concerning government. What he found was that, straight across geographical, political, and economic dividing lines, everyone he interviewed described the US government as the corrupt sock puppet of wealthy interests. He noted that he couldn’t recall ever encountering so broad a consensus on any political subject, much less one as explosive as this. 
Recent surveys bear him out. Only 7% of Americans feel any significant confidence in Congress.  Corresponding figures for the presidency and the Supreme Court are 29% and 30% respectively; fewer than a third of Americans, that is, place much trust in the political institutions whose birth we’ll be celebrating in a few days. This marks a tectonic shift of immense importance.  Not that many decades ago, substantial majorities of Americans believed in the essential goodness of the institutions that governed their country. Even those who condemned the individuals running those institutions—and of course that’s always been one of our national sports—routinely phrased those condemnations in terms reflecting a basic faith in the institutions themselves, and in the American experiment as a whole.
Those days are evidently over. The collapse of legitimacy currently under way in the United States is a familiar sight to students of history, who can point to dozens of comparable examples; each of these was followed, after no very long delay, by the collapse of the system of government whose legitimacy in the eyes of its people had gone missing in action. Those of my readers who are curious about such things might find it educational to read a good history of the French or the Russian revolutions, the collapse of the Weimar Republic or the Soviet Union, or any of the other implosions of political authority that have littered the last few centuries with rubble: when a system loses legitimacy in the eyes of the people it claims to lead, the end of that system is on its way.
The mechanics behind the collapse are worth a glance as well. Whether or not political power derives from the consent of the governed, as American political theory insists, it’s unarguably true that political power depends from moment to moment on the consent of the people who do the day-to-day work of governing:  the soldiers, police officers, bureaucrats and clerks whose job is to see to it that orders from the leadership get carried out. Their obedience is the linchpin on which the survival of a regime rests, and it’s usually also the fault line along which regimes shatter, because these low-ranking and poorly paid functionaries aren’t members of the elite. They’re ordinary working joes and janes, subject to the same cultural pressures as their neighbors, and they generally stop believing in the system they serve about the same time as their neighbors do. That doesn’t stop them from serving it, but it does very reliably make them unwilling to lay down their lives in its defense, and if a viable alternative emerges, they’re rarely slow to jump ship.
Here in America, as a result of the processes just surveyed, we’ve got a society facing a well-known pattern of terminal crisis, with a gridlocked political system that’s lost its legitimacy in the eyes of the people it governs, coupled with a baroque and dysfunctional economic system lurching toward another cyclical collapse under the weight of its own hopelessly inefficient management of wealth. This is not a recipe for a comfortable future. The situation has become dire enough that some of the wealthiest beneficiaries of the system—usually the last people to notice what’s happening, until the mob armed with torches and pitchforks shows up at their mansion’s front door—have belatedly noticed that robbing the rest of society blind is not a habit with a long shelf life, and have begun to suggest that if the rich don’t fancy the thought of dangling from lampposts, they might want to consider a change in approach. 
In its own way, this recognition is a promising sign. Similar realizations some eighty years ago put Franklin Roosevelt in the White House and spared the United States the hard choice between civil war and authoritarian rule that so many other countries were facing just then.  Unless a great many more members of our kleptocratic upper class experience the same sort of wake-up call in a hurry, though, the result this time is likely to be far too little and much too late.
Here again, though, a recognition that some kind of crash is coming doesn’t amount to foreknowledge of when it’s going to hit, how it’s going to play out, or what the results will be. If the implosion of the fracking bubble leads to one more round of bailouts for the rich and cutbacks for the poor, we could see the inner cities explode as they did in the long hot summers of the 1960s, setting off the insurgency that was so narrowly avoided in those years, and plunging the nation into a long nightmare of roadside bombs, guerrilla raids, government reprisals, and random drone strikes. If a talented demagogue shows up in the right place and time, we might instead see the rise of a neofascist movement that would feed on the abandoned center of American politics and replace the rusted scraps of America’s democratic institutions with a shiny new dictatorship.
If the federal government’s gridlock stiffens any further toward rigor mortis, for that matter, we could see the states force a constitutional convention that could completely rewrite the terms of our national life, or simply dissolve the Union and allow new regional nations to take shape.  Alternatively, if a great many factors break the right way, and enough people in and out of the corridors of power take the realities of our predicament seriously and unexpectedly grow some gonads—either kind, take your pick—we might just be able to stumble through the crisis years into an era of national retrenchment and reassessment, in which many of the bad habits picked up during America’s century of empire get chucked in history’s compost bin, and some of the ideals that helped inspire this country get a little more attention for a while. That may not be a likely outcome, but I think it’s still barely possible.
All we can do is wait and see what happens, or try to take action in the clear awareness that we can’t know what effects our actions will have. Thinking about that predicament, I find myself remembering lines from the bleak and brilliant poetic testament of the generation that came of age in the aftermath of those gunshots in Sarajevo, T.S. Eliot’s The Waste Land:
What are the roots that clutch, what branches grow
Out of this stony rubbish? Son of man,
You cannot say, or guess, for you know only
A heap of broken images, where the sun beats,
And the dead tree gives no shelter, the cricket no relief,
And the dry stone no sound of water. Only
There is shadow under this red rock
(Come in under the shadow of this red rock),
And I will show you something different from either
Your shadow at morning striding behind you
Or your shadow at evening rising up to meet you:
I will show you fear in a handful of dust.
It’s a crisp metaphor for the challenges of our time, as it was of those in the time about which Eliot wrote. For that matter, the quest to see something other than our own shadows projected forward on the future or backward onto the past has a broader significance for the project of this blog. With next week’s post, I plan on taking that quest a step further. The handful of dust I intend to offer my readers for their contemplation is the broader trajectory of which the impending crisis of the United States is one detail: the descent of industrial civilization over the next few centuries into a deindustrial dark age.