AODA Blog

The Cimmerian Hypothesis, Part Three: The End of the Dream

Wed, 2015-07-29 16:47
Let's take a moment to recap the argument of the last two posts here on The Archdruid Report before we follow it through to its conclusion. There are any number of ways to sort out the diversity of human social forms, but one significant division lies between those societies that don’t concentrate population, wealth, and power in urban centers, and those that do. One important difference between the societies that fall into these two categories is that urbanized societies—we may as well call these by the time-honored term “civilizations”—reliably crash and burn after a lifespan of roughly a thousand years, while societies that lack cities have no such fixed lifespans and can last for much longer without going through the cycle of rise and fall, punctuated by dark ages, that defines the history of civilizations.
It’s probably necessary to pause here and clear up what seems to be a common misunderstanding. To say that societies in the first category can last for much more than a thousand years doesn’t mean that all of them do this. I mention this because I fielded a flurry of comments from people who pointed to a few examples of  societies without cities that collapsed in less than a millennium, and insisted that this somehow disproved my hypothesis. Not so; if everyone who takes a certain diet pill, let’s say, suffers from heart damage, the fact that some people who don’t take the diet pill suffer heart damage from other causes doesn’t absolve the diet pill of responsibility. In the same way, the fact that civilizations such as Egypt and China have managed to pull themselves together after a dark age and rebuild a new version of their former civilization doesn’t erase the fact of the collapse and the dark age that followed it.
The question is why civilizations crash and burn so reliably. There are plenty of good reasons why this might happen, and it’s entirely possible that several of them are responsible; the collapse of civilization could be an overdetermined process. Like the victim in the cheap mystery novel who was shot, stabbed, strangled, clubbed over the head, and then chucked out a twentieth-floor window, civilizations that fall may have more causes of death than were actually necessary. The ecological costs of building and maintaining cities, for example, place much greater strains on the local environment than the less costly and less concentrated settlement patterns of nonurban societies, and the rising maintenance costs of capital—the driving force behind the theory of catabolic collapse I’ve proposed elsewhere—can spin out of control much more easily in an urban setting than elsewhere. Other examples of the vulnerability of urbanized societies can easily be worked out by those who wish to do so.
That said, there’s at least one other factor at work. As noted in last week’s post, civilizations by and large don’t have to be dragged down the slope of decline and fall; instead, they take that route with yells of triumph, convinced that the road to ruin will infallibly lead them to heaven on earth, and attempts to turn them aside from that trajectory typically get reactions ranging from blank incomprehension to furious anger. It’s not just the elites who fall into this sort of self-destructive groupthink, either: it’s not hard to find, in a falling civilization, people who claim to disagree with the ideology that’s driving the collapse, but people who take their disagreement to the point of making choices that differ from those of their more orthodox neighbors are much scarcer. They do exist; every civilization breeds them, but they make up a very small fraction of the population, and they generally exist on the fringes of society, despised and condemned by all those right-thinking people whose words and actions help drive the accelerating process of decline and fall.
The next question, then, is how civilizations get caught in that sort of groupthink. My proposal, as sketched out last week, is that the culprit is a rarely noticed side effect of urban life. People who live in a mostly natural environment—and by this I mean merely an environment in which most things are put there by nonhuman processes rather than by human action—have to deal constantly with the inevitable mismatches between the mental models of the universe they carry in their heads and the universe that actually surrounds them. People who live in a mostly artificial environment—an environment in which most things were made and arranged by human action—don’t have to deal with this anything like so often, because an artificial environment embodies the ideas of the people who constructed and arranged it. A natural environment therefore applies negative or, as it’s also called, corrective feedback to human models of the way things are, while an artificial environment applies positive feedback—the sort of thing people usually mean when they talk about a feedback loop.
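For those who prefer a toy example to an abstraction, here is a minimal numerical sketch in Python—the code, the gain values, and the run_feedback name are purely illustrative assumptions, not anything more than a back-of-the-envelope model. It treats the gap between a mental model and the world as a single number: corrective (negative) feedback repeatedly shrinks that gap, while reinforcing (positive) feedback amplifies whatever gap is already there.

# Toy illustration only: corrective (negative) feedback shrinks the gap
# between a model and the world it describes; reinforcing (positive)
# feedback widens it.

def run_feedback(gap, gain, steps):
    """Iterate gap -> gap + gain * gap.

    gain < 0 stands for corrective feedback (the environment pushes the
    model back toward reality); gain > 0 stands for reinforcing feedback
    (the environment echoes the model back, errors and all).
    """
    history = [gap]
    for _ in range(steps):
        gap = gap + gain * gap
        history.append(gap)
    return history

corrective = run_feedback(gap=1.0, gain=-0.3, steps=10)
reinforcing = run_feedback(gap=1.0, gain=0.3, steps=10)
print("corrective :", [round(x, 3) for x in corrective])
print("reinforcing:", [round(x, 3) for x in reinforcing])

Run it and the corrective series decays toward zero, while the reinforcing series grows without bound—the runaway dynamic the rest of this essay traces through the life of civilizations.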
This explains, incidentally, one of the other common differences between civilizations and other kinds of human society: the pace of change. Anthropologists not so long ago used to insist that what they liked to call “primitive societies”—that is, societies that have relatively simple technologies and no cities—were stuck in some kind of changeless stasis. That was nonsense, but the thin basis in fact that was used to justify the nonsense was simply that the pace of change in low-tech, non-urban societies, when they’re left to their own devices, tends to be fairly sedate, and usually happens over a time scale of generations. Urban societies, on the other hand, change quickly, and the pace of change tends to accelerate over time: a dead giveaway that a positive feedback loop is at work.
Notice that what’s fed back to the minds of civilized people by their artificial environment isn’t simply human thinking in general. It’s whatever particular set of mental models and habits of thought happen to be most popular in their civilization. Modern industrial civilization, for example, is obsessed with simplicity; our mental models and habits of thought value straight lines, simple geometrical shapes, hard boundaries, and clear distinctions. That obsession, and the models and mental habits that unfold from it, have given us an urban environment full of straight lines, simple geometrical shapes, hard boundaries, and clear distinctions—and thus reinforce our unthinking assumption that these things are normal and natural, which by and large they aren’t.
Modern industrial civilization is also obsessed with the frankly rather weird belief that growth for its own sake is a good thing. (Outside of a few specific cases, that is. I’ve wondered at times whether the deeply neurotic American attitude toward body weight comes from the conflict between current fashions in body shape and the growth-is-good mania of the rest of our culture; if bigger is better, why isn’t a big belly better than a small one?) In a modern urban American environment, it’s easy to believe that growth is good, since that claim is endlessly rehashed whenever some new megawhatsit replaces something of merely human scale, and since so many of the costs of malignant growth get hauled out of sight and dumped on somebody else. In settlement patterns that haven’t been pounded into their present shape by true believers in industrial society’s growth-for-its-own-sake ideology, people are rather more likely to grasp the meaning of the words “too much.”
I’ve used examples from our own civilization because they’re familiar, but every civilization reshapes its urban environment in the shape of its own mental models, which then reinforce those models in the minds of the people who live in that environment. As these people in turn shape that environment, the result is positive feedback: the mental models in question become more and more deeply entrenched in the built environment and thus also the collective conversation of the culture, and in both cases, they also become more elaborate and more extreme. The history of architecture in the western world over the last few centuries is a great example of this latter process: over that time, buildings became ever more completely defined by straight lines, flat surfaces, simple geometries, and hard boundaries between one space and another—and it’s hardly an accident that popular culture in urban communities has simplified in much the same way over that same timespan.
One way to understand this is to see a civilization as the working out in detail of some specific set of ideas about the world. At first those ideas are as inchoate as dream-images, barely grasped even by the keenest thinkers of the time. Gradually, though, the ideas get worked out explicitly; conflicts among them are resolved or papered over in standardized ways; the original set of ideas becomes the core of a vast, ramifying architecture of thought which defines the universe to the inhabitants of that civilization. Eventually, everything in the world of human experience is assigned some place in that architecture of thought; everything that can be hammered into harmony with the core set of ideas has its place in the system, while everything that can’t gets assigned the status of superstitious nonsense, or whatever other label the civilization likes to use for the realities it denies.
The further the civilization develops, though, the less it questions the validity of the basic ideas themselves, and the urban environment is a critical factor in making this happen. By limiting, as far as possible, the experiences available to influential members of society to those that fit the established architecture of thought, urban living makes it much easier to confuse mental models with the universe those models claim to describe, and that confusion is essential if enough effort, enthusiasm, and passion are to be directed toward the process of elaborating those models to their furthest possible extent.
A branch of knowledge that has to keep on going back to revisit its first principles, after all, will never get far beyond them. This is why philosophy, which is the science of first principles, doesn’t “progress” in the simpleminded sense of that word—Aristotle didn’t disprove Plato, nor did Nietzsche refute Schopenhauer, because each of these philosophers, like all others in that challenging field, returned to the realm of first principles from a different starting point and so offered a different account of the landscape. Original philosophical inquiry thus plays a very large role in the intellectual life of every civilization early in the process of urbanization, since this helps elaborate the core ideas on which the civilization builds its vision of reality; once that process is more or less complete, though, philosophy turns into a recherché intellectual specialty or gets transformed into intellectual dogma.
Cities are thus the Petri dishes in which civilizations ripen their ideas to maturity—and like Petri dishes, they do this by excluding contaminating influences. It’s easy, from the perspective of a falling civilization like ours, to see this as a dreadful mistake, a withdrawal from contact with the real world in order to pursue an abstract vision of things increasingly detached from everything else. That’s certainly one way to look at the matter, but there’s another side to it as well.
Civilizations are far and away the most spectacularly creative form of human society. Over the course of its thousand-year lifespan, the inhabitants of a civilization will create many orders of magnitude more of the products of culture—philosophical, scientific, and religious traditions, works of art and the traditions that produce and sustain them, and so on—than an equal number of people living in non-urban societies and experiencing the very sedate pace of cultural change already mentioned. To borrow a metaphor from the plant world, non-urban societies are perennials, and civilizations are showy annuals that throw all their energy into the flowering process.  Having flowered, civilizations then go to seed and die, while the perennial societies flower less spectacularly and remain green thereafter.
The feedback loop described above explains both the explosive creativity of civilizations and their equally explosive downfall. It’s precisely because civilizations free themselves from the corrective feedback of nature, and divert an ever larger portion of their inhabitants’ brainpower from the uses for which human brains were originally adapted by evolution, that they generate such torrents of creativity. Equally, it’s precisely because they do these things that civilizations run off the rails into self-feeding delusion, lose the capacity to learn the lessons of failure or even notice that failure is taking place, and are destroyed by threats they’ve lost the capacity to notice, let alone overcome. Meanwhile, other kinds of human societies move sedately along their own life cycles, and their creativity and their craziness—and they have both of these, of course, just as civilizations do—are kept within bounds by the enduring negative feedback loops of nature.
Which of these two options is better? That’s a question of value, not of fact, and so it has no one answer. Facts, to return to a point made in these posts several times, belong to the senses and the intellect, and they’re objective, at least to the extent that others can say, “yes, I see it too.” Values, by contrast, are a matter of the heart and the will, and they’re subjective; to call something good or bad doesn’t state an objective fact about the thing being discussed. It always expresses a value judgment from some individual point of view. You can’t say “x is better than y,” and mean anything by it, unless you’re willing to field such questions as “better by what criteria?” and “better for whom?”
Myself, I’m very fond of the benefits of civilization. I like hot running water, public libraries, the rule of law, and a great many other things that you get in civilizations and generally don’t get outside of them. Of course that preference is profoundly shaped by the fact that I grew up in a civilization; if I’d happened to be the son of yak herders in central Asia or tribal horticulturalists in upland Papua New Guinea, I might well have a different opinion—and I might also have a different opinion even if I’d grown up in this civilization but had different needs and predilections. Robert E. Howard, whose fiction launched the series of posts that finishes up this week, was a child of American civilization at its early twentieth century zenith, and he loathed civilization and all it stood for.
This is one of the two reasons that I think it’s a waste of time to get into arguments over whether civilization is a good thing. The other reason is that neither my opinion nor yours, dear reader, nor the opinion of anybody else who might happen to want to fulminate on the internet about the virtues or vices of civilization, is worth two farts in an EF-5 tornado when it comes to the question of whether or not future civilizations will rise and fall on this planet after today’s industrial civilization completes the arc of its destiny. Since the basic requirements of urban life first became available not long after the end of the last ice age, civilizations have risen wherever conditions favored them, cycled through their lifespans, and fallen, and new civilizations have risen again in the same places if the conditions remained favorable for that process.
Until the coming of the fossil fuel age, though, civilization was a localized thing, in a double sense. On the one hand, without the revolution in transport and military technology made possible by fossil fuels, any given civilization could only maintain control over a small portion of the planet’s surface for more than a fairly short time—thus as late as 1800, when the industrial revolution was already well under way, the civilized world was still divided into separate civilizations that each pursued its own very different ideas and values. On the other hand, without the economic revolution made possible by fossil fuels, very large sections of the world were completely unsuited to civilized life, and remained outside the civilized world for all practical purposes. As late as 1800, as a result, quite a bit of the world’s land surface was still inhabited by hunter-gatherers, nomadic pastoralists, and tribal horticulturalists who owed no allegiance to any urban power and had no interest in cities and their products at all—except for the nomadic pastoralists, that is, who occasionally liked to pillage one.
The world’s fossil fuel reserves aren’t renewable on any time scale that matters to human beings. Since we’ve burnt all the easily accessible coal, oil, and natural gas on the planet, and are working our way through the stuff that’s difficult to get even with today’s baroque and energy-intensive technologies, the world’s first fossil-fueled human civilization is guaranteed to be its last as well. That means that once the deindustrial dark age ahead of us is over, and conditions favorable for the revival of civilization recur here and there on various corners of the planet, it’s a safe bet that new civilizations will build atop the ruins we’ve left for them.
The energy resources they’ll have available to them, though, will be far less abundant and concentrated than the fossil fuels that gave industrial civilization its global reach.  With luck, and some hard work on the part of people living now, they may well inherit the information they need to make use of sun, wind, and other renewable energy resources in ways that the civilizations before ours didn’t know how to do. As our present-day proponents of green energy are finding out the hard way just now, though, this doesn’t amount to the kind of energy necessary to maintain our kind of civilization.
I’ve argued elsewhere, especially in my book The Ecotechnic Future, that modern industrial society is simply the first, clumsiest, and most wasteful form of what might be called technic society, the subset of human societies that get a significant amount of their total energy from nonbiotic sources—that is, from something other than human and animal muscles fueled by the annual product of photosynthesis. If that turns out to be correct, future civilizations that learn to use energy sparingly may be able to accomplish some of the things that we currently do by throwing energy around with wild abandon, and they may also learn how to do remarkable things that are completely beyond our grasp today. Eventually there may be other global civilizations, following out their own unique sets of ideas about the world through the usual process of dramatic creativity followed by dramatic collapse.
That’s a long way off, though. As the first global civilization gives way to the first global dark age, my working guess is that civilization—that is to say, the patterns of human society necessary to support the concentration of population, wealth, and power in urban centers—is going to go away everywhere, or nearly everywhere, over the next one to three centuries. A planet hammered by climate change, strewn with chemical and radioactive poisons, and swept by mass migrations is not a safe place for cities and the other amenities of civilized life. As things calm down, say, half a millennium from now, a range of new civilizations will doubtless emerge in those parts of the planet that have suitable conditions for urban life, while human societies of other kinds will emerge everywhere else on the planet that human life is possible at all.
I realize that this is not exactly a welcome prospect for those people who’ve bought into industrial civilization’s overblown idea of its own universal importance. Those who believe devoutly that our society is the cutting edge of humanity’s future, destined to march on gloriously forever to the stars, will be as little pleased by the portrait of the future I’ve painted as their equal and opposite numbers, for whom our society is the end of history and must surely be annihilated, along with all seven billion of us, by some glorious cataclysm of the sort beloved by Hollywood scriptwriters. Still, the universe is under no obligation to cater to anybody’s fantasies, you know. That’s a lesson Robert E. Howard knew well and wove into the best of his fiction, the stories of Conan among them—and it’s a lesson worth learning now, at least for those who hope to have some influence over how the future affects them, their families, and their communities, in an age of decline and fall.

The Cimmerian Hypothesis, Part Two: A Landscape of Hallucinations

Wed, 2015-07-22 19:17
Last week’s post covered a great deal of ground—not surprising, really, for an essay that started from a quotation from a Weird Tales story about Conan the Barbarian—and it may be useful to recap the core argument here. Civilizations—meaning here human societies that concentrate power, wealth, and population in urban centers—have a distinctive historical trajectory of rise and fall that isn’t shared by societies that lack urban centers. There are plenty of good reasons why this should be so, from the ecological costs of urbanization to the buildup of maintenance costs that drives catabolic collapse, but there’s also a cognitive dimension.
Look over the histories of fallen civilizations, and far more often than not, societies don’t have to be dragged down the slope of decline and fall. Rather, they go that way at a run, convinced that the road to ruin must inevitably lead them to heaven on earth. Arnold Toynbee, whose voluminous study of the rise and fall of civilizations has been one of the main sources for this blog since its inception, wrote at length about the way that the elite classes of falling civilizations lose the capacity to come up with new responses for new situations, or even to learn from their mistakes; thus they keep on trying to use the same failed policies over and over again until the whole system crashes to ruin. That’s an important factor, no question, but it’s not just the elites who seem to lose track of the real world as civilizations go sliding down toward history’s compost heap, it’s the masses as well.
Those of my readers who want to see a fine example of this sort of blindness to the obvious need only check the latest headlines. Within the next decade or so, for example, the entire southern half of Florida will become unfit for human habitation due to rising sea levels, driven by our dumping of greenhouse gases into an already overloaded atmosphere. Low-lying neighborhoods in Miami already flood with sea water whenever a high tide and a strong onshore wind hit at the same time; one more foot of sea level rise and salt water will pour over barriers into the remaining freshwater sources, turning southern Florida into a vast brackish swamp and forcing the evacuation of most of the millions who live there.
That’s only the most dramatic of a constellation of climatic catastrophes that are already tightening their grip on much of the United States. Out west, the rain forests of western Washington are burning in the wake of years of increasingly severe drought, California’s vast agricultural acreage is reverting to desert, and the entire city of Las Vegas will probably be out of water—as in, you turn on the tap and nothing but dust comes out—in less than a decade. As waterfalls cascade down the seaward faces of Antarctic and Greenland glaciers, leaking methane blows craters in the Siberian permafrost, and sea level rises at rates considerably faster than the worst case scenarios scientists were considering a few years ago, these threats are hardly abstract issues; is anyone in America taking them seriously enough to, say, take any concrete steps to stop using the atmosphere as a gaseous sewer, starting with their own personal behavior? Surely you jest.
No, the Republicans are still out there insisting at the top of their lungs that any scientific discovery that threatens their rich friends’ profits must be fraudulent, the Democrats are still out there proclaiming just as loudly that there must be some way to deal with anthropogenic climate change that won’t cost them their frequent-flyer miles, and nearly everyone outside the political sphere is making whatever noises they think will allow them to keep on pursuing exactly those lifestyle choices that are bringing on planetary catastrophe. Every possible excuse to insist that what’s already happening won’t happen gets instantly pounced on as one more justification for inertia—the claim currently being splashed around the media that the Sun might go through a cycle of slight cooling in the decades ahead is the latest example. (For the record, even if we get a grand solar minimum, its effects will be canceled out in short order by the impact of ongoing atmospheric pollution.)
Business as usual is very nearly the only option anybody is willing to discuss, even though the long-predicted climate catastrophes are already happening and the days of business as usual in any form are obviously numbered. The one alternative that gets air time, of course, is the popular fantasy of instant planetary dieoff, which gets plenty of attention because it’s just as effective an excuse for inaction as faith in business as usual. What next to nobody wants to talk about is the future that’s actually arriving exactly as predicted: a future in which low-lying coastal regions around the country and the world have to be abandoned to the rising seas, while the Southwest and large portions of the mountain west become more inhospitable than the eastern Sahara or Arabia’s Empty Quarter.
If the ice melt keeps accelerating at its present pace, we could be only a few decades from the point at which it’s Manhattan Island’s turn to be abandoned, because everything below ground level is permanently flooded with seawater and every winter storm sends waves rolling right across the island and flings driftwood logs against second-story windows. A few decades more, and waves will roll over the low-lying neighborhoods of Houston, Boston, Seattle, and Washington DC, while the ruined buildings that used to be New Orleans rise out of the still waters of a brackish estuary and the ruined buildings that used to be Las Vegas are half buried by the drifting sand. Take a moment to consider the economic consequences of that much infrastructure loss, that much destruction of built capital, that many people who somehow have to be evacuated and resettled, and think about what kind of body blow that will deliver to an industrial society that is already in bad shape for other reasons.
None of this had to happen. Half a century ago, policy makers and the public alike had already been presented with a tolerably clear outline of what was going to happen if we proceeded along the trajectory we were on, and those same warnings have been repeated with increasing force year by year, as the evidence to support them has mounted up implacably—and yet nearly all of us nodded and smiled and kept going. Nor has this changed in the least as the long-predicted catastrophes have begun to show up right on schedule. Quite the contrary: faced with a rising spiral of massive crises, people across the industrial world are, with majestic consistency, doing exactly those things that are guaranteed to make those crises worse.
So the question that needs to be asked, and if possible answered, is why civilizations—human societies that concentrate population, power, and wealth in urban centers—so reliably lose the capacity to learn from their mistakes and recognize that a failed policy has in fact failed.  It’s also worth asking why they so reliably do this within a finite and predictable timespan: civilizations last on average around a millennium before they crash into a dark age, while uncivilized societies routinely go on for many times that period. Doubtless any number of factors drive civilizations to their messy ends, but I’d like to suggest a factor that, to my knowledge, hasn’t been discussed in this context before.
Let’s start with what may well seem like an irrelevancy. There’s been a great deal of discussion down through the years in environmental circles about the way that the survival and health of the human body depends on inputs from nonhuman nature. There’s been a much more modest amount of talk about the human psychological and emotional needs that can only be met through interaction with natural systems. One question I’ve never seen discussed, though, is whether the human intellect has needs that are only fulfilled by a natural environment.
As I consider that question, one obvious answer comes to mind: negative feedback.
The human intellect is the part of each of us that thinks, that tries to make sense of the universe of our experience. It does this by creating models. By “models” I don’t just mean those tightly formalized and quantified models we call scientific theories; a poem is also a model of part of the universe of human experience, so is a myth, so is a painting, and so is a vague hunch about how something will work out. When a twelve-year-old girl pulls the petals off a daisy while saying “he loves me, he loves me not,” she’s using a randomization technique to decide between two models of one small but, to her, very important portion of the universe, the emotional state of whatever boy she has in mind.
With any kind of model, it’s critical to remember Alfred Korzybski’s famous rule: “the map is not the territory.” A model, to put the same point another way, is a representation; it represents the way some part of the universe looks when viewed from the perspective of one or more members of our species of social primates, using the idiosyncratic and profoundly limited set of sensory equipment, neural processes, and cognitive frameworks we got handed by our evolutionary heritage. Painful though this may be to our collective egotism, it’s not unfair to say that human mental models are what you get when you take the universe and dumb it down to the point that our minds can more or less grasp it.
What keeps our models from becoming completely dysfunctional is the negative feedback we get from the universe. For the benefit of readers who didn’t get introduced to systems theory, I should probably take a moment to explain negative feedback. The classic example is the common household thermostat, which senses the temperature of the air inside the house and activates a switch accordingly. If the air temperature is below a certain threshold, the thermostat turns the heat on and warms things up; if the air temperature rises above a different, slightly higher threshold, the thermostat turns the heat off and lets the house cool down.
In a sense, a thermostat embodies a very simple model of one very specific part of the universe, the temperature inside the house. Like all models, this one includes a set of implicit definitions and a set of value judgments. The definitions are the two thresholds, the one that turns the furnace on and the one that turns it off, and the value judgments label temperatures below the first threshold “too cold” and those above the second “too hot.” Like every human model, the thermostat model is unabashedly anthropocentric—“too cold” by the thermostat’s standard would be uncomfortably warm for a polar bear, for example—and selects out certain factors of interest to human beings from a galaxy of other things we don’t happen to want to take into consideration.
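To make that concrete, here is a minimal sketch of the two-threshold thermostat model in Python; the threshold numbers and the thermostat_step name are illustrative assumptions of mine, not features of any particular household gadget. The two thresholds are the definitions, the comments mark the value judgments, and readings between the thresholds leave the furnace alone—which is what keeps the system from chattering on and off.

# A minimal sketch of the two-threshold thermostat model described above.
# The threshold values are illustrative, not taken from any real device.

TOO_COLD = 18.0  # below this, the model's value judgment is "too cold"
TOO_HOT = 21.0   # above this, the model's value judgment is "too hot"

def thermostat_step(temperature, furnace_on):
    """Apply the two-threshold model to one temperature reading and
    return the new furnace state."""
    if temperature < TOO_COLD:
        return True    # negative feedback: heating opposes the drop
    if temperature > TOO_HOT:
        return False   # negative feedback: letting the house cool opposes the rise
    return furnace_on  # in between, no correction is needed

furnace = False
for reading in [17.2, 17.9, 18.5, 20.0, 21.4, 19.8]:
    furnace = thermostat_step(reading, furnace)
    print(reading, "->", "furnace on" if furnace else "furnace off")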
The models used by the human intellect to make sense of the universe are usually less simple than the one that guides a thermostat—there are unfortunately exceptions—but they work according to the same principle. They contain definitions, which may be implicit or explicit: the girl plucking petals from the daisy may not have an explicit definition of love in mind when she says “he loves me,” but there’s some set of beliefs and expectations about what those words imply underlying the model. They also contain value judgments: if she’s attracted to the boy in question, “he loves me” has a positive value and “he loves me not” has a negative one.
Notice, though, that there’s a further dimension to the model, which is its interaction with the observed behavior of the thing it’s supposed to model. Plucking petals from a daisy, all things considered, is not a very good predictor of the emotional states of twelve-year-old boys; predictions made on the basis of that method are very often disproved by other sources of evidence, which is why few girls much older than twelve rely on it as an information source. Modern western science has formalized and quantified that sort of reality testing, but it’s something that most people do at least occasionally. It’s when they stop doing so that we get the inability to recognize failure that helps to drive, among many other things, the fall of civilizations.
Individual facets of experienced reality thus provide negative feedback to individual models. The whole structure of experienced reality, though, is capable of providing negative feedback on another level—when it challenges the accuracy of the entire mental process of modeling.
Nature is very good at providing negative feedback of that kind. Here’s a human conceptual model that draws a strict line between mammals, on the one hand, and birds and reptiles, on the other. Not much more than a century ago, it was as precise as any division in science: mammals have fur and don’t lay eggs, reptiles and birds don’t have fur and do lay eggs. Then some Australian settler met a platypus, which has fur and lays eggs. Scientists back in Britain flatly refused to take it seriously until some live platypuses finally made it there by ship. Plenty of platypus egg was splashed across plenty of distinguished scientific faces, and definitions had to be changed to make room for another category of mammals and the evolutionary history necessary to explain it.
Here’s another human conceptual model, the one that divides trees into distinct species. Most trees in most temperate woodlands, though, actually have a mix of genetics from closely related species. There are few red oaks; what you have instead are mostly-red, partly-red, and slightly-red oaks. Go from the northern to the southern end of a species’ distribution, or from wet to dry regions, and the variations within the species are quite often more extreme than those that separate trees that have been assigned to different species. Here’s still another human conceptual model, the one that divides trees from shrubs—plenty of species can grow either way, and the list goes on.
The human mind likes straight lines, definite boundaries, precise verbal definitions. Nature doesn’t. People who spend most of their time dealing with undomesticated natural phenomena, accordingly, have to get used to the fact that nature is under no obligation to make the kind of sense the human mind prefers. I’d suggest that this is why so many of the cultures our society calls “primitive”—that is, those that have simple material technologies and interact directly with nature much of the time—so often rely on nonlogical methods of thought: those our culture labels “mythological,” “magical,” or—I love this term—“prescientific.” (That the “prescientific” will almost certainly turn out to be the postscientific as well is one of the lessons of history that modern industrial society is trying its level best to ignore.) Nature as we experience it isn’t simple, neat, linear, and logical, and so it makes sense that the ways of thinking best suited to dealing with nature directly aren’t simple, neat, linear, and logical either.
 With this in mind, let’s return to the distinction discussed in last week’s post. I noted there that a city is a human settlement from which the direct, unmediated presence of nature has been removed as completely as the available technology permits. What replaces natural phenomena in an urban setting, though, is as important as what isn’t allowed there. Nearly everything that surrounds you in a city was put there deliberately by human beings; it is the product of conscious human thinking, and it follows the habits of human thought just outlined. Compare a walk down a city street to a walk through a forest or a shortgrass prairie: in the city street, much more of what you see is simple, neat, linear, and logical. A city is an environment reshaped to reflect the habits and preferences of the human mind.
I suspect there may be a straightforwardly neurological factor in all this. The human brain, so much larger compared to body weight than the brains of most of our primate relatives, evolved because having a larger brain provided some survival advantage to those hominins who had it, in competition with those who didn’t. It’s probably a safe assumption that processing information inputs from the natural world played a very large role in these advantages, and this would imply, in turn, that the human brain is primarily adapted for perceiving things in natural environments—not, say, for building cities, creating technologies, and making the other common products of civilization.
Thus some significant part of the brain has to be redirected away from the things that it’s adapted to do, in order to make civilizations possible. I’d like to propose that the simplified, rationalized, radically information-poor environment of the city plays a crucial role in this. (Information-poor? Of course; the amount of information that comes cascading through the five keen senses of an alert hunter-gatherer standing in an African forest is vastly greater than what a city-dweller gets from the blank walls and the monotonous sounds and scents of an urban environment.) Children raised in an environment that lacks the constant cascade of information natural environments provide, and taught to redirect their mental powers toward such other activities as reading and mathematics, grow up with cognitive habits and, in all probability, neurological arrangements focused toward the activities of civilization and away from the things to which the human brain is adapted by evolution.
One source of supporting evidence for this admittedly speculative proposal is the worldwide insistence on the part of city-dwellers that people who live in isolated rural communities, far outside the cultural ambit of urban life, are just plain stupid. What that means in practice, of course, is that people from isolated rural communities aren’t used to using their brains for the particular purposes that city people value. These allegedly “stupid” countryfolk are by and large extraordinarily adept at the skills they need to survive and thrive in their own environments. They may be able to listen to the wind and know exactly where on the far side of the hill a deer waits to be shot for dinner, glance at a stream and tell which riffle the trout have chosen for a hiding place, watch the clouds pile up and read from them how many days they’ve got to get the hay in before the rains come and rot it in the fields—all of which tasks require sophisticated information processing, the kind of processing that human brains evolved doing.
Notice, though, how the urban environment relates to the human habit of mental modeling. Everything in a city was a mental model before it became a building, a street, an item of furniture, or what have you. Chairs look like chairs, houses like houses, and so on; it’s so rare for humanmade items to break out of the habitual models of our species and the particular culture that built them that when this happens, it’s a source of endless comment. Where a natural environment constantly challenges human conceptual models, an urban environment reinforces them, producing a feedback loop that’s probably responsible for most of the achievements of civilization.
I suggest, though, that the same feedback loop may also play a very large role in the self-destruction of civilizations. People raised in urban environments come to treat their mental models as realities, more real than the often-unruly facts on the ground, because everything they encounter in their immediate environments reinforces those models. As the models become more elaborate and the cities become more completely insulated from the complexities of nature, the inhabitants of a civilization move deeper and deeper into a landscape of hallucinations—not least because as many of those hallucinations get built in brick and stone, or glass and steel, as the available technology permits. As a civilization approaches its end, the divergence between the world as it exists and the mental models that define the world for the civilization’s inmates becomes total, and its decisions and actions become lethally detached from reality—with consequences that we’ll discuss in next week’s post.

The Cimmerian Hypothesis, Part One: Civilization and Barbarism

Wed, 2015-07-15 17:16
One of the oddities of the writer’s life is the utter unpredictability of inspiration. There are times when I sit down at the keyboard knowing what I have to write, and plod my way through the day’s allotment of prose in much the same spirit that a gardener turns the earth in the beds of a big garden; there are times when a project sits there grumbling to itself and has to be coaxed or prodded into taking shape on the page; but there are also times when something grabs hold of me, drags me kicking and screaming to the keyboard, and holds me there with a squamous paw clamped on my shoulder until I’ve finished whatever it is that I’ve suddenly found out that I have to write.
Over the last two months, I’ve had that last experience on a considerably larger scale than usual; to be precise, I’ve just completed the first draft of a 70,000-word novel in eight weeks. Those of my readers and correspondents who’ve been wondering why I’ve been slower than usual to respond to them now know the reason. The working title is Moon Path to Innsmouth; it deals, in the sidelong way for which fiction is so well suited, with quite a number of the issues discussed on this blog; I’m pleased to say that I’ve lined up a publisher, and so in due time the novel will be available to delight the rugose hearts of the Great Old Ones and their eldritch minions everywhere.
None of that would be relevant to the theme of the current series of posts on The Archdruid Report, except that getting the thing written required quite a bit of reference to the weird tales of an earlier era—the writings of H.P. Lovecraft, of course, but also those of Clark Ashton Smith and Robert E. Howard, who both contributed mightily to the fictive mythos that took its name from Lovecraft’s squid-faced devil-god Cthulhu. One Howard story leads to another—or at least it does if you spent your impressionable youth stewing your imagination in a bubbling cauldron of classic fantasy fiction, as I did—and that’s how it happened that I ended up revisiting the final lines of “Beyond the Black River,” part of the saga of Conan of Cimmeria, Howard’s iconic hero:
“‘Barbarism is the natural state of mankind,’ the borderer said, still staring somberly at the Cimmerian. ‘Civilization is unnatural. It is a whim of circumstance. And barbarism must always ultimately triumph.’”
It’s easy to take that as nothing more than a bit of bluster meant to add color to an adventure story—easy but, I’d suggest, inaccurate. Science fiction has made much of its claim to be a “literature of ideas,” but a strong case can be made that the weird tale as developed by Lovecraft, Smith, Howard, and their peers has at least as much claim to the same label, and the ideas that feature in a classic weird tale are often a good deal more challenging than those that are the stock in trade of most science fiction: “gee, what happens if I extrapolate this technological trend a little further?” and the like. The authors who published with Weird Tales back in the day, in particular, liked to pose edgy questions about the way that the posturings of our species and its contemporary cultures appeared in the cold light of a cosmos that’s wholly uninterested in our overblown opinion of ourselves.
Thus I think it’s worth giving Conan and his fellow barbarians their due, and treating what we may as well call the Cimmerian hypothesis as a serious proposal about the underlying structure of human history. Let’s start with some basics. What is civilization? What is barbarism? What exactly does it mean to describe one state of human society as natural and another unnatural, and how does that relate to the repeated triumph of barbarism at the end of every civilization?
The word “civilization” has a galaxy of meanings, most of them irrelevant to the present purpose. We can take the original meaning of the word—in late Latin, civilisatio—as a workable starting point; it means “having or establishing settled communities.” A people known to the Romans was civilized if its members lived in civitates, cities or towns. We can generalize this further, and say that a civilization is a form of society in which people live in artificial environments. Is there more to civilization than that? Of course there is, but as I hope to show, most of it unfolds from the distinction just traced out.
A city, after all, is a human environment from which the ordinary workings of nature have been excluded, to as great an extent as the available technology permits. When you go outdoors in a city,  nearly all the things you encounter have been put there by human beings; even the trees are where they are because someone decided to put them there, not by way of the normal processes by which trees reproduce their kind and disperse their seeds. Those natural phenomena that do manage to elbow their way into an urban environment—tropical storms, rats, and the like—are interlopers, and treated as such. The gradient between urban and rural settlements can be measured precisely by what fraction of the things that residents encounter is put there by human action, as compared to the fraction that was put there by ordinary natural processes.
What is barbarism? The root meaning here is a good deal less helpful. The Greek word βαρβαροι, barbaroi, originally meant “people who say ‘bar bar bar’” instead of talking intelligibly in Greek. In Roman times that usage got bent around to mean “people outside the Empire,” and thus in due time to “tribes who are too savage to speak Latin, live in cities, or give up without a fight when we decide to steal their land.” Fast forward a century or two, and that definition morphed uncomfortably into “tribes who are too savage to speak Latin, live in cities, or stay peacefully on their side of the border” —enter Alaric’s Visigoths, Genseric’s Vandals, and the ebullient multiethnic horde that marched westwards under the banners of Attila the Hun.
This is also where Conan enters the picture. In crafting his fictional Hyborian Age, which was vaguely located in time between the sinking of Atlantis and the beginning of recorded history, Howard borrowed freely from various corners of the past, but the Roman experience was an important ingredient—the story cited above, framed by a struggle between the kingdom of Aquilonia and the wild Pictish tribes beyond the Black River, drew noticeably on Roman Britain, though it also took elements from the Old West and elsewhere. The entire concept of a barbarian hero swaggering his way south into the lands of civilization, which Howard introduced to fantasy fiction (and which has been so freely and ineptly plagiarized since his time), has its roots in the late Roman and post-Roman experience, a time when a great many enterprising warriors did just that, and when some, like Conan, became kings.
What sets barbarian societies apart from civilized ones is precisely that a much smaller fraction of the environment barbarians encounter results from human action. When you go outdoors in Cimmeria—if you’re not outdoors to start with, which you probably are—nearly everything you encounter has been put there by nature. There are no towns of any size, just scattered clusters of dwellings in the midst of a mostly unaltered environment. Where your Aquilonian town dweller who steps outside may have to look hard to see anything that was put there by nature, your Cimmerian who shoulders his battle-ax and goes for a stroll may have to look hard to see anything that was put there by human beings.
What’s more, there’s a difference in what we might usefully call the transparency of human constructions. In Cimmeria, if you do manage to get in out of the weather, the stones and timbers of the hovel where you’ve taken shelter are recognizable lumps of rock and pieces of tree; your hosts smell like the pheromone-laden social primates they are; and when their barbarian generosity inspires them to serve you a feast, they send someone out to shoot a deer, hack it into gobbets, and cook the result in some relatively simple manner that leaves no doubt in anyone’s mind that you’re all chewing on parts of a dead animal. Follow Conan’s route down into the cities of Aquilonia, and you’re in a different world, where paint and plaster, soap and perfume, and fancy cookery, among many other things, obscure nature’s contributions to the human world.
So that’s our first set of distinctions. What makes human societies natural or unnatural? It’s all too easy  to sink into a festering swamp of unsubstantiated presuppositions here, since people in every human society think of their own ways of doing things as natural and normal, and everyone else’s ways of doing the same things as unnatural and abnormal. Worse, there’s the pervasive bad habit in industrial Western cultures of lumping all non-Western cultures with relatively simple technologies together as “primitive man”—as though there’s only one of him, sitting there in a feathered war bonnet and a lionskin kilt playing the didgeridoo—in order to flatten out human history into an imaginary straight line of progress that leads from the caves, through us, to the stars.
In point of anthropological fact, the notion of “primitive man” as an allegedly unspoiled child of nature is pure hokum, and generally racist hokum at that. “Primitive” cultures—that is to say, human societies that rely on relatively simple technological suites—differ from one another just as dramatically as they differ from modern Western industrial societies; nor do simpler technological suites correlate with simpler cultural forms. Traditional Australian aboriginal societies, which have extremely simple material technologies, are considered by many anthropologists to have among the most intricate cultures known anywhere, embracing stunningly elaborate systems of knowledge in which cosmology, myth, environmental knowledge, social custom, and scores of other fields normally kept separate in our society are woven together into dizzyingly complex tapestries of knowledge.
What’s more, those tapestries of knowledge have changed and evolved over time. The hokum that underlies that label “primitive man” presupposes, among other things, that societies that use relatively simple technological suites have all been stuck in some kind of time warp since the Neolithic—think of the common habit of speech that claims that hunter-gatherer tribes are “still in the Stone Age” and so forth. Back of that habit of speech is the industrial world’s irrational conviction that all human history is an inevitable march of progress that leads straight to our kind of society, technology, and so forth. That other human societies might evolve in different directions and find their own wholly valid ways of making a home in the universe is anathema to most people in the industrial world these days—even though all the evidence suggests that this way of looking at the history of human culture makes far more sense of the data than does the fantasy of inevitable linear progress toward us.
Thus traditional tribal societies are no more natural than civilizations are, in one important sense of the word “natural;” that is, tribal societies are as complex, abstract, unique, and historically contingent as civilizations are. There is, however, one kind of human society that doesn’t share these characteristics—a kind of society that tends to be intellectually and culturally as well as technologically simpler than most, and that recurs in astonishingly similar forms around the world and across time. We’ve talked about it at quite some length in this blog; it’s the distinctive dark age society that emerges in the ruins of every fallen civilization after the barbarian war leaders settle down to become petty kings, the survivors of the civilization’s once-vast population get to work eking out a bare subsistence from the depleted topsoil, and most of the heritage of the wrecked past goes into history’s dumpster.
If there’s such a thing as a natural human society, the basic dark age society is probably it, since it emerges when the complex, abstract, unique, and historically contingent cultures of the former civilization and its hostile neighbors have both imploded, and the survivors of the collapse have to put something together in a hurry with nothing but raw human relationships and the constraints of the natural world to guide them. Of course once things settle down the new society begins moving off in its own complex, abstract, unique, and historically contingent direction; the dark age societies of post-Mycenean Greece, post-Roman Britain, post-Heian Japan, and their many equivalents have massive similarities, but the new societies that emerged from those cauldrons of cultural rebirth had much less in common with one another than their forbears did.
In Howard’s fictive history, the era of Conan came well before the collapse of Hyborian civilization; he was not himself a dark age warlord, though he doubtless would have done well in that setting. The Pictish tribes whose activities on the Aquilonian frontier inspired the quotation cited earlier in this post weren’t a dark age society, either, though if they’d actually existed, they’d have been well along the arc of transformation that turns the hostile neighbors of a declining civilization into the breeding ground of the warbands that show up on cue to finish things off. The Picts of Howard’s tale, though, were certainly barbarians—that is, they didn’t speak Aquilonian, live in cities, or stay peaceably on their side of the Black River—and they were still around long after the Hyborian civilizations were gone.
That’s one of the details Howard borrowed from history. By and large, human societies that don’t have urban centers tend to last much longer than those that do. In particular, human societies that don’t have urban centers don’t tend to go through the distinctive cycle of decline and fall ending in a dark age that urbanized societies undergo so predictably. There are plenty of factors that might plausibly drive this difference, many of which have been discussed here and elsewhere, but I’ve come to suspect something subtler may be at work here as well. As we’ve seen, a core difference between civilizations and other human societies is that people in civilizations tend to cut themselves off from the immediate experience of nature to a much greater extent than the uncivilized do. Does this help explain why civilizations crash and burn so reliably, leaving the barbarians to play drinking games with mead while sitting unsteadily on the smoldering ruins?
As it happens, I think it does.
As we’ve discussed at length in the last three weekly posts here, human intelligence is not the sort of protean, world-transforming superpower with limitless potential it’s been labeled by the more overenthusiastic partisans of human exceptionalism. Rather, it’s an interesting capacity possessed by one species of social primates, and quite possibly shared by some other animal species as well. Like every other biological capacity, it evolved through a process of adaptation to the environment—not, please note, to some abstract concept of the environment, but to the specific stimuli and responses that a social primate gets from the African savanna and its inhabitants, including but not limited to other social primates of the same species. It’s indicative that when our species originally spread out of Africa, it seems to have settled first in those parts of the Old World that had roughly savanna-like ecosystems, and only later worked out the bugs of living in such radically different environments as boreal forests, tropical jungles, and the like.
The interplay between the human brain and the natural environment is considerably more significant than has often been realized. For the last forty years or so, a scholarly discipline called ecopsychology has explored some of the ways that interactions with nature shape the human mind. More recently, in response to the frantic attempts of American parents to isolate their children from a galaxy of largely imaginary risks, psychologists have begun to talk about “nature deficit disorder,” the set of emotional and intellectual dysfunctions that show up reliably in children who have been deprived of the normal human experience of growing up in intimate contact with the natural world.
All of this should have been obvious from first principles. Studies of human and animal behavior alike have shown repeatedly that psychological health depends on receiving certain highly specific stimuli at certain stages in the maturation process. The experiments by Harry Harlow, who showed that monkeys raised with a mother-substitute wrapped in terrycloth grew up more or less normal, while those raised with a bare metal mother-substitute turned out psychotic even when all their other needs were met, are among the more famous of these, but there have been many more, and many of them can be shown to affect human capacities in direct and demonstrable ways. Children learn language, for example, only if they’re exposed to speech during a certain age window; lacking the right stimulus at the right time, the capacity to use language shuts down and apparently can’t be restarted.
In this latter example, exposure to speech is what’s known as a triggering stimulus—something from outside the organism that kickstarts a process that’s already hardwired into the organism, but will not get under way until and unless the trigger appears. There are other kinds of stimuli that play different roles in human and animal development. The maturation of the human mind, in fact, might best be seen as a process in which inputs from the environment play a galaxy of roles, some of them of critical importance. What happens when the natural inputs that were around when human intelligence evolved get shut out of the experiences of maturing humans, and replaced by a very different set of inputs put there by human beings? We’ll discuss that next week, in the second part of this post.

Darwin's Casino

Wed, 2015-07-08 16:25
Our age has no shortage of curious features, but for me, at least, one of the oddest is the way that so many people these days don’t seem to be able to think through the consequences of their own beliefs. Pick an ideology, any ideology, straight across the spectrum from the most devoutly religious to the most stridently secular, and you can count on finding a bumper crop of people who claim to hold that set of beliefs, and recite them with all the uncomprehending enthusiasm of a well-trained mynah bird, but haven’t noticed that those beliefs contradict other beliefs they claim to hold with equal devotion.
I’m not talking here about ordinary hypocrisy. The hypocrites we have with us always; our species being what it is, plenty of people have always seen the advantages of saying one thing and doing another. No, what I have in mind is saying one thing and saying another, without ever noticing that if one of those statements is true, the other by definition has to be false. My readers may recall the way that cowboy-hatted heavies in old Westerns used to say to each other, “This town ain’t big enough for the two of us;” there are plenty of ideas and beliefs that are like that, but too many modern minds resemble nothing so much as an OK Corral where the gunfight never happens.
An example that I’ve satirized in an earlier post here is the bizarre way that so many people on the rightward end of the US political landscape these days claim to be, at one and the same time, devout Christians and fervid adherents of Ayn Rand’s violently atheist and anti-Christian ideology.  The difficulty here, of course, is that Jesus tells his followers to humble themselves before God and help the poor, while Rand told hers to hate God, wallow in fantasies of their own superiority, and kick the poor into the nearest available gutter.  There’s quite precisely no common ground between the two belief systems, and yet self-proclaimed Christians who spout Rand’s turgid drivel at every opportunity make up a significant fraction of the Republican Party just now.
Still, it’s only fair to point out that this sort of weird disconnect is far from unique to religious people, or for that matter to Republicans. One of the places it crops up most often nowadays is the remarkable unwillingness of people who say they accept Darwin’s theory of evolution to think through what that theory implies about the limits of human intelligence.
If Darwin’s right, as I’ve had occasion to point out here several times already, human intelligence isn’t the world-shaking superpower our collective egotism likes to suppose. It’s simply a somewhat more sophisticated version of the sort of mental activity found in many other animals. The thing that supposedly sets it apart from all other forms of mentation, the use of abstract language, isn’t all that unique; several species of cetaceans and an assortment of the brainier birds communicate with their kin using vocalizations that show all the signs of being languages in the full sense of the word—that is, structured patterns of abstract vocal signs that take their meaning from convention rather than instinct.
What differentiates human beings from bottlenosed porpoises, African gray parrots, and other talking species is the mere fact that in our case, language and abstract thinking happened to evolve in a species that also had the sort of grasping limbs, fine motor control, and instinctive drive to pick things up and fiddle with them, that primates have and most other animals don’t.  There’s no reason why sentience should be associated with the sort of neurological bias that leads to manipulating the environment, and thence to technology; as far as the evidence goes, we just happen to be the one species in Darwin’s evolutionary casino that got dealt both those cards. For all we know, bottlenosed porpoises have a rich philosophical, scientific, and literary culture dating back twenty million years; they don’t have hands, though, so they don’t have technology. All things considered, this may be an advantage, since it means they won’t have had to face the kind of self-induced disasters our species is so busy preparing for itself due to the inveterate primate tendency to, ahem, monkey around with things.
I’ve long suspected that one of the reasons why human beings haven’t yet figured out how to carry on a conversation with bottlenosed porpoises, African gray parrots, et al. in their own language is quite simply that we’re terrified of what they might say to us—not least because it’s entirely possible that they’d be right. Another reason for the lack of communication, though, leads straight back to the limits of human intelligence. If our minds have emerged out of the ordinary processes of evolution, what we’ve got between our ears is simply an unusually complex variation on the standard social primate brain, adapted over millions of years to the mental tasks that are important to social primates—that is, staying fed, attracting mates, competing for status, and staying out of the jaws of hungry leopards.
Notice that “discovering the objective truth about the nature of the universe” isn’t part of this list, and if Darwin’s theory of evolution is correct—as I believe it to be—there’s no conceivable way it could be. The mental activities of social primates, and all other living things, have to take the rest of the world into account in certain limited ways; our perceptions of food, mates, rivals, and leopards, for example, have to correspond to the equivalent factors in the environment; but it’s actually an advantage to any organism to screen out anything that doesn’t relate to immediate benefits or threats, so that adequate attention can be paid to the things that matter. We perceive colors, which most mammals don’t, because primates need to be able to judge the ripeness of fruit from a distance; we don’t perceive the polarization of light, as bees do, because primates don’t need to navigate by the angle of the sun.
What’s more, the basic mental categories we use to make sense of the tiny fraction of our surroundings that we perceive are just as much a product of our primate ancestry as the senses we have and don’t have. That includes the basic structures of human language, which most research suggests are inborn in our species, as well as such derivations from language as logic and the relation between cause and effect—this latter simply takes the grammatical relation between subjects, verbs, and objects, and projects it onto the nonlinguistic world. In the real world, every phenomenon is part of an ongoing cascade of interactions so wildly hypercomplex that labels like “cause” and “effect” are hopelessly simplistic; what’s more, a great many things—for example, the decay of radioactive nuclei—just up and happen randomly without being triggered by any specific cause at all. We simplify all this into cause and effect because just enough things appear to work that way to make the habit useful to us.
Another thing that has much more to do with our cognitive apparatus than with the world we perceive is number. Does one apple plus one apple equal two apples? In our number-using minds, yes; in the real world, it depends entirely on the size and condition of the apples in question. We convert qualities into quantities because quantities are easier for us to think with.  That was one of the core discoveries that kickstarted the scientific revolution; when Galileo became the first human being in history to think of speed as a quantity, he made it possible for everyone after him to get their minds around the concept of velocity in a way that people before him had never quite been able to do.
In physics, converting qualities to quantities works very, very well. In some other sciences, the same thing is true, though the further you go away from the exquisite simplicity of masses in motion, the harder it is to translate everything that matters into quantitative terms, and the more inevitably gets left out of the resulting theories. By and large, the more complex the phenomena under discussion, the less useful quantitative models are. Not coincidentally, the more complex the phenomena under discussion, the harder it is to control all the variables in play—the essential step in using the scientific method—and the more tentative, fragile, and dubious the models that result.
So when we try to figure out what bottlenosed porpoises are saying to each other, we’re facing what’s probably an insuperable barrier. All our notions of language are social-primate notions, shaped by the peculiar mix of neurology and hardwired psychology that proved most useful to bipedal apes on the East African savannah over the last few million years. The structures that shape porpoise speech, in turn, are social-cetacean notions, shaped by the utterly different mix of neurology and hardwired psychology that’s most useful if you happen to be a bottlenosed porpoise or one of its ancestors.
Mind you, porpoises and humans are at least fellow-mammals, and likely have common ancestors only a couple of hundred million years back. If you want to talk to a gray parrot, you’re trying to cross a much vaster evolutionary distance, since the ancestors of our therapsid forebears and the ancestors of the parrot’s archosaurian progenitors have been following divergent tracks since way back in the Paleozoic. Since language evolved independently in each of the lineages we’re discussing, the logic of convergent evolution comes into play: as with the eyes of vertebrates and cephalopods—another classic case of the same thing appearing in very different evolutionary lineages—the functions are similar but the underlying structure is very different. Thus it’s no surprise that it’s taken exhaustive computer analyses of porpoise and parrot vocalizations just to give us a clue that they’re using language too.
The takeaway point I hope my readers have grasped from this is that the human mind doesn’t know universal, objective truths. Our thoughts are simply the way that we, as members of a particular species of social primates, like to sort out the universe into chunks simple enough for us to think with. Does that make human thought useless or irrelevant? Of course not; it simply means that its uses and relevance are as limited as everything else about our species—and, of course, every other species as well. If any of my readers see this as belittling humanity, I’d like to suggest that fatuous delusions of intellectual omnipotence aren’t a useful habit for any species, least of all ours. I’d also point out that those very delusions have played a huge role in landing us in the rising spiral of crises we’re in today.
Human beings are simply one species among many, inhabiting part of the earth at one point in its long lifespan. We’ve got remarkable gifts, but then so does every other living thing. We’re not the masters of the planet, the crown of evolution, the fulfillment of Earth’s destiny, or any of the other self-important hogwash with which we like to tickle our collective ego, and our attempt to act out those delusional roles with the help of a lot of fossil carbon hasn’t exactly turned out well, you must admit. I know some people find it unbearable to see our species deprived of its supposed place as the precious darlings of the cosmos, but that’s just one of life’s little learning experiences, isn’t it? Most of us make a similar discovery on the individual scale in the course of growing up, and from my perspective, it’s high time that humanity do a little growing up of its own, ditch the infantile egotism, and get to work making the most of the time we have on this beautiful and fragile planet.
The recognition that there’s a middle ground between omnipotence and uselessness, though, seems to be very hard for a lot of people to grasp just now. I don’t know if other bloggers in the doomosphere have this happen to them, but every few months or so I field a flurry of attempted comments by people who want to drag the conversation over to their conviction that free will doesn’t exist. I don’t put those comments through, and not just because they’re invariably off topic; the ideology they’re pushing is, to my way of thinking, frankly poisonous, and it’s also based on a shopworn Victorian determinism that got chucked by working scientists rather more than a century ago, but is still being recycled by too many people who didn’t hear the thump when it landed in the trash can of dead theories.
A century and a half ago, it used to be a commonplace of scientific ideology that cause and effect ruled everything, and the whole universe was fated to rumble along a rigidly invariant sequence of events from the beginning of time to the end thereof. The claim was quite commonly made that a sufficiently vast intelligence, provided with a sufficiently complete data set about the position and velocity of every particle in the cosmos at one point in time, could literally predict everything that would ever happen thereafter. The logic behind that claim went right out the window, though, once experiments in the early 20th century showed conclusively that quantum phenomena are random in the strictest sense of the word. They’re not caused by some hidden variable; they just happen when they happen, by chance.
What determines the moment when a given atom of an unstable isotope will throw off some radiation and turn into a different element? Pure dumb luck. Since radiation discharges from single atoms of unstable isotopes are an important cause of genetic mutations, and thus a driving force behind the process of evolution, this is much more important than it looks. The stray radiation that gave you your eye color, dealt an otherwise uninteresting species of lobefin fish the adaptations that made it the ancestor of all land vertebrates, and provided the raw material for countless other evolutionary transformations:  these were entirely random events, and would have happened differently if certain unstable atoms had decayed at a different moment and sent their radiation into a different ovum or spermatozoon—as they very well could have. So it doesn’t matter how vast the intelligence or complete the data set you’ve got, the course of life on earth is inherently impossible to predict, and so are a great many other things that unfold from it.
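For those who like to see the point in concrete form, here’s a minimal sketch in Python; the isotope is carbon-14, chosen purely for familiarity, and the sampling simply mirrors the memoryless statistics of decay. Even with the half-life known to high precision, the moment any individual atom lets go is sheer chance:

    import math, random

    # Carbon-14, used purely as a familiar example.
    half_life_years = 5730.0
    decay_constant = math.log(2) / half_life_years

    # Five physically identical atoms: identical statistics, different fates.
    decay_times = [random.expovariate(decay_constant) for _ in range(5)]
    print("years until decay:", [round(t) for t in decay_times])

Run it twice and you’ll get two different sets of answers, which is exactly the point.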
With the gibbering phantom of determinism laid to rest, we can proceed to the question of free will. We can define free will operationally as the ability to produce genuine novelty in behavior—that is, to do things that can’t be predicted. Human beings do this all the time, and there are very good evolutionary reasons why they should have that capacity. Any of my readers who know game theory will recall that the best strategy in any competitive game includes an element of randomness, which prevents the other side from anticipating and forestalling your side’s actions. Food gathering, in game theory terms, is a competitive game; so are trying to attract a mate, competing for social prestige, staying out of the jaws of hungry leopards, and most of the other activities that pack the day planners of social primates.
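Readers who want to see that game-theoretic point in action can try the following minimal Python sketch—a toy matching-pennies contest, with the opponent’s guessing rule invented for illustration. The player who always does the same thing gets read and beaten; the player who randomizes can’t be:

    import random

    def opponents_guess(history):
        # The opponent bets you'll repeat your most common past move.
        return max("HT", key=history.count) if history else random.choice("HT")

    def win_rate(strategy, rounds=1000):
        history, wins = [], 0
        for _ in range(rounds):
            move = strategy(history)
            if opponents_guess(history) != move:   # you win when the guess misses
                wins += 1
            history.append(move)
        return wins / rounds

    predictable = lambda history: "H"                    # always heads
    randomizing = lambda history: random.choice("HT")    # 50/50 mix

    print("predictable player wins:", win_rate(predictable))   # close to 0
    print("randomizing player wins:", win_rate(randomizing))   # close to 0.5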
Unpredictability is so highly valued by our species, in fact, that every human culture ever recorded has worked out formal ways to increase the total amount of sheer randomness guiding human action. Yes, we’re talking about divination—for those who don’t know the jargon, this term refers to what you do with Tarot cards, the I Ching, tea leaves, horoscopes, and all the myriad other ways human cultures have worked out to take a snapshot of the nonrational as a guide for action. Aside from whatever else may be involved—a point that isn’t relevant to this blog—divination does a really first-rate job of generating unpredictability. Flipping a coin does the same thing, and most people have confounded the determinists by doing just that on occasion, but fully developed divination systems like those just named provide a much richer palette of choices than the simple coin toss, and thus enable people to introduce a much richer range of novelty into their actions.
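To put a rough number on that richness—and the counts below are deliberately simplified, ignoring moving lines, card reversals, and the like—a short Python calculation will do. Each additional possible outcome per reading is that much more sheer novelty injected into a decision:

    import math

    # Simplified outcome counts for a few randomizers.
    systems = {
        "coin toss": 2,
        "I Ching hexagram": 64,
        "single tarot card": 78,
        "three-card tarot spread": 78 * 77 * 76,
    }

    for name, outcomes in systems.items():
        print(f"{name:24s} {outcomes:7d} outcomes  ~{math.log2(outcomes):4.1f} bits of novelty")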
Still, divination is a crutch, or at best a supplement; human beings have their own onboard novelty generators, which can do the job all by themselves if given half a chance.  The process involved here was understood by philosophers a long time ago, and no doubt the neurologists will get around to figuring it out one of these days as well. The core of it is that humans don’t respond directly to stimuli, external or internal.  Instead, they respond to their own mental representations of stimuli, which are constructed by the act of cognition and are laced with bucketloads of extraneous material garnered from memory and linked to the stimulus in uniquely personal, irrational, even whimsical ways, following loose and wildly unpredictable cascades of association and contiguity that have nothing to do with logic and everything to do with the roots of creativity. 
Each human society tries to give its children some approximation of its own culturally defined set of representations—that’s what’s going on when children learn language, pick up the customs of their community, ask for the same bedtime story to be read to them for the umpteenth time, and so on. Those culturally defined representations proceed to interact in various ways with the inborn, genetically defined representations that get handed out for free with each brand new human nervous system.  The existence of these biologically and culturally defined representations, and of various ways that they can be manipulated to some extent by other people with or without the benefit of mass media, make up the ostensible reason why the people mentioned above insist that free will doesn’t exist.
Here again, though, the fact that the human mind isn’t omnipotent doesn’t make it powerless. Think about what happens, say, when a straight stick is thrust into water at an angle, and the stick seems to pick up a sudden bend at the water’s surface, due to differential refraction in water and air. The illusion is as clear as anything, but if you show this to a child and let the child experiment with it, you can watch the representation “the stick is bent” give way to “the stick looks bent.” Notice what’s happening here: the stimulus remains the same, but the representation changes, and so do the actions that result from it. That’s a simple example of how representations create the possibility of freedom.
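For the optically inclined, the physics behind the illusion is Snell’s law, and a few lines of Python show how sharp the apparent kink is; the refractive indices are the standard textbook values, and the thirty-degree angle is just an example:

    import math

    n_water, n_air = 1.33, 1.00
    angle_under_water = math.radians(30)     # measured from the vertical

    # Snell's law: n1 * sin(theta1) = n2 * sin(theta2)
    angle_in_air = math.asin(n_water / n_air * math.sin(angle_under_water))

    print(f"a ray at 30.0 degrees under water leaves the surface at "
          f"{math.degrees(angle_in_air):.1f} degrees")

The stick itself never changes; only the path of the light does, and with it the representation the mind builds from that light.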
In the same way, when the media spouts some absurd bit of manipulative hogwash, if you take the time to think about it, you can watch your own representation shift from “that guy’s having an orgasm from slurping that fizzy brown sugar water” to “that guy’s being paid to pretend to have an orgasm, so somebody can try to convince me to buy that fizzy brown sugar water.” If you really pay attention, it may shift again to “why am I wasting my time watching this guy pretend to get an orgasm from fizzy brown sugar water?” and may even lead you to chuck your television out a second story window into an open dumpster, as I did to the last one I ever owned. (The flash and bang when the picture tube imploded, by the way, was far more entertaining than anything that had ever appeared on the screen.)
Human intelligence is limited. Our capacities for thinking are constrained by our heredity, our cultures, and our personal experiences—but then so are our capacities for the perception of color, a fact that hasn’t stopped artists from the Paleolithic to the present from putting those colors to work in a galaxy of dizzyingly original ways. A clear awareness of the possibilities and the limits of the human mind makes it easier to play the hand we’ve been dealt in Darwin’s casino—and it also points toward a generally unsuspected reason why civilizations come apart, which we’ll discuss next week.

The Dream of the Machine

Wed, 2015-07-01 16:02
As I type these words, it looks as though the wheels are coming off the global economy. Greece and Puerto Rico have both suspended payments on their debts, and China’s stock market, which spent the last year in a classic speculative bubble, is now in the middle of a classic speculative bust. Those of my readers who’ve read John Kenneth Galbraith’s lively history The Great Crash 1929 already know all about the Chinese situation, including the outcome—and since vast amounts of money from all over the world went into Chinese stocks, and most of that money is in the process of turning into twinkle dust, the impact of the crash will inevitably proliferate through the global economy.
So, in all probability, will the Greek and Puerto Rican defaults. In today’s bizarre financial world, the kind of bad debts that used to send investors backing away in a hurry attract speculators in droves, and so it turns out that some big New York hedge funds are in trouble as a result of the Greek default, and some of the same firms that got into trouble with mortgage-backed securities in the recent housing bubble are in the same kind of trouble over Puerto Rico’s unpayable debts. How far will the contagion spread? It’s anybody’s guess.
Oh, and on another front, nearly half a million acres of Alaska burned up in a single day last week—yes, the fires are still going—while ice sheets in Greenland are collapsing so frequently and forcefully that the resulting earthquakes are rattling seismographs thousands of miles away. These and other signals of a biosphere in crisis make good reminders of the fact that the current economic mess isn’t happening in a vacuum. As Ugo Bardi pointed out in a thoughtful blog post, finance is the flotsam on the surface of the ocean of real exchanges of real goods and services, and the current drumbeat of financial crises is symptomatic of the real crisis—the arrival of the limits to growth that so many people have been discussing, and so many more have been trying to ignore, for the last half century or so.
A great many people in the doomward end of the blogosphere are talking about what’s going on in the global economy and what’s likely to blow up next. Around the time the next round of financial explosions start shaking the world’s windows, a great many of those same people will likely be talking about what to do about it all.  I don’t plan on joining them in that discussion. As blog posts here have pointed out more than once, time has to be considered when getting ready for a crisis. The industrial world would have had to start backpedaling away from the abyss decades ago in order to forestall the crisis we’re now in, and the same principle applies to individuals.  The slogan “collapse now and avoid the rush!” loses most of its point, after all, when the rush is already under way.
Any of my readers who are still pinning their hopes on survival ecovillages and rural doomsteads they haven’t gotten around to buying or building yet, in other words, are very likely out of luck. They, like the rest of us, will be meeting this where they are, with what they have right now. This is ironic, in that ideas that might have been worth adopting three or four years ago are just starting to get traction now. I’m thinking here particularly of a recent article on how to use permaculture to prepare for a difficult future, which describes the difficult future in terms that will be highly familiar to readers of this blog. More broadly, there’s a remarkable amount of common ground between that article and the themes of my book Green Wizardry. The awkward fact remains that, with the global banking industry showing every sign of freezing up the way it did in 2008 and putting credit for land purchases out of reach of most people for years to come, the article’s advice may have come rather too late.
That doesn’t mean, of course, that my readers ought to crawl under their beds and wait for death. What we’re facing, after all, isn’t the end of the world—though it may feel like that for those who are too deeply invested, in any sense of that last word you care to use, in the existing order of industrial society. As Visigothic mommas used to remind their impatient sons, Rome wasn’t sacked in a day. The crisis ahead of us marks the end of what I’ve called abundance industrialism and the transition to scarcity industrialism, as well as the end of America’s global hegemony and the emergence of a new international order whose main beneficiary hasn’t been settled yet. Those paired transformations will most likely unfold across several decades of economic chaos, political turmoil, environmental disasters, and widespread warfare. Plenty of people got through the equivalent cataclysms of the first half of the twentieth century with their skins intact, even if the crisis caught them unawares, and no doubt plenty of people will get through the mess that’s approaching us in much the same condition.
Thus I don’t have any additional practical advice, beyond what I’ve already covered in my books and blog posts, to offer my readers just now. Those who’ve already collapsed and gotten ahead of the rush can break out the popcorn and watch what promises to be a truly colorful show.  Those who didn’t—well, you might as well get some popcorn going and try to enjoy the show anyway. If you come out the other side of it all, schoolchildren who aren’t even born yet may eventually come around to ask you awed questions about what happened when the markets crashed in ‘15.
In the meantime, while the popcorn is popping and the sidewalks of Wall Street await their traditional tithe of plummeting stockbrokers, I’d like to return to the theme of last week’s post and talk about the way that the myth of the machine—if you prefer, the widespread mental habit of thinking about the world in mechanistic terms—pervades and cripples the modern mind.
Of all the responses that last week’s post fielded, those I found most amusing, and also most revealing, were those that insisted that of course the universe is a machine, so is everything and everybody in it, and that’s that. That’s amusing because most of the authors of these comments made it very clear that they embraced the sort of scientific-materialist atheism that rejects any suggestion that the universe has a creator or a purpose. A machine, though, is by definition a purposive artifact—that is, it’s made by someone to do something. If the universe is a machine, then, it has a creator and a purpose, and if it doesn’t have a creator and a purpose, logically speaking, it can’t be a machine.
That sort of unintentional comedy inevitably pops up whenever people don’t think through the implications of their favorite metaphors. Still, chase that habit further along its giddy path and you’ll find a deeper absurdity at work. When people say “the universe is a machine,” unless they mean that statement as a poetic simile, they’re engaging in a very dubious sort of logic. As Alfred Korzybski pointed out a good many years ago, pretty much any time you say “this is that,” unless you implicitly or explicitly qualify what you mean in very careful terms, you’ve just babbled nonsense.
The difficulty lies in that seemingly innocuous word “is.” What Korzybski called the “is of identity”—the use of the word “is” to represent  =, the sign of equality—makes sense only in a very narrow range of uses.  You can use the “is of identity” with good results in categorical definitions; when I commented above that a machine is a purposive artifact, that’s what I was doing. Here is a concept, “machine;” here are two other concepts, “purposive” and “artifact;” the concept “machine” logically includes the concepts “purposive” and “artifact,” so anything that can be described by the words “a machine” can also be described as “purposive” and “an artifact.” That’s how categorical definitions work.
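If it helps to see that inclusion spelled out, here’s a minimal Python sketch, with every set membership invented for illustration; a sound categorical definition behaves like subset inclusion, and nothing more:

    machines  = {"steam engine", "pendulum clock", "handloom"}
    purposive = machines | {"hammer", "cathedral"}
    artifacts = machines | {"hammer", "cathedral", "poem"}
    purple    = {"eggplant", "amethyst"}
    dinosaurs = {"Triceratops"}

    # "A machine is a purposive artifact" holds: every machine is in both categories.
    print(machines <= purposive and machines <= artifacts)   # True

    # "A machine is a purple dinosaur" does not.
    print(machines <= purple, machines <= dinosaurs)         # False False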
Let’s consider a second example, though: “a machine is a purple dinosaur.” That utterance uses the same structure as the one we’ve just considered.  I hope I don’t have to prove to my readers, though, that the concept “machine” doesn’t include the concepts “purple” and “dinosaur” in any but the most whimsical of senses.  There are plenty of things that can be described by the label “machine,” in other words, that can’t be described by the labels “purple” or “dinosaur.” The fact that some machines—say, electronic Barney dolls—can in fact be described as purple dinosaurs doesn’t make the definition any less silly; it simply means that the statement “no machine is a purple dinosaur” can’t be justified either.
With that in mind, let’s take a closer look at the statement “the universe is a machine.” As pointed out earlier, the concept “machine” implies the concepts “purposive” and “artifact,” so if the universe is a machine, somebody made it to carry out some purpose. Those of my readers who happen to belong to Christianity, Islam, or another religion that envisions the universe as the creation of one or more deities—not all religions make this claim, by the way—will find this conclusion wholly unproblematic. My atheist readers will disagree, of course, and their reaction is the one I want to discuss here. (Notice how “is” functions in the sentence just uttered: “the reaction of the atheists” equals “the reaction I want to discuss.” This is one of the few other uses of “is” that doesn’t tend to generate nonsense.)
In my experience, at least, atheists faced with the argument about the meaning of the word “machine” I’ve presented here pretty reliably respond with something like “It’s not a machine in that sense.” That response takes us straight to the heart of the logical problems with the “is of identity.” In what sense is the universe a machine? Pursue the argument far enough, and unless the atheist storms off in a huff—which admittedly tends to happen more often than not—what you’ll get amounts to “the universe and a machine share certain characteristics in common.” Go further still—and at this point the atheist will almost certainly storm off in a huff—and you’ll discover that the characteristics that the universe is supposed to share with a machine are all things we can’t actually prove one way or another about the universe, such as whether it has a creator or a purpose.
The statement “the universe is a machine,” in other words, doesn’t do what it appears to do. It appears to state a categorical identity; it actually states an unsupported generalization in absolute terms. It takes a mental model abstracted from one corner of human experience and applies it to something unrelated.  In this case, for polemic reasons, it does so in a predictably one-sided way: deductions approved by the person making the statement (“the universe is a machine, therefore it lacks life and consciousness”) are acceptable, while deductions the person making the statement doesn’t like (“the universe is a machine, therefore it was made by someone for some purpose”) get the dismissive response noted above.
This sort of doublethink appears all through the landscape of contemporary nonconversation and nondebate, to be sure, but the problems with the “is of identity” don’t stop with its polemic abuse. Any time you say “this is that,” and mean something other than “this has some features in common with that,” you’ve just fallen into one of the core boobytraps hardwired into the structure of human thought.
Human beings think in categories. That’s what made ancient Greek logic, which takes categories as its basic element, so massive a revolution in the history of human thinking: by watching the way that one category includes or excludes another, which is what the Greek logicians did, you can squelch a very large fraction of human stupidities before they get a foothold. What Alfred Korzybski pointed out, in effect, is that there’s a metalogic that the ancient Greeks didn’t get to, and logical theorists since their time haven’t really tackled either: the extremely murky relationship between the categories we think with and the things we experience, which don’t come with category labels spraypainted on them.
Here is a green plant with a woody stem. Is it a tree or a shrub? That depends on exactly where you draw the line between those two categories, and as any botanist can tell you, that’s neither an easy nor an obvious thing. As long as you remember that categories exist within the human mind as convenient handles for us to think with, you can navigate around the difficulties, but when you slip into thinking that the categories are more real than the things they describe, you’re in deep, deep trouble.
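Here’s the same point as a minimal Python sketch; the plants, heights, and cutoffs are all made up, which is rather the point—shift the arbitrary line and the labels change while the plants don’t:

    plants = {"hawthorn": 4.5, "elder": 5.5, "crabapple": 7.0}   # heights in metres, invented

    def classify(height, cutoff):
        return "tree" if height >= cutoff else "shrub"

    for cutoff in (5.0, 6.0):
        print(f"cutoff {cutoff} m:",
              {name: classify(h, cutoff) for name, h in plants.items()})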
It’s not at all surprising that human thought should have such problems built into it. If, as I do, you accept the Darwinian thesis that human beings evolved out of prehuman primates by the normal workings of the laws of evolution, it follows logically that our nervous systems and cognitive structures didn’t evolve for the purpose of understanding the truth about the cosmos; they evolved to assist us in getting food, attracting mates, fending off predators, and a range of similar, intellectually undemanding tasks. If, as many of my theist readers do, you believe that human beings were created by a deity, the yawning chasm between creator and created, between an infinite and a finite intelligence, stands in the way of any claim that human beings can know the unvarnished truth about the cosmos. Neither viewpoint supports the claim that a category created by the human mind is anything but a convenience that helps our very modest mental powers grapple with an ultimately incomprehensible cosmos.
Any time human beings try to make sense of the universe or any part of it, in turn, they have to choose from among the available categories in an attempt to make the object of inquiry fit the capacities of their minds. That’s what the founders of the scientific revolution did in the seventeenth century, by taking the category of “machine” and applying it to the universe to see how well it would fit. That was a perfectly rational choice from within their cultural and intellectual standpoint. The founders of the scientific revolution were Christians to a man, and some of them (for example, Isaac Newton) were devout even by the standards of the time; the idea that the universe had been made by someone for some purpose, after all, wasn’t problematic in the least to people who took it as given that the universe was made by God for the purpose of human salvation. It was also a useful choice in practical terms, because it allowed certain features of the universe—specifically, the behavior of masses in motion—to be accounted for and modeled with a clarity that previous categories hadn’t managed to achieve.
The fact that one narrowly defined aspect of the universe seems to behave like a machine, though, does not prove that the universe is a machine, any more than the fact that one machine happens to look like a purple dinosaur proves that all machines are purple dinosaurs. The success of mechanistic models in explaining the behavior of masses in motion proved that mechanical metaphors are good at fitting some of the observed phenomena of physics into a shape that’s simple enough for human cognition to grasp, and that’s all it proved. To go from that modest fact to the claim that the universe and everything in it are machines involves an intellectual leap of pretty spectacular scale. Part of the reason that leap was taken in the seventeenth century was the religious frame of scientific inquiry at that time, as already mentioned, but there was another factor, too.
It’s a curious fact that mechanistic models of the universe appeared in western European cultures, and became wildly popular there, well before the machines did. In the early seventeenth century, machines played a very modest role in the life of most Europeans; most tasks were done using hand tools powered by human and animal muscle, the way they had been done since the dawn of the agricultural revolution eight millennia or so before. The most complex devices available at the time were pendulum clocks, printing presses, handlooms, and the like—you know, the sort of thing that people these days use instead of machines when they want to get away from technology.
For reasons that historians of ideas are still trying to puzzle out, though, western European thinkers during these same years were obsessed with machines, and with mechanical explanations for the universe. Those latter ranged from the plausible to the frankly preposterous—René Descartes, for example, proposed a theory of gravity in which little corkscrew-shaped particles went zooming up from the earth to screw themselves into pieces of matter and yank them down. Until Isaac Newton, furthermore, theories of nature based on mechanical models didn’t actually explain that much, and until the cascade of inventive adaptations of steam power that ended with James Watt’s epochal steam engine nearly a century after Newton, the idea that machines could elbow aside craftspeople using hand tools and animals pulling carts was an unproven hypothesis. Yet a great many people in western Europe believed in the power of the machine as devoutly as their ancestors had believed in the power of the bones of the local saints.
A habit of thought very widespread in today’s culture assumes that technological change happens first and the world of ideas changes in response to it. The facts simply won’t support that claim, though. As the history of mechanistic ideas in science shows clearly, the ideas come first and the technologies follow—and there’s good reason why this should be so. Technologies don’t invent themselves, after all. Somebody has to put in the work to invent them, and then other people have to invest the resources to take them out of the laboratory and give them a role in everyday life. The decisions that drive invention and investment, in turn, are powerfully shaped by cultural forces, and these in turn are by no means as rational as the people influenced by them generally like to think.
People in western Europe and a few of its colonies dreamed of machines, and then created them. They dreamed of a universe reduced to the status of a machine, a universe made totally transparent to the human mind and totally subservient to the human will, and then set out to create it. That latter attempt hasn’t worked out so well, for a variety of reasons, and the rising tide of disasters sketched out in the first part of this week’s post unfold in large part from the failure of that misbegotten dream. In the next few posts, I want to talk about why that failure was inevitable, and where we might go from here.

The Delusion of Control

Wed, 2015-06-24 15:45
I'm sure most of my readers have heard at least a little of the hullaballoo surrounding the release of Pope Francis’ encyclical on the environment, Laudato Si. It’s been entertaining to watch, not least because so many politicians in the United States who like to use Vatican pronouncements as window dressing for their own agendas have been left scrambling for cover now that the wind from Rome is blowing out of a noticeably different quarter.
Take Rick Santorum, a loudly Catholic Republican who used to be in the US Senate and now spends his time entertaining a variety of faux-conservative venues with his signature flavor of hate speech. Santorum loves to denounce fellow Catholics who disagree with Vatican edicts as “cafeteria Catholics,” and announced a while back that John F. Kennedy’s famous defense of the separation of church and state made him sick to his stomach. In the wake of Laudato Si, care to guess who’s elbowing his way to the head of the cafeteria line? Yes, that would be Santorum, who’s been insisting since the encyclical came out that the Pope is wrong and American Catholics shouldn’t be obliged to listen to him.
What makes all the yelling about Laudato Si a source of wry amusement to me is that it’s not actually a radical document at all. It’s a statement of plain common sense. It should have been obvious all along that treating the air as a gaseous sewer was a really dumb idea, and in particular, that dumping billions upon billions of tons of heat-trapping gases into the atmosphere would change its capacity for heat retention in unwelcome ways. It should have been just as obvious that all the other ways we maltreat the only habitable planet we’ve got were guaranteed to end just as badly. That this wasn’t obvious—that huge numbers of people find it impossible to realize that you can only wet your bed so many times before you have to sleep in a damp spot—deserves much more attention than it’s received so far.
It’s really a curious blindness, when you think about it. Since our distant ancestors climbed unsteadily down from the trees of late Pliocene Africa, the capacity to anticipate threats and do something about them has been central to the success of our species. A rustle in the grass might indicate the approach of a leopard, a series of unusually dry seasons might turn the local water hole into undrinkable mud: those of our ancestors who paid attention to such things, and took constructive action in response to them, were more likely to survive and leave offspring than those who shrugged and went on with business as usual. That’s why traditional societies around the world are hedged about with a dizzying assortment of taboos and customs meant to guard against every conceivable source of danger.
Somehow, though, we got from that to our present situation, where substantial majorities across the world’s industrial nations seem unable to notice that something bad can actually happen to them, where thoughtstoppers of the “I’m sure they’ll think of something” variety take the place of thinking about the future, and where, when something bad does happen to someone, the immediate response is to find some way to blame the victim for what happened, so that everyone else can continue to believe that the same thing can’t happen to them. A world where Laudato Si is controversial, not to mention necessary, is a world that’s become dangerously detached from the most basic requirements of collective survival.
For quite some time now, I’ve been wondering just what lies behind the bizarre paralogic with which most people these days turn blank and uncomprehending eyes on their onrushing fate. The process of writing last week’s blog post on the astonishing stupidity of US foreign policy, though, seems to have helped me push through to clarity on the subject. I may be wrong, but I think I’ve figured it out.
Let’s begin with the issue at the center of last week’s post, the really remarkable cluelessness with which US policy toward Russia and China has convinced both nations they have nothing to gain from cooperating with a US-led global order, and are better off allying with each other and opposing the US instead. US politicians and diplomats made that happen, and the way they did it was set out in detail in a recent and thoughtful article by Paul R. Pillar in the online edition of The National Interest.
Pillar’s article pointed out that the United States has evolved a uniquely counterproductive notion of how negotiation works. Elsewhere on the planet, people understand that when you negotiate, you’re seeking a compromise where you get whatever you most need out of the situation, while the other side gets enough of its own agenda met to be willing to cooperate. To the US, by contrast, negotiation means that the other side complies with US demands, and that’s the end of it. The idea that other countries might have their own interests, and might expect to receive some substantive benefit in exchange for cooperation with the US, has apparently never entered the heads of official Washington—and the absence of that idea has resulted in the cascading failures of US foreign policy in recent years.
It’s only fair to point out that the United States isn’t the only practitioner of this kind of self-defeating behavior. A first-rate example has been unfolding in Europe in recent months—yes, that would be the ongoing non-negotiations between the Greek government and the so-called troika, the coalition of unelected bureaucrats who are trying to force Greece to keep pursuing a failed economic policy at all costs. The attitude of the troika is simple: the only outcome they’re willing to accept is capitulation on the part of the Greek government, and they’re not willing to give anything in return. Every time the Greek government has tried to point out to the troika that negotiation usually involves some degree of give and take, the bureaucrats simply give them a blank look and reiterate their previous demands.
That attitude has had drastic political consequences. It’s already convinced Greeks to elect a radical leftist government in place of the compliant centrists who ruled the country in the recent past. If the leftists fold, the neofascist Golden Dawn party is waiting in the wings. The problem with the troika’s stance is simple: the policies they’re insisting that Greece must accept have never—not once in the history of market economies—produced anything but mass impoverishment and national bankruptcy. The Greeks, among many other people, know this; they know that Greece will not return to prosperity until it defaults on its foreign debts the way Russia did in 1998, and scores of other countries have done as well.
If the troika won’t settle for a negotiated debt-relief program, and the current Greek government won’t default, the Greeks will elect someone else who will, no matter who that someone else happens to be; it’s that, after all, or continue along a course that’s already caused the Greek economy to lose a quarter of its precrisis GDP, and shows no sign of stopping anywhere this side of failed-state status. That this could quite easily hand Greece over to a fascist despot is just one of the potential problems with the troika’s strategy. It’s astonishing that so few people in Europe seem to be able to remember what happened the last time an international political establishment committed itself to the preservation of a failed economic orthodoxy no matter what; those of my readers who don’t know what I’m talking about may want to pick up any good book on the rise of fascism in Europe between the wars.
Let’s step back from specifics, though, and notice the thinking that underlies the dysfunctional behavior in Washington and Brussels alike. In both cases, the people who think they’re in charge have lost track of the fact that Russia, China, and Greece have needs, concerns, and interests of their own, and aren’t simply dolls that the US or EU can pose at will. These other nations can, perhaps, be bullied by threats over the short term, but that’s a strategy with a short shelf life.  Successful diplomacy depends on giving the other guy reasons to want to cooperate with you, while demanding cooperation at gunpoint guarantees that the other guy is going to look for ways to shoot back.
The same sort of thinking in a different context underlies the brutal stupidity of American drone attacks in the Middle East. Some wag in the media pointed out a while back that the US went to war against an enemy 5,000 strong, we’ve killed 10,000 of them, and now there are only 20,000 left. That’s a good summary of the situation; the US drone campaign has been a total failure by every objective measure, having worked out consistently to the benefit of the Muslim extremist groups against which it’s aimed, and yet nobody in official Washington seems capable of noticing this fact.
It’s hard to miss the conclusion, in fact, that the Obama administration thinks that in pursuing its drone-strike program, it’s playing some kind of video game, which the United States can win if it can rack up enough points. Notice the way that every report that a drone has taken out some al-Qaeda leader gets hailed in the media: hey, we nailed a commander, doesn’t that boost our score by five hundred? In the real world, meanwhile, the indiscriminate slaughter of civilians by US drone strikes has become a core factor convincing Muslims around the world that the United States is just as evil as the jihadis claim, and thus sending young men by the thousands to join the jihadi ranks. Has anyone in the Obama administration caught on to this straightforward arithmetic of failure? Surely you jest.
For that matter, I wonder how many of my readers recall the much-ballyhooed “surge” in Afghanistan several years back.  The “surge” was discussed at great length in the US media before it was enacted on Afghan soil; talking heads of every persuasion babbled learnedly about how many troops would be sent, how long they’d stay, and so on. It apparently never occurred to anybody in the Pentagon or the White House that the Taliban could visit websites and read newspapers, and get a pretty good idea of what the US forces in Afghanistan were about to do. That’s exactly what happened, too; the Taliban simply hunkered down for the duration, and popped back up the moment the extra troops went home.
Both these examples of US military failure are driven by the same problem discussed earlier in the context of diplomacy: an inability to recognize that the other side will reliably respond to US actions in ways that further its own agenda, rather than playing along with the US. More broadly, it’s the same failure of thought that leads so many people to assume that the biosphere is somehow obligated to give us all the resources we want and take all the abuse we choose to dump on it, without ever responding in ways that might inconvenience us.
We can sum up all these forms of acquired stupidity in a single sentence: most people these days seem to have lost the ability to grasp that the other side can learn.
The entire concept of learning has been so poisoned by certain bad habits of contemporary thought that it’s probably necessary to pause here. Learning, in particular, isn’t the same thing as rote imitation. If you memorize a set of phrases in a foreign language, for example, that doesn’t mean you’ve learned that language. To learn the language means to grasp the underlying structure, so that you can come up with your own phrases and say whatever you want, not just what you’ve been taught to say.
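A minimal Python sketch makes the difference concrete; the vocabulary and the toy plural rule are invented for illustration. The phrasebook handles only what it has memorized, while the learner, having grasped the structure, handles cases it has never seen:

    phrasebook = {"one cat": "two cats", "one dog": "two dogs"}   # rote memorization

    def rote(phrase):
        return phrasebook.get(phrase, "???")      # draws a blank on anything new

    def learned(phrase):
        noun = phrase.split()[1]                  # grasp the structure: "one X" -> "two Xs"
        return "two " + noun + "s"

    for phrase in ("one dog", "one ferret"):
        print(phrase, "->", rote(phrase), "versus", learned(phrase))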
In the same way, if you memorize a set of disconnected factoids about history, you haven’t learned history. This is something of a loaded topic right now in the US, because recent “reforms” in the American  public school system have replaced learning with rote memorization of disconnected factoids that are then regurgitated for multiple choice tests. This way of handling education penalizes those children who figure out how to learn, since they might well come up with answers that differ from the ones the test expects. That’s one of many ways that US education these days actively discourages learning—but that’s a subject for another post.
To learn is to grasp the underlying structure of a given subject of knowledge, so that the learner can come up with original responses to it. That’s what Russia and China did; they grasped the underlying structure of US diplomacy, figured out that they had nothing to gain by cooperating with that structure, and came up with a creative response, which was to ally against the United States. That’s what Greece is doing, too.  Bit by bit, the Greeks seem to be figuring out the underlying structure of troika policy, which amounts to the systematic looting of southern Europe for the benefit of Germany and a few of its allies, and are trying to come up with a response that doesn’t simply amount to unilateral submission.
That’s also what the jihadis and the Taliban are doing in the face of US military activity. If life hands you lemons, as the saying goes, make lemonade; if the US hands you drone strikes that routinely slaughter noncombatants, you can make very successful propaganda out of it—and if the US hands you a surge, you roll your eyes, hole up in your mountain fastnesses, and wait for the Americans to get bored or distracted, knowing that this won’t take long. That’s how learning works, but that’s something that US planners seem congenitally unable to take into account.
The same analysis, interestingly enough, makes just as much sense when applied to nonhuman nature. As Ervin Laszlo pointed out a long time ago in Introduction to Systems Philosophy, any sufficiently complex system behaves in ways that approximate intelligence.  Consider the way that bacteria respond to antibiotics. Individually, bacteria are as dumb as politicians, but their behavior on the species level shows an eerie similarity to learning; faced with antibiotics, a species of bacteria “tries out” different biochemical approaches until it finds one that sidesteps the antibiotic. In the same way, insects and weeds “try out” different responses to pesticides and herbicides until they find whatever allows them to munch on crops or flourish in the fields no matter how much poison the farmer sprays on them.
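Those who want to watch that species-level “learning” happen can run the following minimal Python sketch; every number in it is invented, and the biochemistry is reduced to a single “resistance” figure, but the logic—random variation plus selection, with no individual intelligence anywhere—is the same:

    import random

    random.seed(1)
    population = [0.0] * 500          # each cell's resistance, from 0 (none) to 1 (full)

    for generation in range(40):
        # The antibiotic kills; a cell's odds of surviving rise with its resistance.
        survivors = [r for r in population if random.random() < 0.1 + 0.9 * r] or [0.0]
        # Survivors repopulate, each offspring carrying a small random mutation.
        population = [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.05)))
                      for _ in range(500)]

    print(f"mean resistance after 40 generations: {sum(population) / len(population):.2f}")

No bacterium in the simulation ever figures anything out, yet the population as a whole ends up somewhere none of its founders could have reached.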
We can even apply the same logic to the environmental crisis as a whole. Complex systems tend to seek equilibrium, and will respond to anything that pushes them away from equilibrium by pushing back the other way. Any field biologist can show you plenty of examples: if conditions allow more rabbits to be born in a season, for instance, the population of hawks and foxes rises accordingly, reducing the rabbit surplus to a level the ecosystem can support. As humanity has put increasing pressure on the biosphere, the biosphere has begun to push back with increasing force, in an increasing number of ways; is it too much to think of this as a kind of learning, in which the biosphere “tries out” different ways to balance out the abusive behavior of humanity, until it finds one or more that work?
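Predator-prey feedback of exactly that sort is one of the oldest formal models in ecology, the Lotka-Volterra equations, and a minimal Python sketch—made-up coefficients, crude Euler integration—shows the push-back in action: let the rabbits surge and the foxes rise after them, dragging the rabbits back down:

    rabbits, foxes = 40.0, 9.0
    a, b, c, d = 0.1, 0.02, 0.3, 0.01    # birth, predation, predator death, conversion rates
    dt = 0.1

    for step in range(2001):
        if step % 500 == 0:
            print(f"t={step * dt:5.0f}  rabbits={rabbits:5.1f}  foxes={foxes:4.1f}")
        dr = (a * rabbits - b * rabbits * foxes) * dt
        df = (d * rabbits * foxes - c * foxes) * dt
        rabbits, foxes = max(rabbits + dr, 0.0), max(foxes + df, 0.0)

Neither population “decides” anything, yet the system as a whole keeps correcting its own excursions.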
Now of course it’s long been a commonplace of modern thought that natural systems can’t possibly learn. The notion that nature is static, timeless, and unresponsive, a passive stage on which human beings alone play active roles, is welded into modern thought, unshaken even by the realities of biological evolution or the rising tide of evidence that natural systems are in fact quite able to adapt their way around human meddling. There’s a long and complex history to the notion of passive nature, but that’s a subject for another day; what interests me just now is that since 1990 or so, the governing classes of the United States, and some other Western nations as well, have applied the same frankly delusional logic to everything in the world other than themselves.
“We’re an empire now, and when we act, we create our own reality,” a senior Bush administration adviser, widely identified as Karl Rove, is credited with telling reporter Ron Suskind. “We’re history’s actors, and you, all of you, will be left to just study what we do.” That seems to be the thinking that governs official Washington these days, on both sides of the supposed partisan divide. Obama says we’re in a recovery, and if the economy fails to act accordingly, why, rooms full of industrious flacks churn out elaborately fudged statistics to erase that unwelcome reality. That history’s self-proclaimed actors might turn out to be just one more mass of flotsam awash on history’s vast tides has never entered their darkest dreams.
Let’s step back from specifics again, though. What’s the source of this bizarre paralogic—the delusion that leads politicians to think that they create reality, and that everyone and everything else can only fill the roles they’ve been assigned by history’s actors?  I think I know. I think it comes from a simple but remarkably powerful fact, which is that the people in question, along with most people in the privileged classes of the industrial world, spend most of their time, from childhood on, dealing with machines.
We can define a machine as a subset of the universe that’s been deprived of the capacity to learn. The whole point of building a machine is that it does what you want, when you want it, and nothing else. Flip the switch on, and it turns on and goes through whatever rigidly defined set of behaviors it’s been designed to do; flip the switch off, and it stops. It may be fitted with controls, so you can manipulate its behavior in various tightly limited ways; nowadays, especially when computer technology is involved, the set of behaviors assigned to it may be complex enough that an outside observer may be fooled into thinking that there’s learning going on. There’s no inner life behind the facade, though.  It can’t learn, and to the extent that it pretends to learn, what happens is the product of the sort of rote memorization described above as the antithesis of learning.
A machine that learned would be capable of making its own decisions and coming up with a creative response to your actions—and that’s the opposite of what machines are meant to do, because that response might well involve frustrating your intentions so the machine can get what it wants instead. That’s why the trope of machines going to war against human beings has so large a presence in popular culture: it’s exactly because we expect machines not to act like people, not to pursue their own needs and interests, that the thought of machines acting the way we do gets so reliable a frisson of horror.
The habit of thought that treats the rest of the cosmos as a collection of machines, existing only to fulfill whatever purpose they might be assigned by their operators, is another matter entirely. Its origins can be traced back to the dawning of the scientific revolution in the seventeenth century, when a handful of thinkers first began to suggest that the universe might not be a vast organism—as everybody in the western world had presupposed for millennia before then—but might instead be a vast machine. It’s indicative that one immediate and popular response to this idea was to insist that other living things were simply “meat machines” who didn’t actually suffer pain under the vivisector’s knife, but had been designed by God to imitate sounds of pain in order to inspire feelings of pity in human beings.
The delusion of control—the conviction, apparently immune to correction by mere facts, that the world is a machine incapable of doing anything but the things we want it to do—pervades contemporary life in the world’s industrial societies. People in those societies spend so much more time dealing with machines than they do interacting with other people and other living things without a machine interface getting in the way that it’s no wonder this delusion is so widespread. As long as it retains its grip, though, we can expect the industrial world, and especially its privileged classes, to stumble onward from one preventable disaster to another. That’s the inner secret of the delusion of control, after all: those who insist on seeing the world in mechanical terms end up behaving mechanically themselves. Those who deny all other things the ability to learn lose the ability to learn from their own mistakes, and lurch robotically onward along a trajectory that leads straight to the scrapheap of the future.

An Affirming Flame

Wed, 2015-06-17 16:57
According to an assortment of recent news stories, this Thursday, June 18, is the make-or-break date by which a compromise has to be reached between Greece and the EU if a Greek default, with the ensuing risk of a potential Greek exit from the Eurozone, is to be avoided. If that’s more than just media hype, there’s a tremendous historical irony in the fact.  June 18 is after all the 200th anniversary of the Battle of Waterloo, where a previous attempt at European political and economic integration came to grief.
Now of course there are plenty of differences between the two events. In 1815 the preferred instrument of integration was raw military force; in 2015, for a variety of reasons, an assortment of less overt forms of political and economic pressure has taken the place of Napoleon’s Grande Armée. The events of 1815 were also much further along the curve of defeat than those of 2015. Waterloo was the end of the road for France’s dream of pan-European empire, while the current struggles over the Greek debt are taking place at a noticeably earlier milepost along the same road. The faceless EU bureaucrats who are filling Napoleon’s role this time around thus won’t be on their way to St. Helena for some time yet.
“What discords will drive Europe into that artificial unity—only dry or drying sticks can be tied into a bundle—which is the decadence of every civilization?” William Butler Yeats wrote that in 1936. It was a poignant question but also a highly relevant one, since the discords in question were moving rapidly toward explosion as he penned the last pages of A Vision, where those words appear.  Like most of those who see history in cyclical terms, Yeats recognized that the patterns that recur from age to age  are trends and motifs rather than exact narratives.  The part played by a conqueror in one era can end up in the hands of a heroic failure in the next, for circumstances can define a historical role but not the irreducibly human strengths and foibles of the person who happens to fill it.
Thus it’s not too hard to look at the rising spiral of stresses in the European Union just now and foresee the eventual descent of the continent into a mix of domestic insurgency and authoritarian nationalism, with the oncoming tide of mass migration from Africa and the Middle East adding further pressure to an already explosive mix. Exactly how that will play out over the next century, though, is a very tough question to answer. A century from now, due to raw demography, many countries in Europe will be majority-Muslim nations that look to Mecca for the roots of their faith and culture—but which ones, and how brutal or otherwise will the transition be? That’s impossible to know in advance.
There are plenty of similar examples just now; for the student of historical cycles, 2015 practically defines the phrase “target-rich environment.” Still, I want to focus on something a little different here. Partly, this is because the example I have in mind offers a good opportunity to point out the way that what philosophers call the contingent nature of events—in less high-flown language, the sheer cussedness of things—keeps history’s dice constantly rolling. Partly, though, it’s because this particular example is likely to have a substantial impact on the future of everyone reading this blog.
Last year saw a great deal of talk in the media about possible parallels between the current international situation and that of the world precisely a century ago, in the weeks leading up to the outbreak of the First World War.  Mind you, since I contributed to that discussion, I’m hardly in a position to reject the parallels out of hand. Still, the more I’ve observed the current situation, the more I’ve come to think that a different date makes a considerably better match to present conditions. To be precise, instead of a replay of 1914, I think we’re about to see an equivalent of 1939—but not quite the 1939 we know.
Two entirely contingent factors, added to all the other pressures driving toward that conflict, made the Second World War what it was. The first, of course, was the personality of Adolf Hitler. It was probably a safe bet that somebody in Weimar Germany would figure out how to build a bridge between the politically active but fragmented nationalist Right and the massive but politically inert German middle classes, restore Germany to great-power status, and gear up for a second attempt to elbow aside the British Empire. That the man who happened to do these things was an eccentric anti-Semitic ideologue who combined shrewd political instincts, utter military incompetence, and a frankly psychotic faith in his own supposed infallibility, though, was in no way required by the logic of history.
Had Corporal Hitler taken an extra lungful of gas on the Western Front, someone else would likely have filled the same role in the politics of the time. We don’t even have to consider what might have happened if the nation that birthed Frederick the Great and Otto von Bismarck had come up with a third statesman of the same caliber. If the German head of state in 1939 had been merely a capable pragmatist with adequate government and military experience, one who guided Germany’s actions by a logic less topsy-turvy than Hitler’s, the trajectory of those years would have been far different.
The second contingent factor that defined the outcome of the great wars of the twentieth century is broader in focus than the quirks of a single personality, but it was just as subject to those vagaries that make hash out of attempts at precise historical prediction. As discussed in an earlier post on this blog, it was by no means certain that America would be Britain’s ally when war finally came. From the Revolution onward, Britain was in many Americans’ eyes the national enemy; as late as the 1930s, when the US Army held its summer exercises, the standard scenario involved a British invasion of US territory.
All along, there was an Anglophile party in American cultural life, and its ascendancy in the years after 1900 played a major role in bringing the United States into two world wars on Britain’s side. Still, there was a considerably more important factor in play, which was a systematic British policy of conciliating the United States. From the American Civil War on, Britain allowed the United States liberties it would never have given any other power. When the United States expanded its influence in Latin America and the Caribbean, Britain allowed itself to be upstaged there; when the United States shook off its isolationism and built a massive blue-water navy, the British even allowed US naval vessels to refuel at British coaling stations during the global voyage of the “Great White Fleet” in 1907-1909.
This was partly a reflection of the common cultural heritage that made many British politicians think of the United States as a sort of boisterous younger brother of theirs, and partly a cold-eyed recognition, in the wake of the Civil War, that war between Britain and the United States would almost certainly lead to a US invasion of Canada that Britain was very poorly positioned to counter. Still, there was another issue of major importance. To an extent few people realized at the time, the architecture of European peace after Waterloo depended on political arrangements that kept the German-speaking lands of the European core splintered into a diffuse cloud of statelets too small to threaten any of the major powers.
The great geopolitical fact of the 1860s was the collapse of that cloud into the nation of Germany, under the leadership of the dour northeastern kingdom of Prussia. In 1866, the Prussians pounded the stuffing out of Austria and brought most of the north German states into a federation; in 1870-1871, the Prussians and their allies did the same thing to France, which was a considerably tougher proposition—this was the same French nation, remember, which brought Europe to its knees in Napoleon’s day—and the federation became the German Empire. The Austrian Empire was widely considered the third great power in Europe until 1866; until 1870, France was the second; everybody knew that sooner or later the Germans were going to take on great power number one.
British policy toward the United States from 1871 onward was thus tempered by the harsh awareness that Britain could not afford to alienate a rising power that might become an ally, or at least a friendly neutral, when the inevitable war with Germany arrived. Above all, an alliance between Germany and the United States would have been Britain’s death warrant, and everyone in the Foreign Office and the Admiralty in London had to know that. The thought of German submarines operating out of US ports, German and American fleets combining to take on the Royal Navy, and American armies surging into Canada and depriving Britain of a critical source of raw materials and recruits while the British Army was pinned down elsewhere, must have given British planners many sleepless nights.
After 1918, that recognition must have been even more sharply pointed, because US loans and munitions shipments played a massive role in saving the western Allies from collapse in the face of the final German offensive in the spring of 1918, and turned the tide in a war that, until then, had largely gone Germany’s way. During the two decades leading up to 1939, as Germany recovered and rearmed, British governments did everything they could to keep the United States on their side, with results that paid off handsomely when the Second World War finally came.
Let’s imagine, though, an alternative timeline in which the Foreign Office and the Admiralty from 1918 on are staffed by idiots. Let’s further imagine that Parliament is packed with clueless ideologues whose sole conception of foreign policy is that everyone, everywhere, ought to be bludgeoned into compliance with Britain’s edicts, no matter how moronic those happen to be. Let’s say, in particular, that one British government after another conducts its policy toward the United States on the basis of smug self-centered arrogance, and any move the US makes to assert itself on the international stage can count on an angry response from London. The United States launches an aircraft carrier? A threat to world peace, the London Times roars. The United States exerts diplomatic pressure on Mexico, and builds military bases in Panama? British diplomats head for the Caribbean and Latin America to stir up as much opposition to America’s agenda as possible.
Let’s say, furthermore, that in this alternative timeline, Adolf Hitler did indeed take one too many deep breaths on the Western Front, and lies in a military cemetery, one more forgotten casualty of the Great War. In his absence, the German Workers Party remains a fringe group, and the alliance between the nationalist Right and the middle classes is built instead by the Deutsche Volksfreiheitspartei (DVFP), which seizes power in 1934. Ulrich von Hassenstein, the new Chancellor, is a competent insider who knows how to listen to his diplomats and General Staff, and German foreign and military policy under his leadership pursues the goal of restoring Germany to world-power status using considerably less erratic means than those used by von Hassenstein’s equivalent in our timeline.
Come 1939, finally, as rising tensions between Germany and the Anglo-French alliance over Poland’s status move toward war, Chancellor von Hassenstein welcomes US President Charles Lindbergh to Berlin, where the two heads of state sign a galaxy of treaties and trade agreements and talk earnestly to the media about the need to establish a multipolar world order to replace Britain’s global hegemony. A second world war is in the offing, but the shape of that war will be very different from the one that broke out in our version of 1939, and while the United States almost certainly will be among the victors, Britain almost certainly will not.
Does all this sound absurd? Let’s change the names around and see.
Just as the great rivalry of the first half of the twentieth century was fought out between Britain and Germany, the great rivalry of the century’s second half was between the United States and Russia. If nuclear weapons hadn’t been invented, it’s probably a safe bet that at some point the rivalry would have ended in another global war.  As it was, the threat of mutual assured destruction meant that the struggle for global power had to be fought out less directly, in a flurry of proxy wars, sponsored insurgencies, economic warfare, subversion, sabotage, and bare-knuckle diplomacy. In that war, the United States came out on top, and Soviet Russia went the way of Imperial Germany, plunging into the same sort of political and economic chaos that beset the Weimar Republic in its day.
The supreme strategic imperative of the United States in that war was finding ways to drive as deep a wedge as possible between Russia and China, in order to keep them from taking concerted action against the US. That wasn’t all that difficult a task, since the two nations have very little in common and many conflicting interests. Nixon’s 1972 trip to China was arguably the defining moment in the Cold War, the point at which China’s separation from the Soviet bloc became total and Chinese integration with the American economic order began. From that point on, for Russia, it was basically all downhill.
In the aftermath of Russia’s defeat, the same strategic imperative remained, but the conditions of the post-Cold War world made it almost absurdly easy to carry out. All that would have been needed were American policies that gave Russia and China meaningful, concrete reasons to think that their national interests and aspirations would be easier to achieve in cooperation with a US-led global order than in opposition to it. Granting Russia and China the same position of regional influence that the US accords to Germany and Japan as a matter of course probably would have been enough. A little forbearance, a little foreign aid, a little adroit diplomacy, and the United States would have been in the catbird’s seat, with Russia and China glaring suspiciously at each other across their long and problematic mutual border, and bidding against each other for US support in their various disagreements.
But that’s not what happened, of course.
What happened instead was that the US embraced a foreign policy so astonishingly stupid that I’m honestly not sure the English language has adequate resources to describe it. Since 1990, one US administration after another, with the enthusiastic bipartisan support of Congress and the capable assistance of bureaucrats across official Washington from the Pentagon and the State Department on down, has pursued policies guaranteed to force Russia and China to set aside their serious mutual differences and make common cause against us. Every time the US faced a choice between competing policies, it’s consistently chosen the option most likely to convince Russia, China, or both nations at once that they had nothing to gain from further cooperation with American agendas.
What’s more, the US has more recently managed the really quite impressive feat of bringing Iran into rapprochement with the emerging Russo-Chinese alliance. It’s hard to think of another nation on Earth that has fewer grounds for constructive engagement with Russia or China than the Islamic Republic of Iran, but several decades of cluelessly hamfisted American blundering and bullying finally did the job. My American readers can now take pride in the state-of-the-art Russian air defense systems around Tehran, the bustling highways carrying Russian and Iranian products to each other’s markets, and the Russian and Chinese intelligence officers who are doubtless settling into comfortable digs on the north shore of the Persian Gulf, where they can snoop on the daisy chain of US bases along the south shore. After all, a quarter century of US foreign policy made those things happen.
It’s one thing to engage in this kind of serene disregard for reality when you’ve got the political unity, the economic abundance, and the military superiority to back it up. The United States today, like the British Empire in 1939, no longer has those. We’ve got an impressive fleet of aircraft carriers, sure, but Britain had an equally impressive fleet of battleships in 1939, and you’ll notice how much good those did them. Like Britain in 1939, the United States today is perfectly prepared for a kind of war that nobody fights any more, while rival nations less constrained by the psychology of previous investment and less riddled with institutionalized graft are fielding novel weapons systems designed to do end runs around our strengths and focus with surgical precision on our weaknesses.
Meanwhile, inside the baroque carapace of carriers, drones, and all the other high-tech claptrap of an obsolete way of war, the United States is a society in freefall, far worse off than Britain was during its comparatively mild 1930s downturn. Its leaders have forfeited the respect of a growing majority of its citizens; its economy has morphed into a Potemkin-village capitalism in which the manipulation of unpayable IOUs in absurd and rising amounts has all but replaced the actual production of goods and services; its infrastructure is so far fallen into decay that many US counties no longer pave their roads; most Americans these days think of their country’s political institutions as the enemy and its loudly proclaimed ideals as some kind of sick joke—and in both cases, not without reason. The national unity that made victory in two world wars and the Cold War possible went by the boards a long time ago, drowned in a tub by Tea Party conservatives who thought they were getting rid of government and limousine liberals who were going through the motions of sticking it to the Man.
I could go on tracing parallels for some time—in particular, despite a common rhetorical trope of US Russophobes, Vladimir Putin is not an Adolf Hitler but a fair equivalent of the Ulrich von Hassenstein of my alternate-history narrative—but here again, my readers can do the math themselves. The point I want to make is that all the signs suggest we are entering an era of international conflict in which the United States has thrown away nearly all its potential strengths, and handed its enemies advantages they would never have had if our leaders had the brains the gods gave geese. Since nuclear weapons still foreclose the option of major wars between the great powers, the conflict in question will doubtless be fought using the same indirect methods as the Cold War; in fact, it’s already being fought by those means, as the victims of proxy wars in Ukraine, Syria, and Yemen already know. The question in my mind is simply how soon those same methods get applied on American soil.
We thus stand at the beginning of a long, brutal epoch, as unforgiving as the one that dawned in 1939. Those who pin Utopian hopes on the end of American hegemony will get to add disappointment to that already bitter mix, since hegemony remains the same no matter who happens to be perched temporarily in the saddle. (I also wonder how many of the people who think they’ll rejoice at the end of American hegemony have thought through the impact on their hopes of collective betterment, not to mention their own lifestyles, once the 5% of the world’s population who live in the US can no longer claim a quarter or so of the world’s resources and wealth.) If there’s any hope possible at such a time, to my mind, it’s the one W.H. Auden proposed as the conclusion of his bleak and brilliant poem “September 1, 1939”:
Defenceless under the night,
Our world in stupor lies;
Yet, dotted everywhere,
Ironic points of light
Flash out wherever the just
Exchange their messages:
May I, composed like them
Of Eros and of dust,
Beleaguered by the same
Negation and despair,
Show an affirming flame.

The Era of Dissolution

Wed, 2015-06-10 20:06
The last of the five phases of the collapse process we’ve been discussing here in recent posts is the era of dissolution. (For those who haven’t been keeping track, the first four are the eras of pretense, impact, response, and breakdown.) I suppose you could call the era of dissolution the Rodney Dangerfield of collapse, though it’s not so much that it gets no respect; it generally doesn’t even get discussed.
To some extent, of course, that’s because a great many of the people who talk about collapse don’t actually believe that it’s going to happen. That lack of belief stands out most clearly in the rhetorical roles assigned to collapse in so much of modern thinking. People who actually believe that a disaster is imminent generally put a lot of time and effort into getting out of its way in one way or another; it’s those who treat it as a scarecrow to elicit predictable emotional reactions from other people, or from themselves, who never quite manage to walk their talk.
Interestingly, the factor that determines the target of scarecrow-tactics of this sort seems to be political in nature. Groups that think they have a chance of manipulating the public into following their notion of good behavior tend to use the scarecrow of collapse to affect other people; for them, collapse is the horrible fate that’s sure to gobble us up if we don’t do whatever it is they want us to do. Those who’ve given up any hope of getting a response from the public, by contrast, turn the scarecrow around and use it on themselves; for them, collapse is a combination of Dante’s Inferno and the Big Rock Candy Mountain, the fantasy setting where the wicked get the walloping they deserve while they themselves get whatever goodies they’ve been unsuccessful at getting  in the here and now.
Then, of course, you get the people for whom collapse is less scarecrow than teddy bear, the thing that allows them to drift off comfortably to sleep in the face of an unwelcome future. It’s been my repeated observation that many of those who insist that humanity will become totally extinct in the very near future fall into this category. Most people, faced with a serious threat to their own lives, will take drastic action to save themselves; faced with a threat to the survival of their family or community, a good many people will take actions so drastic as to put their own lives at risk in an effort to save others they care about. The fact that so many people who insist that the human race is doomed go on to claim that the proper reaction is to sit around feeling very, very sad about it all does not inspire confidence in the seriousness of that prediction—especially when feeling very, very sad seems mostly to function as an excuse to keep enjoying privileged lifestyles for just a little bit longer.
So we have the people for whom collapse is a means of claiming unearned power, the people for whom it’s a blank screen on which to project an assortment of self-regarding fantasies, and the people for whom it’s an excuse to do nothing in the face of a challenging future. All three of those are popular gimmicks with an extremely long track record, and they’ll doubtless all see plenty of use millennia after industrial civilization has taken its place in the list of failed civilizations. The only problem with them is that they don’t happen to provide any useful guidance for those of us who have noticed that collapse isn’t merely a rhetorical gimmick meant to get emotional reactions—that it’s something that actually happens, to actual civilizations, and that it’s already happening to ours.
From the three perspectives already discussed, after all, realistic questions about what will come after the rubble stops bouncing are entirely irrelevant. If you’re trying to use collapse as a boogeyman to scare other people into doing what you tell them, your best option is to combine a vague sense of dread with an assortment of cherrypicked factoids intended to make a worst-case scenario look not nearly bad enough; if you’re trying to use collapse as a source of revenge fantasies where you get what you want and the people you don’t like get what’s coming to them, daydreams of various levels and modes of dampness are far more useful to you than sober assessments; while if you’re trying to use collapse as an excuse to maintain an unsustainable and planet-damaging SUV lifestyle, your best bet is to insist that everyone and everything dies all at once, so nothing will ever matter again to anybody.
On the other hand, there are also those who recognize that collapse happens, that we’re heading toward one, and that it might be useful to talk about what the world might look like on the far side of that long and difficult process. I’ve tried to sketch out a portrait of the postcollapse world in last year’s series of posts here on Dark Age America, and I haven’t yet seen any reason to abandon that portrait of a harsh but livable future, in which a sharply reduced global population returns to agrarian or nomadic lives in those areas of the planet not poisoned by nuclear or chemical wastes or rendered uninhabitable by prolonged drought or the other impacts of climate change, and in which much or most of today’s scientific and technological knowledge is irretrievably lost.
The five phases of collapse discussed in this latest sequence of posts are simply a description of how we get there—or, more precisely, of one of the steps by which we get there. That latter point’s a detail that a good many of my readers, and an even larger fraction of my critics, seem to have misplaced. The five-stage model is a map of how human societies shake off an unsustainable version of business as usual and replace it with something better suited to the realities of the time. It applies to a very wide range of social transformations, reaching in scale from the local to the global and in intensity from the relatively modest to the cataclysmic. To insist that it’s irrelevant because the current example of the species covers more geographical area than any previous example, or has further to fall than most, is like insisting that a law of physics that governs the behavior of marbles and billiard balls must somehow stop working just because you’re trying to do the same thing with bowling balls.
A difference of scale is not a difference of kind. Differences of scale have their own implications, which we’ll discuss a little later on in this post, but the complex curve of decline is recognizably the same in small things as in big ones, in the most as in the least catastrophic examples. That’s why I’ve used a relatively modest example—the collapse of the economic system of 1920s America and the resulting Great Depression—and an example from the midrange—the collapse of the French monarchy and the descent of late 18th-century Europe into the maelstrom of the Revolutionary and Napoleonic Wars—to provide convenient outlines for something toward the upper end of the scale—the decline and fall of modern industrial civilization and the coming of a deindustrial dark age. Let’s return to those examples, and see how the thread of collapse winds to its end.
As we saw in last week’s thrilling episode, the breakdown stage of the Great Depression came when the newly inaugurated Roosevelt administration completely redefined the US currency system. Up to that time, US dollar bills were in effect receipts for gold held in banks; after that time, those receipts could no longer be exchanged for gold, and the gold held by the US government became little more than a public relations gimmick. That action succeeded in stopping the ghastly credit crunch that shuttered every bank and most businesses in the US in the spring of 1933.
Roosevelt’s policies didn’t get rid of the broader economic dysfunction the 1929 crash had kickstarted. That was inherent in the industrial system itself, and remains a massive issue today, though its effects were papered over for a while by a series of temporary military, political, and economic factors that briefly enabled the United States to prosper at the expense of the rest of the world. The basic issue is simply that replacing human labor with machines powered by fossil fuel results in unemployment, and no law of nature or economics requires that new jobs can be found or created to replace the ones that are eliminated by mechanization. The history of the industrial age has been powerfully shaped by a whole series of attempts to ignore, evade, or paper over that relentless arithmetic.
Until 1940, the Roosevelt administration had no more luck with that project than the governments of most other nations.  It wasn’t until the Second World War made the lesson inescapable that anyone realized that the only way to provide full employment in an industrial society was to produce far more goods than consumers could consume, and let the military and a variety of other gimmicks take up the slack. That was a temporary gimmick, due to stark limitations in the resource base needed to support the mass production of useless goods, but in 1940, and even more so in 1950, few people recognized that and fewer cared. It’s our bad luck to be living at the time when that particular bill is coming due.
The first lesson to learn from the history of collapse, then, is that the breakdown phase doesn’t necessarily solve all the problems that brought it about. It doesn’t even necessarily take away every dysfunctional feature of the status quo. What it does with fair reliability is eliminate enough of the existing order of things that the problems being caused by that order decline to a manageable level. The more deeply rooted the problematic features of the status quo are in the structure of society and daily life, the harder it will be to change them, and the more likely other features are to be changed: in the example just given, it was much easier to break the effective link between the US currency and gold, and expand the money supply enough to get the economy out of cardiac arrest, than it was to break a link between mechanization and unemployment that’s hardwired into the basic logic of industrialism.
What this implies in turn is that it’s entirely possible for one collapse to cycle through the five stages we’ve explored, and then to have the era of dissolution morph straight into a new era of pretense in which the fact that all society’s problems haven’t been solved is one of the central things nobody with any connection to the centers of power wants to discuss. If the Second World War, the massive expansion of the petroleum economy, the invention of suburbia, the Cold War, and a flurry of other events hadn’t ushered in the immensely wasteful but temporarily prosperous boomtime of late 20th century America, there might well have been another vast speculative bubble in the mid- to late 1940s, resulting in another crash, another depression, and so on. This is after all what we’ve seen over the last twenty years: the tech stock bubble and bust, the housing bubble and bust, the fracking bubble and bust, each one hammering the economy further down the slope of decline.
With that in mind, let’s turn to our second example, the French Revolution. This is particularly fascinating since the aftermath of that era of breakdown saw a nominal return to the conditions of the era of pretense. After Napoleon’s final defeat in 1815, the Allied powers found an heir of the Bourbon line and plopped him onto the French throne as Louis XVIII to well-coached shouts of “Vive le Roi!” On paper, nothing had changed.
In reality, everything had changed, and the monarchy of post-Napoleonic France had roots about as deep and sturdy as the democracy of post-Saddam Iraq. Louis XVIII was clever enough to recognize this, and so managed to end his reign in the traditional fashion, feet first from natural causes. His heir Charles X was nothing like so clever, and got chucked off the throne after six years on it by another revolution in 1830. King Louis-Philippe went the same way in 1848—the French people were getting very good at revolution by that point. There followed a Republic, an Empire headed by Napoleon’s nephew, and finally another Republic which lasted out the century. All in all, French politics in the 19th century was the sort of thing you’d expect to see in an unusually excitable banana republic.
The lesson to learn from this example is that it’s very easy, and very common, for a society in the dissolution phase of collapse to insist that nothing has changed and pretend to turn back the clock. Depending on just how traumatic the collapse has been, everybody involved may play along with the charade, the way everyone in Rome nodded and smiled when Augustus Caesar pretended to uphold the legal forms of the defunct Roman Republic, and their descendants did exactly the same thing centuries later when Theodoric the Ostrogoth pretended to uphold the legal forms of the defunct Roman Empire. Those who recognize the charade as charade and play along without losing track of the realities, like Louis XVIII, can quite often navigate such times successfully; those who mistake charade for reality, like Charles X, are cruising for a bruising and normally get it in short order.
Combine these two lessons and you’ll get what I suspect will turn out to be a tolerably good sketch of the American future. Whatever comes out of the impact, response, and breakdown phases of the crisis looming ahead of the United States just now—whether it’s a fragmentary mess of successor states, a redefined nation beginning to recover from a period of personal rule by some successful demagogue or, just possibly, a battered and weary republic facing a long trudge back to its foundational principles, it seems very likely that everyone involved will do their level best to insist that nothing has really changed. If the current constitution has been abolished, it may be officially reinstated with much fanfare; there may be new elections, and some shuffling semblance of the two-party system may well come lurching out of the crypts for one or two more turns on the stage.
None of that will matter. The nation will have changed decisively in ways we can only begin to envision at this point, and the forms of twentieth-century American politics will cover a reality that has undergone drastic transformations, just as the forms of nineteenth-century French monarchy did. In due time, by some combination of legal and extralegal means, the forms will be changed to reflect the new realities, and the territory we now call the United States of America—which will almost certainly have a different name, and may well be divided into several different and competing nations by then—will be as prepared to face the next round of turmoil as it’s going to get.
Yes, there will be a next round of turmoil. That’s the thing that most people miss when thinking about the decline and fall of a civilization: it’s not a single event, or even a single linear process. It’s a whole series of cascading events that vary drastically in their importance, geographical scope, and body count. That’s true of every process of historic change.
It was true even of so simple an event as the 1929 crash and Great Depression: 1929 saw the crash, 1930 the suckers’ rally, 1931 the first wave of European bank failures, 1932 the unraveling of the US banking system, and so on until bombs falling on Pearl Harbor ushered in a different era. It was even more true of the French Revolution: between 1789 and 1815 France basically didn’t have a single year without dramatic events and drastic changes of one kind or another, and the echoes of the Revolution kept things stirred up for decades to come. Check out the fall of civilizations and you’ll see the same thing unfolding on a truly vast scale, with crisis after crisis along an arc centuries in length.
The process that’s going on around us is the decline and fall of industrial civilization. Everything we think of as normal and natural, modern and progressive, solid and inescapable is going to melt away into nothingness in the years, decades, and centuries ahead, to be replaced first by the very different but predictable institutions of a dark age, and then by the new and wholly unfamiliar forms of the successor societies of the far future. There’s nothing inevitable about the way we do things in today’s industrial world; our political arrangements, our economic practices, our social institutions, our cultural habits, our sciences and our technologies all unfold from industrial civilization’s distinctive and profoundly idiosyncratic worldview. So does the central flaw in the entire baroque edifice, our lethally muddleheaded inability to understand our inescapable dependence on the biosphere that supports our lives. All that is going away in the time before us—but it won’t go away suddenly, or all at once.
Here in the United States, we’re facing one of the larger downward jolts in that prolonged process, the end of American global empire and of the robust economic benefits that the machinery of empire pumps from the periphery to the imperial center. Until recently, the five per cent of the world’s population who live here got to enjoy a quarter of the world’s energy supply and raw materials and a third of its manufactured products. Those figures have already decreased noticeably, with consequences that are ringing through every corner of our society; in the years to come they’re going to decrease much further still, most likely to something like a five per cent share of the world’s wealth or even a little less. That’s going to impact every aspect of our lives in ways that very few Americans have even begun to think about.
All of that is taking place in a broader context, to be sure. Other countries will have their own trajectories through the arc of industrial civilization’s decline and fall, and some of those trajectories will be considerably less harsh in the short term than ours. In the long run, the human population of the globe is going to decline sharply; the population bubble that’s causing so many destructive effects just now will be followed in due time by a population bust, in which those four guys on horseback will doubtless play their usual roles. In the long run, furthermore, the vast majority of today’s technologies are going to go away as the resource base needed to support them gets used up, or stops being available due to other bottlenecks. Those are givens—but the long run is not the only scale that matters.
It’s not at all surprising that the foreshocks of that immense change are driving the kind of flight to fantasy criticized in the opening paragraphs of this essay. That’s business as usual when empires go down; pick up a good cultural history of the decline and fall of any empire in the last two millennia or so and you’ll find plenty of colorful prophecies of universal destruction. I’d like to encourage my readers, though, to step back from those fantasies—entertaining as they are—and try to orient themselves instead to the actual shape of the future ahead of us. That shape’s not only a good deal less gaseous than the current offerings of the Apocalypse of the Month Club (internet edition), it also offers an opportunity to do something about the future—a point we’ll be discussing further in posts to come.

The Era of Breakdown

Wed, 2015-06-03 16:49
The fourth of the stages in the sequence of collapse we’ve been discussing is the era of breakdown. (For those who haven’t been keeping track, the first three phases are the eras of pretense, impact, and response; the final phase, which we’ll be discussing next week, is the era of dissolution.) The era of breakdown is the phase that gets most of the press, and thus inevitably no other stage has attracted anything like the crop of misperceptions, misunderstandings, and flat-out hokum that this one has.
The era of breakdown is the point along the curve of collapse at which business as usual finally comes to an end. That’s where the confusion comes in. It’s one of the central articles of faith in pretty much every human society that business as usual functions as a bulwark against chaos, a defense against whatever problems the society might face. That’s exactly where the difficulty slips in, because in pretty much every human society, what counts as business as usual—the established institutions and familiar activities on which everyone relies day by day—is the most important cause of the problems the society faces, and the primary cause of collapse is thus quite simply that societies inevitably attempt to solve their problems by doing all the things that make their problems worse.
The phase of breakdown is the point at which this exercise in futility finally grinds to a halt. The three previous phases are all attempts to avoid breakdown: in the phase of pretense, by making believe that the problems don’t exist; in the phase of impact, by making believe that the problems will go away if only everyone doubles down on whatever’s causing them; and in the phase of response, by making believe that changing something other than the things that are causing the problems will fix the problems. Finally, after everything else has been tried, the institutions and activities that define business as usual either fall apart or are forcibly torn down, and then—and only then—it becomes possible for a society to do something about its problems.
It’s important not to mistake the possibility of constructive action for the inevitability of a solution. The collapse of business as usual in the breakdown phase doesn’t solve a society’s problems; it doesn’t even prevent those problems from being made worse by bad choices. It merely removes the primary obstacle to a solution, which is the wholly fictitious aura of inevitability that surrounds the core institutions and activities that are responsible for the problems. Once people in a society realize that no law of God or nature requires them to maintain a failed status quo, they can then choose to dismantle whatever fragments of business as usual haven’t yet fallen down of their own weight.
That’s a more important action than it might seem at first glance. It doesn’t just put an end to the principal cause of the society’s problems. It also frees up resources that have been locked up in the struggle to keep business as usual going at all costs, and those newly freed resources very often make it possible for a society in crisis to transform itself drastically in a remarkably short period of time. Whether those transformations are for good or ill, or as usually happens, a mixture of the two, is another matter, and one I’ll address a little further on.
Stories in the media, some recent, some recently reprinted, happen to have brought up a couple of first-rate examples of the way that resources get locked up in unproductive activities during the twilight years of a failing society. A California newspaper, for example, recently mentioned that Elon Musk’s large and much-ballyhooed fortune is almost entirely a product of government subsidies. Musk is a smart guy; he obviously realized a good long time ago that federal and state subsidies for technology were where the money was, and he’s constructed an industrial empire funded by US taxpayers to the tune of many billions of dollars. None of his publicly traded firms has ever made a profit, and as long as the subsidies keep flowing, none of them ever has to; between an overflowing feed trough of government largesse and the longstanding eagerness of fools to be parted from their money by way of the stock market, he’s pretty much set for life.
This is business as usual in today’s America. An article from 2013 pointed out, along the same lines, that the profits made by the five largest US banks were almost exactly equal to the amount of taxpayer money those same five banks got from the government. Like Elon Musk, the banks in question have figured out where the money is, and have gone after it with their usual verve; the revolving door that allows men in suits to shuttle back and forth between those same banks and the financial end of the US government doesn’t exactly hinder that process. It’s lucrative, it’s legal, and the mere fact that it’s bankrupting the real economy of goods and services in order to further enrich an already glutted minority of kleptocrats is nothing anyone in the citadels of power worries about.
A useful light on a different side of the same process comes from an editorial (in PDF) which claims that something like half of all current scientific papers are unreliable junk. Is this the utterance of an archdruid, or some other wild-eyed critic of science? No, it comes from the editor of The Lancet, one of the two or three most reputable medical journals on the planet. The managing editor of The New England Journal of Medicine, which has a comparable ranking to The Lancet, expressed much the same opinion of the shoddy experimental design, dubious analysis, and blatant conflicts of interest that pervade contemporary scientific research.
Notice that what’s happening here affects the flow of information in the same way that misplaced government subsidies affect the flow of investment. The functioning of the scientific process, like that of the market, depends on the presupposition that everyone who takes part abides by certain rules. When those rules are flouted, individual actors profit, but they do so at the expense of the whole system: the results of scientific research are distorted so that (for example) pharmaceutical firms can profit from drugs that don’t actually have the benefits claimed for them, just as the behavior of the market is distorted so that (for example) banks that would otherwise struggle for survival, and would certainly not be able to pay their CEOs gargantuan bonuses, can continue on their merry way.
The costs imposed by these actions are real, and they fall on all other participants in science and the economy respectively. Scientists these days, especially but not only in such blatantly corrupt fields as pharmaceutical research, face a lose-lose choice between basing their own investigations on invalid studies, on the one hand, or having to distrust any experimental results they don’t replicate themselves, on the other. Meanwhile the consumers of the products of scientific research—yes, that would be all of us—have to contend with the fact that we have no way of knowing whether any given claim about the result of research is the product of valid science or not. Similarly, the federal subsidies that direct investment toward politically savvy entrepreneurs like Elon Musk, and politically well-connected banks such as Goldman Sachs, and away from less parasitic and more productive options distort the entire economic system by preventing the normal workings of the market from weeding out nonviable projects and firms, and rewarding the more viable ones.
Turn to the  historical examples we’ve been following for the last three weeks, and distortions of the same kind are impossible to miss. In the US economy before and during the stock market crash of 1929 and its long and brutal aftermath, a legal and financial system dominated by a handful of very rich men saw to it that the bulk of the nation’s wealth flowed uphill, out of productive economic activities and into speculative ventures increasingly detached from the productive economy. When the markets imploded, in turn, the same people did their level best to see to it that their lifestyles weren’t affected even though everyone else’s was. The resulting collapse in consumer expenditures played a huge role in driving the cascading collapse of the US economy that, by the spring of 1933, had shuttered every consumer bank in the nation and driven joblessness and impoverishment to record highs.
That’s what Franklin Roosevelt fixed. It’s always amused me that the people who criticize FDR—and of course there’s plenty to criticize in a figure who, aside from his far greater success as a wartime head of state, can best be characterized as America’s answer to Mussolini—always talk about the very mixed record of the economic policies of his second term. They rarely bother to mention the Hundred Days, in which FDR stopped a massive credit collapse in its tracks. The Hundred Days and their aftermath are the part of FDR’s presidency that mattered most; it was in that brief period that he slapped shock paddles on an economy in cardiac arrest and got a pulse going, by violating most of the rules that had guided the economy up to that time. That casual attitude toward economic dogma is one of the two things his critics have never been able to forgive; the other is that it worked.
In the same way, France before, during, and immediately after the Revolution was for all practical purposes a medieval state that had somehow staggered its way to the brink of the nineteenth century. The various revolutionary governments that followed one another in quick succession after 1789 made some badly needed changes, but it was left to Napoléon Bonaparte to drag France by the scruff of its collective neck out of the late Middle Ages. Napoléon has plenty of critics—and of course there’s plenty to criticize in a figure who was basically what Mussolini wanted to be when he grew up—but the man’s domestic policies were by and large inspired. To name only two of his most important changes, he gave the départements, which had replaced the sprawling provinces of medieval France, a centralized and workable administration run by prefects, and replaced the tangled body of existing French law with a newly created legal system, the Code Napoléon. When he was overthrown, those stayed; in fact, a great many other countries in Europe and elsewhere proceeded to adopt the Code Napoléon in place of their existing legal systems. There were several reasons for this, but one of the most important was that the new Code simply made that much more sense.

Both men were able to accomplish what they did, in turn, because abolishing the political, economic, and cultural distortions imposed on their respective countries by a fossilized status quo freed up all the resources that had been locked up in maintaining those distortions. Slapping a range of legal barriers and taxes on the more egregious forms of speculative excess—another of the major achievements of the Roosevelt era—drove enough wealth back into the productive economy to lay the foundations of America’s postwar boom; in the same way, tipping a galaxy of feudal customs into history’s compost bin transformed France from the economic basket case it was in 1789 to the conqueror of Europe twenty years later, and the successful and innovative economic and cultural powerhouse it became during most of the nineteenth century thereafter.

That’s one of the advantages of revolutionary change. By breaking down existing institutions and the encrusted layers of economic parasitism that inevitably build up around them over time, it reliably breaks loose an abundance of resources that were not available in the prerevolutionary period. Here again, it’s crucial to remember that the availability of resources doesn’t guarantee that they’ll be used wisely; they may be thrown away on absurdities of one kind or another. Nor, even more critically, does it mean that the same abundance of resources will be available indefinitely. The surge of additional resources made available by catabolizing old and corrupt systems is a temporary jackpot, not a permanent state of affairs. That said, when you combine the collapse of fossilized institutions that stand in the way of change, and a sudden rush of previously unavailable resources of various kinds, quite a range of possibilities previously closed to a society suddenly come open.

Applying this same pattern to the crisis of modern industrial civilization, though, requires attention to certain inescapable but highly unwelcome realities. In 1789, the problem faced by France was the need to get rid of a thousand years of fossilized political, economic, and social institutions at a time when the coming of the industrial age had made them hopelessly dysfunctional. In 1929, the problem faced by the United States was the need to pry the dead hand of an equally dysfunctional economic orthodoxy off the throat of the nation so that its economy would actually function again. In both cases, the era of breakdown was catalyzed by a talented despot, and was followed, after an interval of chaos and war, by a period of relative prosperity.

We may well get the despot this time around, too, not to mention the chaos and war, but the period of prosperity is probably quite another matter. The problem we face today, in the United States and more broadly throughout the world’s industrial societies, is that all the institutions of industrial civilization presuppose limitless economic growth, but the conditions that provided the basis for continued economic growth simply aren’t there any more. The 300-year joyride of industrialism was made possible by vast and cheaply extractable reserves of highly concentrated fossil fuels and other natural resources, on the one hand, and a biosphere sufficiently undamaged that it could soak up the wastes of human industry without imposing burdens on the economy, on the other. We no longer have either of those requirements.

With every passing year, more and more of the world’s total economic output has to be diverted from other activities to keep fossil fuels and other resources flowing into the industrial world’s power plants, factories, and fuel tanks; with every passing year, in turn, more and more of the world’s total economic output has to be diverted from other activities to deal with the rising costs of climate change and other ecological disruptions. These are the two jaws of the trap sketched out more than forty years ago in the pages of The Limits to Growth, still the most accurate (and thus inevitably the most savagely denounced) map of the predicament we face. The consequences of that trap can be summed up neatly: on a finite planet, after a certain point—the point of diminishing returns, which we’ve already passed—the costs of growth rise faster than the benefits, and finally force the global economy to its knees.
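For readers who prefer to see that logic spelled out in numbers, here is a minimal toy sketch in Python. The curves and figures are invented purely for illustration—they are not taken from The Limits to Growth or from any real dataset—but they show the general shape of the trap: gross benefits of growth rise with diminishing returns, the combined costs of extraction and ecological damage rise faster than linearly, and net benefit therefore peaks and then collapses.

    # Toy model of the limits-to-growth trap; the formulas and numbers are
    # purely illustrative, not data from World3 or any real economy.
    def benefit(scale):
        return 100 * scale ** 0.5   # gross benefits of growth: diminishing returns

    def cost(scale):
        return 2 * scale ** 1.5     # extraction + ecological costs: accelerating

    for scale in [1, 4, 9, 16, 25, 36, 49, 64]:
        net = benefit(scale) - cost(scale)
        print(f"scale {scale:>2}: net benefit {net:8.1f}")
    # Net benefit peaks partway through the run and turns negative by the end.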

The task ahead of us is thus in some ways the opposite of the one that France faced in the aftermath of 1789. Instead of replacing a sclerotic and failing medieval economy with one better suited to a new era of industrial expansion, we need to replace a sclerotic and failing industrial economy with one better suited to a new era of deindustrial contraction. That’s a tall order, no question, and it’s not something that can be achieved easily, or in a single leap. In all probability, the industrial world will have to pass through the whole sequence of phases we’ve been discussing several times before things finally bottom out in the deindustrial dark ages to come.

Still, I’m going to shock my fans and critics alike here by pointing out that there’s actually some reason to think that positive change on more than an individual level will be possible as the industrial world slams facefirst into the limits to growth. Two things give me that measured sense of hope. The first is the sheer scale of the resources locked up in today’s spectacularly dysfunctional political, economic, and social institutions, which will become available for other uses when those institutions come apart. The $83 billion a year currently being poured down the oversized rathole of the five biggest US banks, just for starters, could pay for a lot of solar water heaters, training programs for organic farmers, and other things that could actually do some good.
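To put a rough number on that, here is a back-of-the-envelope calculation; the $5,000 installed cost per residential solar water heater is an assumed round figure of my own, not a quoted price, so treat the result as an order-of-magnitude estimate rather than a forecast.

    # Back-of-the-envelope only; the per-unit cost is an assumed round number.
    bank_subsidy_per_year = 83_000_000_000   # the $83 billion a year cited above
    cost_per_solar_water_heater = 5_000      # assumed installed cost, US dollars

    households_per_year = bank_subsidy_per_year // cost_per_solar_water_heater
    print(f"{households_per_year:,} households equipped per year")   # 16,600,000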

Throw in the resources currently being chucked into all the other attempts under way to prop up a failing system, and you’ve got quite the jackpot that could, in an era of breakdown, be put to work doing things worthwhile. It’s by no means certain, as already noted, that these resources will go to the best possible use, but it’s all but certain that they’ll go to something less stunningly pointless than, say, handing Elon Musk his next billion dollars.

The second thing that gives me a measured sense of hope is at once subtler and far more profound. These days, despite a practically endless barrage of rhetoric to the contrary, the great majority of Americans are getting fewer and fewer benefits from the industrial system, and are being forced to pay more and more of its costs, so that a relatively small fraction of the population can monopolize an ever-increasing fraction of the national wealth and contribute less and less in exchange. What’s more, a growing number of Americans are aware of this fact. The traditional schism of a collapsing society into a dominant minority and an internal proletariat, to use Arnold Toynbee’s terms, is a massive and accelerating social reality in the United States today.

As that schism widens, and more and more Americans are forced into the Third World poverty that’s among the unmentionable realities of public life in today’s United States, several changes of great importance are taking place. The first, of course, is precisely that a great many Americans are perforce learning to live with less—not in the playacting style popular just now on the faux-green end of the privileged classes, but really, seriously living with much less, because that’s all there is. That’s a huge shift and a necessary one, since the absurd extravagance many Americans consider to be a normal lifestyle is among the most important things that will be landing in history’s compost heap in the not too distant future.
At the same time, the collective consensus that keeps the hopelessly dysfunctional institutions of today’s status quo glued in place is already coming apart, and can be expected to dissolve completely in the years ahead. What sort of consensus will replace it, after the inevitable interval of chaos and struggle, is anybody’s guess at this point—though it’s vanishingly unlikely to have anything to do with the current political fantasies of left and right. It’s just possible, given luck and a great deal of hard work, that whatever new system gets cobbled together during the breakdown phase of our present crisis will embody at least some of the values that will be needed to get our species back into some kind of balance with the biosphere on which our lives depend. A future post will discuss how that might be accomplished—after, that is, we explore the last phase of the collapse process: the era of dissolution, which will be the theme of next week’s post.

The Era of Response

Wed, 2015-05-27 17:21
The third stage of the process of collapse, following what I’ve called the eras of pretense and impact, is the era of response. It’s easy to misunderstand what this involves, because both of the previous eras have their own kinds of response to whatever is driving the collapse; it’s just that those kinds of response are more precisely nonresponses, attempts to make the crisis go away without addressing any of the things that are making it happen.
If you want a first-rate example of the standard nonresponse of the era of pretense, you’ll find one in the sunny streets of Miami, Florida right now. As a result of global climate change, sea level has gone up and the Gulf Stream has slowed down. One consequence is that these days, whenever Miami gets a high tide combined with a stiff onshore wind, salt water comes boiling up through the storm sewers of the city all over the low-lying parts of town. The response of the Florida state government has been to issue an order to all state employees that they’re not allowed to utter the phrase “climate change.”
That sort of thing is standard practice in an astonishing range of subjects in America these days. Consider the roles that the essentially nonexistent recovery from the housing-bubble crash of 2008-9 has played in political rhetoric since that time. The current inmate of the White House has been insisting through most of two terms that happy days are here again, and the usual reams of doctored statistics have been churned out in an effort to convince people who know better that they’re just imagining that something is wrong with the economy. We can expect to hear that same claim made in increasingly loud and confident tones right up until the day the bottom finally drops out.
With the end of the era of pretense and the arrival of the era of impact comes a distinct shift in the standard mode of nonresponse, which can be used quite neatly to time the transition from one era to another. Where the nonresponses of the era of pretense insist that there’s nothing wrong and nobody has to do anything outside the realm of business as usual, the nonresponses of the era of impact claim just as forcefully that whatever’s gone wrong is a temporary difficulty and everything will be fine if we all unite to do even more of whatever activity defines business as usual. That this normally amounts to doing more of whatever made the crisis happen in the first place, and thus reliably makes things worse, is just one of the little ironies history has to offer.
What unites the era of pretense with the era of impact is the unshaken belief that in the final analysis, there’s nothing essentially wrong with the existing order of things. Whatever little difficulties may show up from time to time may be ignored as irrelevant or talked out of existence, or they may have to be shoved aside by some concerted effort, but it’s inconceivable to most people in these two eras that the existing order of things is itself the source of society’s problems, and has to be changed in some way that goes beyond the cosmetic dimension. When the inconceivable becomes inescapable, in turn, the second phase gives way to the third, and the era of response has arrived.
This doesn’t mean that everyone comes to grips with the real issues, and buckles down to the hard work that will be needed to rebuild society on a sounder footing. Winston Churchill once noted with his customary wry humor that the American people can be counted on to do the right thing, once they have exhausted every other possibility. He was of course quite correct, but the same rule can be applied with equal validity to every other nation this side of Utopia, too. The era of response, in practice, generally consists of a desperate attempt to find something that will solve the crisis du jour, other than the one thing that everyone knows will solve the crisis du jour but nobody wants to do.
Let’s return to the two examples we’ve been following so far, the outbreak of the Great Depression and the coming of the French Revolution. In the aftermath of the 1929 stock market crash, once the initial impact was over and the “sucker’s rally” of early 1930 had come and gone, the federal government and the various power centers and pressure groups that struggled for influence within its capacious frame were united in pursuit of a single goal: finding a way to restore prosperity without doing either of the things that had to be done in order to restore prosperity.  That task occupied the best minds in the US elite from the summer of 1930 straight through until April of 1933, and the mere fact that their attempts to accomplish this impossibility proved to be a wretched failure shouldn’t blind anyone to the Herculean efforts that were involved in the attempt.
The first of the two things that had to be tackled in order to restore prosperity was to do something about the drastic imbalance in the distribution of income in the United States. As noted in previous posts, an economy dependent on consumer expenditures can’t thrive unless consumers have plenty of money to spend, and in the United States in the late 1920s, they didn’t—well, except for the very modest number of those who belonged to the narrow circles of the well-to-do. It’s not often recalled these days just how ghastly the slums of urban America were in 1929, or how many rural Americans lived in squalid one-room shacks of the sort you pretty much have to travel to the Third World to see these days. Labor organizing was ruthlessly suppressed and strikes were routinely broken by court injunction in 1920s America; a minimum wage, sick pay, and health benefits had no place in the law, and the legal system was slanted savagely against the poor.
You can’t build prosperity in a consumer society when a good half of your citizenry can’t afford more than the basic necessities of life. That’s the predicament that America found clamped to the tender parts of its economic anatomy at the end of the 1920s. In that decade, as in our time, the temporary solution was to inflate a vast speculative bubble, under the endearing delusion that this would flood the economy with enough unearned cash to make the lack of earned income moot. That worked over the short term and then blew up spectacularly, since a speculative bubble is simply a Ponzi scheme that the legal authorities refuse to prosecute as such, and inevitably ends the same way.
There were, of course, effective solutions to the problem of inadequate consumer income. They were exactly those measures that were taken once the era of response gave way to the era of breakdown; everyone knew what they were, and nobody with access to political or economic power was willing to see them put into effect, because those measures would require a modest decline in the relative wealth and political dominance of the rich as compared to everyone else. Thus, as usually happens, they were postponed until the arrival of the era of breakdown made it impossible to avoid them any longer.
The second thing that had to be changed in order to restore prosperity was even more explosive, and I’m quite certain that some of my readers will screech like banshees the moment I mention it. The United States in 1929 had a precious metal-backed currency in the most literal sense of the term. Paper bills in those days were quite literally receipts for a certain quantity of gold—1.5 grams, for much of the time the US spent on the gold standard. That sort of arrangement was standard in most of the world’s industrial nations; it was backed by a dogmatic orthodoxy all but universal among respectable economists; and it was strangling the US economy.
It’s fashionable among certain sects on the economic fringes these days to look back on the era of the gold standard as a kind of economic Utopia in which there were no booms and busts, just a warm sunny landscape of stability and prosperity until the wicked witches of the Federal Reserve came along and spoiled it all. That claim flies in the face of economic history. During the entire period that the United States was on the gold standard, from 1873 to 1933, the US economy was a moonscape cratered by more than a dozen significant depressions. There’s a reason for that, and it’s relevant to our current situation—in a backhanded manner, admittedly.
Money, let us please remember, is not wealth. It’s a system of arbitrary tokens that represent real wealth—that is, actual, nonfinancial goods and services. Every society produces a certain amount of real wealth each year, and those societies that use money thus need to have enough money in circulation to more or less correspond to the annual supply of real wealth. That sounds simple; in practice, though, it’s anything but. Nowadays, for example, the amount of real wealth being produced in the United States each year is contracting steadily as more and more of the nation’s economic output has to be diverted into the task of keeping it supplied with fossil fuels. That’s happening, in turn, because of the limits to growth—the awkward but inescapable reality that you can’t extract infinite resources, or dump limitless wastes, on a finite planet.
The gimmick currently being used to keep fossil fuel extraction funded and cover the costs of the rising impact of environmental disruptions, without cutting into a culture of extravagance that only cheap abundant fossil fuel and a mostly intact biosphere can support, is to increase the money supply ad infinitum. That’s become the bedrock of US economic policy since the 2008-9 crash. It’s not a gimmick with a long shelf life; as the mismatch between real wealth and the money supply balloons, distortions and discontinuities are surging out through the crawlspaces of our economic life, and crisis is the most likely outcome.
In the United States in the first half or so of the twentieth century, by contrast, the amount of real wealth being produced each year soared, largely because of the steady increases in fossil fuel energy being applied to every sphere of life. While the nation was on the gold standard, though, the total supply of money could only grow as fast as gold could be mined out of the ground, which wasn’t even close to fast enough. So you had more goods and services being produced than there was money to pay for them; people who wanted goods and services couldn’t buy them because there wasn’t enough money to go around; businesses that wanted to expand and hire workers were unable to do so for the same reason. The result was that moonscape of economic disasters I mentioned a moment ago.
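The mismatch is easy to sketch. Take the standard equation of exchange, M × V = P × Q, hold the velocity of money constant, and plug in assumed round numbers—say, real output growing 4% a year on cheap fossil fuel energy while the monetary gold stock grows only 2% a year. The specific rates here are invented for illustration, not historical statistics, but the direction of the squeeze is the point.

    # Stylized sketch only; the 4% and 2% growth rates are assumed round numbers,
    # not historical data.
    real_output = 100.0    # index of goods and services produced each year
    money_supply = 100.0   # index of gold-backed money in circulation

    for year in range(0, 31, 5):
        price_level = money_supply / real_output   # from M*V = P*Q with V held fixed
        print(f"year {year:2}: output {real_output:6.1f}  "
              f"money {money_supply:6.1f}  price level {price_level:.2f}")
        real_output *= 1.04 ** 5
        money_supply *= 1.02 ** 5
    # Prices and wages are forced steadily downward -- the deflationary squeeze
    # described in the text.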
The necessary response at that time was to go off the gold standard. Nobody in power wanted to do this, partly because of the dogmatic economic orthodoxy noted earlier, and partly because a money shortage paid substantial benefits to those who had guaranteed access to money. The rentier class—those people who lived off income from their investments—could count on stable or falling prices as long as the gold standard stayed in place, and the mere fact that the same stable or falling prices meant low wages, massive unemployment, and widespread destitution troubled them not at all. Since the rentier class included the vast majority of the US economic and political elite, in turn, going off the gold standard was unthinkable until it became unavoidable.
The period of the French revolution from the fall of the Bastille in 1789 to the election of the National Convention in 1792 was a period of the same kind, though driven by different forces. Here the great problem was how to replace the Old Regime—not just the French monarchy, but the entire lumbering mass of political, economic, and social laws, customs, forms, and institutions that France had inherited from the Middle Ages and never quite gotten around to adapting to drastically changed conditions—with something that would actually work. It’s among the more interesting features of the resulting era of response that nearly every detail differed from the American example just outlined, and yet the results were remarkably similar.
Thus the leaders of the National Assembly who suddenly became the new rulers of France in the summer of 1789 had no desire whatsoever to retain the traditional economic arrangements that gave France’s former elites their stranglehold on an oversized share of the nation’s wealth. The abolition of manorial rights that summer, together with the explosive rural uprisings against feudal landlords and their chateaux in the wake of the Bastille’s fall, gutted the feudal system and left most of its former beneficiaries the choice between fleeing into exile and trying to find some way to make ends meet in a society that had no particular market for used aristocrats. The problem faced by the National Assembly wasn’t that of prying the dead fingers of a failed system off the nation’s throat; it was that of trying to find some other basis for national unity and effective government.
It’s a surprisingly difficult challenge. Those of my readers who know their way around current events will already have guessed that an attempt was made to establish a copy of whatever system was most fashionable among liberals at the time, and that this attempt turned out to be an abject failure. What’s more, they’ll have been quite correct. The National Assembly moved to establish a constitutional monarchy along British lines, bring in British economic institutions, and the like; it was all very popular among liberal circles in France and, naturally, in Britain as well, and it flopped. Those who recall the outcome of the attempt to turn Iraq into a nice pseudo-American democracy in the wake of the US invasion will have a tolerably good sense of how the project unraveled.
One of the unwelcome but reliable facts of history is that democracy doesn’t transplant well. It thrives only where it grows up naturally, out of the civil institutions and social habits of a people; when liberal intellectuals try to impose it on a nation that hasn’t evolved the necessary foundations for it, the results are pretty much always a disaster. The latter was the situation in France at the time of the Revolution. What happened thereafter is what almost always happens to a failed democratic experiment: a period of chaos, followed by the rise of a talented despot who’s smart and ruthless enough to impose order on a chaotic situation and allow new, pragmatic institutions to emerge to replace those destroyed by clueless democratic idealists. In many cases, though by no means all, those pragmatic institutions have ended up providing a bridge to a future democracy, but that’s another matter.
Here again, those of my readers who have been paying attention to current events already know this; the collapse of the Soviet Union was followed in classic form by a failed democracy, a period of chaos, and the rise of a talented despot. It’s a curious detail of history that the despots in question are often rather short. Russia has had the great good fortune to find, as its despot du jour, a canny realist who has successfully brought it back from the brink of collapse and reestablished it as a major power with a body count considerably smaller than usual. France was rather less fortunate; the despot it found, Napoleon Bonaparte, turned out to be a megalomaniac with an Alexander the Great complex who proceeded to plunge Europe into a quarter century of cataclysmic war. Mind you, things could have been even worse; when Germany ended up in a similar situation, what it got was Adolf Hitler.
Charismatic strongmen are a standard endpoint for the era of response, but they properly belong to the era that follows, the era of breakdown, which will be discussed next week. What I want to explore here is how an era of response might work out in the future immediately before us, as the United States topples from its increasingly unsteady imperial perch and industrial civilization as a whole slams facefirst into the limits to growth. The examples just cited outline the two most common patterns by which the era of response works itself out. In the first pattern, the old elite retains its grip on power, and fumbles around with increasing desperation for a response to the crisis. In the second, the old elite is shoved aside, and the new holders of power are left floundering in a political vacuum.
We could see either pattern in the United States. For what it’s worth, I suspect the latter is the more likely option; the spreading crisis of legitimacy that grips the country these days is exactly the sort of thing you saw in France before the Revolution, and in any number of other countries in the few decades just prior to revolutionary political and social change. Every time a government tries to cope with a crisis by claiming that it doesn’t exist, every time some member of the well-to-do tries to dismiss the collective burdens its culture of executive kleptocracy imposes on the country by flinging abuse at critics, every time institutions that claim to uphold the rule of law defend the rule of entrenched privilege instead, the United States takes another step closer to the revolutionary abyss.
I use that last word advisedly. It’s a common superstition in every troubled age that any change must be for the better—that the overthrow of a bad system must by definition lead to the establishment of a better one. This simply isn’t true. The vast majority of revolutions have established governments that were far more abusive than the ones they replaced. The exceptions have generally been those that brought about a social upheaval without wrecking the political system: where, for example, an election rather than a coup d’etat or a mass rising put the revolutionaries in power, and the political institutions of an earlier time remained in place with only such reshaping as new necessities required.
We could still see that sort of transformation as the United States sees the end of its age of empire and has to find its way back to a less arrogant and extravagant way of functioning in the world. I don’t think it’s likely, but I think it’s possible, and it would probably be a good deal less destructive than the other alternative. It’s worth remembering, though, that history is under no obligation to give us the future we think we want.

The Era of Impact

Wed, 2015-05-20 15:03
Of all the wistful superstitions that cluster around the concept of the future in contemporary popular culture, the most enduring has to be the notion that somehow, sooner or later, something will happen to shake the majority out of its complacency and get it to take seriously the crisis of our age. Week after week, I field comments and emails that presuppose that belief. People want to know how soon I think the shock of awakening will finally hit, or wonder whether this or that event will do the trick, or simply insist that the moment has to come sooner or later.
To all such inquiries and expostulations I have no scrap of comfort to offer. Quite the contrary, what history shows is that a sudden awakening to the realities of a difficult situation is far and away the least likely result of what I’ve called the era of impact, the second of the five stages of collapse. (The first, for those who missed last week’s post, is the era of pretense; the remaining three, which will be covered in the coming weeks, are the eras of response, breakdown, and dissolution.)
The era of impact is the point at which it becomes clear to most people that something has gone wrong with the most basic narratives of a society—not just a little bit wrong, in the sort of way that requires a little tinkering here and there, but really, massively, spectacularly wrong. It arrives when an asset class that was supposed to keep rising in price forever stops rising, does its Wile E. Coyote moment of hang time, and then drops like a stone. It shows up when an apparently entrenched political system, bristling with soldiers and secret police, implodes in a matter of days or weeks and is replaced by a provisional government whose leaders look just as stunned as everyone else. It comes whenever a state of affairs that was assumed to be permanent runs into serious trouble—but somehow it never seems to succeed in getting people to notice just how temporary that state of affairs always was.
Since history is the best guide we’ve got to how such events work out in the real world, I want to take a couple of examples of the kind just outlined and explore them in a little more detail. The stock market bubble of the 1920s makes a good case study on a relatively small scale. In the years leading up to the crash of 1929, stock values in the US stock market quietly disconnected themselves from the economic fundamentals and began what was, for the time, an epic climb into la-la land. There were important if unmentionable reasons for that airy detachment from reality; the most significant was the increasingly distorted distribution of income in 1920s America, which put more and more of the national wealth in the hands of fewer and fewer people and thus gutted the national economy.
It’s one of the repeated lessons of economic history that money in the hands of the rich does much less good for the economy as a whole than money in the hands of the working classes and the poor. The reasoning here is as simple as it is inescapable. Industrial economies survive and thrive on consumer expenditures, but consumer expenditures are limited by the ability of consumers to buy the things they want and need. As money is diverted away from the lower end of the economic pyramid, you get demand destruction—the process by which those who can’t afford to buy things stop buying them—and consumer expenditures fall off. The rich, by contrast, divert a large share of their income out of the consumer economy into investments; the richer they get, the more of the national wealth ends up in investments rather than consumer expenditures; and as consumer expenditures falter, and investments linked to the consumer economy falter in turn, more and more money ends up in illiquid speculative vehicles that are disconnected from the productive economy and do nothing to stimulate demand.
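The mechanism can be put into a toy model with two classes of households. The propensities to consume below are invented for the sake of illustration—working-class households assumed to spend nearly everything they earn, the rich assumed to spend well under half—but the qualitative result holds for any numbers of that general shape: shift income toward the group that spends less of it, and total consumer demand falls even though total income hasn’t changed.

    # Invented numbers; only the qualitative pattern matters here.
    TOTAL_INCOME = 100.0
    SPEND_SHARE_WORKERS = 0.95   # assumed: workers spend nearly all their income
    SPEND_SHARE_RICH = 0.40      # assumed: the rich divert most income to speculation

    def consumer_spending(share_to_rich):
        rich_income = TOTAL_INCOME * share_to_rich
        worker_income = TOTAL_INCOME - rich_income
        return worker_income * SPEND_SHARE_WORKERS + rich_income * SPEND_SHARE_RICH

    for share in (0.2, 0.3, 0.4, 0.5):
        print(f"{share:.0%} of income to the rich -> total spending {consumer_spending(share):.1f}")
    # Demand destruction in miniature: the same national income buys less and less.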
That’s what happened in the 1920s. All through the decade in the US, the rich got richer and the poor got screwed, speculation took the place of productive investment throughout the US economy, and the well-to-do wallowed in the wretched excess chronicled in F. Scott Fitzgerald’s The Great Gatsby while most other people struggled to get by. The whole decade was a classic era of pretense, crowned by the delusional insistence—splashed all over the media of the time—that everyone in the US could invest in the stock market and, since the market was of course going to keep on rising forever, everyone in the US would thus inevitably become rich.
It’s interesting to note that there were people who saw straight through the nonsense and tried to warn their fellow Americans about the inevitable consequences. They were denounced six ways from Sunday by all right-thinking people, in language identical to that used more recently on those of us who’ve had the effrontery to point out that an infinite supply of oil can’t be extracted from a finite planet.  The people who insisted that the soaring stock values of the late 1920s were the product of one of history’s great speculative bubbles were dead right; they had all the facts and figures on their side, not to mention plain common sense; but nobody wanted to hear it.
When the stock market peaked just before the Labor Day weekend in 1929 and started trending down, therefore, the immediate response of all right-thinking people was to insist at the top of their lungs that nothing of the sort was happening, that the market was simply catching its breath before its next great upward leap, and so on. Each new downward lurch was met by a new round of claims along these lines, louder, more dogmatic, and more strident than the one that preceded it, and nasty personal attacks on anyone who didn’t support the delusional consensus filled the media of the time.
People were still saying those things when the bottom dropped out of the market.
Tuesday, October 29, 1929 can reasonably be taken as the point at which the era of pretense gave way once and for all to the era of impact. That’s not because it was the first day of the crash—there had been ghastly slumps on the previous Thursday and Monday, on the heels of two months of less drastic but still seriously ugly declines—but because, after that day, the pundits and the media pretty much stopped pretending that nothing was wrong. Mind you, next to nobody was willing to talk about what exactly had gone wrong, or why it had gone wrong, but the pretense that the good fairy of capitalism had promised Americans happy days forever was out the window once and for all.
It’s crucial to note, though, that what followed this realization was the immediate and all but universal insistence that happy days would soon be back if only everyone did the right thing. It’s even more crucial to note that what nearly everyone identified as “the right thing”—running right out and buying lots of stocks—was a really bad idea that bankrupted many of those who did it, and didn’t help the imploding US economy at all.
It’s probably necessary to talk about this in a little more detail, since it’s been an article of blind faith in the United States for many decades now that it’s always a good idea to buy and hold stocks. (I suspect that stockbrokers have had a good deal to do with the promulgation of this notion.) It’s been claimed that someone who bought stocks in 1929 at the peak of the bubble, and then held onto them, would have ended up in the black eventually, and for certain values of “eventually,” this is quite true—but it took the Dow Jones industrial average until the mid-1950s to return to its 1929 high, and so for a quarter of a century our investor would have been underwater on his stock purchases.
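The commonly cited milestones make that easy to check—a Dow peak of about 381 in early September 1929, a trough of about 41 in the summer of 1932, and no return to the old high until late 1954. Treat the exact figures as approximate; they also ignore dividends and deflation.

    # Commonly cited Dow Jones Industrial Average milestones (nominal prices,
    # ignoring dividends and deflation); treat the values as approximate.
    peak_1929 = 381.17      # September 3, 1929
    trough_1932 = 41.22     # July 8, 1932
    recovery_year = 1954    # the 1929 peak was not regained until late 1954

    drawdown = 1 - trough_1932 / peak_1929
    print(f"peak-to-trough loss: {drawdown:.0%}")       # roughly 89%
    print(f"years underwater: {recovery_year - 1929}")  # about 25 years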
What’s more, the Dow isn’t necessarily a good measure of stocks generally; many of the darlings of the market in the 1920s either went bankrupt in the Depression or never again returned to their 1929 valuations. Nor did the surge of money into stocks in the wake of the 1929 crash stave off the Great Depression, or do much of anything else other than provide a great example of the folly of throwing good money after bad. The moral to this story? In an era of impact, the advice you hear from everyone around you may not be in your best interest.
That same moral can be shown just as clearly in the second example I have in mind, the French Revolution. We talked briefly in last week’s post about the way that the French monarchy and aristocracy blinded themselves to the convulsive social and economic changes that were pushing France closer and closer to a collective explosion on the grand scale, and pursued business as usual long past the point at which business as usual was anything but a recipe for disaster. Even when the struggle between the Crown and the aristocracy forced Louis XVI to convene the États-Généraux—the rarely-held national parliament of France, which had powers more or less equivalent to a constitutional convention in the US—next to nobody expected anything but long rounds of political horse-trading from which some modest shifts in the balance of power might result.
That was before the summer of 1789. On June 17, the deputies of the Third Estate—the representatives of the commoners—declared themselves a National Assembly and staged what amounted to a coup d’etat; on July 14, faced with the threat of a military response from the monarchy, the Parisian mob seized the Bastille, kickstarting a wave of revolt across the country that put government and military facilities in the hands of the revolutionary National Guard and broke the back of the feudal system; on August 4, the National Assembly abolished all feudal rights and legal distinctions between the classes. Over less than two months, a political and social system that had been welded firmly in place for a thousand years all came crashing to the ground.
Those two months marked the end of the era of pretense and the arrival of the era of impact. The immediate response, with a modest number of exceptions among the aristocracy and the inner circles of the monarchy’s supporters, was frantic cheering and an insistence that everything would soon settle into a wonderful new age of peace, prosperity, and liberty. All the overblown dreams of the philosophes about a future age governed by reason were trotted out and treated as self-evident fact. Of course that’s not what happened; once it was firmly in power, the National Assembly used its unchecked authority as abusively as the monarchy had once done; factional struggles spun out of control, and before long mob rule and the guillotine were among the basic facts of life in Revolutionary France. 
Among the most common symptoms of an era of impact, in other words, is the rise of what we may as well call “crackpot optimism”—the enthusiastic and all but universal insistence, in the teeth of the evidence, that the end of business as usual will turn out to be the door to a wonderful new future. In the wake of the 1929 stock market crash, people were urged to pile back into the market in the belief that this would cause the economy to boom again even more spectacularly than before, and most of the people who followed this advice proceeded to lose their shirts. In the wake of the revolution of 1789, likewise, people across France were encouraged to join with their fellow citizens in building the shining new utopia of reason, and a great many of those who followed that advice ended up decapitated or, a little later, dying of gunshot or disease in the brutal era of pan-European warfare that extended almost without a break from the cannonade of Valmy in 1792 to the battle of Waterloo in 1815.
And the present example? That’s a question worth exploring, if only for the utterly pragmatic reason that most of my readers are going to get to see it up close and personal.
That the United States and the industrial world generally are deep in an era of pretense is, I think, pretty much beyond question at this point. We’ve got political authorities, global bankers, and a galaxy of pundits insisting at the top of their lungs that nothing is wrong, everything is fine, and we’ll be on our way to the next great era of prosperity if we just keep pursuing a set of boneheaded policies that have never—not once in the entire span of human history—brought prosperity to the countries that pursued them. We’ve got shelves full of books for sale in upscale bookstores insisting, in the strident language usual to such times, that life is wonderful in this best of all possible worlds, and it’s going to get better forever because, like, we have technology, dude! Across the landscape of the cultural mainstream, you’ll find no shortage of cheerleaders insisting at the top of their lungs that everything’s going to be fine, that even though they said ten years ago that we only have ten years to do something before disaster hits, why, we still have ten years before disaster hits, and when ten more years pass by, why, you can be sure that the same people will be insisting that we have ten more.
This is the classic rhetoric of an era of pretense. Over the last few years, though, it’s seemed to me that the voices of crackpot optimism have gotten more shrill, the diatribes more fact-free, and the logic even shoddier than it was in Bjorn Lomborg’s day, which is saying something. We’ve reached the point that state governments are making it a crime to report on water quality and forbidding officials from using such unwelcome phrases as “climate change.” That’s not the action of people who are confident in their beliefs; it’s the action of a bunch of overgrown children frantically clenching their eyes shut, stuffing their fingers in their ears, and shouting “La, la, la, I can’t hear you.”
That, in turn, suggests that the transition to the era of impact may be fairly close. Exactly when it’s likely to arrive is a complex question, and exactly what’s going to land the blow that will crack the crackpot optimism and make it impossible to ignore the arrival of real trouble is an even more complex one. In 1929, those who hadn’t bought into the bubble could be perfectly sure—and in fact, a good many of them were perfectly sure—that the usual mechanism that brings bubbles to a catastrophic end was about to terminate the boom of the 1920s with extreme prejudice, as indeed it did. In the last decades of the French monarchy, it was by no means clear exactly what sequence of events would bring the Ancien Régime crashing down, but such thoughtful observers as Talleyrand knew that something of the sort was likely to follow the crisis of legitimacy then under way.
The problem with trying to predict the trigger that will bring our current situation to a sudden stop is that we’re in such a target-rich environment. Looking over the potential candidates for the sudden shock that will stick a fork in the well-roasted corpse of business as usual, I’m reminded of the old board game Clue. Will Mr. Boddy’s killer turn out to be Colonel Mustard in the library with a lead pipe, Professor Plum in the conservatory with a candlestick, or Miss Scarlet in the dining room with a rope?
In much the same sense, we’ve got a global economy burdened to the breaking point with more than a quadrillion dollars of unpayable debt; we’ve got a global political system coming apart at the seams as the United States slips toward the usual fate of empires and its rivals circle warily, waiting for the kill; we’ve got a domestic political system here in the US entering a classic prerevolutionary condition under the impact of a textbook crisis of legitimacy; we’ve got a global climate that’s hammered by our rank stupidity in treating the atmosphere as a gaseous sewer for our wastes; we’ve got a global fossil fuel industry that’s frantically trying to pretend that scraping the bottom of the barrel means that the barrel is full, and the list goes on. It’s as though Colonel Mustard, Professor Plum, Miss Scarlet, and the rest of them all ganged up on Mr. Boddy at once, and only the most careful autopsy will be able to determine which of them actually dealt the fatal blow.
In the midst of all this uncertainty, there are three things that can, I think, be said for certain about the end of the current era of pretense and the coming of the era of impact. The first is that it’s going to happen. When something is unsustainable, it’s a pretty safe bet that it won’t be sustained indefinitely, and a society that keeps on embracing policies that swap short-term gains for long-term problems will sooner or later end up awash in the consequences of those policies. Timing such transitions is difficult at best; it’s an old adage among stock traders that the market can stay irrational longer than you can stay solvent. Still, points made above—especially the increasingly shrill tone of the defenders of the existing order—suggest to me that the era of impact may be here within a decade or so at the outside.
The second thing that can be said for certain about the coming era of impact is that it’s not the end of the world. Apocalyptic fantasies are common and popular in eras of pretense, and for good reason; fixating on the supposed imminence of the Second Coming, human extinction, or what have you, is a great way to distract yourself from the real crisis that’s breathing down your neck. If the real crisis in question is partly or wholly a result of your own actions, while the apocalyptic fantasy can be blamed on someone or something else, that adds a further attraction to the fantasy.
The end of industrial civilization will be a long, bitter, painful cascade of conflicts, disasters, and  accelerating decline in which a vast number of people are going to die before they otherwise would, and a great many things of value will be lost forever. That’s true of any falling civilization, and the misguided decisions of the last forty years have pretty much guaranteed that the current example is going to have an extra helping of all these unwelcome things. I’ve discussed at length, in earlier posts in the Dark Age America sequence here and in other sequences as well, why the sort of apocalyptic sudden stop beloved of Hollywood scriptwriters is the least likely outcome of the predicament of our time; still, insisting on the imminence and inevitability of some such game-ending event will no doubt be as popular as usual in the years immediately ahead.
The third thing that I think can be said for certain about the coming era of impact, though, is the one that counts. If it follows the usual pattern, as I expect it to do, once the crisis hits there will be serious, authoritative, respectable figures telling everyone exactly what they need to do to bring an end to the troubles and get the United States and the world back on track to renewed peace and prosperity. Taking these pronouncements seriously and following their directions will be extremely popular, and it will almost certainly also be a recipe for unmitigated disaster. If forewarned is forearmed, as the saying has it, this is a piece of firepower to keep handy as the era of pretense winds down. In next week’s post, we’ll talk about comparable weaponry relating to the third stage of collapse—the era of response.

The Era of Pretense

Wed, 2015-05-13 17:00
I've mentioned in previous posts here on The Archdruid Report the educational value of the comments I receive from readers in the wake of each week’s essay. My post two weeks ago on the death of the internet was unusually productive along those lines. One of the comments I got in response to that post gave me the theme for last week’s essay, but there was at least one other comment calling for the same treatment. Like the one that sparked last week’s post, it appeared on one of the many other internet forums on which The Archdruid Report is reposted, and it unintentionally pointed up a common and crucial failure of imagination that shapes, or rather misshapes, the conventional wisdom about our future.
Curiously enough, the point that set off the commenter in question was the same one that incensed the author of the denunciation mentioned in last week’s post: my suggestion in passing that fifty years from now, most Americans may not have access to electricity or running water. The commenter pointed out angrily that I’d claimed that the twilight of industrial civilization would be a ragged arc of decline over one to three centuries. Now, he claimed, I was saying that it was going to take place in the next fifty years, and this apparently convinced him that everything I said ought to be dismissed out of hand.
I run into this sort of confusion all the time. If I suggest that the decline and fall of a civilization usually takes several centuries, I get accused of inconsistency if I then note that one of the sharper downturns included in that process may be imminent.  If I point out that the United States is likely within a decade or two of serious economic and political turmoil, driven partly by the implosion of its faltering global hegemony and partly by a massive crisis of legitimacy that’s all but dissolved the tacit contract between the existing order of US society and the masses who passively support it, I get accused once again of inconsistency if I then say that whatever comes out the far side of that crisis—whether it’s a battered and bruised United States or a patchwork of successor states—will then face a couple of centuries of further decline and disintegration before the deindustrial dark age bottoms out.
Now of course there’s nothing inconsistent about any of these statements. The decline and fall of a civilization isn’t a single event, or even a single linear process; it’s a complex fractal reality composed of many different events on many different scales in space and time. If it takes one to three centuries, as usual, those centuries are going to be taken up by an uneven drumbeat of wars, crises, natural disasters, and assorted breakdowns on a variety of time frames with an assortment of local, regional, national, or global effects. The collapse of US global hegemony is one of those events; the unraveling of the economic and technological framework that currently provides most Americans with electricity and running water is another, but neither of those is anything like the whole picture.
It’s probably also necessary to point out that any of my readers who think that being deprived of electricity and running water is the most drastic kind of collapse imaginable have, as the saying goes, another think coming. Right now, in our oh-so-modern world, there are billions of people who get by without regular access to electricity and running water, and most of them aren’t living under dark age conditions. A century and a half ago, when railroads, telegraphs, steamships, and mechanical printing presses were driving one of history’s great transformations of transport and information technology, next to nobody had electricity or running water in their homes. The technologies of 1865 are not dark age technologies; in fact, the gap between 1865 technologies and dark age technologies is considerably greater, by most metrics, than the gap between 1865 technologies and the ones we use today.
Furthermore, whether or not Americans have access to running water and electricity may not have as much to say about the future of industrial society everywhere in the world as the conventional wisdom would suggest.  I know that some of my American readers will be shocked out of their socks to hear this, but the United States is not the whole world. It’s not even the center of the world. If the United States implodes over the next two decades, leaving behind a series of bankrupt failed states to squabble over its territory and the little that remains of its once-lavish resource base, that process will be a great source of gaudy and gruesome stories for the news media of the world’s other continents, but it won’t affect the lives of the readers of those stories much more than equivalent events in Africa and the Middle East affect the lives of Americans today.
As it happens, over the next one to three centuries, the benefits of industrial civilization are going to go away for everyone. (The costs will be around a good deal longer—in the case of the nuclear wastes we’re so casually heaping up for our descendants, a good quarter of a million years, but those and their effects are rather more localized than some of today’s apocalyptic rhetoric likes to suggest.) The reasoning here is straightforward. White’s Law, one of the fundamental principles of human ecology, states that economic development is a function of energy per capita; the immense treasure trove of concentrated energy embodied in fossil fuels, and that alone, made possible the sky-high levels of energy per capita that gave the world’s industrial nations their brief era of exuberance; as fossil fuels deplete, and remaining reserves require higher and higher energy inputs to extract, the levels of energy per capita the industrial nations are used to having will go away forever.
It’s important to be clear about this. Fossil fuels aren’t simply one energy source among others; in terms of concentration, usefulness, and fungibility—that is, the ability to be turned into any other form of energy that might be required—they’re in a category all by themselves. Repeated claims that fossil fuels can be replaced with nuclear power, renewable energy resources, or what have you sound very good on paper, but every attempt to put those claims to the test so far has either gone belly up in short order, or become a classic subsidy dumpster surviving purely on a diet of government funds and mandates.
Three centuries ago, the earth’s fossil fuel reserves were the largest single deposit of concentrated energy in this part of the universe; now we’ve burnt through nearly all the easily accessible reserves, and we’re scrambling to keep the tottering edifice of industrial society going by burning through the dregs that remain. As those run out, the remaining energy resources—almost all of them renewables—will certainly sustain a variety of human societies, and some of those will be able to achieve a fairly high level of complexity and maintain some kinds of advanced technologies. The kind of absurd extravagance that passes for a normal standard of living among the more privileged inmates of the industrial nations is another matter, and as the fossil fuel age sunsets out, it will end forever.
The fractal trajectory of decline and fall mentioned earlier in this post is simply the way this equation works out on the day-to-day scale of ordinary history. Still, those of us who happen to be living through a part of that trajectory might reasonably be curious about how it’s likely to unfold in our lifetimes. I’ve discussed in a previous series of posts, and in my book Decline and Fall: The End of Empire and the Future of Democracy in 21st Century America, how the end of US global hegemony is likely to unfold, but as already noted, that’s only a small portion of the broader picture. Is a broader view possible?
Fortunately history, the core resource I’ve been using to try to make sense of our future, has plenty to say about the broad patterns that unfold when civilizations decline and fall. Now of course I know all I have to do is mention that history might be relevant to our present predicament, and a vast chorus of voices across the North American continent and around the world will bellow at rooftop volume, “But it’s different this time!” With apologies to my regular readers, who’ve heard this before, it’s probably necessary to confront that weary thoughtstopper again before we proceed.
As I’ve noted before, claims that it’s different this time are right where it doesn’t matter and wrong where it counts.  Predictions made on the basis of history—and not just by me—have consistently predicted events over the last decade or so far more accurately than predictions based on the assumption that history doesn’t matter. How many times, dear reader, have you heard someone insist that industrial civilization is going to crash to ruin in the next six months, and then watched those six months roll merrily by without any sign of the predicted crash? For that matter, how many times have you heard someone insist that this or that policy that’s never worked any other time that it’s been tried, or this or that piece of technological vaporware that’s been the subject of failed promises for decades, will inevitably put industrial society back on its alleged trajectory to the stars—and how many times has the policy or the vaporware been quietly shelved, and something else promoted using the identical rhetoric, when it turned out not to perform as advertised?
It’s been a source of wry amusement to me to watch the same weary, dreary, repeatedly failed claims of imminent apocalypse and inevitable progress being rehashed year after year, varying only in the fine details of the cataclysm du jour and the techno-savior du jour, while the future nobody wants to talk about is busily taking shape around us. Decline and fall isn’t something that will happen sometime in the conveniently distant future; it’s happening right now in the United States and around the world. The amusement, though, is tempered with a sense of familiarity, because the period in which decline is under way but nobody wants to admit that fact is one of the recurring features of the history of decline.
There are, very generally speaking, five broad phases in the decline and fall of a civilization. I know it’s customary in historical literature to find nice dull labels for such things, but I’m in a contrary mood as I write this, so I’ll give them unfashionably colorful names: the eras of pretense, impact, response, breakdown, and dissolution. Each of these is complex enough that it’ll need a discussion of its own; this week, we’ll talk about the era of pretense, which is the one we’re in right now.
Eras of pretense are by no means limited to the decline and fall of civilizations. They occur whenever political, economic, or social arrangements no longer work, but the immediate costs of admitting that those arrangements don’t work loom considerably larger in the collective imagination than the future costs of leaving those arrangements in place. It’s a curious but consistent wrinkle of human psychology that this happens even if those future costs soar right off the scale of frightfulness and lethality; if the people who would have to pay the immediate costs don’t want to do so, in fact, they will reliably and cheerfully pursue policies that lead straight to their own total bankruptcy or violent extermination, and never let themselves notice where they’re headed.
Speculative bubbles are a great setting in which to watch eras of pretense in full flower. In the late phases of a bubble, when it’s clear to anyone who has two spare neurons to rub together that the boom du jour is cobbled together of equal parts delusion and chicanery, the people who are most likely to lose their shirts in the crash are the first to insist at the top of their lungs that the bubble isn’t a bubble and their investments are guaranteed to keep on increasing in value forever. Those of my readers who got the chance to watch some of their acquaintances go broke in the real estate bust of 2008-9, as I did, will have heard this sort of self-deception at full roar; those who missed the opportunity can make up for the omission by checking out the ongoing torrent of claims that the soon-to-be-late fracking bubble is really a massive energy revolution that will make America wealthy and strong again.
The history of revolutions offers another helpful glimpse at eras of pretense. France in the decades before 1789, to cite a conveniently well-documented example, was full of people who had every reason to realize that the current state of affairs was hopelessly unsustainable and would have to change. The things about French politics and economics that had to change, though, were precisely those things that the French monarchy and aristocracy were unwilling to change, because any such reforms would have cost them privileges they’d had since time out of mind and were unwilling to relinquish.
Louis XIV, who finished up his long and troubled reign a supreme realist, is said to have muttered “Après moi, le déluge”—“Once I’m gone, this sucker’s going down” may not be a literal translation, but it catches the flavor of the utterance—but that degree of clarity was rare in his generation, and all but absent in those of his increasingly feckless successors. Thus the courtiers and aristocrats of the Old Regime amused themselves at the nation’s expense, dabbled in avant-garde thought, and kept their eyes tightly closed to the consequences of their evasions of looming reality, while the last opportunities to excuse themselves from a one-way trip to visit the guillotine and spare France the cataclysms of the Terror and the Napoleonic wars slipped silently away.
That’s the bitter irony of eras of pretense. Under most circumstances, they’re the last period when it would be possible to do anything constructive on the large scale about the crisis looming immediately ahead, but the mass evasion of reality that frames the collective thinking of the time stands squarely in the way of any such constructive action. In the era of pretense before a speculative bust, people who could have quietly cashed in their positions and pocketed their gains double down on their investments, and guarantee that they’ll be ruined once the market stops being liquid. In the era of pretense before a revolution, in the same way, those people and classes that have the most to lose reliably take exactly those actions that ensure that they will in fact lose everything. If history has a sense of humor, this is one of the places that it appears in its most savage form.
The same points are true, in turn, of the eras of pretense that precede the downfall of a civilization. In a good many cases, where too few original sources survive, the age of pretense has to be inferred from archeological remains. We don’t know what motives inspired the ancient Mayans to build their biggest pyramids in the years immediately before the Terminal Classic period toppled over into a savage political and demographic collapse, but it’s hard to imagine any such project being set in motion without the usual evasions of an era of pretense being involved. Where detailed records of dead civilizations survive, though, the sort of rhetorical handwaving common to bubbles before the bust and decaying regimes on the brink of revolution shows up with knobs on. Thus the panegyrics of the Roman imperial court waxed ever more lyrical and bombastic about Rome’s invincibility and her civilizing mission to the nations as the Empire stumbled deeper into its terminal crisis, echoing any number of other court poets in any number of civilizations in their final hours.
For that matter, a glance through classical Rome’s literary remains turns up the remarkable fact that those of her essayists and philosophers who expressed worries about her survival wrote, almost without exception, during the Republic and the early Empire; the closer the fall of Rome actually came, the more certainty Roman authors expressed that the Empire was eternal and the latest round of troubles was just one more temporary bump on the road to peace and prosperity. It took the outsider’s vision of Augustine of Hippo to proclaim that Rome really was falling—and even that could only be heard once the Visigoths sacked Rome and the era of pretense gave way to the age of impact.
The present case is simply one more example to add to an already lengthy list. In the last years of the nineteenth century, it was common for politicians, pundits, and mass media in the United States, the British empire, and other industrial nations to discuss the possibility that the advanced civilization of the time might be headed for the common fate of nations in due time. The intellectual history of the twentieth century is, among other things, a chronicle of how that discussion was shoved to the margins of our collective discourse, just as the ecological history of the same century is among other things a chronicle of how the worries of the previous era became the realities of the one we’re in today. The closer we’ve moved toward the era of impact, that is, the more unacceptable it has become for anyone in public life to point out that the problems of the age are not just superficial.
Listen to the pablum that passes for political discussion in Washington DC or the mainstream US media these days, or the even more vacuous noises being made by party flacks as the country stumbles wearily toward yet another presidential election. That the American dream of upward mobility has become an American nightmare of accelerating impoverishment outside the narrowing circle of the kleptocratic rich, that corruption and casual disregard for the rule of law are commonplace in political institutions from local to Federal levels, that our medical industry charges more than any other nation’s and still provides the worst health care in the industrial world, that our schools no longer teach anything but contempt for learning, that the national infrastructure and built environment are plunging toward Third World conditions at an ever-quickening pace, that a brutal and feckless foreign policy embraced by both major parties is alienating our allies while forcing our enemies to set aside their mutual rivalries and make common cause against us: these are among the issues that matter, but they’re not the issues you’ll hear discussed as the latest gaggle of carefully airbrushed candidates go through their carefully scripted elect-me routines on their way to the 2016 election.
If history teaches anything, though, it’s that eras of pretense eventually give way to eras of impact. That doesn’t mean that the pretense will go away—long after Alaric the Visigoth sacked Rome, for example, there were still plenty of rhetors trotting out the same tired clichés about Roman invincibility—but it does mean that a significant number of people will stop finding the pretense relevant to their own lives. How that happens in other historical examples, and how it might happen in our own time, will be the theme of next week’s post.

The Whisper of the Shutoff Valve

Wed, 2015-05-06 18:35
Last week’s post on the impending decline and fall of the internet fielded a great many responses. That was no surprise, to be sure; nor was I startled in the least to find that many of them rejected the thesis of the post with some heat. Contemporary pop culture’s strident insistence that technological progress is a clock that never runs backwards made such counterclaims inevitable.
Still, it’s always educational to watch the arguments fielded to prop up the increasingly shaky edifice of the modern mythology of progress, and the last week was no exception. A response I found particularly interesting from that standpoint appeared on one of the many online venues where Archdruid Report posts appear. One of the commenters insisted that my post should be rejected out of hand as mere doom and gloom; after all, he pointed out, it was ridiculous for me to suggest that fifty years from now, a majority of the population of the United States might be without reliable electricity or running water.
I’ve made the same prediction here and elsewhere a good many times. Each time, most of my readers or listeners seem to have taken it as a piece of sheer rhetorical hyperbole. The electrical grid and the assorted systems that send potable water flowing out of faucets are so basic to the rituals of everyday life in today’s America that their continued presence is taken for granted.  At most, it’s conceivable that individuals might choose not to connect to them; there’s a certain amount of talk about off-grid living here and there in the alternative media, for example.  That people who want these things might not have access to them, though, is pretty much unthinkable.
Meanwhile, in Detroit and Baltimore, tens of thousands of residents are in the process of losing their access to water and electricity.
The situation in both cities is much the same, and there’s every reason to think that identical headlines will shortly appear in reference to other cities around the nation. Not that many decades ago, Detroit and Baltimore were important industrial centers with thriving economies. Along with more than a hundred other cities in America’s Rust Belt, they were thrown under the bus with the first wave of industrial offshoring in the 1970s.  The situation for both cities has only gotten worse since that time, as the United States completed its long transition from a manufacturing economy producing goods and services to a bubble economy that mostly produces unpayable IOUs.
These days, the middle-class families whose tax payments propped up the expansive urban systems of an earlier day have long since moved out of town. Most of the remaining residents are poor, and the ongoing redistribution of wealth in America toward the very rich and away from everyone else has driven down the income of the urban poor to the point that many of them can no longer afford to pay their water and power bills. City utilities in Detroit and Baltimore have been sufficiently sensitive to political pressures that large-scale utility shutoffs have been delayed, but shifts in the political climate in both cities are bringing the delays to an end; water bills have increased steadily, more and more people have been unable to pay them, and the result is as predictable as it is brutal.
The debate over the Detroit and Baltimore shutoffs has followed the usual pattern, as one side wallows in bash-the-poor rhetoric while the other side insists plaintively that access to utilities is a human right. Neither side seems to be interested in talking about the broader context in which these disputes take shape. There are two aspects to that broader context, and it’s a tossup which is the more threatening.
The first aspect is the failure of the US economy to recover in any meaningful sense from the financial crisis of 2008. Now of course politicians from Obama on down have gone overtime grandstanding about the alleged recovery we’re in. I invite any of my readers who bought into that rhetoric to try the following simple experiment. Go to your favorite internet search engine and look up how much the fracking industry has added to the US gross domestic product each year from 2009 to 2014. Now subtract that figure from the US gross domestic product for each of those years, and see how much growth there’s actually been in the rest of the economy since the real estate bubble imploded.
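For readers who want to see the shape of that arithmetic without hunting down the statistics themselves, here’s a minimal sketch in Python. Every figure in it is a placeholder I’ve made up for illustration, not actual GDP or fracking data; swap in whatever numbers your own search turns up.

```python
# Minimal sketch of the subtraction described above.
# All dollar figures are made-up placeholders (trillions of nominal dollars),
# NOT real statistics; substitute the numbers your own search turns up.

gdp = {2009: 14.4, 2010: 15.0, 2011: 15.5, 2012: 16.2, 2013: 16.7, 2014: 17.4}
fracking_contribution = {2009: 0.1, 2010: 0.2, 2011: 0.3, 2012: 0.4, 2013: 0.5, 2014: 0.6}

previous = None
for year in sorted(gdp):
    # GDP with the fracking industry's contribution stripped out
    rest_of_economy = gdp[year] - fracking_contribution[year]
    if previous is not None:
        growth = (rest_of_economy - previous) / previous * 100
        print(f"{year}: rest of economy ${rest_of_economy:.1f} trillion, growth {growth:+.1f}%")
    previous = rest_of_economy
```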
What you’ll find, if you take the time to do that, is that the rest of the US economy has been flat on its back gasping for air for the last five years. What makes this even more problematic, as I’ve noted in several previous posts here, is that the great fracking boom about which we’ve heard so much for the last five years was never actually the game-changing energy revolution its promoters claimed; it was simply another installment in the series of speculative bubbles that has largely replaced constructive economic activity in this country over the last two decades or so.
What’s more, it’s not the only bubble currently being blown, and it may not even be the largest. We’ve also got a second tech-stock bubble, with money-losing internet corporations racking up absurd valuations in the stock market while they burn through millions of dollars of venture capital; we’ve got a student loan bubble, in which billions of dollars of loans that will never be paid back have been bundled, packaged, and sold to investors just like all those no-doc mortgages were a decade ago; car loans are getting the same treatment; the real estate market is fizzing again in many urban areas as investors pile into another round of lavishly marketed property investments—well, I could go on for some time. It’s entirely possible that if all the bubble activity were to be subtracted from the last five years or so of GDP, the result would show an economy in freefall.
Certainly that’s the impression that emerges if you take the time to check out those economic statistics that aren’t being systematically jiggered by the US government for PR purposes. The number of long-term unemployed in America is at an all-time high; roads, bridges, and other basic infrastructure is falling to pieces; measurements of US public health—generally considered a good proxy for the real economic condition of the population—are well below those of other industrial countries, heading toward Third World levels; abandoned shopping malls litter the landscape while major retailers announce more than 6000 store closures. These are not things you see in an era of economic expansion, or even one of relative stability; they’re markers of decline.
The utility shutoffs in Detroit and Baltimore are further symptoms of the same broad process of economic unraveling. It’s true, as pundits in the media have been insisting since the story broke, that utilities get shut off for nonpayment of bills all the time. It’s equally true that shutting off the water supply of 20,000 or 30,000 people all at once is pretty much unprecedented. Both cities, please note, have had very large populations of poor people for many decades now.  Those who like to blame a “culture of poverty” for the troubles of the American poor, and of course that trope has been rehashed by some of the pundits just mentioned, haven’t yet gotten around to explaining how the culture of poverty all at once inspired tens of thousands of people who had been paying their utility bills to stop doing so.
There are plenty of good reasons, after all, why poor people who used to pay their bills can’t do so any more. Standard business models in the United States used to take it for granted that the best way to handle the staffing side of any company, large or small, was to have as many full-time positions as possible and to use raises and other practical incentives to encourage employees who were good at their jobs to stay with the company. That approach has become increasingly unfashionable in today’s America, partly due to perverse regulatory incentives that penalize employers for offering full-time positions, and partly to the emergence of attitudes in corner offices that treat employees as just another commodity. (I doubt it’s any kind of accident that most corporations nowadays refer to their employment offices as “human resource departments.” What do you do with a resource? You exploit it.)
These days, most of the jobs available to the poor are part-time, pay very little, and include nasty little clawbacks in the form of requirements that employees pay out of pocket for uniforms, equipment, and other things that employers used to provide as a matter of course. Meanwhile housing prices and rents are rising well above their post-2008 dip, and a great many other necessities are becoming more costly—inflation may be under control, or so the official statistics say, but anyone who’s been shopping at the same grocery store for the last eight years knows perfectly well that prices kept on rising anyway.
So you’ve got falling incomes running up against rising costs for food, rent, and utilities, among other things. In the resulting collision, something’s got to give, and for tens of thousands of poor Detroiters and Baltimoreans, what gave first was the ability to keep current on their water bills. Expect to see the same story playing out across the country as more people on the bottom of the income pyramid find themselves in the same situation. What you won’t hear in the media, though it’s visible enough if you know where to look and are willing to do so, is that people above the bottom of the income pyramid are also losing ground, being forced down toward economic nonpersonhood. From the middle classes down, everyone’s losing ground.
That process doesn’t extend any higher than the middle class, to be sure. It’s been pointed out repeatedly that over the last four decades or so, the distribution of wealth in America has skewed further and further out of balance, with the top 20% of incomes taking a larger and larger share at the expense of everybody else. That’s an important factor in bringing about the collision just described. Some thinkers on the radical fringes of American society, which is the only place in the US where you can talk about such things these days, have argued that the raw greed of the well-to-do is the sole reason why so many people lower down the ladder are being pushed further down still.
Scapegoating rhetoric of that sort is always comforting, because it holds out the promise—theoretically, if not practically—that something can be done about the situation. If only the thieving rich could be lined up against a convenient brick wall and removed from the equation in the time-honored fashion, the logic goes, people in Detroit and Baltimore could afford to pay their water bills!  I suspect we’ll hear such claims increasingly often as the years pass and more and more Americans find their access to familiar comforts and necessities slipping away.  Simple answers are always popular in such times, not least when the people being scapegoated go as far out of their way to make themselves good targets for such exercises as the American rich have done in recent decades.
John Kenneth Galbraith’s equation of the current US political and economic elite with the French aristocracy on the eve of revolution rings even more true than it did when he wrote it back in 1992, in the pages of The Culture of Contentment. The unthinking extravagances, the casual dismissal of the last shreds of noblesse oblige, the obsessive pursuit of personal advantages and private feuds without the least thought of the potential consequences, the bland inability to recognize that the power, privilege, wealth, and sheer survival of the aristocracy depended on the system the aristocrats themselves were destabilizing by their actions—it’s all there, complete with sprawling overpriced mansions that could just about double for Versailles. The urban mobs that played so large a role back in 1789 are warming up for their performances as I write these words; the only thing left to complete the picture is a few tumbrils and a guillotine, and those will doubtless arrive on cue.
The senility of the current US elite, as noted in a previous post here, is a massive political fact in today’s America. Still, it’s not the only factor in play here. Previous generations of wealthy Americans recognized without too much difficulty that their power, prosperity, and survival depended on the willingness of the rest of the population to put up with their antics. Several times already in America’s history, elite groups have allied with populist forces to push through reforms that sharply weakened the power of the wealthy elite, because they recognized that the alternative was a social explosion even more destructive to the system on which elite power depends.
I suppose it’s possible that the people currently occupying the upper ranks of the political and economic pyramid in today’s America are just that much more stupid than their equivalents in the Jacksonian, Progressive, and New Deal eras. Still, there’s at least one other explanation to hand, and it’s the second of the two threatening contextual issues mentioned earlier.
Until the nineteenth century, fresh running water piped into homes for everyday use was purely an affectation of the very rich in a few very wealthy and technologically adept societies. Sewer pipes to take dirty water and human wastes out of the house belonged in the same category. This wasn’t because nobody knew how plumbing works—the Romans had competent plumbers, for example, and water faucets and flush toilets were to be found in Roman mansions of the imperial age. The reason those same things weren’t found in every Roman house was economic, not technical.
Behind that economic issue lay an ecological reality.  White’s Law, one of the foundational principles of human ecology, states that economic development is a function of energy per capita. For a society before the industrial age, the Roman Empire had an impressive amount of energy per capita to expend; control over the agricultural economy of the Mediterranean basin, modest inputs from sunlight, water and wind, and a thriving slave industry fed by the expansion of Roman military power all fed into the capacity of Roman society to develop itself economically and technically. That’s why rich Romans had running water and iced drinks in summer, while their equivalents in ancient Greece a few centuries earlier had to make do without either one.
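White’s Law is often summarized as C = E × T: cultural and economic development tracks energy harnessed per capita, multiplied by the efficiency of the technology that puts it to work. The toy comparison below uses entirely invented figures and hypothetical labels, so it shows the shape of the calculation rather than anyone’s real energy budget.

```python
# Toy illustration of White's Law, often summarized as C = E x T:
# development (C) tracks energy per capita (E) times the efficiency (T)
# of the technology putting that energy to work.
# Every number below is invented purely for illustration.

societies = {
    "hypothetical empire A": {"energy_per_capita": 10.0, "tech_efficiency": 0.4},
    "hypothetical society B": {"energy_per_capita": 4.0, "tech_efficiency": 0.3},
}

for name, s in societies.items():
    development = s["energy_per_capita"] * s["tech_efficiency"]
    print(f"{name}: development index {development:.2f} (arbitrary units)")
```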
Fossil fuels gave industrial civilization a supply of energy many orders of magnitude greater than any previous human civilization has had—a supply vast enough that the difference remains huge even after the vast expansion of population that followed the industrial revolution. There was, however, a catch—or, more precisely, two catches. To begin with, fossil fuels are finite, nonrenewable resources; no matter how much handwaving is employed in the attempt to obscure this point—and whatever else might be in short supply these days, that sort of handwaving is not—every barrel of oil, ton of coal, or cubic foot of natural gas that’s burnt takes the world one step closer to the point at which there will be no economically extractable reserves of oil, coal, or natural gas at all.
That’s catch #1. Catch #2 is subtler, and considerably more dangerous. Oil, coal, and natural gas don’t leap out of the ground on command. They have to be extracted and processed, and this takes energy. Companies in the fossil fuel industries have always targeted the deposits that cost less to extract and process, for obvious economic reasons. What this means, though, is that over time, a larger and larger fraction of the energy yield of oil, coal, and natural gas has to be put right back into extracting and processing oil, coal, and natural gas—and this leaves less and less for all other uses.
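To put rough numbers on that second catch, here’s a toy net-energy calculation; the energy-return figures are chosen purely for illustration and aren’t drawn from any actual field data.

```python
# Toy arithmetic for catch #2: as extraction costs rise, the energy surplus
# left for the rest of the economy shrinks. EROEI means energy returned on
# energy invested; the values below are illustrative only.

def net_energy(gross_energy, eroei):
    """Energy left over after the energy cost of extraction is paid."""
    invested = gross_energy / eroei  # energy plowed back into getting the energy
    return gross_energy - invested

gross = 100.0  # arbitrary units of energy produced in a year

for eroei in (50, 20, 10, 5, 2):
    surplus = net_energy(gross, eroei)
    print(f"EROEI {eroei:>2}: {surplus:.0f} of {gross:.0f} units left for everything else")
```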
That’s the vise that’s tightening around the American economy these days. The great fracking boom, to the extent that it wasn’t simply one more speculative gimmick aimed at the pocketbooks of chumps, was an attempt to make up for the ongoing decline of America’s conventional oilfields by going after oil that was far more expensive to extract. The fact that none of the companies at the heart of the fracking boom ever turned a profit, even when oil brought more than $100 a barrel, gives some sense of just how costly shale oil is to get out of the ground. The financial cost of extraction, though, is a proxy for the energy cost of extraction—the amount of energy, and of the products of energy, that had to be thrown into the task of getting a little extra oil out of marginal source rock.
Energy needed to extract energy, again, can’t be used for any other purpose. It doesn’t contribute to the energy surplus that makes economic development possible. As the energy industry itself takes a bigger bite out of each year’s energy production, every other economic activity loses part of the fuel that makes it run. That, in turn, is the core reason why the American economy is on the ropes, America’s infrastructure is falling to bits—and Americans in Detroit and Baltimore are facing a transition to Third World conditions, without electricity or running water.
I suspect, for what it’s worth, that the shutoff notices being mailed to tens of thousands of poor families in those two cities are a good working model for the way that industrial civilization itself will wind down. It won’t be sudden; for decades to come, there will still be people who have access to what Americans today consider the ordinary necessities and comforts of everyday life; there will just be fewer of them each year. Outside that narrowing circle, the number of economic nonpersons will grow steadily, one shutoff notice at a time.
As I’ve pointed out in previous posts, the line of fracture between the senile elite and what Arnold Toynbee called the internal proletariat—the people who live within a failing civilization’s borders but receive essentially none of its benefits—eventually opens into a chasm that swallows what’s left of the civilization. Sometimes the tectonic processes that pull the chasm open are hard to miss, but there are times when they’re a good deal more difficult to sense in action, and this is one of these latter times. Listen to the whisper of the shutoff valve, and you’ll hear tens of thousands of Americans being cut off from basic services the rest of us, for the time being, still take for granted.

The Death of the Internet: A Pre-Mortem

Wed, 2015-04-29 17:25
The mythic role assigned to progress in today’s popular culture has any number of odd effects, but one of the strangest is the blindness to the downside that clamps down on the collective imagination of our time once people become convinced that something or other is the wave of the future. It doesn’t matter in the least how many or how obvious the warning signs are, or how many times the same tawdry drama has been enacted.  Once some shiny new gimmick gets accepted as the next glorious step in the invincible march of progress, most people lose the ability to imagine that the wave of the future might just do what waves generally do: that is to say, crest, break, and flow back out to sea, leaving debris scattered on the beach in its wake.
It so happens that I grew up in the middle of just such a temporary wave of the future, in the south Seattle suburbs in the 1960s, where every third breadwinner worked for Boeing. The wave in question was the supersonic transport, SST for short: a jetliner that would fly faster than sound, cutting hours off long flights. The inevitability of the SST was an article of faith locally, and not just because Boeing was building one; an Anglo-French consortium was in the lead with the Concorde, and the Soviets were working on the Tu-144, but the Boeing 2707 was expected to be the biggest and baddest of them all, a 300-seat swing-wing plane that was going to make commercial supersonic flight an everyday reality.
Long before the 2707 had even the most ghostly sort of reality, you could buy model kits of the plane, complete with Pan Am decals, at every hobby store in the greater Seattle area. For that matter, take Interstate 5 south from downtown Seattle past the sprawling Boeing plant just outside of town, and you’d see the image of the 2707 on the wall of one of the huge assembly buildings, a big delta-winged shape in white and gold winging its way through the imagined air toward the gleaming future in which so many people believed back then.
There was, as it happened, a small problem with the 2707, a problem it shared with all the other SST projects; it made no economic sense at all. It was, to be precise, what an earlier post here called  a subsidy dumpster: that is, a project that was technically feasible but economically impractical, and existed mostly as a way to pump government subsidies into Boeing’s coffers. Come 1971, the well ran dry: faced with gloomy numbers from the economists, worried calculations from environmental scientists, and a public not exactly enthusiastic about dozens of sonic booms a day rattling plates and cracking windows around major airports, Congress cut the project’s funding.
That happened right when the US economy generally, and the notoriously cyclical airplane industry in particular, were hitting downturns. Boeing was Seattle’s biggest employer in those days, and when it laid off employees en masse, the result was a local depression of legendary severity. You heard a lot of people in those days insisting that the US had missed out on the next aviation boom, and Congress would have to hang its head in shame once Concordes and Tu-144s were hauling passengers all over the globe. Of course that’s not what happened; the Tu-144 flew a handful of commercial flights and then was grounded for safety reasons, and the Concorde lingered on, a technical triumph but an economic white elephant, until the last plane retired from service in 2003.
All this has been on my mind of late as I’ve considered the future of the internet. The comparison may seem far-fetched, but then that’s what supporters of the SST would have said if anyone had compared the Boeing 2707 to, say, the zeppelin, another wave of the future that turned out to make too little economic sense to matter. Granted, the internet isn’t a subsidy dumpster, and it’s also much more complex than the SST; if anything, it might be compared to the entire system of commercial air travel, which we still have with us for the moment. Nonetheless, a strong case can be made that the internet, like the SST, doesn’t actually make economic sense; it’s being propped up by financial gimmickry with a distinct resemblance to smoke and mirrors; and when those gimmicks go away—and they will—much of what makes the internet so central a part of pop culture will go away as well.
It’s probably necessary to repeat here that the reasons for this are economic, not technical. Every time I’ve discussed the hard economic realities that make the internet’s lifespan in the deindustrial age  roughly that of a snowball in Beelzebub’s back yard, I’ve gotten a flurry of responses fixating on purely  technical issues. Those issues are beside the point.  No doubt it would be possible to make something like the internet technically feasible in a society on the far side of the Long Descent, but that doesn’t matter; what matters is that the internet has to cover its operating costs, and it also has to compete with other ways of doing the things that the internet currently does.
It’s a source of wry amusement to me that so many people seem to have forgotten that the internet doesn’t actually do very much that’s new. Long before the internet, people were reading the news, publishing essays and stories, navigating through unfamiliar neighborhoods, sharing photos of kittens with their friends, ordering products from faraway stores for home delivery, looking at pictures of people with their clothes off, sending anonymous hate-filled messages to unsuspecting recipients, and doing pretty much everything else that they do on the internet today. For the moment, doing these things on the internet is cheaper and more convenient than the alternatives, and that’s what makes the internet so popular. If that changes—if the internet becomes more costly and less convenient than other options—its current popularity is unlikely to last.
Let’s start by looking at the costs. Every time I’ve mentioned the future of the internet on this blog, I’ve gotten comments and emails from readers who think that the price of their monthly internet service is a reasonable measure of the cost of the internet as a whole. For a useful corrective to this delusion, talk to people who work in data centers. You’ll hear about trucks pulling up to the loading dock every single day to offload pallet after pallet of brand new hard drives and other components, to replace those that will burn out that same day. You’ll hear about power bills that would easily cover the electricity costs of a small city. You’ll hear about many other costs as well. Data centers are not cheap to run; there are many thousands of them, and they’re only one part of the vast infrastructure we call the internet: by many measures, the most gargantuan technological project in the history of our species.
Your monthly fee for internet service covers only a small portion of what the internet costs. Where does the rest come from? That depends on which part of the net we’re discussing. The basic structure is paid for by internet service providers (ISPs), who recoup part of the costs from your monthly fee, part from the much larger fees paid by big users, and part by advertising. Content providers use some mix of advertising, pay-to-play service fees, sales of goods and services, packaging and selling your personal data to advertisers and government agencies, and new money from investors and loans to meet their costs. The ISPs routinely make a modest profit on the deal, but many of the content providers do not. Amazon may be the biggest retailer on the planet, for example, and its cash flow has soared in recent years, but its expenses have risen just as fast, and it rarely makes a profit. Many other content provider firms, including fish as big as Twitter, rack up big losses year after year.
How do they stay in business? A combination of vast amounts of investment money and ultracheap debt. That’s very common in the early decades of a new industry, though it’s been made a good deal easier by the Fed’s policy of next-to-zero interest rates. Investors who dream of buying stock in the next Microsoft provide venture capital for internet startups, banks provide lines of credit for existing firms, the stock and bond markets snap up paper of various kinds churned out by internet businesses, and all that money goes to pay the bills. It’s a reasonable gamble for the investors; they know perfectly well that a great many of the firms they’re funding will go belly up within a few years, but the few that don’t will either be bought up at inflated prices by one of the big dogs of the online world, or will figure out how to make money and then become big dogs themselves.
Notice, though, that this process has an unexpected benefit for ordinary internet users: a great many services are available for free, because venture-capital investors and lines of credit are footing the bill for the time being. Boosting the number of page views and clickthroughs is far more important for the future of an internet company these days than making a profit, and so the usual business plan is to provide plenty of free goodies to the public without worrying about the financial end of things. That’s very convenient just now for internet users, but it fosters the illusion that the internet costs nothing.
As mentioned earlier, this sort of thing is very common in the early decades of a new industry. As the industry matures, markets become saturated, startups become considerably riskier, and venture capital heads for greener pastures.  Once this happens, the companies that dominate the industry have to stay in business the old-fashioned way, by earning a profit, and that means charging as much as the market will bear, monetizing services that are currently free, and cutting service to the lowest level that customers will tolerate. That’s business as usual, and it means the end of most of the noncommercial content that gives the internet so much of its current role in popular culture.
All other things being equal, in other words, the internet can be expected to follow the usual trajectory of a maturing industry, becoming more expensive, less convenient, and more tightly focused on making a quick buck with each passing year. Governments have already begun to tax internet sales, removing one of the core “stealth subsidies” that boosted the internet at the expense of other retail sectors, and taxation of the internet will only increase as cash-starved officials contemplate the tidal waves of money sloshing back and forth online. None of these changes will kill the internet, but they’ll slap limits on the more utopian fantasies currently burbling about the web, and provide major incentives for individuals and businesses to back away from the internet and do things in the real world instead.
Then there’s the increasingly murky world of online crime, espionage, and warfare, which promises to push very hard in the same direction in the years ahead.  I think most people are starting to realize that on the internet, there’s no such thing as secure data, and the costs of conducting business online these days include a growing risk of having your credit cards stolen, your bank accounts looted, your identity borrowed for any number of dubious purposes, and the files on your computer encrypted without your knowledge, so that you can be forced to pay a ransom for their release—this latter, or so I’ve read, is the latest hot new trend in internet crime.
Online crime is one of the few fields of criminal endeavor in which raw cleverness is all you need to make out, as the saying goes, like a bandit. In the years ahead, as a result, the internet may look less like an information superhighway and more like one of those grim inner city streets where not even the muggers go alone. Trends in online espionage and warfare are harder to track, but either or both could become a serious burden on the internet as well.
Online crime, espionage, and warfare aren’t going to kill the internet, any more than the ordinary maturing of the industry will. Rather, they’ll lead to a future in which costs of being online are very often greater than the benefits, and the internet is by and large endured rather than enjoyed. They’ll also help drive the inevitable rebound away from the net. That’s one of those things that always happens and always blindsides the cheerleaders of the latest technology: a few decades into its lifespan, people start to realize that they liked the old technology better, thank you very much, and go back to it. The rebound away from the internet has already begun, and will only become more visible as time goes on, making a great many claims about the future of the internet look as absurd as those 1950s articles insisting that in the future, every restaurant would inevitably be a drive-in.
To be sure, the resurgence of live theater in the wake of the golden age of movie theaters didn’t end cinema, and the revival of bicycling in the aftermath of the automobile didn’t make cars go away. In the same way, the renewal of interest in offline practices and technologies isn’t going to make the internet go away. It’s simply going to accelerate the shift of avant-garde culture away from an increasingly bleak, bland, unsafe, and corporate- and government-controlled internet and into alternative venues. That won’t kill the internet, though once again it will put a stone marked R.I.P. atop the grave of a lot of the utopian fantasies that have clustered around today’s net culture.
All other things being equal, in fact, there’s no reason why the internet couldn’t keep on its present course for years to come. Under those circumstances, it would shed most of the features that make it popular with today’s avant-garde, and become one more centralized, regulated, vacuous mass medium, packed to the bursting point with corporate advertising and lowest-common-denominator content, with dissenting voices and alternative culture shut out or shoved into corners where nobody ever looks. That’s the normal trajectory of an information technology in today’s industrial civilization, after all; it’s what happened with radio and television in their day, as the gaudy and grandiose claims of the early years gave way to the crass commercial realities of the mature forms of each medium.
But all other things aren’t equal.
Radio and television, like most of the other familiar technologies that define life in a modern industrial society, were born and grew to maturity in an expanding economy. The internet, by contrast, was born during the last great blowoff of the petroleum age—the last decades of the twentieth century, during which the world’s industrial nations took the oil reserves that might have cushioned the transition to sustainability, and blew them instead on one last orgy of over-the-top conspicuous consumption—and it’s coming to maturity in the early years of an age of economic contraction and ecological blowback.
The rising prices, falling service quality, and relentless monetization of a maturing industry, together with the increasing burden of online crime and the inevitable rebound away from internet culture, will thus be hitting the internet in a time when the global economy no longer has the slack it once did, and the immense costs of running the internet in anything like its present form will have to be drawn from a pool of real wealth that has many other demands on it. What’s more, quite a few of those other demands will be far more urgent than the need to provide consumers with a convenient way to send pictures of kittens to their friends. That stark reality will add to the pressure to monetize internet services, and provide incentives to those who choose to send their kitten pictures by other means.
It’s crucial to remember here, as noted above, that the internet is simply a cheaper and more convenient way of doing things that people were doing long before the first website went live, and a big part of the reason why it’s cheaper and more convenient right now is that internet users are being subsidized by the investors and venture capitalists who are funding the internet industry. That’s not the only subsidy on which the internet depends, though. Along with the rest of industrial society, it’s also subsidized by half a billion years of concentrated solar energy in the form of fossil fuels.  As those deplete, the vast inputs of energy, labor, raw materials, industrial products, and other forms of wealth that sustain the internet will become increasingly expensive to provide, and ways of distributing kitten pictures that don’t require the same inputs will prosper in the resulting competition.
There are also crucial issues of scale. Most pre-internet communications and information technologies scale down extremely well. A community of relatively modest size can have its own public library, its own small press, its own newspaper, and its own radio station running local programming, and could conceivably keep all of these functioning and useful even if the rest of humanity suddenly vanished from the map. Internet technology doesn’t have that advantage. It’s orders of magnitude more complex and expensive than a radio transmitter, not to mention such centuries-old technologies as printing presses and card catalogs; what’s more, on the scale of a small community, the benefits of using internet technology instead of simpler equivalents wouldn’t come close to justifying the vast additional cost.
Now of course the world of the future isn’t going to consist of a single community surrounded by desolate wasteland. That’s one of the reasons why the demise of the internet won’t happen all at once. Telecommunications companies serving some of the more impoverished parts of rural America are already letting their networks in those areas degrade, since income from customers doesn’t cover the costs of maintenance.  To my mind, that’s a harbinger of the internet’s future—a future of uneven decline punctuated by local and regional breakdowns, some of which will be fixed for a while.
That said, it’s quite possible that there will still be an internet of some sort fifty years from now. It will connect government agencies, military units, defense contractors, and the handful of universities that survive the approaching implosion of the academic industry here in the US, and it may provide email and a few other services to the very rich, but it will otherwise have a lot more in common with the original DARPAnet than with the 24/7 virtual cosmos imagined by today’s more gullible netheads.
Unless you’re one of the very rich or an employee of one of the institutions just named, furthermore, you won’t have access to the internet of 2065.  You might be able to hack into it, if you have the necessary skills and are willing to risk a long stint in a labor camp, but unless you’re a criminal or a spy working for the insurgencies flaring in the South or the mountain West, there’s not much point to the stunt. If you’re like most Americans in 2065, you live in Third World conditions without regular access to electricity or running water, and you’ve got other ways to buy things, find out what’s going on in the world, find out how to get to the next town and, yes, look at pictures of people with their clothes off. What’s more, in a deindustrializing world, those other ways of doing things will be cheaper, more resilient, and more useful than reliance on the baroque intricacies of a vast computer net.
Exactly when the last vestiges of the internet will sputter to silence is a harder question to answer. Long before that happens, though, it will have lost its current role as one of the poster children of the myth of perpetual progress, and turned back into what it really was all the time: a preposterously complex way to do things most people have always done by much simpler means, which only seemed to make sense during that very brief interval of human history when fossil fuels were abundant and cheap.
***In other news, I’m pleased to announce that the third anthology of deindustrial SF stories from this blog’s “Space Bats” contest, After Oil 3: The Years of Rebirth, is now available in print and e-book formats. Those of my readers who’ve turned the pages of the two previous After Oil anthologies already know that this one has a dozen eminently readable and thought-provoking stories about the world on the far side of the Petroleum Age; the rest of you—why, you’re in for a treat. Those who are interested in contributing to the next After Oil anthology will find the details here.

A Field Guide to Negative Progress

Wed, 2015-04-22 17:23
I've commented before in these posts that writing is always partly a social activity. What Mortimer Adler used to call the Great Conversation, the dance of ideas down the corridors of the centuries, shapes every word in a writer’s toolkit; you can hardly write a page in English without drawing on a shade of meaning that Geoffrey Chaucer, say, or William Shakespeare, or Jane Austen first put into the language. That said, there’s also a more immediate sense in which any writer who interacts with his or her readers is part of a social activity, and one of the benefits came my way just after last week’s post.
That post began with a discussion of the increasingly surreal quality of America’s collective life these days, and one of my readers—tip of the archdruidical hat to Anton Mett—had a fine example to offer. He’d listened to an economic report on the media, and the talking heads were going on and on about the US economy’s current condition of, ahem, “negative growth.” Negative growth? Why yes, that’s the opposite of growth, and it’s apparently quite a common bit of jargon in economics just now.
Of course the English language, as used by the authors named earlier among many others, has no shortage of perfectly clear words for the opposite of growth. “Decline” comes to mind; so does “decrease,” and so does “contraction.” Would it have been so very hard for the talking heads in that program, or their many equivalents in our economic life generally, to draw in a deep breath and actually come right out and say “The US economy has contracted,” or “GDP has decreased,” or even “we’re currently in a state of economic decline”? Come on, economists, you can do it!
But of course they can’t.  Economists in general are supposed to provide, shall we say, negative clarity when discussing certain aspects of contemporary American economic life, and talking heads in the media are even more subject to this rule than most of their peers. Among the things about which they’re supposed to be negatively clear, two are particularly relevant here; the first is that economic contraction happens, and the second is that letting too much of the national wealth end up in too few hands is a very effective way to cause economic contraction. The logic here is uncomfortably straightforward—an economy that depends on consumer expenditures only prospers if consumers have plenty of money to spend—but talking about that equation would cast an unwelcome light on the culture of mindless kleptocracy entrenched these days at the upper end of the US socioeconomic ladder. So we get to witness the mass production of negative clarity about one of the main causes of negative growth.
It’s entrancing to think of other uses for this convenient mode of putting things. I can readily see it finding a role in health care—“I’m sorry, ma’am,” the doctor says, “but your husband is negatively alive;” in sports—“Well, Joe, unless the Orioles can cut down that negative lead of theirs, they’re likely headed for a negative win;” and in the news—“The situation in Yemen is shaping up to be yet another negative triumph for US foreign policy.” For that matter, it’s time to update one of the more useful proverbs of recent years: what do you call an economist who makes a prediction? Negatively right.
Come to think of it, we might as well borrow the same turn of phrase for the subject of last week’s post, the deliberate adoption of older, simpler, more independent technologies in place of today’s newer, more complex, and more interconnected ones. I’ve been talking about that project so far under the negatively mealy-mouthed label “intentional technological regress,” but hey, why not be cool and adopt the latest fashion? For this week, at least, we’ll therefore redefine our terms a bit, and describe the same thing as “negative progress.” Since negative growth sounds like just another kind of growth, negative progress ought to pass for another kind of progress, right?
With this in mind, I’d like to talk about some of the reasons that individuals, families, organizations, and communities, as they wend their way through today’s cafeteria of technological choices, might want to consider loading up their plates with a good hearty helping of negative progress.
Let’s start by returning to one of the central points raised here in earlier posts, the relationship between progress and the production of externalities. By and large, the more recent a technology is, the more of its costs aren’t paid by the makers or the users of the technology, but are pushed off onto someone else. As I pointed out in a post two months ago, this isn’t accidental; quite the contrary, it’s hardwired into the relationship between progress and market economics, and bids fair to play a central role in the unraveling of the entire project of industrial civilization.
The same process of increasing externalities, though, has another face when seen from the point of view of the individual user of any given technology. When you externalize any cost of a technology, you become dependent on whoever or whatever picks up the cost you’re not paying. What’s more, you become dependent on the system that does the externalizing, and on whoever controls that system. Those dependencies aren’t always obvious, but they impose costs of their own, some financial and some less tangible. What’s more, unlike the externalized costs, a great many of these secondary costs land directly on the user of the technology.
It’s interesting, and may not be entirely accidental, that there’s no commonly used term for the entire structure of externalities and dependencies that stand behind any technology. Such a term is necessary here, so for the present purpose,  we’ll call the structure just named the technology’s externality system. Given that turn of phrase, we can restate the point about progress made above: by and large, the more recent a technology is, the larger the externality system on which it depends.
An example will be useful here, so let’s compare the respective externality systems of a bicycle and an automobile. Like most externality systems, these divide up more or less naturally into three categories: manufacture, maintenance, and use. Everything that goes into fabricating steel parts, for instance, all the way back to the iron ore in the mine, is an externality of manufacture; everything that goes into making lubricating oil, all the way back to drilling the oil well, is an externality of maintenance; everything that goes into building roads suitable for bikes and cars is an externality of use.
Both externality systems are complex, and include a great many things that aren’t obvious at first glance. The point I want to make here, though, is that the car’s externality system is far and away the more complex of the two. In fact, the bike’s externality system is a subset of the car’s, and this reflects the specific historical order in which the two technologies were developed. When the technologies that were needed for a bicycle’s externality system came into use, the first bicycles appeared; when all the additional technologies needed for a car’s externality system were added onto that foundation, the first cars followed. That sort of incremental addition of externality-generating technologies is far and away the most common way that technology progresses.
We can thus restate the pattern just analyzed in a way that brings out some of its less visible and more troublesome aspects: by and large, each new generation of technology imposes more dependencies on its users than the generation it replaces. Again, a comparison between bicycles and automobiles will help make that clear. If you want to ride a bike, you’ve committed yourself to dependence on all the technical, economic, and social systems that go into manufacturing, maintaining, and using the bike; you can’t own, maintain, and ride a bike without the steel mills that produce the frame, the chemical plants that produce the oil you squirt on the gears, the gravel pits that provide raw material for roads and bike paths, and so on.
On the other hand, you’re not dependent on a galaxy of other systems that provide the externality system for your neighbor who drives. You don’t depend on the immense network of pipelines, tanker trucks, and gas stations that provide him with fuel; you don’t depend on the interstate highway system or the immense infrastructure that supports it; if you did the sensible thing and bought a bike that was made by a local craftsperson, your dependence on vast multinational corporations and all of their infrastructure, from sweatshop labor in Third World countries to financial shenanigans on Wall Street, is considerably smaller than that of your driving neighbor. Every dependency you have, your neighbor also has, but not vice versa.
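One crude way to picture that asymmetry is to treat each externality system as a set of dependencies. The sample entries below are deliberately incomplete and made up off the top of my head, but they show the subset relationship in miniature.

```python
# Crude model of the bike/car comparison: every bike dependency is also a
# car dependency, but not the other way around. The lists are illustrative
# samples, nowhere near exhaustive.

bike_dependencies = {
    "steel mills", "rubber production", "lubricating oil", "paved roads and bike paths",
}

car_dependencies = bike_dependencies | {
    "oil refineries", "pipelines and tanker trucks", "gas stations",
    "interstate highway system", "auto finance and insurance",
}

print(bike_dependencies <= car_dependencies)  # True: the subset relation holds
print(car_dependencies <= bike_dependencies)  # False: the reverse does not
```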
Whether or not these dependencies matter is a complex thing. Obviously there’s a personal equation—some people like to be independent, others are fine with being just one more cog in the megamachine—but there’s also a historical factor to consider. In an age of economic expansion, the benefits of dependency very often outweigh the costs; standards of living are rising, opportunities abound, and it’s easy to offset the costs of any given dependency. In a stable economy, one that’s neither growing nor contracting, the benefits and costs of any given dependency need to be weighed carefully on a case by case basis, as one dependency may be worth accepting while another costs more than it’s worth.
On the other hand, in an age of contraction and decline—or, shall we say, negative expansion?—most dependencies are problematic, and some are lethal. In a contracting economy, as everyone scrambles to hold onto as much as possible of the lifestyles of a more prosperous age, your profit is by definition someone else’s loss, and dependency is just another weapon in the Hobbesian war of all against all. By many measures, the US economy has been contracting since before the bursting of the housing bubble in 2008; by some—in particular, the median and modal standards of living—it’s been contracting since the 1970s, and the unmistakable hissing sound as air leaks out of the fracking bubble just now should be considered fair warning that another round of contraction is on its way.
With that in mind, it’s time to talk about the downsides of dependency.
First of all, dependency is expensive. In the struggle for shares of a shrinking pie in a contracting economy, turning any available dependency into a cash cow is an obvious strategy, and one that’s already very much in play. Consider the conversion of freeways into toll roads, an increasingly popular strategy in large parts of the United States. Consider, for that matter, the soaring price of health care in the US, which hasn’t been accompanied by any noticeable increase in quality of care or treatment outcomes. In the dog-eat-dog world of economic contraction, commuters and sick people are just two of many captive populations whose dependencies make them vulnerable to exploitation. As the spiral of decline continues, it’s safe to assume that any dependency that can be exploited will be exploited, and the more dependencies you have, the more likely you are to be squeezed dry.
The same principle applies to power as well as money; thus, whoever owns the systems on which you depend, owns you. In the United States, again, laws meant to protect employees from abusive behavior on the part of employers are increasingly ignored; as the number of the permanently unemployed keeps climbing year after year, employers know that those who still have jobs are desperate to keep them, and will put up with almost anything in order to keep that paycheck coming in. The old adage about the inadvisability of trying to fight City Hall has its roots in this same phenomenon; no matter what rights you have on paper, you’re not likely to get far with them when the other side can stop picking up your garbage and then fine you for creating a public nuisance, or engage in some other equally creative use of their official prerogatives. As decline accelerates, expect to see dependencies increasingly used as levers for exerting various kinds of economic, political, and social power at your expense.
Finally, and crucially, if you’re dependent on a failing system, when the system goes down, so do you. That’s not just an issue for the future; it’s a huge if still largely unmentioned reality of life in today’s America, and in most other corners of the industrial world as well. Most of today’s permanently unemployed got that way because the job on which they depended for their livelihood got offshored or automated out of existence; much of the rising tide of poverty across the United States is a direct result of the collapse of political and social systems that once countered the free market’s innate tendency to drive the gap between rich and poor to Dickensian extremes. For that matter, how many people who never learned how to read a road map are already finding themselves in random places far from help because something went wrong with their GPS units?
It’s very popular among those who recognize the problem with being shackled to a collapsing system to insist that it’s a problem for the future, not the present. They grant that dependency is going to be a losing bet someday, but everything’s fine for now, so why not enjoy the latest technological gimmickry while it’s here? Of course that presupposes that you enjoy the latest technological gimmickry, which isn’t necessarily a safe bet, and it also ignores the first two difficulties with dependency outlined above, which are very much present and accounted for right now. We’ll let both those issues pass for the moment, though, because there’s another factor that needs to be included in the calculation.
A practical example, again, will be useful here. In my experience, it takes around five years of hard work, study, and learning from your mistakes to become a competent vegetable gardener. If you’re transitioning from buying all your vegetables at the grocery store to growing them in your backyard, in other words, you need to start gardening about five years before your last trip to the grocery store. The skill and hard work that goes into growing vegetables is one of many things that most people in the world’s industrial nations externalize, and those things don’t just pop back to you when you leave the produce section of the store for the last time. There’s a learning curve that has to be climbed.
Not that long ago, there used to be a subset of preppers who grasped the fact that a stash of cartridges and canned wieners in a locked box at their favorite deer camp cabin wasn’t going to get them through the downfall of industrial civilization, but hadn’t factored in the learning curve. Businesses targeting the prepper market thus used to sell these garden-in-a-box kits, which had seed packets for vegetables, a few tools, and a little manual on how to grow a garden. It’s a good thing that Y2K, 2012, and all those other dates when doom was supposed to arrive turned out to be wrong, because I met a fair number of people who thought that having one of those kits would save them even though they last grew a plant from seed in fourth grade. If the apocalypse had actually arrived, survivors a few years later would have gotten used to a landscape scattered with empty garden-in-a-box kits, overgrown garden patches, and the skeletal remains of preppers who starved to death because the learning curve lasted just that much longer than they did.
The same principle applies to every other set of skills that has been externalized by people in today’s industrial society, and will be coming back home to roost as economic contraction starts to cut into the viability of our externality systems. You can adopt them now, when you have time to get through the learning curve while there’s still an industrial society around to make up for the mistakes and failures that are inseparable from learning, or you can try to adopt them later, when those same inevitable mistakes and failures could very well land you in a world of hurt. You can also adopt them now, when your dependencies haven’t yet been used to empty your wallet and control your behavior, or you can try to adopt them later, when a much larger fraction of the resources and autonomy you might have used for the purpose will have been extracted from you by way of those same dependencies.
This is a point I’ve made in previous posts here, but it applies with particular force to negative progress—that is, to the deliberate adoption of older, simpler, more independent technologies in place of the latest, dependency-laden offerings from the corporate machine. As decline—or, shall we say, negative growth—becomes an inescapable fact of life in postprogress America, decreasing your dependence on sprawling externality systems is going to be an essential tactic.
Those who become early adopters of the retro future, to use an edgy term from last week’s post, will have at least two, and potentially three, significant advantages. The first, as already noted, is that they’ll be much further along the learning curve by the time rising costs, increasing instabilities, and cascading systems failures either put the complex technosystems out of reach or push the relationship between costs and benefits well over into losing-proposition territory. The second is that as more people catch onto the advantages of older, simpler, more sustainable technologies, surviving examples will become harder to find and more expensive to buy; in this case as in many others, collapsing first ahead of the rush is, among other things, the more affordable option.
The third advantage? Depending on exactly which old technologies you happen to adopt, and whether or not you have any talent for basement-workshop manufacture and the like, you may find yourself on the way to a viable new career while most other people are losing their jobs—and their shirts. As the global economy comes unraveled and people in the United States lose their current access to shoddy imports from Third World sweatshops, there will be a demand for a wide range of tools and simple technologies that still make sense in a deindustrializing world. Those who already know how to use such technologies will be prepared to teach others how to use them; those who know how to repair, recondition, or manufacture those technologies will be prepared to barter, or to use whatever form of currency happens to replace today’s mostly hallucinatory forms of money, to good advantage.
My guess, for what it’s worth, is that salvage trades will be among the few growth industries in the 21st century, and the crafts involved in turning scrap metal and antique machinery into tools and machines that people need for their homes and workplaces will be an important part of that economic sector. To understand how that will work, though, it’s probably going to be necessary to get a clearer sense of the way that today’s complex technostructures are likely to come apart. Next week, with that in mind, we’ll spend some time thinking about the unthinkable—the impending death of the internet.

The Retro Future

Wed, 2015-04-15 18:16
Is it just me, or has the United States taken yet another great leap forward into the surreal over the last few days? Glancing through the news, I find another round of articles babbling about how fracking has guaranteed America a gaudy future as a petroleum and natural gas exporter. Somehow none of these articles get around to mentioning that the United States is a major net importer of both commodities, that most of the big-name firms in the fracking industry have been losing money at a rate of billions a year since the boom began, and that the pileup of bad loans to fracking firms is pushing the US banking industry into a significant credit crunch, but that’s just par for the course nowadays.
Then there’s the current tempest in the media’s teapot, Hillary Clinton’s presidential run. I’ve come to think of Clinton as the Khloe Kardashian of American politics, since she owed her original fame to the mere fact that she’s related to someone else who once caught the public eye. Since then she’s cycled through various roles because, basically, that’s what Famous People do, and the US presidency is just the next reality-TV gig on her bucket list. I grant that there’s a certain wry amusement to be gained from watching this child of privilege, with the help of her multimillionaire friends, posturing as a champion of the downtrodden, but I trust that none of my readers are under the illusion that this rhetoric will amount to anything more than all that chatter about hope and change eight years ago.
Let us please be real: whoever mumbles the oath of office up there on the podium in 2017, whether it’s Clinton or the interchangeably Bozoesque figures currently piling one by one out of the GOP’s clown car to contend with her, we can count on more of the same: more futile wars, more giveaways to the rich at everyone else’s expense, more erosion of civil liberties, more of all the other things Obama’s cheerleaders insisted back in 2008 he would stop as soon as he got into office.  As Arnold Toynbee pointed out a good many years ago, one of the hallmarks of a nation in decline is that the dominant elite sinks into senility, becoming so heavily invested in failed policies and so insulated from the results of its own actions that nothing short of total disaster will break its deathgrip on the body politic.
While we wait for the disaster in question, though, those of us who aren’t part of the dominant elite and aren’t bamboozled by the spectacle du jour might reasonably consider what we might do about it all. By that, of course, I don’t mean that it’s still possible to save industrial civilization in general, and the United States in particular, from the consequences of their history. That possibility went whistling down the wind a long time ago. Back in 2005, the Hirsch Report showed that any attempt to deal with the impending collision with the hard ecological limits of a finite planet had to get under way at least twenty years before the peak of global conventional petroleum production, if there was to be any chance of avoiding massive disruptions. As it happens, 2005 also marked the peak of conventional petroleum production worldwide, which may give you some sense of the scale of the current mess.
Consider, though, what happened in the wake of that announcement. Instead of dealing with the hard realities of our predicament, the industrial world panicked and ran the other way, with the United States well in the lead. Strident claims that ethanol—er, solar—um, biodiesel—okay, wind—well, fracking, then—would provide a cornucopia of cheap energy to replace the world’s rapidly depleting reserves of oil, coal, and natural gas took the place of a serious energy policy, while conservation, the one thing that might have made a difference, was as welcome as garlic aioli at a convention of vampires.
That stunningly self-defeating response had a straightforward cause, which was that everyone except a few of us on the fringes treated the whole matter as though the issue was how the privileged classes of the industrial world could maintain their current lifestyles on some other resource base.  Since that question has no meaningful answer, questions that could have been answered—for example, how do we get through the impending mess with at least some of the achievements of the last three centuries intact?—never got asked at all. At this point, as a result, ten more years have been wasted trying to come up with answers to the wrong question, and most of the  doors that were still open in 2005 have been slammed shut by events since that time.
Fortunately, there are still a few possibilities for constructive action open even this late in the game. More fortunate still, the ones that will likely matter most don’t require Hillary Clinton, or any other member of America’s serenely clueless ruling elite, to do something useful for a change. They depend, rather, on personal action, beginning with individuals, families, and local communities and spiraling outward from there to shape the future on wider and wider scales.
I’ve talked about two of these possibilities at some length in posts here. The first can be summed up simply enough in a cheery sentence:  “Collapse now and avoid the rush!”  In an age of economic contraction—and behind the current facade of hallucinatory paper wealth, we’re already in such an age—nothing is quite so deadly as the attempt to prop up extravagant lifestyles that the real economy of goods and services will no longer support. Those who thrive in such times are those who downshift ahead of the economy, take the resources that would otherwise be wasted on attempts to sustain the unsustainable, and apply them to the costs of transition to less absurd ways of living. The acronym L.E.S.S.—“Less Energy, Stuff, and Stimulation”—provides a good first approximation of the direction in which such efforts at controlled collapse might usefully move.
The point of this project isn’t limited to its advantages on the personal scale, though these are fairly substantial. It’s been demonstrated over and over again that personal example is far more effective than verbal rhetoric at laying the groundwork for collective change. A great deal of what keeps so many people pinned in the increasingly unsatisfying and unproductive lifestyles sold to them by the media is simply that they can’t imagine a better alternative. Those people who collapse ahead of the rush and demonstrate that it’s entirely possible to have a humane and decent life on a small fraction of the usual American resource footprint are already functioning as early adopters; with every month that passes, I hear from more people—especially young people in their teens and twenties—who are joining them, and helping to build a bridgehead to a world on the far side of the impending crisis.
The second possibility is considerably more complex, and resists summing up so neatly. In a series of posts here  in 2010 and 2011, and then in my book Green Wizardry, I sketched out the toolkit of concepts and approaches that were central to the appropriate technology movement back in the 1970s, where I had my original education in the subjects central to this blog. I argued then, and still believe now, that by whatever combination of genius and sheer dumb luck, the pioneers of that movement managed to stumble across a set of approaches to the work of sustainability that are better suited to the needs of our time than anything that’s been proposed since then.
Among the most important features of what I’ve called the “green wizardry” of appropriate tech is the fact that those who want to put it to work don’t have to wait for the Hillary Clintons of the world to lift a finger. Millions of dollars in government grants and investment funds aren’t necessary, or even particularly useful. From its roots in the Sixties counterculture, the appropriate tech scene inherited a focus on do-it-yourself projects that could be done with hand tools, hard work, and not much money. In an age of economic contraction, that makes even more sense than it did back in the day, and the ability to keep yourself and others warm, dry, fed, and provided with many of the other needs of life without potentially lethal dependencies on today’s baroque technostructures has much to recommend it.
Nor, it has to be said, is appropriate tech limited to those who can afford a farm in the country; many of the most ingenious and useful appropriate tech projects were developed by and for people living in ordinary homes and apartments, with a small backyard or no soil at all available for gardening. The most important feature of appropriate tech, though, is that the core elements of its toolkit—intensive organic gardening and small-scale animal husbandry, homescale solar thermal technologies, energy conservation, and the like—are all things that will still make sense long after the current age of fossil fuel extraction has gone the way of the dinosaurs. Getting these techniques into as many hands as possible now is thus not just a matter of cushioning the impacts of the impending era of crisis; it’s also a way to start building the sustainable world of the future right now.
Those two strategies, collapsing ahead of the rush and exploring the green wizardry of appropriate technology, have been core themes of this blog for quite a while now. There’s a third project, though, one that I’ve so far explored here only in a more abstract context, and it’s time to talk about how it can be applied to some of the most critical needs of our time.
In the early days of this blog, I pointed out that technological progress has a feature that’s not always grasped by its critics, much less by those who’ve turned faith in progress into the established religion of our time. Very few new technologies actually meet human needs that weren’t already being met, and so the arrival of a new technology generally leads to the abandonment of an older technology that did the same thing. The difficulty here is that new technologies nowadays are inevitably more dependent on global technostructures, and the increasingly brittle and destructive economic systems that support them, than the technologies they replace. New technologies look more efficient than old ones because more of the work is being done somewhere else, and can therefore be ignored—for now.
This is the basis for what I’ve called the externality trap. As technologies get more complex, that complexity allows more of their costs to be externalized—that is to say, pushed onto someone other than the makers or users of the technology. The pressures of a market economy guarantee that those economic actors who externalize more of their costs will prosper at the expense of those who externalize less. The costs thus externalized, though, don’t go away; they get passed from hand to hand like hot potatoes and finally pile up in the whole systems—the economy, the society, the biosphere itself—that have no voice in economic decisions, but are essential to the prosperity and survival of every economic actor, and sooner or later those whole systems will break down under the burden.  Unlimited technological progress in a market economy thus guarantees the economic, social, and/or environmental destruction of the society that fosters it.
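For readers who like to see the logic laid out mechanically, here’s a minimal sketch of that dynamic in Python—a toy model, not a forecast; the ten firms, the externalization rates, and the decay factor are all arbitrary assumptions chosen purely to illustrate the argument.

```python
# A toy model of the externality trap: firms that push more of their real
# costs onto a shared system gain market share, while that shared system
# quietly absorbs the accumulated burden until it fails. Every number here
# is an illustrative assumption, not data.

import random

def run_simulation(years=60, seed=42):
    random.seed(seed)
    # Each firm externalizes some fraction of its true production cost.
    firms = [{"externalized": random.uniform(0.0, 0.6), "share": 1.0}
             for _ in range(10)]
    system_health = 100.0   # capacity of the whole systems absorbing the costs
    true_cost = 10.0        # real cost of producing one unit of output

    for year in range(1, years + 1):
        # Heavier externalizers have lower apparent costs, so the market
        # rewards them with growing share.
        for f in firms:
            f["share"] *= 1.0 + f["externalized"] * 0.1

        # Normalize shares to a fixed-size market.
        total = sum(f["share"] for f in firms)
        for f in firms:
            f["share"] /= total

        # The externalized costs don't vanish; they degrade the shared system.
        dumped = sum(f["share"] * true_cost * f["externalized"] for f in firms)
        system_health -= dumped * 0.8

        if system_health <= 0:
            print(f"Year {year}: the shared system breaks down; every firm loses.")
            return

    print(f"After {years} years, system health stands at {system_health:.1f}.")

run_simulation()
```

Run it a few times with different seeds and the particulars shift, but the shape of the outcome doesn’t: the heaviest externalizers dominate the market right up to the point where the shared system they’ve been draining gives out from under all of them.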
The externality trap isn’t just a theoretical possibility. It’s an everyday reality, especially but not only in the United States and other industrial societies. There are plenty of forces driving the rising spiral of economic, social, and environmental disruption that’s shaking the industrial world right down to its foundations, but among the most important is precisely the unacknowledged impact of externalized costs on the whole systems that support the industrial economy. It’s fashionable these days to insist that increasing technological complexity and integration will somehow tame that rising spiral of crisis, but the externality trap suggests that exactly the opposite is the case—that the more complex and integrated technologies become, the more externalities they will generate. It’s precisely because technological complexity makes it easy to ignore externalized costs that progress becomes its own nemesis.
Yes, I know, suggesting that progress isn’t infallibly beneficent is heresy, and suggesting that progress will necessarily terminate itself with extreme prejudice is heresy twice over. I can’t help that; it so happens that in most declining civilizations, ours included, the things that most need to be said are the things that, by and large, nobody wants to hear. That being the case, I might as well make it three for three and point out that the externality trap is a problem rather than a predicament. The difference, as longtime readers know, is that problems can be solved, while predicaments can only be faced. We don’t have to keep loading an ever-increasing burden of externalized costs on the whole systems that support us—which is to say, we don’t have to keep increasing the complexity and integration of the technologies that we use in our daily lives. We can stop adding to the burden; we can even go the other way.
Now of course suggesting that, even thinking it, is heresy on the grand scale. I’m reminded of a bit of technofluff in the Canadian media a week or so back that claimed to present a radically pessimistic view of the next ten years. Of course it had as much in common with actual pessimism as lite beer has with a pint of good brown ale; the worst thing the author, one Douglas Coupland, is apparently able to imagine is that industrial society will keep on doing what it’s doing now—though the fact that more of what’s happening now apparently counts as radical pessimism these days is an interesting point, and one that deserves further discussion.
The detail of this particular Dystopia Lite that deserves attention here, though, is Coupland’s dogmatic insistence that “you can never go backward to a lessened state of connectedness.” That’s a common bit of rhetoric out of the mouths of tech geeks these days, to be sure, but it isn’t even remotely true. I know quite a few people who used to be active on social media and have dropped the habit. I know others who used to have allegedly smart phones and went back to ordinary cell phones, or even to a plain land line, because they found that the costs of excess connectedness outweighed the benefits. Technological downshifting is already a rising trend, and there are very good reasons for that fact.
Most people find out at some point in adolescence that there really is such a thing as drinking too much beer. I think a lot of people are slowly realizing that the same thing is true of connectedness, and of the other prominent features of today’s fashionable technologies. One of the data points that gives me confidence in that analysis is the way that people like Coupland angrily dismiss the possibility. Part of his display of soi-disant pessimism is the insistence that within a decade, people who don’t adopt the latest technologies will be dismissed as passive-aggressive control freaks. Now of course that label could be turned the other way just as easily, but the point I want to make here is that nobody gets that bent out of shape about behaviors that are mere theoretical possibilities. Clearly, Coupland and his geek friends are already contending with people who aren’t interested in conforming to the technosphere.
It’s not just geek technologies that are coming in for that kind of rejection, either. These days, in the town where I live, teenagers whose older siblings used to go hotdogging around in cars ten years ago are doing the same thing on bicycles today. Granted, I live in a down-at-the-heels old mill town in the north central Appalachians, but there’s more to it than that. For a lot of these kids, the costs of owning a car outweigh the benefits so drastically that cars aren’t cool any more. One consequence of that shift in cultural fashion is that these same kids aren’t contributing anything like so much to the buildup of carbon dioxide in the atmosphere, or to the other externalized costs generated by car ownership.
I’ve written here already about deliberate technological regression as a matter of public policy. Over the last few months, though, it’s become increasingly clear to me that deliberate technological regression as a matter of personal choice is also worth pursuing. Partly this is because the deathgrip of failed policies on the political and economic order of the industrial world, as mentioned earlier, is tight enough that any significant change these days has to start down here at the grassroots level, with individuals, families, and communities, if it’s going to get anywhere at all; partly, it’s because technological regression, like anything else that flies in the face of the media stereotypes of our time, needs the support of personal example in order to get a foothold; partly, it’s because older technologies, being less vulnerable to the impacts of whole-system disruptions, will still be there meeting human needs when the grid goes down, the economy freezes up, or something really does break the internet, and many of them will still be viable when the fossil fuel age is a matter for the history books.
Still, there’s another aspect, and it’s one that the essay by Douglas Coupland mentioned above managed to hit squarely: the high-tech utopia ballyhooed by the first generation or so of internet junkies has turned out in practice to be a good deal less idyllic, and in fact a good deal more dystopian, than its promoters claimed. All the wonderful things we were supposedly going to be able to do turned out in practice to consist of staring at little pictures on glass screens and pushing buttons, and these are not exactly the most interesting activities in the world, you know. The people who are dropping out of social media and ditching their allegedly smart phones for a less connected lifestyle have noticed this.
What’s more, a great many more people—the kids hotdogging on bikes here in Cumberland are among them—are weighing  the costs and benefits of complex technologies with cold eyes, and deciding that an older, simpler technology less dependent on global technosystems is not just more practical, but also, and importantly, more fun. True believers in the transhumanist cyberfuture will doubtless object to that last point, but the deathgrip of failed ideas on societies in decline isn’t limited to the senile elites mentioned toward the beginning of this post; it can also afflict the fashionable intellectuals of the day, and make them proclaim the imminent arrival of the future’s rising waters when the tide’s already turned and is flowing back out to sea.
I’d like to suggest, in fact, that it’s entirely possible that we could be heading toward a future in which people will roll their eyes when they think of Twitter, texting, 24/7 connectivity, and the rest of today’s overblown technofetishism—like, dude, all that stuff is so twenty-teens! Meanwhile, those of us who adopt the technologies and habits of earlier eras, whether that adoption is motivated by mere boredom with little glass screens or by some more serious set of motives, may actually be on the cutting edge: the early adopters of the Retro Future. We’ll talk about that more in the weeks ahead.