AODA Blog

Retrotopia: Dawn Train from Pittsburgh

Wed, 2015-08-26 17:07
This is the first of a series of posts using the tools of narrative fiction to explore an alternative shape for the future. A hint to readers who haven't been with The Archdruid Report for long: don't expect all your questions to be answered right away.

**********

I got to the Pittsburgh station early. It was a shabby remnant of what must once have been one of those grand old stations you see on history vids, nothing but a bleak little waiting room below and a stair rising alongside a long-defunct escalator to the platforms up top. The waiting room had fresh paint on the walls and the vending machines were the sort of thing you’d find anywhere.  Other than that, the whole place looked as though it had been locked up around the time the last Amtrak trains stopped running and sat there unused for forty years until the border opened up again.
The seats were fiberglass, and must have been something like three quarters of a century old. I found one that didn’t look too likely to break when I sat on it, settled down, got out my veepad and checked the schedule for the umpteenth time. The train I would be riding was listed as on time, arrival 5:10 am Pittsburgh station, departure 5:35 am, scheduled arrival in Toledo Central Station 11:12 am. I tapped the veepad again, checked the news. The election was still all over the place—President Barfield’s concession speech, a flurry of op-ed pieces from various talking heads affiliated with the losing parties about how bad Ellen Montrose would be for the country. I snorted, paged on. Other stories competed for attention: updates on the wars in California and the Balkans, bad news about the hemorrhagic-fever epidemic in Latin America, and worse news from Antarctica, where yet another big ice sheet had just popped loose and was drifting north toward the shipping lanes. 
While the news scrolled past, other passengers filed into the waiting room a few at a time. I could just make them out past the image field the veepad projected into my visual cortex. Two men and a woman in ordinary bioplastic businesswear came in and sat together, talking earnestly about some investment or other. An elderly couple whose clothes made them look like they came straight out of a history vid settled down close to the stair and sat quietly. A little later, a family of four in clothing that looked even more old-fashioned—Mom had a bonnet on her head, and I swear I’m not making that up—came in with carpetbag luggage, and plopped down not far from me. I wasn’t too happy about that, kids being what they are these days, but these two sat down and, after a little bit of squirming, got out a book each and started reading quietly. I wondered if they’d been drugged.
A little later, another family of four came in, wearing the kind of cheap shabby clothes that might as well have the words “urban poor” stamped all over them, and hauling big plastic bags that looked as though everything they owned was stuffed inside. They looked tense, scared, excited. They sat by themselves in a corner, the parents talking to each other in low voices, the kids watching everything with wide eyes and saying nothing. I wondered about them, shrugged mentally, went back to the news.
I’d finished the news and was starting through the day’s textmail, when the loudspeaker on the wall cleared its electronic throat with a hiss of static and said, “Train Twenty-One, service to Toledo via Steubenville, Canton and Sandusky, arriving at Platform One. Please have your tickets and passports ready. Train Twenty-One to Toledo, Platform One.”
I tapped the veepad to sleep, stuffed it in my pocket, got out of my seat with the others, climbed the stairs to the platform. The sky was just turning gray with the first hint of morning, and the air was cold; the whistle of the train sounded long and lonely in the middle distance. I turned to look. I’d never been on a train before, and most of what I knew about them came from history vids and the research I’d done for this trip. Based on what I’d heard about my destination, I wondered if the locomotive would be a rattletrap antique with a big smokestack pumping coal smoke into the air.
What came around the bend into view wasn’t much like my momentary fantasy, though. It was the sort of locomotive you’d have found on any American railroad around 1950, a big diesel-electric machine with a blunt nose and a single big headlight shining down on the track. It whistled again, and then the roar of the engines rose to drown out everything else. The locomotive roared past the platform, and the only thing that surprised me was the smell of french fries that came rushing past with it. Behind it was a long string of boxcars, and behind those, a baggage car and three passenger cars.
The train slowed to a walking pace and then stopped as the passenger cars came up to the platform. A conductor in a blue uniform and hat swung down from the last car. “Tickets and passports, please,” he said, and I got out my veepad, woke it, activated the flat screen and got both documents on it.
“Physical passport, please,” the conductor said when he got to me.
“Sorry.” I fumbled in my pocket, handed it to him. He checked it, smiled, said, “Thank you, Mr. Carr. You probably know this already, but you’ll need a paper ticket for the return trip.”
“I’ve got it, thanks.”
“Great.” He moved on to the family with the plastic bag luggage. The mother said something in a low voice, handed over tickets and something that didn’t look like a passport. “That’s fine,” said the conductor. “You’ll need to have your immigration papers out when we get to the border.”
The woman murmured something else, and the conductor went on to the elderly couple, leaving me to wonder about what I’d just heard. Immigration? That implied, first, that these people actually wanted to live in the Lakeland Republic, and second, that they were being allowed in. Neither of those seemed likely to me. I made a note on my veepad to ask about immigration once I got to Toledo, and to compare what they told me to what I could find out once I got back to Philadelphia.
The conductor finished taking tickets and checking passports, and called out, “All aboard!”
I went with the others to the first of the three passenger cars, climbed the stair, turned left. The interior was about what I’d expected, row after row of double seats facing forward, but everything looked clean and bright and there was a lot more leg room than I was used to. I went about halfway up, slung my suitcase in the overhead rack and settled in the window seat. We sat for a while, and then the car jolted once and began to roll forward. 
We went through the western end of Pittsburgh first of all, past the big dark empty skyscrapers of the Golden Triangle, and then across the river and into the western suburbs. Those were shantytowns built out of the scraps of old housing developments and strip malls, the sort of thing you find around most cities these days when you don’t find worse, mixed in with old rundown housing developments that probably hadn’t seen a bucket of paint or a new roof since the United States came apart. Then the suburbs ended, and things got uglier.
The country west of Pittsburgh got hit hard during the Second Civil War, I knew, and harder still when the border was closed after Partition. I’d wondered, while planning the trip, how much it had recovered in the three years since the Treaty of Richmond. Looking out of the window as the sky turned gray behind us, I got my answer: not much. There were some corporate farms that showed signs of life, but the small towns the train rolled through were bombed-out shells, and there were uncomfortable stretches where every house and barn I could see was a tumbledown ruin and young trees were rising in what had to have been fields and pastures a few decades back. After a while it was too depressing to keep looking out the window, and I pulled out my veepad again and spent a good long while answering textmails and noting down some questions I’d want to ask in Toledo.
I’d gotten caught up on mail when the door at the back end of the car slid open. “Ladies and gentlemen,” the conductor said, “we’ll be arriving at the border in about five minutes. You’ll need to have your passports ready, and immigrants should have their papers out as well. Thank you.”
We rolled on through a dense stand of trees, and then into open ground. Up ahead, a pair of roads cut straight north and south across country. Until three years ago, there’d been a tall razor-wire fence between them, soldiers patrolling our side, the other side pretty much a complete mystery.  The fence was gone now, and there were two buildings for border guards, one on each side of the line. The one on the eastern side was a modern concrete-and-steel item that looked like a skyscraper had stopped there, squatted, and laid an egg. As we got closer to it, I could see the border guards in digital-fleck camo, helmets, and flak vests, standing around with assault rifles.
Then we passed over into Lakeland Republic territory, and I got a good look at the building on the other side. It was a pleasant-looking brick structure that could have been a Carnegie-era public library or the old city hall of some midsized town, and the people who came out of the big arched doorways to meet the train as it slowed to a halt didn’t look like soldiers at all.
The door slid open again, and I turned around. One of the border guards, a middle-aged woman with coffee-colored skin, came into the car. She was wearing a white uniform blouse and blue pants, and the only heat she was carrying was a revolver tucked unobtrusively in a holster at her hip. She had a clipboard with her, and went up the aisle, checking everybody’s passports against a list.
I handed her mine when she reached me. “Mr. Carr,” she said with a broad smile. “We heard you’d be coming through this morning. Welcome to the Lakeland Republic.”
“Thank you,” I said. She handed me back the passport, and went on to the family with the plastic bag luggage. They handed her a sheaf of papers, and she went through them quickly, signed something halfway through, and then handed them back. “Okay, you’re good,” she said. “Welcome to the Lakeland Republic.”
“We’re in?” the mother of the family asked, as though she didn’t believe it.
“You’re in,” the border guard told her. “Legal as legal can be.”
“Oh my God. Thank you.” She burst into tears, and her husband hugged her and patted her on the back. The border guard gave him a grin and went on to the family in the old-fashioned clothing.
I thought about that while the border guard finished checking passports and left the car. Outside, two more guards with a dog finished going along the train, and gave a thumbs up to the conductor. A minute later, the train started rolling again. That’s it? I wondered. No metal detectors, no x-rays, nothing? Either they were very naive or very confident.
We passed the border zone and a screen of trees beyond it, and suddenly the train was rolling through a landscape that couldn’t have been more different from the one on the other side of the line. It was full of farms, but they weren’t the big corporate acreages I was used to. I counted houses and barns as we passed, and guesstimated the farms were one to two hundred acres each; all of them were in mixed crops, not efficient monocropping. The harvest was mostly in, but I’d grown up in farm country and knew what a field looked like after it was put into corn, wheat, cabbages, turnips, industrial hemp, or what have you. Every farm seemed to have all of those and more, not to mention cattle in the pasture, pigs in a pen, a garden and an orchard. I shook my head, baffled. It was a hopelessly inefficient way to run agribusiness, I knew that from my time in business school, and yet the briefing papers I’d read while getting ready for this trip said that the Lakeland Republic exported plenty of agricultural products and imported almost none. I wondered if the train would pass some real farms further in.
We passed more of the little mixed farms, and a couple of little towns that were about as far from being bombed-out shells as you care to imagine. There were homes with lights on and businesses that were pretty obviously getting ready to open for the day. All of them had little brick train stations, though we didn’t stop at any of those—I wondered if they had light rail or something. Watching the farms and towns move past, I thought about the contrast with the landscape on the other side of the border, and winced, then stopped and reminded myself that the farms and towns had to be subsidized. Small towns weren’t any more economically viable than small farms, after all. Was all this some kind of Potemkin village setup, for the purpose of impressing visitors?
The door at the back of the car slid open, and the conductor came in. “Next stop, Steubenville,” he said. “Folks, we’ve got a bunch of people coming on in Steubenville, so please don’t take up any more seats than you have to.”
Steubenville had been part of the state of Ohio before Partition, I remembered. The name of the town stirred something else in my memory, though. I couldn’t quite get the recollection to surface, and decided to look it up. I pulled out my veepad, tapped it, and got a dark field and the words: no signal. I tapped it again, got the same thing, opened the connectivity window and found out that the thing wasn’t kidding. There was no metanet signal anywhere within range. I stared at it, wondered how I was going to check the news or keep up with my textmail, and then wondered: how the plut am I going to buy anything, or pay my hotel bill?
The dark field didn’t have any answers. I decided I’d have to sort that out when I got to Toledo; I’d been invited, after all. Maybe they had connectivity in the big cities, or something. The story was that there wasn’t metanet anywhere in the Lakeland Republic, but I had my doubts about that—how can you manage anything this side of a bunch of  mud huts without net connections? No doubt, I decided, they had some kind of secure net or something. We’d talked about doing something of the same kind back in Philadelphia more than once, just for government use, so the next round of netwars didn’t trash our infrastructure the way the infrastructure of the old union got trashed by the Chinese in ‘21.
Still, the dark field and those two words upset me more than I wanted to admit. It had been more years than I wanted to think about since I’d been more than a click away from the metanet, and being cut off from it left me feeling adrift.
The sun cleared low clouds behind us, and the train rolled into what I guessed was East Steubenville. I’d expected the kind of suburbs I’d seen on the way out of Pittsburgh, dreary rundown housing interspersed with the shantytowns of the poor. What I saw instead left me shaken. The train passed tree-lined streets full of houses that had bright paint on the walls and shingles on the roofs, little local business districts with shops and restaurants open for business, and a school that didn’t look like a medium-security prison. The one thing that puzzled me was that there were no cars visible, just tracks down some of the streets and once, improbably, an old-fashioned streetcar that paced the train for a while and then veered off in a different direction. Most of the houses seemed to have gardens out back, and the train passed one big empty lot that was divided into garden plots and had signs around it saying “community garden.” I wondered if that meant food was scarce here.
A rattle and a bump, and the train was crossing the Ohio River on a big new railroad bridge. Ahead was Steubenville proper. That’s when I remembered the thing that had tried to surface earlier: there was a battle at Steubenville, a big one, toward the end of the Second Civil War. I remembered details from headlines I’d seen when I was a kid, and a history vid I’d watched a couple of years ago; a Federal army held the Ohio crossings against Alliance forces for most of two months before Anderson punched straight through the West Virginia front and made the whole thing moot. I remembered photos of what Steubenville looked like after the fighting: a blackened landscape of ruins where every wall high enough to hide a soldier behind it had gotten hit by its own personal artillery shell.
That wasn’t what I saw spreading out ahead as the train crossed the Ohio, though. The Steubenville I saw was a pleasant-looking city with a downtown full of three- and four-story buildings, surrounded by neighborhoods of houses, some row houses and some detached. There were streetcars on the west side of the river, too—I spotted two of them as we got close to the shore—and also a few cars, though not many of the latter.  The trees that lined the streets were small enough that you could tell they’d been planted after the fighting was over. Other than that, Steubenville looked like a comfortable, established community.
I stared out the window as the train rolled off the bridge and into Steubenville, trying to make sense of what I was seeing. Back on the other side of the border, and everywhere else I’d been in what used to be the United States, you still saw wreckage from the war years all over the place. Between the debt crisis and the state of the world economy, the money that would have been needed to rebuild or even demolish the ruins was just too hard to come by. Things should have been much worse here, since the Lakeland Republic had been shut out of world credit markets for thirty years after the default of ‘32—but they weren’t worse. They looked considerably better. I reached for my veepad, remembered that I couldn’t get a signal, and frowned. If they couldn’t even afford the infrastructure for the metanet, how the plut could they afford to rebuild their housing stock?
The cheerful brick buildings of Steubenville’s downtown didn’t offer me any answers. I sat back, frowning, as the train rattled through a switch and rolled into the Steubenville station. “Steubenville,” the conductor called out from the door behind me, and the train began to slow.

The Last Refuge of the Incompetent

Wed, 2015-08-19 16:37
There are certain advantages to writing out the ideas central to this blog in weekly bursts. Back in the days before the internet, when a galaxy of weekly magazines provided the same free mix of ideas and opinions that fills the blogosphere today, plenty of writers kept themselves occupied turning out articles and essays for the weeklies, and the benefits weren’t just financial: feedback from readers, on the one hand, and the contributions of other writers in related fields, on the other, really do make it easier to keep slogging ahead at the writer’s lonely trade.
This week’s essay has benefited from that latter effect, in a somewhat unexpected way. In recent weeks, here and there in the corners of the internet I frequent, there’s been another round of essays and forum comments insisting that it’s time for the middle-class intellectuals who frequent the environmental and climate change movements to take up violence against the industrial system. That may not seem to have much to do with the theme of the current sequence of posts—the vacuum that currently occupies the place in our collective imagination where meaningful visions of the future used to be found—but there’s a connection, and following it out will help explain one of the core themes I want to discuss.
The science fiction author Isaac Asimov used to say that violence is the last refuge of the incompetent. That’s a half-truth at best, for there are situations in which effective violence is the only tool that will do what needs to be done—we’ll get to that in a moment. It so happens, though, that a particular kind of incompetence does indeed tend to turn to violence when every other option has fallen flat, and goes down in a final outburst of pointless bloodshed. It’s unpleasantly likely at this point that the climate change movement, or some parts of it, may end up taking that route into history’s dumpster; here again, we’ll get to that a little further on in this post.
It’s probably necessary to say at the outset that the arguments I propose to make here have nothing to do with the ethics of violence, and everything to do with its pragmatics as a means of bringing about social change. Ethics in general are a complete quagmire in today’s society.  Nietzsche’s sly description of moral philosophy as the art of propping up inherited prejudices with bad logic has lost none of its force since he wrote it, and since his time we’ve also witnessed the rise of professional ethicists, whose jobs consist of coming up with plausible excuses for whatever their corporate masters want to do this week. The ethical issues surrounding violence are at least as confused as those around any of the other messy realities of human life, and in some ways, more so than most.
Myself, I consider violence entirely appropriate in some situations. Many of my readers may have heard, for example, of an event that took place a little while back in Kentucky, where a sex worker was attacked by a serial killer. While he was strangling her, she managed to get hold of his handgun, and proceeded to shoot him dead. To my mind, her action was morally justified. Once he attacked her, no matter what she did, somebody was going to die, and killing him not only turned the violence back on its originator, it also saved the lives of however many other women the guy might have killed before the police got to him—if they ever did; crimes against sex workers, and for that matter crimes against women, are tacitly ignored by a fairly large number of US police departments these days.
Along the same lines, a case can be made that revolutionary violence against a political and economic system is morally justified if the harm being done by that system is extreme enough. That’s not a debate I’m interested in exploring here, though.  Again, it’s not ethics but pragmatics that I want to discuss, because whether or not revolutionary violence is justified in some abstract moral sense is far less important right now than whether it’s an effective response to the situation we’re in. That’s not a question being asked, much less answered, by the people who are encouraging environmental and climate change activists to consider violence against the system.
Violence is not a panacea. It’s a tool, and like any other tool, it’s well suited to certain tasks and utterly useless for others. Political violence in particular is a surprisingly brittle and limited tool. Even when it has the support of a government’s resource base, it routinely flops or backfires, and a group that goes in for political violence without the resources and technical assistance of some government somewhere has to play its hand exceedingly well, or it’s going to fail. Furthermore, there are many cases in which violence isn’t useful as a means of social change, as other tools can do the job more effectively.
Pay attention to the history of successful revolutions and it’s not hard to figure out how to carry out political violence—and far more importantly, how not to do so. The most important point to learn from history is that successful violence in a political context doesn’t take place in a vacuum. It’s the final act of a long process, and the more thoroughly that process is carried out, the less violence is needed when crunch time comes. Let’s take a few paragraphs to walk through the process and see how it’s done.
The first and most essential step in the transformation of any society is the delegitimization of the existing order. That doesn’t involve violence, and in fact violence at this first stage of the process is catastrophically counterproductive—a lesson, by the way, that the US military has never been able to learn, which is why its attempts to delegitimize its enemies (usually phrased in such language as “winning minds and hearts”) have always been so embarrassingly inept and ineffective. The struggle to delegitimize the existing order has to be fought on cultural, intellectual, and ideological battlefields, not physical ones, and its targets are not people or institutions but the aura of legitimacy and inevitability that surrounds any established political and economic order. 
Those of my readers who want to know how that’s done might want to read up on the cultural and intellectual life of France in the decades before the Revolution. It’s a useful example, not least because the people who wanted to bring down the French monarchy came from almost exactly the same social background as today’s green radicals: disaffected middle-class intellectuals with few resources other than raw wit and erudition. That turned out to be enough, as they subjected the monarchy—and even more critically, the institutions and values that supported it—to sustained and precise attack from constantly shifting positions, engaging in savage mockery one day and earnest pleas for reform the next, exploiting every weakness and scandal for maximum effect. By the time the crisis finally arrived in 1789, the monarchy had been so completely defeated on the battlefield of public opinion that next to nobody rallied to its defense until after the Revolution was a fait accompli.
The delegitimization of the existing order is only the first step in the process. The second step is political, and consists of building a network of alliances with existing and potential power centers and pressure groups that might be willing to support revolutionary change. Every political system, whatever its official institutional form might be, consists in practice of just such a network of power centers—that is, groups of people who have significant political, economic, or social influence—and pressure groups—that is, other groups of people who lack such influence but can give or withhold their support in ways that can sometimes extract favors from the power centers.
In today’s America, for example, the main power centers are found in what we may as well call the bureaucratic-industrial complex, the system of revolving-door relationships that connect big corporations, especially the major investment banks, with the major Federal bureaucracies, especially the Treasury and the Pentagon. There are other power centers as well—for example, the petroleum complex, which has its own ties to the Pentagon—which cooperate and compete by turns with the New York-DC axis of influence—and then there are pressure groups of many kinds, some more influential, some less, some reduced to the status of captive constituencies whose only role in the political process is to rally the vote every four years and have their agenda ignored by their supposed friends in office in between elections. The network of power centers, pressure groups, and captive constituencies that support the existing order of things is the real heart of political power, and it’s what has to be supplanted in order to bring systemic change.
Effective revolutionaries know that in order to overthrow the existing order of society, they have to put together a comparable network that will back them against the existing order, and grow it to the point that it starts attracting key power centers away from the network of the existing order. That’s a challenge, but not an impossible one. In any troubled society, there are always plenty of potential power centers that have been excluded from the existing order and its feeding trough, and are thus interested in backing a change that will give them the power they want and don’t have. In France before the Revolution, for example, there were plenty of wealthy middle-class people who were shut out of the political system by the aristocracy and the royal court, and the philosophes went out of their way to appeal to them and get their support—an easy job, since the philosophes and the nouveaux riches shared similar backgrounds. That paid off handsomely once the crisis came.
In any society, troubled or not, there are also always pressure groups, plenty of them, that are interested in getting more access to the various goodies that power centers can dole out, and can be drawn into alliance with a rising protorevolutionary faction. The more completely the existing order of things has been delegitimized, the easier it is to build such alliances, and the alliances can in turn be used to feed the continuing process of delegitimization. Here again, as in the first stage of the process, violence is a hindrance rather than a help, and it’s best if the subject never even comes up for discussion; assembling the necessary network of alliances is much easier when nobody has yet had to face up to the tremendous risks involved in revolutionary violence.
By the time the endgame arrives, therefore, you’ve got an existing order that no longer commands the respect and loyalty of most of the population, and a substantial network of pressure groups and potential power centers supporting a revolutionary agenda. Once the situation reaches that stage, the question of how to arrange the transfer of power from the old regime to the new one is a matter of tactics, not strategy. Violence is only one of the available options, and again, it’s by no means always the most useful one. There are many ways to break the existing order’s last fingernail grip on the institutions of power, once that grip has been loosened by the steps already mentioned.
What happens, on the other hand, to groups that don’t do the necessary work first, and turn to violence anyway? Here again, history has plenty to say about that, and the short form is that they lose. Without the delegitimization of the existing order of society and the creation of networks of support among pressure groups and potential power centers, turning to political violence guarantees total failure.
For some reason, for most of the last century, the left has been unable or unwilling to learn that lesson. What’s happened instead, over and over again, is that a movement pursuing radical change starts out convinced that the existing order of society already lacks popular legitimacy, and so fails to make a case that appeals to anybody outside its own ranks. Having failed at the first step, it tries to pressure existing power centers and pressure groups into supporting its agenda, rather than building a competing network around its own agenda, and gets nowhere. Finally, having failed at both preliminary steps, it either crumples completely or engages in pointless outbursts of violence against the system, which are promptly and brutally crushed. Any of my readers who remember the dismal history of the New Left in the US during the 1960s and early 1970s already know this story, right down to the fine details.
With this in mind, let’s look at the ways in which the climate change movement has followed this same trajectory of abject failure over the last fifteen years or so.
The task of the climate change movement at the dawn of the twenty-first century was difficult but by no means impossible. Their ostensible goal was to create a consensus in the world’s industrial nations that would support the abandonment of fossil fuels and a transition to the less energy-intensive ways of living that renewable resources can provide. That would have required a good many well-off people to accept a decline in their standards of living, but that’s far from the insuperable obstacle so many people seem to think it must be. When Winston Churchill told the British people “I have nothing to offer but blood, toil, tears, and sweat,” his listeners roared their approval. For reasons that probably reach far into our evolutionary past, a call to shared sacrifice usually gets a rousing response, so long as the people who are being asked to sacrifice have reason to believe something worthwhile will come of it.
That, however, was precisely what the climate change movement was unable to provide. It’s harsh but not, I think, unfair to describe the real agenda of the movement as the attempt to create a future in which the industrial world’s middle classes could keep on enjoying the benefits of their privileged lifestyle without wrecking the atmosphere in the process. Of course it’s not exactly easy to convince everyone else in the world to put aside all their own aspirations for the sake of the already privileged, and so the spokespeople of the climate change movement generally didn’t talk about what they hoped to achieve. Instead, they fell into the most enduring bad habit of the left, and ranted instead about how awful the future would be if the rest of the world didn’t fall into line behind them.
On the off chance that any of my readers harbor revolutionary ambitions, may I offer a piece of helpful advice? If you want people to follow your lead, you have to tell them where you intend to take them. Talking exclusively about what’s going to happen if they don’t follow you will not cut it. Rehashing the same set of talking points about how everyone’s going to die if the whole world doesn’t rally around you emphatically will not cut it. The place where you’re leading them can be difficult and dangerous, the way there can be full of struggle, sacrifice and suffering, and they’ll still flock to your banner—in fact, young men will respond to that kind of future more enthusiastically than to any other, especially if you can lighten the journey with beer and the occasional barbecue—but you have to be willing to talk about your destination. You also have to remember that the phrase “shared sacrifice” includes the word “shared,” and not expect everyone else to give up something so that you don’t have to.
So the climate change movement entered the arena with one hand tied behind its back and the other hand hauling a heavy suitcase stuffed to the bursting point with middle class privilege. Its subsequent behavior did nothing to overcome that initial disadvantage. When the defenders of the existing order counterattacked, as of course they did, the climate change movement did nothing to retake the initiative and undermine its adversaries; preaching to the green choir took the place of any attempt to address the concerns of the wider public; over and over again, climate change activists allowed the other side to define the terms of the debate and then whined about the resulting defeat rather than learning anything from it. Of course the other side used every trick in the book, and then some; so? That’s how the game is played. Successful movements for change realize that, and plan accordingly.
We don’t even have to get into the abysmal failure of the climate change movement to seek out allies among the many pressure groups and potential power centers that might have backed it, if it had been able to win the first and most essential struggle in the arena of public opinion. The point I want to make is that at this point in the curve of failure, violence really is the last refuge of the incompetent. What, after all, would be the result if some of the middle class intellectuals who make up the core of the climate change movement were to pick up some guns, assemble the raw materials for a few bombs, and try to use violence to make their point? They might well kill some people before the FBI guns them down or hauls them off to life-plus terms in Leavenworth; they would very likely finish off climate change activism altogether, by making most Americans fear and distrust anyone who talks about it—but would their actions do the smallest thing to slow the dumping of greenhouse gases into the atmosphere and the resulting climate chaos? Of course not.
What makes the failure of the climate change movement so telling is that during the same years that it peaked and crashed, another movement has successfully conducted a prerevolutionary campaign of the classic sort here in the US. While the green Left has been spinning its wheels and setting itself up for failure, the populist Right has carried out an extremely effective program of delegitimization aimed at the federal government and, even more critically, the institutions and values that support it. Over the last fifteen years or so, very largely as a result of that program, a great many Americans have gone from an ordinary, healthy distrust of politicians to a complete loss of faith in the entire American project. To a remarkable extent, the sort of rock-ribbed middle Americans who used to insist that of course the American political system is the best in the world are now convinced that the American political system is their enemy, and the enemy of everything they value.
The second stage of the prerevolutionary process, the weaving of a network of alliances with pressure groups and potential power centers, is also well under way. Watch which groups are making common cause with one another on the rightward fringes of society these days and you can see a competent revolutionary strategy at work. This isn’t something I find reassuring—quite the contrary, in fact; aside from my own admittedly unfashionable feelings of patriotism, one consistent feature of revolutions is that the government that comes into power after the shouting and the shooting stop is always more repressive than the one that was in power beforehand. Still, the way things are going, it seems likely to me that the US will see the collapse of its current system of government, probably accompanied with violent revolution or civil war, within a decade or two.
Meanwhile, as far as I can see, the climate change movement is effectively dead in its tracks, and we no longer have time to make something happen before the rising spiral of climate catastrophe begins—as my readers may have noticed, that’s already well under way. From here on in, it’s probably a safe bet that anthropogenic climate change will accelerate until it fulfills the prophecy of The Limits to Growth and forces the global industrial economy to its knees. Any attempt to bring human society back into some kind of balance with ecological reality will have to get going during and after that tremendous crisis. That requires playing a long game, but then that’s going to be required anyway, to do the things that the climate change movement failed to do, and do them right this time.
With that in mind, I’m going to be taking this blog in a slightly different direction next week, and for at least a few weeks to come. I’ve talked in previous posts about intentional technological regression as an option, not just for individuals but as a matter of public policy. I’ve also talked at quite some length about the role that narrative plays in helping to imagine alternative futures. With that in mind, I’ll be using the tools of fiction to suggest a future that zooms off at right angles to the expectations of both ends of the current political spectrum. Pack a suitcase, dear readers; your tickets will be waiting at the station. Next Wednesday evening, we’ll be climbing aboard a train for Retrotopia.

The War Against Change

Wed, 2015-08-12 17:27
Last week’s post explored the way that the Democratic party over the last four decades has abandoned any claim to offer voters a better future, and has settled for offering them a future that’s not quite as bad as the one the Republicans have in mind. That momentous shift can be described in many ways, but the most useful of them, to my mind, is one that I didn’t bring up last week: the Democrats have become America’s conservative party.
Yes, I know. That’s not something you’re supposed to say in today’s America, where “conservative” and “liberal” have become meaningless vocal sounds linked with the greedy demands of each party’s assortment of pressure groups and the plaintive cries of its own flotilla of captive constituencies. Still, back in the day when those words still meant something, “conservative” meant exactly what the word sounds like: a political stance that focuses on conserving some existing state of affairs, which liberals and radicals want to replace with some different state of affairs. Conservative politicians and parties—again, back when the word meant something—used to defend existing political arrangements against attempts to change them.
That’s exactly what the Democratic Party has been doing for decades now. What it’s trying to preserve, of course, is the welfare-state system of the New Deal of the 1930s and the Great Society programs of the 1960s—or, more precisely, the fragments of that system that still survive. That’s the status quo that the Democrats are attempting to hold in place. The consequences of that conservative mission are unfolding around us in any number of ways, but the one that comes to mind just now is the current status of presidential candidate Bernard Sanders as a lightning rod for an all too familiar delusion of the wing of the Democratic party that still considers itself to be on the left.
The reason Sanders comes to mind so readily just now is that last week’s post attracted an odd response from some of its readers. In the course of that post—which was not, by the way, on the subject of the American presidential race—I happened to mention three out of the twenty-odd candidates currently in the running. Somehow I didn’t get taken to task by supporters of Martin O’Malley, Ted Cruz, Jesse Ventura, or any of the other candidates I didn’t mention, with one exception: supporters of Sanders came out of the woodwork to denounce me for not discussing their candidate, as though he had some kind of inalienable right to air time in a blog post that, again, was not about the election.
I found the whole business a source of wry amusement, but it also made two points that are relevant to this week’s post. On the one hand, what makes Sanders’ talking points stand out among those of his rivals is that he isn’t simply talking about maintaining the status quo; his proposals include steps that would restore a few of the elements of the welfare state that have been dismantled over the last four decades. That’s the extent of his radicalism—and of course it speaks reams about the state of the Democratic party more generally that so modest, even timid, a proposal is fielding shrieks of outrage from the political establishment just now.
The second point, and to my mind the more interesting of the two, is the way that Sanders’ campaign has rekindled the same messianic fantasies that clustered around Bill Clinton and Barack Obama in their first presidential runs. I remember rather too clearly the vehement proclamations by diehard liberals in 1992 that putting Clinton in office would surely undo all the wrongs of the Reagan and Bush I eras; I hope none of my readers have forgotten the identical fantasies that gathered around Barack Obama in 2008. We can apparently expect another helping of them this time around, with Sanders as the beneficiary, and no doubt those of us who respond to them with anything short of blind enthusiasm will be denounced just as heatedly this time, too.
It bears remembering that despite those fantasies, Bill Clinton spent eight years in the White House following Ronald Reagan’s playbook nearly to the letter, and Barack Obama has so far spent his two terms doing a really inspired imitation of the third and fourth terms of George W. Bush. If by some combination of sheer luck and hard campaigning, Bernie Sanders becomes the next president of the United States, it’s a safe bet that the starry-eyed leftists who helped put him into office will once again get to spend four or eight years trying to pretend that their candidate isn’t busy betraying all of the overheated expectations that helped put him into office. As Karl Marx suggested in one of his essays, if history repeats itself, the first time is tragedy but the second is generally farce; he didn’t mention what the third time around was like, but we may just get to find out.
The fact that this particular fantasy has so tight a grip on the imagination of the Democratic party’s leftward wing is also worth studying. There are many ways that a faction whose interests are being ignored by the rest of its party, and by the political system in general, can change that state of affairs. Unquestioning faith that this or that leader will do the job for them is not generally a useful strategy under such conditions, though, especially when that faith takes the place of any more practical activity. History has some very unwelcome things to say, for that matter, about the dream of political salvation by some great leader; so far it seems limited to certain groups on the notional left of the electorate, but if it spreads more widely, we could be looking at the first stirrings of the passions and fantasies that could bring about a new American fascism.
Meanwhile, just as the Democratic party in recent decades has morphed into America’s conservative party, the Republicans have become its progressive party. That’s another thing you’re not supposed to say in today’s America, because of the bizarre paralogic that surrounds the concept of progress in our collective discourse. What the word “progress” means, as I hope at least some of my readers happen to remember, is continuing further in the direction we’re already going—and that’s all it means. To most Americans today, though, the actual meaning of the word has long since been obscured behind a burden of vague emotion that treats “progressive” as a synonym for “good.” Notice that this implies the very odd belief that the direction in which we’re going is good, and can never be anything other than good.
For the last forty years, mind you, America has been moving steadily along an easily defined trajectory. We’ve moved step by step toward more political and economic inequality, more political corruption, more impoverishment for those outside the narrowing circles of wealth and privilege, more malign neglect toward the national infrastructure, and more environmental disruption, along with a steady decline in literacy and a rolling collapse in public health, among other grim trends. These are the ways in which we’ve been progressing, and that’s the sense in which the GOP counts as America’s current progressive party: the policies being proposed by GOP candidates will push those same changes even further than they’ve already gone, resulting in more inequality, corruption, impoverishment, and so on.
So the 2016 election is shaping up to be a contest between one set of candidates who basically want to maintain the wretchedly unsatisfactory conditions facing the American people today, and another set who want to make those conditions worse, with one outlier on the Democratic side who says he wants to turn the clock back to 1976 or so, and one outlier on the Republican side who apparently wants to fast forward things to the era of charismatic dictators we can probably expect in the not too distant future. It’s not too hard to see why so many people looking at this spectacle aren’t exactly seized with enthusiasm for any of the options being presented to them by the existing political order.
The question that interests me most about all this is the one I tried to raise last week—why, in the face of so many obvious dysfunctions, are so many people in and out of the political arena frozen into a set of beliefs that convince them that the only possibilities available to us involve either staying exactly where we are or going further along the route that’s landed us in this mess? No doubt a good many things have contributed to that bizarre mental fixation, but there’s one factor that may not have received the attention it deserves: the remarkable dominance of a particular narrative in the most imaginative fiction and mass media of our time. As far as I know, nobody’s given that narrative a name yet, so I’ll exercise that prerogative and call it The War Against Change.
You know that story inside and out. There’s a place called Middle-Earth, or the Hogwarts School of Wizardry, or what have you—the name doesn’t matter, the story’s the same in every case. All of a sudden this place is threatened by an evil being named Sauron, or Voldemort, or—well, you can fill in the blanks for yourself. Did I mention that this evil being is evil? Yes, in fact, he’s evilly evil out of sheer evil evilness, without any motive other than the one just named.  What that evilness amounts to in practice, though, is that he wants to change things. Of course the change is inevitably portrayed in the worst possible light, but what it usually comes down to is that the people who currently run things will lose their positions of power, and will be replaced by the bad guy and his minions—any resemblance to the rhetoric surrounding US presidential elections is doubtless coincidental.
But wait!  Before the bad guy and his evil minions can change things, a plucky band of heroes goes swinging into action to stop his evil scheme, and of course they succeed in the nick of time. The bad guy gets the stuffing pounded out of him, the people who are supposed to run things keep running things, everything settles down just the way it was before he showed up. Change is stopped in its tracks, and all of the characters who matter breathe a big sigh of relief and live happily ever after, or until filming starts on the sequel, take your pick.
Now of course that’s a very simplified version of The War Against Change. In the hands of a really capable author, and we’ll get to one of those in a minute, that story can quite readily yield great literature. Even so, it’s a very curious sort of narrative to be as popular as it is, especially for a society that claims to be in love with change and novelty. The War Against Change takes place in a world in which everything’s going along just the way things are supposed to be. The bad guy shows up and tries to change things, he gets clobbered by the good guys, and then everything goes on just the way it was. Are there, ahem, problems with the way things are run? Might changing things be a good idea, if the right things are changed? Do the bad guy and his evil minions possibly even have motives other than sheer evilly evil evilness for wanting to change things? That’s not part of the narrative. At most, one or more of the individuals who are running things may be problematic, and have to be pushed aside by our plucky band of heroes so they can get on with the business of bashing the bad guy.
It happens now and then, in fact, that authors telling the story of The War Against Change go out of their way to make fun of the possibility that anyone might reasonably object to the established order of things. Did anyone else among my readers feel vaguely sick while reading the Harry Potter saga, when they encountered Rowling’s rather shrill mockery of Hermione whatsername’s campaign on behalf of the house elves? To me, at least, it was rather too reminiscent of “No, no, our darkies love their Massa!”
That’s actually a side issue, though. The core of the narrative is that the goal of the good guys, the goal that defines them as good guys, is to make sure that nothing changes. That becomes a source of tremendous if unintentional irony in the kind of imaginative fiction that brings imagery from mythology and legend into a contemporary setting. I’m thinking here, as one example out of many, of a series of five children’s novels—The Dark Is Rising sequence by Susan Cooper—the first four of which were among the delights of my childhood. You have two groups of magical beings, the Light and the Dark—yes, it’s pretty Manichean—who are duking it out in present-day Britain.
The Dark, as you’ve all probably figured out already, is trying to change things, and the Light is doing the plucky hero routine and trying to stop them. That’s all the Light does; it doesn’t, heaven help us, do anything about the many other things that a bunch of magical beings might conceivably want to fix in 1970s Britain. The Light has no agenda of its own at all; it’s there to stop the Dark from changing things, and that’s it. Mind you, the stories are packed full of splendid, magical stuff, the sort of thing that’s guaranteed to win the heart and feed the imagination of any child stuck in the dark heart of American suburbia, as I was at the time.
Then came the fifth book, Silver on the Tree, which was published in 1977.  The Light and the Dark finally had their ultimate cataclysmic showdown, the Dark is prevented from changing things...and once that’s settled, the Light packs its bags and heads off into the sunset, leaving the protagonists sitting there in present-day Britain with all the magic gone for good. I loathed the book. So did a lot of other people—I’ve never yet heard it discussed without terms like “wretchedly disappointing” being bandied around—but I suspect the miserable ending was inescapable, given the frame into which the story had already been fixed. Cooper had committed herself to telling the story of The War Against Change, and it was probably impossible for her to imagine any other ending.
Now of course there’s a reason why this particular narrative took on so overwhelming a role in the imaginative fiction and media of the late twentieth century, and that reason is the force of nature known as J.R.R. Tolkien. I’m by no means sure how many of my readers who weren’t alive in the 1960s and 1970s have any idea how immense an impact Tolkien’s sprawling trilogy The Lord of the Rings had on the popular imagination of that era, at a time when buttons saying “Frodo Lives!” and “Go Go Gandalf” were everywhere and every reasonably hip bookstore sold posters with the vaguely psychedelic front cover art of the first Ballantine paperback edition of The Fellowship of the Ring. In the formative years of the Boomer generation, Tolkien’s was a name to conjure with.
What makes this really odd, all things considered, is that Tolkien himself was a political reactionary who opposed nearly everything his youthful fans supported. The Boomers who were out there trying to change the system in the Sixties were simultaneously glorifying a novel that celebrates war, monarchy, feudal hierarchy, and traditional gender roles, and includes an irritable swipe at the social welfare program of post-World War Two Britain—that’s what Lotho Sackville-Baggins’ government of the Shire amounts to, with its “gatherers” and “sharers.” When Tolkien put together his grand epic of The War Against Change, he knew exactly what he was doing; when the youth culture of the Sixties adopted him as their patron saint—much to his horror, by the way—I’m not at all sure the same thing could be said about them.
What sets The Lord of the Rings apart from common or garden variety versions of The War Against Change, in fact, is precisely Tolkien’s own remarkably clear understanding of what he was trying to do, and how that strategy tends to play out in the real world. The Lord of the Rings gets much of its power and pathos precisely because its heroes fought The War Against Change knowing that even if they won, they would lose; the best they could do is put a brake on the pace of change and keep the last dim legacies of the Elder Days for a little longer before they faded away forever. Tolkien nourished his literary sense on Beowulf and the Norse sagas, with their brooding sense of doom’s inevitability, and on traditional Christian theology, with its promise of hope beyond the circles of the world, and managed to play these two against each other brilliantly—but then Tolkien, as a reactionary, understood what it was like to keep fighting for something even though he knew that the entire momentum of history was against him.
Does all this seem galaxies away from the crass political realities with which this week’s post began? Think again, dear reader. Listen to the rhetoric of the candidates as they scramble for their party’s nomination—well, except for Hillary Clinton, who’s too busy declaiming “I am so ready to lead!” at the nearest available mirror—and you’ll hear The War Against Change endlessly rehashed. What do the Republican candidates promise? Why, to save America from the evil Democrats, who want to change things. What do the Democratic candidates promise? To save America from the evil Republicans, ditto. Pick a pressure group, any pressure group, and the further in from the fringes they are, the more likely they are to frame their rhetoric in terms of The War Against Change, too.
I’ve noted before, for that matter, the weird divergence between the eagerness of the mainstream to talk about anthropogenic global warming and their utter unwillingness to talk about peak oil and other forms of resource depletion. There are several massive factors behind that, but I’ve come to think that one of the most important is that you can frame the climate change narrative in terms of The War Against Change—we must keep the evil polluters from changing things!—but you can’t do that with peak oil. The end of the age of cheap abundant energy means that things have to change, not because the motiveless malignity of some cackling villain would have it so, but because the world no longer contains the resources that would be needed to keep things going the way they’ve gone so far.
That said, if it’s going to be necessary to change things—and it is—then it’s time to start thinking about options for the future that don’t consist of maintaining a miserably unsatisfactory status quo or continuing along a trajectory that’s clearly headed toward something even worse. The first step in making change is imagining change, and the first step in imagining change is recognizing that “more of the same” isn’t going to cut it. Next week, I plan on taking some of the ideas I’ve floated here in recent months, and putting them together in a deliberately unconventional way.

The Suicide of the American Left

Wed, 2015-08-05 17:02
Regular readers of this blog know that I generally avoid partisan politics in the essays posted here. There are several reasons for that unpopular habit, but the most important of them is that we don’t actually have partisan politics in today’s America, except in a purely nominal sense. It’s true that politicians by and large group themselves into one of two parties, which make a great show of their rivalry on a narrow range of issues. Get past the handful of culture-war hot buttons that give them their favorite opportunities for grandstanding, though, and you’ll find an ironclad consensus, especially on those issues that have the most to say about the future of the United States and the world.
It’s popular on the disaffected fringes of both parties to insist that the consensus in question comes solely from the other side; dissident Democrats claim that Democratic politicians have basically adopted the GOP platform, while disgruntled Republicans claim that their politicians have capitulated to the Democratic agenda. Neither of these claims, as it happens, is true. Back when the two parties still stood for something, for example, Democrats in Congress could be counted on to back organized labor and family farmers against their corporate adversaries and to fight attempts on the part of bankers to get back into the speculation business, while their opposite numbers in the GOP were ferocious in their opposition to military adventurism overseas and government expansion at home.
Nowadays? The Democrats long ago threw their former core constituencies under the bus and ditched the Depression-era legislation that stopped kleptocratic bankers from running the economy into the ground, while the Republicans decided that they’d never met a foreign entanglement or a government handout they didn’t like—unless, of course, the latter benefited the poor.  An ever more intrusive and metastatic bureaucratic state funneling trillions to corrupt corporate interests, an economic policy made up primarily of dishonest statistics and money-printing operations, and a monomaniacally interventionist foreign policy: that’s the bipartisan political consensus in Washington DC these days, and it’s a consensus that not all that long ago would have been rejected with volcanic fury by both parties if anyone had been so foolish as to suggest it.
The gap between the current Washington consensus and the former ideals of the nation’s political parties, not to mention the wishes of the people on whose sovereign will the whole system is supposed to depend, has attracted an increasing amount of attention in recent years. That’s driven quite a bit of debate, and no shortage of fingerpointing, about the origins and purposes of the policies that are welded into place in US politics these days. On the left, the most popular candidates just now for the position of villainous influence behind it all mostly come from the banking industry; on the right, the field is somewhat more diverse; and there’s no shortage of options from further afield.
Though I know it won’t satisfy those with a taste for conspiracy theory, I’d like to suggest a simpler explanation. The political consensus in Washington DC these days can best be characterized as an increasingly frantic attempt, using increasingly risky means, to maintain business as usual for the political class at a time when “business as usual” in any sense of that phrase is long past its pull date. This, in turn, is largely the product of the increasingly bleak corner into which past policies have backed this country, but it’s also in part the result of a massively important but mostly unrecognized turn of events: by and large, neither the contemporary US political class nor anyone else with a significant presence in American public life seems to be able to imagine a future that differs in any meaningful way from what we’ve got right now.
I’d like to take a moment here to look at that last point from a different angle, with the assistance of that tawdry quadrennial three-ring circus now under way, which will sooner or later select the next inmate for the White House. For anyone who enjoys the spectacle of florid political dysfunction, the 2016 presidential race promises to be the last word in target-rich environments. The Republican party in particular has flung itself with creditable enthusiasm into the task of taking my circus metaphor as literally as possible—what, after all, does the GOP resemble just at the moment, if not one of those little cars that roll out under the big top and fling open the doors, so that one clown after another can come tumbling out into the limelight?
They’ve already graced the electoral big top with a first-rate collection of clowns, too. There’s Donald Trump, whose campaign is shaping up to be the loudest invocation of pure uninhibited Führerprinzip since, oh, 1933 or so; there’s Scott Walker, whose attitudes toward working Americans suggest that he’d be quite happy to sign legislation legalizing slavery if his rich friends asked him for it; there’s—well, here again, “target-rich environment” is the phrase that comes forcefully to mind. The only people who have to be sweating just now, other than ordinary Americans trying to imagine any of the current round of GOP candidates as the titular leader of their country, are gag writers for satiric periodicals such as The Onion, who have to go to work each day and face the brutally unforgiving task of coming up with something more absurd than the press releases and public statements of the candidates in question.
Still, I’m going to leave those tempting possibilities alone for the moment, and focus on a much more dreary figure, since she and her campaign offer a useful glimpse at the yawning void beneath what’s left of the American political system. Yes, that would be Hillary Clinton, the officially anointed frontrunner for the Democratic nomination. It’s pretty much a foregone conclusion that she’ll lose this campaign the way she lost the 2008 race, and for the same reason: neither she nor her handlers seem to have noticed that she’s got to offer the American people some reason to want to vote for her.
In a way, Clinton is the most honest of the current crop of presidential candidates, though this is less a matter of personal integrity than of sheer inattention. I frankly doubt that the other candidates have a single noble motive for seeking office among them, but they have at least realized that they have to go through the motions of having convictions and pursuing policies they think are right. Clinton and her advisers apparently didn’t get that memo, and as a result, she’s not even going through the motions. Her campaign basically consists of posing for the cameras, dodging substantive questions, uttering an assortment of vague sound bites to encourage the rich friends who are backing her, and making plans for her inauguration, as though there wasn’t an election to get through first.
Still, there’s more going on here than the sheer incompetence of a campaign that hasn’t yet noticed that a sense of entitlement isn’t a qualification for office. The deeper issue that will doom the Clinton candidacy can be phrased as a simple question: does anyone actually believe for a moment that electing Hillary Clinton president will change anything that matters?
Those other candidates who are getting less tepid responses from the voters than Clinton are doing so precisely because a significant number of voters think that electing one of them will actually change something. The voters in question are wrong, of course. Barack Obama is the wave of the future here as elsewhere; after his monumentally cynical 2008 campaign, which swept him into office on a torrent of vacuous sound bites about hope and change, he proceeded to carry out exactly the same domestic and foreign policies we’d have gotten had George W. Bush served two more terms. Equally, whoever wins the 2016 election will keep those same policies in place, because those are the policies that have the unanimous support of the political class; it’s just that everybody but Clinton will do their level best to pretend that they’re going to do something else, as Obama did, until the day after the election.
Those policies will be kept in place, in turn, because any other choice would risk pulling the plug on a failing system. I’m not at all sure how many people outside the US have any idea just how frail and brittle the world’s so-called sole hyperpower is just at this moment. To borrow a point made trenchantly some years back by my fellow blogger Dmitry Orlov, the US resembles nothing so much as the Soviet Union in the years just before the Berlin Wall came down: a grandiose international presence, backed by a baroque military arsenal and an increasingly shrill triumphalist ideology, perched uneasily atop a hollow shell of a society that has long since tipped over the brink into economic and cultural freefall.
Neither Hillary Clinton nor any of the other candidates in the running for the 2016 election will change anything that matters, in turn, because any change that isn’t strictly cosmetic risks bringing the entire tumbledown, jerry-rigged structure of American political and economic power crashing down around everyone’s ears. That’s why, to switch examples, Barack Obama a few days ago brought out with maximum fanfare a new energy policy that consists of doing pretty much what his administration has been doing for the last six years already, as though doing what you’ve always done and expecting a different result wasn’t a good functional definition of insanity. Any other approach to energy and climate change, or any of a hundred other issues, risks triggering a crisis that the United States can’t survive in its current form—and the fact that such a crisis is going to happen sooner or later anyway just adds spice to the bubbling pot.
The one thing that can reliably bring a nation through a time of troubles of the sort we’re facing is a vision of a different future, one that appeals to enough people to inspire them to unite their energies with those of the nation’s official leadership, and put up with the difficulties of the transition. That’s what got the United States through its three previous existential crises: the Revolutionary War, the Civil War, and the Great Depression. In each case, when an insupportable status quo finally shattered, enough of the nation united around a charismatic leader, and a vision of a future that was different from the present, to pull some semblance of a national community through the chaos.
We don’t have such a vision in American politics now. To an astonishing degree, in fact, American culture has lost the ability to imagine any future that isn’t simply an endless rehash of the present—other, that is, than the perennially popular fantasy of apocalyptic annihilation, with or without the salvation of a privileged minority via Rapture, Singularity, or what have you. That’s a remarkable change for a society that not so long ago was brimming with visionary tomorrows that differed radically from the existing order of things. It’s especially remarkable in that the leftward end of the American political spectrum, the end that’s nominally tasked with the job of coming up with new visions, has spent the last forty years at the forefront of the flight from alternative futures.
I’m thinking here, as one example out of many, of an event I attended a while back, put together by one of the longtime names of the American left, and featuring an all-star cast of equally big names in what passes for environmentalism and political radicalism these days. With very few exceptions, every one of the speakers put their time on the podium into vivid descriptions of the villainy of the designated villains and all the villainous things they were going to do unless they were stopped. It was pretty grueling; at the end of the first full day, going up the stairs to the street level, I watched as a woman turned to a friend and said, “Well, that just about makes me want to go out and throw myself off a bridge”—and neither the friend nor anybody else argued. 
Let’s take a closer look, though, at the strategy behind the event. Was there, at this event, any real discussion of how to stop the villains in question, other than a rehash of proposals that have failed over and over again for the last four decades? Not that I heard. Did anyone offer some prospect other than maintaining the status quo endlessly against the attempts of the designated villains to make things worse? Not only was there nothing of the kind, I heard backchannel from more than one participant that the organizer had a long history of discouraging anybody at his events from offering the least shred of that sort of hope.

Dismal as it was, the event was worth attending, as it conducted an exact if unintentional autopsy of the corpse of the American left, and made the cause of death almost impossible to ignore. At the dawn of the Reagan era, to be specific, most of the movements in this country that used to push for specific goals on the leftward end of things stopped doing so, and redefined themselves in wholly reactive and negative terms: instead of trying to enact their own policies, they refocused entirely on trying to stop the enactment of opposing policies by the other side. By and large, they’re still at it, even though the results have amounted to four decades of nearly unbroken failure, and the few successes—such as the legalization of same-sex marriage—were won by pressure groups unconnected to, and usually  unsupported by, the professional activists of the official left.
There are at least two reasons why a strategy of pure reaction, without any coherent attempt to advance an agenda of its own or even a clear idea of what that agenda might be, has been a fruitful source of humiliation and defeat for the American left. The first is that this approach violates one of the most basic rules of strategy: you win when you seize the initiative and force the other side to respond to your actions, and you lose by passively responding to whatever the other side comes up with. In any contest, without exception, if you surrender the initiative and let the other side set the terms of the conflict, you’re begging to be beaten, and will normally get your wish in short order.
That in itself is bad enough. A movement that defines itself in purely negative terms, though, and attempts solely to prevent someone else’s agenda from being enacted rather than pursuing a concrete agenda of its own, suffers from another massive problem: the best such a movement can hope for is a continuation of the status quo, because the only choice it offers is the one between business as usual and something worse. That’s fine if most people are satisfied with the way things are, and are willing to fling themselves into the struggle for the sake of a set of political, economic, and social arrangements that they consider worth fighting for.
I’m not sure why so many people on the leftward end of American politics haven’t noticed that this is not the case today. One hypothesis that comes to mind is that by and large, the leftward end of the American political landscape is dominated by middle class and upper middle class white people from the comparatively prosperous coastal states. Many of them belong to the upper 20% by income of the American population, and the rest aren’t far below that threshold. The grand bargain of the Reagan years, by which the middle classes bought a guarantee of their wealth and privilege by letting their former allies in the working classes get thrown under the bus, has profited them hugely, and holding onto what they gained by that maneuver doubtless ranks high on their unstated list of motives—much higher, certainly, than pushing for a different future that might put their privileges in jeopardy.
The other major power bloc that supports the American left these days offers an interesting lesson in the power of positive goals. That bloc is made up of certain relatively disadvantaged ethnic groups, above all the African-American community. The Democratic party has been able to hold the loyalty of most African-Americans through decades of equivocation, meaningless gestures, and outright betrayal, precisely because it can offer them a specific vision of a better future: that is, a future in which Americans of African ancestry get treated just like white folk. No doubt it’ll sink in one of these days that the Democratic party has zero interest in actually seeing that future arrive—if that happened, after all, it would lose one of the most reliable of its captive constituencies—but until that day arrives, the loyalty of the African-American community to a party that offers them precious little but promises is a testimony to the power of a positive vision for the future.
That’s something that the Democratic party doesn’t seem to be able to offer anyone else in America, though. Even on paper, what have the last half dozen or so Democratic candidates for president offered? Setting aside crassly manipulative sound bites of the “hope and change” variety, it’s all been attempts to keep things going the way they’ve been going, bracketed with lurid threats about the GOP’s evil plans to make things so much worse. That’s why, for example, the Democratic party has been eager to leap on climate change as a campaign issue, even though their performance in office on that issue is indistinguishable from that of the Republicans they claim to oppose: it’s easy to frame climate change as a conflict between keeping things the way they are and making them much worse, and that’s basically the only tune the American left knows how to play these days.
The difficulty, of course, is that after forty years of repeated and humiliating failure, the Democrats and the other leftward movements in American political life are caught in a brutal vise of their own making. On the one hand, very few people actually believe any more that the left is capable of preventing things from getting worse. There’s good reason for that lack of faith, since a great many things have been getting steadily worse for the majority of Americans since the 1970s, and the assorted technological trinkets and distractions that have become available in the meantime don’t do much to make up for the absence of stable jobs with decent wages, functioning infrastructure, affordable health care, and all the other amenities that have gone gurgling down the nation’s drain.
On the other hand, as hinted above, if the best you can offer the voters is a choice between what they have now and something worse, and what they have now is already pretty wretched, you’re not likely to get much traction. That’s the deeper issue behind the unenthusiastic popular response to Hillary Clinton’s antics, and I’d like to suggest it’s also what’s behind Donald Trump’s success in the polls—no matter how awful a president he’d be, the logic seems to run, at least he’d be different. When a nation reaches that degree of impatience with a status quo no one with access to power is willing to consider changing, an explosion is not far away.

The Cimmerian Hypothesis, Part Three: The End of the Dream

Wed, 2015-07-29 16:47
Let's take a moment to recap the argument of the last two posts here on The Archdruid Report before we follow it through to its conclusion. There are any number of ways to sort out the diversity of human social forms, but one significant division lies between those societies that don’t concentrate population, wealth, and power in urban centers, and those that do. One important difference between the societies that fall into these two categories is that urbanized societies—we may as well call these by the time-honored term “civilizations”—reliably crash and burn after a lifespan of roughly a thousand years, while societies that lack cities have no such fixed lifespans and can last for much longer without going through the cycle of rise and fall, punctuated by dark ages, that defines the history of civilizations.
It’s probably necessary to pause here and clear up what seems to be a common misunderstanding. To say that societies in the first category can last for much more than a thousand years doesn’t mean that all of them do. I mention this because I fielded a flurry of comments from people who pointed to a few examples of societies without cities that collapsed in less than a millennium, and insisted that this somehow disproved my hypothesis. Not so; if everyone who takes a certain diet pill, let’s say, suffers from heart damage, the fact that some people who don’t take the diet pill suffer heart damage from other causes doesn’t absolve the diet pill of responsibility. Nor does recovery cancel the diagnosis: the fact that civilizations such as Egypt and China have managed to pull themselves together after a dark age and rebuild a new version of their former civilization doesn’t erase the fact of the collapse and the dark age that followed it.
The question is why civilizations crash and burn so reliably. There are plenty of good reasons why this might happen, and it’s entirely possible that several of them are responsible; the collapse of civilization could be an overdetermined process. Like the victim in the cheap mystery novel who was shot, stabbed, strangled, clubbed over the head, and then chucked out a twentieth-floor window, that is, civilizations that fall may have more causes of death than were strictly necessary. The ecological costs of building and maintaining cities, for example, place much greater strains on the local environment than the less costly and less concentrated settlement patterns of nonurban societies do, and the rising maintenance costs of capital—the driving force behind the theory of catabolic collapse I’ve proposed elsewhere—can spin out of control much more easily in an urban setting than elsewhere. Other examples of the vulnerability of urbanized societies can easily be worked out by those who wish to do so.
That said, there’s at least one other factor at work. As noted in last week’s post, civilizations by and large don’t have to be dragged down the slope of decline and fall; instead, they take that route with yells of triumph, convinced that the road to ruin will infallibly lead them to heaven on earth, and attempts to turn them aside from that trajectory typically get reactions ranging from blank incomprehension to furious anger. It’s not just the elites who fall into this sort of self-destructive groupthink, either: it’s not hard to find, in a falling civilization, people who claim to disagree with the ideology that’s driving the collapse, but people who take their disagreement to the point of making choices that differ from those of their more orthodox neighbors are much scarcer. They do exist; every civilization breeds them, but they make up a very small fraction of the population, and they generally exist on the fringes of society, despised and condemned by all those right-thinking people whose words and actions help drive the accelerating process of decline and fall.
The next question, then, is how civilizations get caught in that sort of groupthink. My proposal, as sketched out last week, is that the culprit is a rarely noticed side effect of urban life. People who live in a mostly natural environment—and by this I mean merely an environment in which most things are put there by nonhuman processes rather than by human action—have to deal constantly with the inevitable mismatches between the mental models of the universe they carry in their heads and the universe that actually surrounds them. People who live in a mostly artificial environment—an environment in which most things were made and arranged by human action—don’t have to deal with this anything like so often, because an artificial environment embodies the ideas of the people who constructed and arranged it. A natural environment therefore applies negative or, as it’s also called, corrective feedback to human models of the way things are, while an artificial environment applies positive feedback—the sort of thing people usually mean when they talk about a feedback loop.
This explains, incidentally, one of the other common differences between civilizations and other kinds of human society: the pace of change. Anthropologists not so long ago used to insist that what they liked to call “primitive societies”—that is, societies that have relatively simple technologies and no cities—were stuck in some kind of changeless stasis. That was nonsense, but the thin basis in fact that was used to justify the nonsense was simply that the pace of change in low-tech, non-urban societies, when they’re left to their own devices, tends to be fairly sedate, and usually happens over a time scale of generations. Urban societies, on the other hand, change quickly, and the pace of change tends to accelerate over time: a dead giveaway that a positive feedback loop is at work.
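For readers who like to see the distinction spelled out, here is a minimal numerical sketch of the two kinds of feedback in Python. The update rules, the setpoint, and the gain values are arbitrary illustrative choices, not anything drawn from the argument itself; the point is simply that a corrective loop settles toward a reference value, while an amplifying loop produces the accelerating change just described.

```python
# Toy contrast between negative (corrective) and positive (amplifying) feedback.
# All numbers are arbitrary; only the qualitative behavior matters here.

def corrective_step(x, setpoint=100.0, gain=0.2):
    # Negative feedback: move a fraction of the way back toward the setpoint,
    # so deviations shrink from one step to the next.
    return x + gain * (setpoint - x)

def amplifying_step(x, gain=0.2):
    # Positive feedback: each step adds a fraction of the current value,
    # so the pace of change accelerates the longer the process runs.
    return x * (1.0 + gain)

x_corrective = x_amplifying = 120.0
for step in range(10):
    x_corrective = corrective_step(x_corrective)
    x_amplifying = amplifying_step(x_amplifying)
    print(f"step {step}: corrective={x_corrective:.1f}  amplifying={x_amplifying:.1f}")
```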
Notice that what’s fed back to the minds of civilized people by their artificial environment isn’t simply human thinking in general. It’s whatever particular set of mental models and habits of thought happen to be most popular in their civilization. Modern industrial civilization, for example, is obsessed with simplicity; our mental models and habits of thought value straight lines, simple geometrical shapes, hard boundaries, and clear distinctions. That obsession, and the models and mental habits that unfold from it, have given us an urban environment full of straight lines, simple geometrical shapes, hard boundaries, and clear distinctions—and thus reinforce our unthinking assumption that these things are normal and natural, which by and large they aren’t.
Modern industrial civilization is also obsessed with the frankly rather weird belief that growth for its own sake is a good thing. (Outside of a few specific cases, that is. I’ve wondered at times whether the deeply neurotic American attitude toward body weight comes from the conflict between current fashions in body shape and the growth-is-good mania of the rest of our culture; if bigger is better, why isn’t a big belly better than a small one?) In a modern urban American environment, it’s easy to believe that growth is good, since that claim is endlessly rehashed whenever some new megawhatsit replaces something of merely human scale, and since so many of the costs of malignant growth get hauled out of sight and dumped on somebody else. In settlement patterns that haven’t been pounded into their present shape by true believers in industrial society’s growth-for-its-own-sake ideology, people are rather more likely to grasp the meaning of the words “too much.”
I’ve used examples from our own civilization because they’re familiar, but every civilization reshapes its urban environment in the shape of its own mental models, which then reinforce those models in the minds of the people who live in that environment. As these people in turn shape that environment, the result is positive feedback: the mental models in question become more and more deeply entrenched in the built environment and thus also the collective conversation of the culture, and in both cases, they also become more elaborate and more extreme. The history of architecture in the western world over the last few centuries is a great example of this latter: over that time, buildings became ever more completely defined by straight lines, flat surfaces, simple geometries, and hard boundaries between one space and another—and it’s hardly an accident that popular culture in urban communities has simplified in much the same way over that same timespan.
One way to understand this is to see a civilization as the working out in detail of some specific set of ideas about the world. At first those ideas are as inchoate as dream-images, barely grasped even by the keenest thinkers of the time. Gradually, though, the ideas get worked out explicitly; conflicts among them are resolved or papered over in standardized ways; the original set of ideas becomes the core of a vast, ramifying architecture of thought which defines the universe to the inhabitants of that civilization. Eventually, everything in the world of human experience is assigned some place in that architecture of thought; everything that can be hammered into harmony with the core set of ideas has its place in the system, while everything that can’t gets assigned the status of superstitious nonsense, or whatever other label the civilization likes to use for the realities it denies.
The further the civilization develops, though, the less it questions the validity of the basic ideas themselves, and the urban environment is a critical factor in making this happen. By limiting, as far as possible, the experiences available to influential members of society to those that fit the established architecture of thought, urban living makes it much easier to confuse mental models with the universe those models claim to describe, and that confusion is essential if enough effort, enthusiasm, and passion are to be directed toward the process of elaborating those models to their furthest possible extent.
A branch of knowledge that has to keep on going back to revisit its first principles, after all, will never get far beyond them. This is why philosophy, which is the science of first principles, doesn’t “progress” in the simpleminded sense of that word—Aristotle didn’t disprove Plato, nor did Nietzsche refute Schopenhauer, because each of these philosophers, like all others in that challenging field, returned to the realm of first principles from a different starting point and so offered a different account of the landscape. Original philosophical inquiry thus plays a very large role in the intellectual life of every civilization early in the process of urbanization, since this helps elaborate the core ideas on which the civilization builds its vision of reality; once that process is more or less complete, though, philosophy turns into a recherché intellectual specialty or gets transformed into intellectual dogma.
Cities are thus the Petri dishes in which civilizations ripen their ideas to maturity—and like Petri dishes, they do this by excluding contaminating influences. It’s easy, from the perspective of a falling civilization like ours, to see this as a dreadful mistake, a withdrawal from contact with the real world in order to pursue an abstract vision of things increasingly detached from everything else. That’s certainly one way to look at the matter, but there’s another side to it as well.
Civilizations are far and away the most spectacularly creative form of human society. Over the course of its thousand-year lifespan, the inhabitants of a civilization will create many orders of magnitude more of the products of culture—philosophical, scientific, and religious traditions, works of art and the traditions that produce and sustain them, and so on—than an equal number of people living in non-urban societies and experiencing the very sedate pace of cultural change already mentioned. To borrow a metaphor from the plant world, non-urban societies are perennials, and civilizations are showy annuals that throw all their energy into the flowering process.  Having flowered, civilizations then go to seed and die, while the perennial societies flower less spectacularly and remain green thereafter.
The feedback loop described above explains both the explosive creativity of civilizations and their equally explosive downfall. It’s precisely because civilizations free themselves from the corrective feedback of nature, and divert an ever larger portion of their inhabitants’ brainpower from the uses for which human brains were originally adapted by evolution, that they generate such torrents of creativity. Equally, it’s precisely because they do these things that civilizations run off the rails into self-feeding delusion, lose the capacity to learn the lessons of failure or even notice that failure is taking place, and are destroyed by threats they’ve lost the capacity to notice, let alone overcome. Meanwhile, other kinds of human societies move sedately along their own life cycles, and their creativity and their craziness—and they have both of these, of course, just as civilizations do—are kept within bounds by the enduring negative feedback loops of nature.
Which of these two options is better? That’s a question of value, not of fact, and so it has no one answer. Facts, to return to a point made in these posts several times, belong to the senses and the intellect, and they’re objective, at least to the extent that others can say, “yes, I see it too.” Values, by contrast, are a matter of the heart and the will, and they’re subjective; to call something good or bad doesn’t state an objective fact about the thing being discussed. It always expresses a value judgment from some individual point of view. You can’t say “x is better than y,” and mean anything by it, unless you’re willing to field such questions as “better by what criteria?” and “better for whom?”
Myself, I’m very fond of the benefits of civilization. I like hot running water, public libraries, the rule of law, and a great many other things that you get in civilizations and generally don’t get outside of them. Of course that preference is profoundly shaped by the fact that I grew up in a civilization; if I’d happened to be the son of yak herders in central Asia or tribal horticulturalists in upland Papua New Guinea, I might well have a different opinion—and I might also have a different opinion even if I’d grown up in this civilization but had different needs and predilections. Robert E. Howard, whose fiction launched the series of posts that finishes up this week, was a child of American civilization at its early twentieth century zenith, and he loathed civilization and all it stood for.
This is one of the two reasons that I think it’s a waste of time to get into arguments over whether civilization is a good thing. The other reason is that neither my opinion nor yours, dear reader, nor the opinion of anybody else who might happen to want to fulminate on the internet about the virtues or vices of civilization, is worth two farts in an EF-5 tornado when it comes to the question of whether or not future civilizations will rise and fall on this planet after today’s industrial civilization completes the arc of its destiny. Since the basic requirements of urban life first became available not long after the end of the last ice age, civilizations have risen wherever conditions favored them, cycled through their lifespans, and fallen, and new civilizations rose again in the same places if the conditions remained favorable for that process.
Until the coming of the fossil fuel age, though, civilization was a localized thing, in a double sense. On the one hand, without the revolution in transport and military technology made possible by fossil fuels, any given civilization could maintain control over anything more than a small portion of the planet’s surface for only a fairly short time—thus as late as 1800, when the industrial revolution was already well under way, the civilized world was still divided into separate civilizations that each pursued its own very different ideas and values. On the other hand, without the economic revolution made possible by fossil fuels, very large sections of the world were completely unsuited to civilized life, and remained outside the civilized world for all practical purposes. As late as 1800, as a result, quite a bit of the world’s land surface was still inhabited by hunter-gatherers, nomadic pastoralists, and tribal horticulturalists who owed no allegiance to any urban power and had no interest in cities and their products at all—except for the nomadic pastoralists, that is, who occasionally liked to pillage one.
The world’s fossil fuel reserves aren’t renewable on any time scale that matters to human beings. Since we’ve burnt all the easily accessible coal, oil, and natural gas on the planet, and are working our way through the stuff that’s difficult to get even with today’s baroque and energy-intensive technologies, the world’s first fossil-fueled human civilization is guaranteed to be its last as well. That means that once the deindustrial dark age ahead of us is over, and conditions favorable for the revival of civilization recur here and there on various corners of the planet, it’s a safe bet that new civilizations will build atop the ruins we’ve left for them.
The energy resources they’ll have available to them, though, will be far less abundant and concentrated than the fossil fuels that gave industrial civilization its global reach.  With luck, and some hard work on the part of people living now, they may well inherit the information they need to make use of sun, wind, and other renewable energy resources in ways that the civilizations before ours didn’t know how to do. As our present-day proponents of green energy are finding out the hard way just now, though, this doesn’t amount to the kind of energy necessary to maintain our kind of civilization.
I’ve argued elsewhere, especially in my book The Ecotechnic Future, that modern industrial society is simply the first, clumsiest, and most wasteful form of what might be called technic society, the subset of human societies that get a significant amount of their total energy from nonbiotic sources—that is, from something other than human and animal muscles fueled by the annual product of photosynthesis. If that turns out to be correct, future civilizations that learn to use energy sparingly may be able to accomplish some of the things that we currently do by throwing energy around with wild abandon, and they may also learn how to do remarkable things that are completely beyond our grasp today. Eventually there may be other global civilizations, following out their own unique sets of ideas about the world through the usual process of dramatic creativity followed by dramatic collapse.
That’s a long way off, though. As the first global civilization gives way to the first global dark age, my working guess is that civilization—that is to say, the patterns of human society necessary to support the concentration of population, wealth, and power in urban centers—is going to go away everywhere, or nearly everywhere, over the next one to three centuries. A planet hammered by climate change, strewn with chemical and radioactive poisons, and swept by mass migrations is not a safe place for cities and the other amenities of civilized life. As things calm down, say, half a millennium from now, a range of new civilizations will doubtless emerge in those parts of the planet that have suitable conditions for urban life, while human societies of other kinds will emerge everywhere else on the planet that human life is possible at all.
I realize that this is not exactly a welcome prospect for those people who’ve bought into industrial civilization’s overblown idea of its own universal importance. Those who believe devoutly that our society is the cutting edge of humanity’s future, destined to march on gloriously forever to the stars, will be as little pleased by the portrait of the future I’ve painted as their equal and opposite numbers, for whom our society is the end of history and must surely be annihilated, along with all seven billion of us, by some glorious cataclysm of the sort beloved by Hollywood scriptwriters. Still, the universe is under no obligation to cater to anybody’s fantasies, you know. That’s a lesson Robert E. Howard knew well and wove into the best of his fiction, the stories of Conan among them—and it’s a lesson worth learning now, at least for those who hope to have some influence over how the future affects them, their families, and their communities, in an age of decline and fall.

The Cimmerian Hypothesis, Part Two: A Landscape of Hallucinations

Wed, 2015-07-22 19:17
Last week’s post covered a great deal of ground—not surprising, really, for an essay that started from a quotation from a Weird Tales story about Conan the Barbarian—and it may be useful to recap the core argument here. Civilizations—meaning here human societies that concentrate power, wealth, and population in urban centers—have a distinctive historical trajectory of rise and fall that isn’t shared by societies that lack urban centers. There are plenty of good reasons why this should be so, from the ecological costs of urbanization to the buildup of maintenance costs that drives catabolic collapse, but there’s also a cognitive dimension.
Look over the histories of fallen civilizations, and far more often than not, societies don’t have to be dragged down the slope of decline and fall. Rather, they go that way at a run, convinced that the road to ruin must inevitably lead them to heaven on earth. Arnold Toynbee, whose voluminous study of the rise and fall of civilizations has been one of the main sources for this blog since its inception, wrote at length about the way that the elite classes of falling civilizations lose the capacity to come up with new responses for new situations, or even to learn from their mistakes; thus they keep on trying to use the same failed policies over and over again until the whole system crashes to ruin. That’s an important factor, no question, but it’s not just the elites who seem to lose track of the real world as civilizations go sliding down toward history’s compost heap, it’s the masses as well.
Those of my readers who want to see a fine example of this sort of blindness to the obvious need only check the latest headlines. Within the next decade or so, for example, the entire southern half of Florida will become unfit for human habitation due to rising sea levels, driven by our dumping of greenhouse gases into an already overloaded atmosphere. Low-lying neighborhoods in Miami already flood with sea water whenever a high tide and a strong onshore wind hit at the same time; one more foot of sea level rise and salt water will pour over barriers into the remaining freshwater sources, turning southern Florida into a vast brackish swamp and forcing the evacuation of most of the millions who live there.
That’s only the most dramatic of a constellation of climatic catastrophes that are already tightening their grip on much of the United States. Out west, the rain forests of western Washington are burning in the wake of years of increasingly severe drought, California’s vast agricultural acreage is reverting to desert, and the entire city of Las Vegas will probably be out of water—as in, you turn on the tap and nothing but dust comes out—in less than a decade. As waterfalls cascade down the seaward faces of Antarctic and Greenland glaciers, leaking methane blows craters in the Siberian permafrost, and sea level rises at rates considerably faster than the worst case scenarios scientists were considering a few years ago, these threats are hardly abstract issues; is anyone in America taking them seriously enough to, say, take any concrete steps to stop using the atmosphere as a gaseous sewer, starting with their own personal behavior? Surely you jest.
No, the Republicans are still out there insisting at the top of their lungs that any scientific discovery that threatens their rich friends’ profits must be fraudulent, the Democrats are still out there proclaiming just as loudly that there must be some way to deal with anthropogenic climate change that won’t cost them their frequent-flyer miles, and nearly everyone outside the political sphere is making whatever noises they think will allow them to keep on pursuing exactly those lifestyle choices that are bringing on planetary catastrophe. Every possible excuse to insist that what’s already happening won’t happen gets instantly pounced on as one more justification for inertia—the claim currently being splashed around the media that the Sun might go through a cycle of slight cooling in the decades ahead is the latest example. (For the record, even if we get a grand solar minimum, its effects will be canceled out in short order by the impact of ongoing atmospheric pollution.)
Business as usual is very nearly the only option anybody is willing to discuss, even though the long-predicted climate catastrophes are already happening and the days of business as usual in any form are obviously numbered. The one alternative that gets air time, of course, is the popular fantasy of instant planetary dieoff, which gets plenty of attention because it’s just as effective an excuse for inaction as faith in business as usual. What next to nobody wants to talk about is the future that’s actually arriving exactly as predicted: a future in which low-lying coastal regions around the country and the world have to be abandoned to the rising seas, while the Southwest and large portions of the mountain west become more inhospitable than the eastern Sahara or Arabia’s Empty Quarter.
If the ice melt keeps accelerating at its present pace, we could be only a few decades from the point at which it’s Manhattan Island’s turn to be abandoned, because everything below ground level is permanently flooded with seawater and every winter storm sends waves rolling right across the island and flings driftwood logs against second-story windows. A few decades more, and waves will roll over the low-lying neighborhoods of Houston, Boston, Seattle, and Washington DC, while the ruined buildings that used to be New Orleans rise out of the still waters of a brackish estuary and the ruined buildings that used to be Las Vegas are half buried by the drifting sand. Take a moment to consider the economic consequences of that much infrastructure loss, that much destruction of built capital, that many people who somehow have to be evacuated and resettled, and think about what kind of body blow that will deliver to an industrial society that is already in bad shape for other reasons.
None of this had to happen. Half a century ago, policy makers and the public alike had already been presented with a tolerably clear outline of what was going to happen if we proceeded along the trajectory we were on, and those same warnings have been repeated with increasing force year by year, as the evidence to support them has mounted up implacably—and yet nearly all of us nodded and smiled and kept going. Nor has this changed in the least as the long-predicted catastrophes have begun to show up right on schedule. Quite the contrary: faced with a rising spiral of massive crises, people across the industrial world are, with majestic consistency, doing exactly those things that are guaranteed to make those crises worse.
So the question that needs to be asked, and if possible answered, is why civilizations—human societies that concentrate population, power, and wealth in urban centers—so reliably lose the capacity to learn from their mistakes and recognize that a failed policy has in fact failed.  It’s also worth asking why they so reliably do this within a finite and predictable timespan: civilizations last on average around a millennium before they crash into a dark age, while uncivilized societies routinely go on for many times that period. Doubtless any number of factors drive civilizations to their messy ends, but I’d like to suggest a factor that, to my knowledge, hasn’t been discussed in this context before.
Let’s start with what may well seem like an irrelevancy. There’s been a great deal of discussion down through the years in environmental circles about the way that the survival and health of the human body depends on inputs from nonhuman nature. There’s been a much more modest amount of talk about the human psychological and emotional needs that can only be met through interaction with natural systems. One question I’ve never seen discussed, though, is whether the human intellect has needs that are only fulfilled by a natural environment.
As I consider that question, one obvious answer comes to mind: negative feedback.
The human intellect is the part of each of us that thinks, that tries to make sense of the universe of our experience. It does this by creating models. By “models” I don’t just mean those tightly formalized and quantified models we call scientific theories; a poem is also a model of part of the universe of human experience, so is a myth, so is a painting, and so is a vague hunch about how something will work out. When a twelve-year-old girl pulls the petals off a daisy while saying “he loves me, he loves me not,” she’s using a randomization technique to decide between two models of one small but, to her, very important portion of the universe, the emotional state of whatever boy she has in mind.
With any kind of model, it’s critical to remember Alfred Korzybski’s famous rule: “the map is not the territory.” A model, to put the same point another way, is a representation; it represents the way some part of the universe looks when viewed from the perspective of one or more members of our species of social primates, using the idiosyncratic and profoundly limited set of sensory equipment, neural processes, and cognitive frameworks we got handed by our evolutionary heritage. Painful though this may be to our collective egotism, it’s not unfair to say that human mental models are what you get when you take the universe and dumb it down to the point that our minds can more or less grasp it.
What keeps our models from becoming completely dysfunctional is the negative feedback we get from the universe. For the benefit of readers who didn’t get introduced to systems theory, I should probably take a moment to explain negative feedback. The classic example is the common household thermostat, which senses the temperature of the air inside the house and activates a switch accordingly. If the air temperature is below a certain threshold, the thermostat turns the heat on and warms things up; if the air temperature rises above a different, slightly higher threshold, the thermostat turns the heat off and lets the house cool down.
In a sense, a thermostat embodies a very simple model of one very specific part of the universe, the temperature inside the house. Like all models, this one includes a set of implicit definitions and a set of value judgments. The definitions are the two thresholds, the one that turns the furnace on and the one that turns it off, and the value judgments label temperatures below the first threshold “too cold” and those above the second “too hot.” Like every human model, the thermostat model is unabashedly anthropocentric—“too cold” by the thermostat’s standard would be uncomfortably warm for a polar bear, for example—and selects out certain factors of interest to human beings from a galaxy of other things we don’t happen to want to take into consideration.
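Purely as an illustration, here is a minimal sketch of that thermostat model in Python. The class, the two thresholds, and the heating and cooling rates in the toy simulation are hypothetical values chosen for the example, not part of the original description; the sketch only shows the two-threshold, negative-feedback behavior described above.

```python
class Thermostat:
    """A simple two-threshold (hysteresis) model of household temperature control."""

    def __init__(self, turn_on_below=18.0, turn_off_above=21.0):
        # The two thresholds embody the model's implicit value judgments:
        # below the first is "too cold," above the second is "too hot."
        self.turn_on_below = turn_on_below
        self.turn_off_above = turn_off_above
        self.heating = False

    def update(self, air_temperature):
        # Negative feedback: the furnace pushes the temperature back toward
        # the band between the two thresholds rather than away from it.
        if air_temperature < self.turn_on_below:
            self.heating = True
        elif air_temperature > self.turn_off_above:
            self.heating = False
        return self.heating

# Toy simulation: the house cools on its own and warms while the furnace runs.
stat = Thermostat()
temperature = 15.0
for hour in range(12):
    furnace_on = stat.update(temperature)
    temperature += 1.5 if furnace_on else -0.8
    print(f"hour {hour}: {temperature:.1f} degrees, furnace {'on' if furnace_on else 'off'}")
```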
The models used by the human intellect to make sense of the universe are usually less simple than the one that guides a thermostat—there are unfortunately exceptions—but they work according to the same principle. They contain definitions, which may be implicit or explicit: the girl plucking petals from the daisy may not have an explicit definition of love in mind when she says “he loves me,” but some set of beliefs and expectations about what those words imply underlies the model. They also contain value judgments: if she’s attracted to the boy in question, “he loves me” has a positive value and “he loves me not” has a negative one.
Notice, though, that there’s a further dimension to the model, which is its interaction with the observed behavior of the thing it’s supposed to model. Plucking petals from a daisy, all things considered, is not a very good predictor of the emotional states of twelve-year-old boys; predictions made on the basis of that method are very often disproved by other sources of evidence, which is why few girls much older than twelve rely on it as an information source. Modern western science has formalized and quantified that sort of reality testing, but it’s something that most people do at least occasionally. It’s when they stop doing so that we get the inability to recognize failure that helps to drive, among many other things, the fall of civilizations.
Individual facets of experienced reality thus provide negative feedback to individual models. The whole structure of experienced reality, though, is capable of providing negative feedback on another level—when it challenges the accuracy of the entire mental process of modeling.
Nature is very good at providing negative feedback of that kind. Here’s a human conceptual model that draws a strict line between mammals, on the one hand, and birds and reptiles, on the other. Not much more than a century ago, it was as precise as any division in science: mammals have fur and don’t lay eggs, reptiles and birds don’t have fur and do lay eggs. Then some Australian settler met a platypus, which has fur and lays eggs. Scientists back in Britain flatly refused to take it seriously until some live platypuses finally made it there by ship. Plenty of platypus egg was splashed across plenty of distinguished scientific faces, and definitions had to be changed to make room for another category of mammals and the evolutionary history necessary to explain it.
Here’s another human conceptual model, the one that divides trees into distinct species. Most trees in most temperate woodlands, though, actually have a mix of genetics from closely related species. There are few red oaks; what you have instead are mostly-red, partly-red, and slightly-red oaks. Go from the northern to the southern end of a species’ distribution, or from wet to dry regions, and the variations within the species are quite often more extreme than those that separate trees that have been assigned to different species. Here’s still another human conceptual model, the one that divides trees from shrubs—plenty of species can grow either way, and the list goes on.
The human mind likes straight lines, definite boundaries, precise verbal definitions. Nature doesn’t. People who spend most of their time dealing with undomesticated natural phenomena, accordingly, have to get used to the fact that nature is under no obligation to make the kind of sense the human mind prefers. I’d suggest that this is why so many of the cultures our society calls “primitive”—that is, those that have simple material technologies and interact directly with nature much of the time—so often rely on nonlogical methods of thought: those our culture labels “mythological,” “magical,” or—I love this term—“prescientific.” (That the “prescientific” will almost certainly turn out to be the postscientific as well is one of the lessons of history that modern industrial society is trying its level best to ignore.) Nature as we experience it isn’t simple, neat, linear, and logical, and so it makes sense that the ways of thinking best suited to dealing with nature directly aren’t simple, neat, linear, and logical either.
 With this in mind, let’s return to the distinction discussed in last week’s post. I noted there that a city is a human settlement from which the direct, unmediated presence of nature has been removed as completely as the available technology permits. What replaces natural phenomena in an urban setting, though, is as important as what isn’t allowed there. Nearly everything that surrounds you in a city was put there deliberately by human beings; it is the product of conscious human thinking, and it follows the habits of human thought just outlined. Compare a walk down a city street to a walk through a forest or a shortgrass prairie: in the city street, much more of what you see is simple, neat, linear, and logical. A city is an environment reshaped to reflect the habits and preferences of the human mind.
I suspect there may be a straightforwardly neurological factor in all this. The human brain, so much larger compared to body weight than the brains of most of our primate relatives, evolved because having a larger brain provided some survival advantage to those hominins who had it, in competition with those who didn’t. It’s probably a safe assumption that processing information inputs from the natural world played a very large role in these advantages, and this would imply, in turn, that the human brain is primarily adapted for perceiving things in natural environments—not, say, for building cities, creating technologies, and making the other common products of civilization.
Thus some significant part of the brain has to be redirected away from the things that it’s adapted to do, in order to make civilizations possible. I’d like to propose that the simplified, rationalized, radically information-poor environment of the city plays a crucial role in this. (Information-poor? Of course; the amount of information that comes cascading through the five keen senses of an alert hunter-gatherer standing in an African forest is vastly greater than what a city-dweller gets from the blank walls and the monotonous sounds and scents of an urban environment.) Children raised in an environment that lacks the constant cascade of information natural environments provide, and taught to redirect their mental powers toward such other activities as reading and mathematics, grow up with cognitive habits and, in all probability, neurological arrangements focused toward the activities of civilization and away from the things to which the human brain is adapted by evolution.
One source of supporting evidence for this admittedly speculative proposal is the worldwide insistence on the part of city-dwellers that people who live in isolated rural communities, far outside the cultural ambit of urban life, are just plain stupid. What that means in practice, of course, is that people from isolated rural communities aren’t used to using their brains for the particular purposes that city people value. These allegedly “stupid” countryfolk are by and large extraordinarily adept at the skills they need to survive and thrive in their own environments. They may be able to listen to the wind and know exactly where on the far side of the hill a deer waits to be shot for dinner, glance at a stream and tell which riffle the trout have chosen for a hiding place, watch the clouds pile up and read from them how many days they’ve got to get the hay in before the rains come and rot it in the fields—all of which tasks require sophisticated information processing, the kind of processing that human brains evolved doing.
Notice, though, how the urban environment relates to the human habit of mental modeling. Everything in a city was a mental model before it became a building, a street, an item of furniture, or what have you. Chairs look like chairs, houses like houses, and so on; it’s so rare for humanmade items to break out of the habitual models of our species and the particular culture that built them that when this happens, it’s a source of endless comment. Where a natural environment constantly challenges human conceptual models, an urban environment reinforces them, producing a feedback loop that’s probably responsible for most of the achievements of civilization.
I suggest, though, that the same feedback loop may also play a very large role in the self-destruction of civilizations. People raised in urban environments come to treat their mental models as realities, more real than the often-unruly facts on the ground, because everything they encounter in their immediate environments reinforces those models. As the models become more elaborate and the cities become more completely insulated from the complexities of nature, the inhabitants of a civilization move deeper and deeper into a landscape of hallucinations—not least because as many of those hallucinations get built in brick and stone, or glass and steel, as the available technology permits. As a civilization approaches its end, the divergence between the world as it exists and the mental models that define the world for the civilization’s inmates becomes total, and its decisions and actions become lethally detached from reality—with consequences that we’ll discuss in next week’s post.

The Cimmerian Hypothesis, Part One: Civilization and Barbarism

Wed, 2015-07-15 17:16
One of the oddities of the writer’s life is the utter unpredictability of inspiration. There are times when I sit down at the keyboard knowing what I have to write, and plod my way through the day’s allotment of prose in much the same spirit that a gardener turns the earth in the beds of a big garden; there are times when a project sits there grumbling to itself and has to be coaxed or prodded into taking shape on the page; but there are also times when something grabs hold of me, drags me kicking and screaming to the keyboard, and holds me there with a squamous paw clamped on my shoulder until I’ve finished whatever it is that I’ve suddenly found out that I have to write.
Over the last two months, I’ve had that last experience on a considerably larger scale than usual; to be precise, I’ve just completed the first draft of a 70,000-word novel in eight weeks. Those of my readers and correspondents who’ve been wondering why I’ve been slower than usual to respond to them now know the reason. The working title is Moon Path to Innsmouth; it deals, in the sidelong way for which fiction is so well suited, with quite a number of the issues discussed on this blog; I’m pleased to say that I’ve lined up a publisher, and so in due time the novel will be available to delight the rugose hearts of the Great Old Ones and their eldritch minions everywhere.
None of that would be relevant to the theme of the current series of posts on The Archdruid Report, except that getting the thing written required quite a bit of reference to the weird tales of an earlier era—the writings of H.P. Lovecraft, of course, but also those of Clark Ashton Smith and Robert E. Howard, who both contributed mightily to the fictive mythos that took its name from Lovecraft’s squid-faced devil-god Cthulhu. One Howard story leads to another—or at least it does if you spent your impressionable youth stewing your imagination in a bubbling cauldron of classic fantasy fiction, as I did—and that’s how it happened that I ended up revisiting the final lines of “Beyond the Black River,” part of the saga of Conan of Cimmeria, Howard’s iconic hero:
“‘Barbarism is the natural state of mankind,’ the borderer said, still staring somberly at the Cimmerian. ‘Civilization is unnatural. It is a whim of circumstance. And barbarism must always ultimately triumph.’”
It’s easy to take that as nothing more than a bit of bluster meant to add color to an adventure story—easy but, I’d suggest, inaccurate. Science fiction has made much of its claim to be a “literature of ideas,” but a strong case can be made that the weird tale as developed by Lovecraft, Smith, Howard, and their peers has at least as much claim to the same label, and the ideas that feature in a classic weird tale are often a good deal more challenging than those that are the stock in trade of most science fiction: “gee, what happens if I extrapolate this technological trend a little further?” and the like. The authors who published with Weird Tales back in the day, in particular, liked to pose edgy questions about the way that the posturings of our species and its contemporary cultures appeared in the cold light of a cosmos that’s wholly uninterested in our overblown opinion of ourselves.
Thus I think it’s worth giving Conan and his fellow barbarians their due, and treating what we may as well call the Cimmerian hypothesis as a serious proposal about the underlying structure of human history. Let’s start with some basics. What is civilization? What is barbarism? What exactly does it mean to describe one state of human society as natural and another unnatural, and how does that relate to the repeated triumph of barbarism at the end of every civilization?
The word “civilization” has a galaxy of meanings, most of them irrelevant to the present purpose. We can take the original meaning of the word—in late Latin, civilisatio—as a workable starting point; it means “having or establishing settled communities.” A people known to the Romans was civilized if its members lived in civitates, cities or towns. We can generalize this further, and say that a civilization is a form of society in which people live in artificial environments. Is there more to civilization than that? Of course there is, but as I hope to show, most of it unfolds from the distinction just traced out.
A city, after all, is a human environment from which the ordinary workings of nature have been excluded, to as great an extent as the available technology permits. When you go outdoors in a city,  nearly all the things you encounter have been put there by human beings; even the trees are where they are because someone decided to put them there, not by way of the normal processes by which trees reproduce their kind and disperse their seeds. Those natural phenomena that do manage to elbow their way into an urban environment—tropical storms, rats, and the like—are interlopers, and treated as such. The gradient between urban and rural settlements can be measured precisely by what fraction of the things that residents encounter is put there by human action, as compared to the fraction that was put there by ordinary natural processes.
What is barbarism? The root meaning here is a good deal less helpful. The Greek word βαρβαροι, barbaroi, originally meant “people who say ‘bar bar bar’” instead of talking intelligibly in Greek. In Roman times that usage got bent around to mean “people outside the Empire,” and thus in due time to “tribes who are too savage to speak Latin, live in cities, or give up without a fight when we decide to steal their land.” Fast forward a century or two, and that definition morphed uncomfortably into “tribes who are too savage to speak Latin, live in cities, or stay peacefully on their side of the border” —enter Alaric’s Visigoths, Genseric’s Vandals, and the ebullient multiethnic horde that marched westwards under the banners of Attila the Hun.
This is also where Conan enters the picture. In crafting his fictional Hyborian Age, which was vaguely located in time between the sinking of Atlantis and the beginning of recorded history, Howard borrowed freely from various corners of the past, but the Roman experience was an important ingredient—the story cited above, framed by a struggle between the kingdom of Aquilonia and the wild Pictish tribes beyond the Black River, drew noticeably on Roman Britain, though it also took elements from the Old West and elsewhere. The entire concept of a barbarian hero swaggering his way south into the lands of civilization, which Howard introduced to fantasy fiction (and which has been so freely and ineptly plagiarized since his time), has its roots in the late Roman and post-Roman experience, a time when a great many enterprising warriors did just that, and when some, like Conan, became kings.
What sets barbarian societies apart from civilized ones is precisely that a much smaller fraction of the environment barbarians encounter results from human action. When you go outdoors in Cimmeria—if you’re not outdoors to start with, which you probably are—nearly everything you encounter has been put there by nature. There are no towns of any size, just scattered clusters of dwellings in the midst of a mostly unaltered environment. Where your Aquilonian town dweller who steps outside may have to look hard to see anything that was put there by nature, your Cimmerian who shoulders his battle-ax and goes for a stroll may have to look hard to see anything that was put there by human beings.
What’s more, there’s a difference in what we might usefully call the transparency of human constructions. In Cimmeria, if you do manage to get in out of the weather, the stones and timbers of the hovel where you’ve taken shelter are recognizable lumps of rock and pieces of tree; your hosts smell like the pheromone-laden social primates they are; and when their barbarian generosity inspires them to serve you a feast, they send someone out to shoot a deer, hack it into gobbets, and cook the result in some relatively simple manner that leaves no doubt in anyone’s mind that you’re all chewing on parts of a dead animal. Follow Conan’s route down into the cities of Aquilonia, and you’re in a different world, where paint and plaster, soap and perfume, and fancy cookery, among many other things, obscure nature’s contributions to the human world.
So that’s our first set of distinctions. What makes human societies natural or unnatural? It’s all too easy  to sink into a festering swamp of unsubstantiated presuppositions here, since people in every human society think of their own ways of doing things as natural and normal, and everyone else’s ways of doing the same things as unnatural and abnormal. Worse, there’s the pervasive bad habit in industrial Western cultures of lumping all non-Western cultures with relatively simple technologies together as “primitive man”—as though there’s only one of him, sitting there in a feathered war bonnet and a lionskin kilt playing the didgeridoo—in order to flatten out human history into an imaginary straight line of progress that leads from the caves, through us, to the stars.
In point of anthropological fact, the notion of “primitive man” as an allegedly unspoiled child of nature is pure hokum, and generally racist hokum at that. “Primitive” cultures—that is to say, human societies that rely on relatively simple technological suites—differ from one another just as dramatically as they differ from modern Western industrial societies; nor do simpler technological suites correlate with simpler cultural forms. Traditional Australian aboriginal societies, which have extremely simple material technologies, are considered by many anthropologists to have among the most intricate cultures known anywhere, embracing stunningly elaborate systems of knowledge in which cosmology, myth, environmental knowledge, social custom, and scores of other fields normally kept separate in our society are woven together into dizzyingly complex tapestries of knowledge.
What’s more, those tapestries of knowledge have changed and evolved over time. The hokum that underlies that label “primitive man” presupposes, among other things, that societies that use relatively simple technological suites have all been stuck in some kind of time warp since the Neolithic—think of the common habit of speech that claims that hunter-gatherer tribes are “still in the Stone Age” and so forth. Back of that habit of speech is the industrial world’s irrational conviction that all human history is an inevitable march of progress that leads straight to our kind of society, technology, and so forth. That other human societies might evolve in different directions and find their own wholly valid ways of making a home in the universe is anathema to most people in the industrial world these days—even though all the evidence suggests that this way of looking at the history of human culture makes far more sense of the data than does the fantasy of inevitable linear progress toward us.
Thus traditional tribal societies are no more natural than civilizations are, in one important sense of the word “natural;” that is, tribal societies are as complex, abstract, unique, and historically contingent as civilizations are. There is, however, one kind of human society that doesn’t share these characteristics—a kind of society that tends to be intellectually and culturally as well as technologically simpler than most, and that recurs in astonishingly similar forms around the world and across time. We’ve talked about it at quite some length in this blog; it’s the distinctive dark age society that emerges in the ruins of every fallen civilization after the barbarian war leaders settle down to become petty kings, the survivors of the civilization’s once-vast population get to work eking out a bare subsistence from the depleted topsoil, and most of the heritage of the wrecked past goes into history’s dumpster.
If there’s such a thing as a natural human society, the basic dark age society is probably it, since it emerges when the complex, abstract, unique, and historically contingent cultures of the former civilization and its hostile neighbors have both imploded, and the survivors of the collapse have to put something together in a hurry with nothing but raw human relationships and the constraints of the natural world to guide them. Of course once things settle down the new society begins moving off in its own complex, abstract, unique, and historically contingent direction; the dark age societies of post-Mycenean Greece, post-Roman Britain, post-Heian Japan, and their many equivalents have massive similarities, but the new societies that emerged from those cauldrons of cultural rebirth had much less in common with one another than their forbears did.
In Howard’s fictive history, the era of Conan came well before the collapse of Hyborian civilization; he was not himself a dark age warlord, though he doubtless would have done well in that setting. The Pictish tribes whose activities on the Aquilonian frontier inspired the quotation cited earlier in this post weren’t a dark age society, either, though if they’d actually existed, they’d have been well along the arc of transformation that turns the hostile neighbors of a declining civilization into the breeding ground of the warbands that show up on cue to finish things off. The Picts of Howard’s tale, though, were certainly barbarians—that is, they didn’t speak Aquilonian, live in cities, or stay peaceably on their side of the Black River—and they were still around long after the Hyborian civilizations were gone.
That’s one of the details Howard borrowed from history. By and large, human societies that don’t have urban centers tend to last much longer than those that do. In particular, human societies that don’t have urban centers don’t tend to go through the distinctive cycle of decline and fall ending in a dark age that urbanized societies undergo so predictably. There are plenty of factors that might plausibly drive this difference, many of which have been discussed here and elsewhere, but I’ve come to suspect something subtler may be at work here as well. As we’ve seen, a core difference between civilizations and other human societies is that people in civilizations tend to cut themselves off from the immediate experience of nature to a much greater extent than the uncivilized do. Does this help explain why civilizations crash and burn so reliably, leaving the barbarians to play drinking games with mead while sitting unsteadily on the smoldering ruins?
As it happens, I think it does.
As we’ve discussed at length in the last three weekly posts here, human intelligence is not the sort of protean, world-transforming superpower with limitless potential it’s been labeled by the more overenthusiastic partisans of human exceptionalism. Rather, it’s an interesting capacity possessed by one species of social primates, and quite possibly shared by some other animal species as well. Like every other biological capacity, it evolved through a process of adaptation to the environment—not, please note, to some abstract concept of the environment, but to the specific stimuli and responses that a social primate gets from the African savanna and its inhabitants, including but not limited to other social primates of the same species. It’s indicative that when our species originally spread out of Africa, it seems to have settled first in those parts of the Old World that had roughly savanna-like ecosystems, and only later worked out the bugs of living in such radically different environments as boreal forests, tropical jungles, and the like.
The interplay between the human brain and the natural environment is considerably more significant than has often been realized. For the last forty years or so, a scholarly discipline called ecopsychology has explored some of the ways that interactions with nature shape the human mind. More recently, in response to the frantic attempts of American parents to isolate their children from a galaxy of largely imaginary risks, psychologists have begun to talk about “nature deficit disorder,” the set of emotional and intellectual dysfunctions that show up reliably in children who have been deprived of the normal human experience of growing up in intimate contact with the natural world.
All of this should have been obvious from first principles. Studies of human and animal behavior alike have shown repeatedly that psychological health depends on receiving certain highly specific stimuli at certain stages in the maturation process. The experiments by Harry Harlow, who showed that monkeys raised with a mother-substitute wrapped in terrycloth grew up more or less normal, while those raised with a bare metal mother-substitute turned out psychotic even when all their other needs were met, are among the more famous of these, but there have been many more, and many of them can be shown to affect human capacities in direct and demonstrable ways. Children learn language, for example, only if they’re exposed to speech during a certain age window; lacking the right stimulus at the right time, the capacity to use language shuts down and apparently can’t be restarted again.
In this latter example, exposure to speech is what’s known as a triggering stimulus—something from outside the organism that kickstarts a process that’s already hardwired into the organism, but will not get under way until and unless the trigger appears. There are other kinds of stimuli that play different roles in human and animal development. The maturation of the human mind, in fact, might best be seen as a process in which inputs from the environment play a galaxy of roles, some of them of critical importance. What happens when the natural inputs that were around when human intelligence evolved get shut out of the experiences of maturing humans, and replaced by a very different set of inputs put there by human beings? We’ll discuss that next week, in the second part of this post.

Darwin's Casino

Wed, 2015-07-08 16:25
Our age has no shortage of curious features, but for me, at least, one of the oddest is the way that so many people these days don’t seem to be able to think through the consequences of their own beliefs. Pick an ideology, any ideology, straight across the spectrum from the most devoutly religious to the most stridently secular, and you can count on finding a bumper crop of people who claim to hold that set of beliefs, and recite them with all the uncomprehending enthusiasm of a well-trained mynah bird, but haven’t noticed that those beliefs contradict other beliefs they claim to hold with equal devotion.
I’m not talking here about ordinary hypocrisy. The hypocrites we have with us always; our species being what it is, plenty of people have always seen the advantages of saying one thing and doing another. No, what I have in mind is saying one thing and saying another, without ever noticing that if one of those statements is true, the other by definition has to be false. My readers may recall the way that cowboy-hatted heavies in old Westerns used to say to each other, “This town ain’t big enough for the two of us;” there are plenty of ideas and beliefs that are like that, but too many modern minds resemble nothing so much as an OK Corral where the gunfight never happens.
An example that I’ve satirized in an earlier post here is the bizarre way that so many people on the rightward end of the US political landscape these days claim to be, at one and the same time, devout Christians and fervid adherents of Ayn Rand’s violently atheist and anti-Christian ideology.  The difficulty here, of course, is that Jesus tells his followers to humble themselves before God and help the poor, while Rand told hers to hate God, wallow in fantasies of their own superiority, and kick the poor into the nearest available gutter.  There’s quite precisely no common ground between the two belief systems, and yet self-proclaimed Christians who spout Rand’s turgid drivel at every opportunity make up a significant fraction of the Republican Party just now.
Still, it’s only fair to point out that this sort of weird disconnect is far from unique to religious people, or for that matter to Republicans. One of the places it crops up most often nowadays is the remarkable unwillingness of people who say they accept Darwin’s theory of evolution to think through what that theory implies about the limits of human intelligence.
If Darwin’s right, as I’ve had occasion to point out here several times already, human intelligence isn’t the world-shaking superpower our collective egotism likes to suppose. It’s simply a somewhat more sophisticated version of the sort of mental activity found in many other animals. The thing that supposedly sets it apart from all other forms of mentation, the use of abstract language, isn’t all that unique; several species of cetaceans and an assortment of the brainier birds communicate with their kin using vocalizations that show all the signs of being languages in the full sense of the word—that is, structured patterns of abstract vocal signs that take their meaning from convention rather than instinct.
What differentiates human beings from bottlenosed porpoises, African gray parrots, and other talking species is the mere fact that in our case, language and abstract thinking happened to evolve in a species that also had the sort of grasping limbs, fine motor control, and instinctive drive to pick things up and fiddle with them, that primates have and most other animals don’t.  There’s no reason why sentience should be associated with the sort of neurological bias that leads to manipulating the environment, and thence to technology; as far as the evidence goes, we just happen to be the one species in Darwin’s evolutionary casino that got dealt both those cards. For all we know, bottlenosed porpoises have a rich philosophical, scientific, and literary culture dating back twenty million years; they don’t have hands, though, so they don’t have technology. All things considered, this may be an advantage, since it means they won’t have had to face the kind of self-induced disasters our species is so busy preparing for itself due to the inveterate primate tendency to, ahem, monkey around with things.
I’ve long suspected that one of the reasons why human beings haven’t yet figured out how to carry on a conversation with bottlenosed porpoises, African gray parrots, et al. in their own language is quite simply that we’re terrified of what they might say to us—not least because it’s entirely possible that they’d be right. Another reason for the lack of communication, though, leads straight back to the limits of human intelligence. If our minds have emerged out of the ordinary processes of evolution, what we’ve got between our ears is simply an unusually complex variation on the standard social primate brain, adapted over millions of years to the mental tasks that are important to social primates—that is, staying fed, attracting mates, competing for status, and staying out of the jaws of hungry leopards.
Notice that “discovering the objective truth about the nature of the universe” isn’t part of this list, and if Darwin’s theory of evolution is correct—as I believe it to be—there’s no conceivable way it could be. The mental activities of social primates, and all other living things, have to take the rest of the world into account in certain limited ways; our perceptions of food, mates, rivals, and leopards, for example, have to correspond to the equivalent factors in the environment; but it’s actually an advantage to any organism to screen out anything that doesn’t relate to immediate benefits or threats, so that adequate attention can be paid to the things that matter. We perceive colors, which most mammals don’t, because primates need to be able to judge the ripeness of fruit from a distance; we don’t perceive the polarization of light, as bees do, because primates don’t need to navigate by the angle of the sun.
What’s more, the basic mental categories we use to make sense of the tiny fraction of our surroundings that we perceive are just as much a product of our primate ancestry as the senses we have and don’t have. That includes the basic structures of human language, which most research suggests are inborn in our species, as well as such derivations from language as logic and the relation between cause and effect—this latter simply takes the grammatical relation between subjects, verbs, and objects, and projects it onto the nonlinguistic world. In the real world, every phenomenon is part of an ongoing cascade of interactions so wildly hypercomplex that labels like “cause” and “effect” are hopelessly simplistic; what’s more, a great many things—for example, the decay of radioactive nuclei—just up and happen randomly without being triggered by any specific cause at all. We simplify all this into cause and effect because just enough things appear to work that way to make the habit useful to us.
Another thing that has much more to do with our cognitive apparatus than with the world we perceive is number. Does one apple plus one apple equal two apples? In our number-using minds, yes; in the real world, it depends entirely on the size and condition of the apples in question. We convert qualities into quantities because quantities are easier for us to think with.  That was one of the core discoveries that kickstarted the scientific revolution; when Galileo became the first human being in history to think of speed as a quantity, he made it possible for everyone after him to get their minds around the concept of velocity in a way that people before him had never quite been able to do.
In physics, converting qualities to quantities works very, very well. In some other sciences, the same thing is true, though the further you go away from the exquisite simplicity of masses in motion, the harder it is to translate everything that matters into quantitative terms, and the more inevitably gets left out of the resulting theories. By and large, the more complex the phenomena under discussion, the less useful quantitative models are. Not coincidentally, the more complex the phenomena under discussion, the harder it is to control all the variables in play—the essential step in using the scientific method—and the more tentative, fragile, and dubious the models that result.
So when we try to figure out what bottlenosed porpoises are saying to each other, we’re facing what’s probably an insuperable barrier. All our notions of language are social-primate notions, shaped by the peculiar mix of neurology and hardwired psychology that proved most useful to bipedal apes on the East African savannah over the last few million years. The structures that shape porpoise speech, in turn, are social-cetacean notions, shaped by the utterly different mix of neurology and hardwired psychology that’s most useful if you happen to be a bottlenosed porpoise or one of its ancestors.
Mind you, porpoises and humans are at least fellow-mammals, and likely have common ancestors only a couple of hundred million years back. If you want to talk to a gray parrot, you’re trying to cross a much vaster evolutionary distance, since the ancestors of our therapsid forebears and the ancestors of the parrot’s archosaurian progenitors have been following divergent tracks since way back in the Paleozoic. Since language evolved independently in each of the lineages we’re discussing, the logic of convergent evolution comes into play: as with the eyes of vertebrates and cephalopods—another classic case of the same thing appearing in very different evolutionary lineages—the functions are similar but the underlying structure is very different. Thus it’s no surprise that it’s taken exhaustive computer analyses of porpoise and parrot vocalizations just to give us a clue that they’re using language too.
The takeaway point I hope my readers have grasped from this is that the human mind doesn’t know universal, objective truths. Our thoughts are simply the way that we, as members of a particular species of social primates, like to sort out the universe into chunks simple enough for us to think with. Does that make human thought useless or irrelevant? Of course not; it simply means that its uses and relevance are as limited as everything else about our species—and, of course, every other species as well. If any of my readers see this as belittling humanity, I’d like to suggest that fatuous delusions of intellectual omnipotence aren’t a useful habit for any species, least of all ours. I’d also point out that those very delusions have played a huge role in landing us in the rising spiral of crises we’re in today.
Human beings are simply one species among many, inhabiting part of the earth at one point in its long lifespan. We’ve got remarkable gifts, but then so does every other living thing. We’re not the masters of the planet, the crown of evolution, the fulfillment of Earth’s destiny, or any of the other self-important hogwash with which we like to tickle our collective ego, and our attempt to act out those delusional roles with the help of a lot of fossil carbon hasn’t exactly turned out well, you must admit. I know some people find it unbearable to see our species deprived of its supposed place as the precious darlings of the cosmos, but that’s just one of life’s little learning experiences, isn’t it? Most of us make a similar discovery on the individual scale in the course of growing up, and from my perspective, it’s high time that humanity do a little growing up of its own, ditch the infantile egotism, and get to work making the most of the time we have on this beautiful and fragile planet.
The recognition that there’s a middle ground between omnipotence and uselessness, though, seems to be very hard for a lot of people to grasp just now. I don’t know if other bloggers in the doomosphere have this happen to them, but every few months or so I field a flurry of attempted comments by people who want to drag the conversation over to their conviction that free will doesn’t exist. I don’t put those comments through, and not just because they’re invariably off topic; the ideology they’re pushing is, to my way of thinking, frankly poisonous, and it’s also based on a shopworn Victorian determinism that got chucked by working scientists rather more than a century ago, but is still being recycled by too many people who didn’t hear the thump when it landed in the trash can of dead theories.
A century and a half ago, it used to be a commonplace of scientific ideology that cause and effect ruled everything, and the whole universe was fated to rumble along a rigidly invariant sequence of events from the beginning of time to the end thereof. The claim was quite commonly made that a sufficiently vast intelligence, provided with a sufficiently complete data set about the position and velocity of every particle in the cosmos at one point in time, could literally predict everything that would ever happen thereafter. The logic behind that claim went right out the window, though, once experiments in the early 20th century showed conclusively that quantum phenomena are random in the strictest sense of the word. They’re not caused by some hidden variable; they just happen when they happen, by chance.
What determines the moment when a given atom of an unstable isotope will throw off some radiation and turn into a different element? Pure dumb luck. Since radiation discharges from single atoms of unstable isotopes are the most important cause of genetic mutations, and thus a core driving force behind the process of evolution, this is much more important than it looks. The stray radiation that gave you your eye color, dealt an otherwise uninteresting species of lobefin fish the adaptations that made it the ancestor of all land vertebrates, and provided the raw material for countless other evolutionary transformations:  these were entirely random events, and would have happened differently if certain unstable atoms had decayed at a different moment and sent their radiation into a different ovum or spermatozoon—as they very well could have. So it doesn’t matter how vast the intelligence or complete the data set you’ve got, the course of life on earth is inherently impossible to predict, and so are a great many other things that unfold from it.
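For those who like to see this spelled out, here’s a minimal sketch in Python (my own toy illustration, with an arbitrary made-up decay constant, not anything from the physics literature). The ensemble of decays is statistically regular, while no individual decay can be predicted from the ones that came before.
```python
import random
import statistics

# Toy model: exponentially distributed decay times for a batch of
# hypothetical unstable atoms. The decay constant is made up.
decay_constant = 0.1   # per year, arbitrary
atoms = 100_000
decay_times = [random.expovariate(decay_constant) for _ in range(atoms)]

# The ensemble is statistically regular...
print("median decay time (~half-life):", round(statistics.median(decay_times), 2))
print("theoretical half-life:", round(0.693 / decay_constant, 2))

# ...but no individual event is predictable from the ones before it.
print("first five decay times:", [round(t, 2) for t in decay_times[:5]])
```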
With the gibbering phantom of determinism laid to rest, we can proceed to the question of free will. We can define free will operationally as the ability to produce genuine novelty in behavior—that is, to do things that can’t be predicted. Human beings do this all the time, and there are very good evolutionary reasons why they should have that capacity. Any of my readers who know game theory will recall that the best strategy in any competitive game includes an element of randomness, which prevents the other side from anticipating and forestalling your side’s actions. Food gathering, in game theory terms, is a competitive game; so are trying to attract a mate, competing for social prestige, staying out of the jaws of hungry leopards, and most of the other activities that pack the day planners of social primates.
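The game theorist’s point is easy to see in a toy version of matching pennies, sketched below in Python (my own illustrative example, not anything from this post or from the game theory literature). A player who always shows heads gets read and beaten almost every round, while a player who flips a fair coin can’t be anticipated and breaks even on average.
```python
import random

# Matching pennies: the matcher wins a point when both coins match,
# the other player wins a point when they differ. The matcher tries to
# anticipate the player by assuming they'll repeat their last move.

def matcher_guess(history):
    return history[-1] if history else random.choice("HT")

def average_payoff(strategy, rounds=10_000):
    history, score = [], 0
    for _ in range(rounds):
        move = strategy(history)
        score += -1 if matcher_guess(history) == move else 1  # lose when matched
        history.append(move)
    return score / rounds

always_heads = lambda history: "H"                  # perfectly predictable
fair_coin    = lambda history: random.choice("HT")  # genuinely random

print("predictable player:", average_payoff(always_heads))         # close to -1
print("randomizing player:", round(average_payoff(fair_coin), 3))  # close to 0
```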
Unpredictability is so highly valued by our species, in fact, that every human culture ever recorded has worked out formal ways to increase the total amount of sheer randomness guiding human action. Yes, we’re talking about divination—for those who don’t know the jargon, this term refers to what you do with Tarot cards, the I Ching, tea leaves, horoscopes, and all the myriad other ways human cultures have worked out to take a snapshot of the nonrational as a guide for action. Aside from whatever else may be involved—a point that isn’t relevant to this blog—divination does a really first-rate job of generating unpredictability. Flipping a coin does the same thing, and most people have confounded the determinists by doing just that on occasion, but fully developed divination systems like those just named provide a much richer palette of choices than the simple coin toss, and thus enable people to introduce a much richer range of novelty into their actions.
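To put rough numbers on that richer palette, here is some back-of-the-envelope arithmetic of my own, treating each draw as a uniform random choice, which is of course a simplification of how any of these systems is actually used. A coin toss yields one bit of unpredictability, a single I Ching hexagram six bits, and even one Tarot card a little more than that:
```python
from math import log2

# Bits of unpredictability in one uniform draw from each system.
# Outcome counts are the standard ones; moving lines, reversals, and
# the like would push the I Ching and Tarot figures higher still.
systems = {
    "coin toss": 2,
    "I Ching hexagram": 64,                   # 2**6 hexagrams
    "single Tarot card": 78,                  # 78-card deck
    "three-card Tarot spread": 78 * 77 * 76,  # ordered draw, no replacement
}

for name, outcomes in systems.items():
    print(f"{name}: {log2(outcomes):.1f} bits")
```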
Still, divination is a crutch, or at best a supplement; human beings have their own onboard novelty generators, which can do the job all by themselves if given half a chance.  The process involved here was understood by philosophers a long time ago, and no doubt the neurologists will get around to figuring it out one of these days as well. The core of it is that humans don’t respond directly to stimuli, external or internal.  Instead, they respond to their own mental representations of stimuli, which are constructed by the act of cognition and are laced with bucketloads of extraneous material garnered from memory and linked to the stimulus in uniquely personal, irrational, even whimsical ways, following loose and wildly unpredictable cascades of association and contiguity that have nothing to do with logic and everything to do with the roots of creativity. 
Each human society tries to give its children some approximation of its own culturally defined set of representations—that’s what’s going on when children learn language, pick up the customs of their community, ask for the same bedtime story to be read to them for the umpteenth time, and so on. Those culturally defined representations proceed to interact in various ways with the inborn, genetically defined representations that get handed out for free with each brand new human nervous system.  The existence of these biologically and culturally defined representations, and of various ways that they can be manipulated to some extent by other people with or without the benefit of mass media, make up the ostensible reason why the people mentioned above insist that free will doesn’t exist.
Here again, though, the fact that the human mind isn’t omnipotent doesn’t make it powerless. Think about what happens, say, when a straight stick is thrust into water at an angle, and the stick seems to pick up a sudden bend at the water’s surface, due to differential refraction in water and air. The illusion is as clear as anything, but if you show this to a child and let the child experiment with it, you can watch the representation “the stick is bent” give way to “the stick looks bent.” Notice what’s happening here: the stimulus remains the same, but the representation changes, and so do the actions that result from it. That’s a simple example of how representations create the possibility of freedom.
In the same way, when the media spouts some absurd bit of manipulative hogwash, if you take the time to think about it, you can watch your own representation shift from “that guy’s having an orgasm from slurping that fizzy brown sugar water” to “that guy’s being paid to pretend to have an orgasm, so somebody can try to convince me to buy that fizzy brown sugar water.” If you really pay attention, it may shift again to “why am I wasting my time watching this guy pretend to get an orgasm from fizzy brown sugar water?” and may even lead you to chuck your television out a second story window into an open dumpster, as I did to the last one I ever owned. (The flash and bang when the picture tube imploded, by the way, was far more entertaining than anything that had ever appeared on the screen.)
Human intelligence is limited. Our capacities for thinking are constrained by our heredity, our cultures, and our personal experiences—but then so are our capacities for the perception of color, a fact that hasn’t stopped artists from the Paleolithic to the present from putting those colors to work in a galaxy of dizzyingly original ways. A clear awareness of the possibilities and the limits of the human mind makes it easier to play the hand we’ve been dealt in Darwin’s casino—and it also points toward a generally unsuspected reason why civilizations come apart, which we’ll discuss next week.

The Dream of the Machine

Wed, 2015-07-01 16:02
As I type these words, it looks as though the wheels are coming off the global economy. Greece and Puerto Rico have both suspended payments on their debts, and China’s stock market, which spent the last year in a classic speculative bubble, is now in the middle of a classic speculative bust. Those of my readers who’ve read John Kenneth Galbraith’s lively history The Great Crash 1929 already know all about the Chinese situation, including the outcome—and since vast amounts of money from all over the world went into Chinese stocks, and most of that money is in the process of turning into twinkle dust, the impact of the crash will inevitably proliferate through the global economy.
So, in all probability, will the Greek and Puerto Rican defaults. In today’s bizarre financial world, the kind of bad debts that used to send investors backing away in a hurry attract speculators in droves, and so it turns out that some big New York hedge funds are in trouble as a result of the Greek default, and some of the same firms that got into trouble with mortgage-backed securities in the recent housing bubble are in the same kind of trouble over Puerto Rico’s unpayable debts. How far will the contagion spread? It’s anybody’s guess.
Oh, and on another front, nearly half a million acres of Alaska burned up in a single day last week—yes, the fires are still going—while ice sheets in Greenland are collapsing so frequently and forcefully that the resulting earthquakes are rattling seismographs thousands of miles away. These and other signals of a biosphere in crisis make good reminders of the fact that the current economic mess isn’t happening in a vacuum. As Ugo Bardi pointed out in a thoughtful blog post, finance is the flotsam on the surface of the ocean of real exchanges of real goods and services, and the current drumbeat of financial crises is symptomatic of the real crisis—the arrival of the limits to growth that so many people have been discussing, and so many more have been trying to ignore, for the last half century or so.
A great many people in the doomward end of the blogosphere are talking about what’s going on in the global economy and what’s likely to blow up next. Around the time the next round of financial explosions start shaking the world’s windows, a great many of those same people will likely be talking about what to do about it all.  I don’t plan on joining them in that discussion. As blog posts here have pointed out more than once, time has to be considered when getting ready for a crisis. The industrial world would have had to start backpedaling away from the abyss decades ago in order to forestall the crisis we’re now in, and the same principle applies to individuals.  The slogan “collapse now and avoid the rush!” loses most of its point, after all, when the rush is already under way.
Any of my readers who are still pinning their hopes on survival ecovillages and rural doomsteads they haven’t gotten around to buying or building yet, in other words, are very likely out of luck. They, like the rest of us, will be meeting this where they are, with what they have right now. This is ironic, in that ideas that might have been worth adopting three or four years ago are just starting to get traction now. I’m thinking here particularly of a recent article on how to use permaculture to prepare for a difficult future, which describes the difficult future in terms that will be highly familiar to readers of this blog. More broadly, there’s a remarkable amount of common ground between that article and the themes of my book Green Wizardry. The awkward fact remains that when the global banking industry shows every sign of freezing up the way it did in 2008, putting credit for land purchases out of reach of most people for years to come, the article’s advice may have come rather too late.
That doesn’t mean, of course, that my readers ought to crawl under their beds and wait for death. What we’re facing, after all, isn’t the end of the world—though it may feel like that for those who are too deeply invested, in any sense of that last word you care to use, in the existing order of industrial society. As Visigothic mommas used to remind their impatient sons, Rome wasn’t sacked in a day. The crisis ahead of us marks the end of what I’ve called abundance industrialism and the transition to scarcity industrialism, as well as the end of America’s global hegemony and the emergence of a new international order whose main beneficiary hasn’t been settled yet. Those paired transformations will most likely unfold across several decades of economic chaos, political turmoil, environmental disasters, and widespread warfare. Plenty of people got through the equivalent cataclysms of the first half of the twentieth century with their skins intact, even if the crisis caught them unawares, and no doubt plenty of people will get through the mess that’s approaching us in much the same condition.
Thus I don’t have any additional practical advice, beyond what I’ve already covered in my books and blog posts, to offer my readers just now. Those who’ve already collapsed and gotten ahead of the rush can break out the popcorn and watch what promises to be a truly colorful show.  Those who didn’t—well, you might as well get some popcorn going and try to enjoy the show anyway. If you come out the other side of it all, schoolchildren who aren’t even born yet may eventually come around to ask you awed questions about what happened when the markets crashed in ‘15.
In the meantime, while the popcorn is popping and the sidewalks of Wall Street await their traditional tithe of plummeting stockbrokers, I’d like to return to the theme of last week’s post and talk about the way that the myth of the machine—if you prefer, the widespread mental habit of thinking about the world in mechanistic terms—pervades and cripples the modern mind.
Of all the responses that last week’s post fielded, those I found most amusing, and also most revealing, were those that insisted that of course the universe is a machine, so is everything and everybody in it, and that’s that. That’s amusing because most of the authors of these comments made it very clear that they embraced the sort of scientific-materialist atheism that rejects any suggestion that the universe has a creator or a purpose. A machine, though, is by definition a purposive artifact—that is, it’s made by someone to do something. If the universe is a machine, then, it has a creator and a purpose, and if it doesn’t have a creator and a purpose, logically speaking, it can’t be a machine.
That sort of unintentional comedy inevitably pops up whenever people don’t think through the implications of their favorite metaphors. Still, chase that habit further along its giddy path and you’ll find a deeper absurdity at work. When people say “the universe is a machine,” unless they mean that statement as a poetic simile, they’re engaging in a very dubious sort of logic. As Alfred Korzybski pointed out a good many years ago, pretty much any time you say “this is that,” unless you implicitly or explicitly qualify what you mean in very careful terms, you’ve just babbled nonsense.
The difficulty lies in that seemingly innocuous word “is.” What Korzybski called the “is of identity”—the use of the word “is” to represent  =, the sign of equality—makes sense only in a very narrow range of uses.  You can use the “is of identity” with good results in categorical definitions; when I commented above that a machine is a purposive artifact, that’s what I was doing. Here is a concept, “machine;” here are two other concepts, “purposive” and “artifact;” the concept “machine” logically includes the concepts “purposive” and “artifact,” so anything that can be described by the words “a machine” can also be described as “purposive” and “an artifact.” That’s how categorical definitions work.
Let’s consider a second example, though: “a machine is a purple dinosaur.” That utterance uses the same structure as the one we’ve just considered.  I hope I don’t have to prove to my readers, though, that the concept “machine” doesn’t include the concepts “purple” and “dinosaur” in any but the most whimsical of senses.  There are plenty of things that can be described by the label “machine,” in other words, that can’t be described by the labels “purple” or “dinosaur.” The fact that some machines—say, electronic Barney dolls—can in fact be described as purple dinosaurs doesn’t make the definition any less silly; it simply means that the statement “no machine is a purple dinosaur” can’t be justified either.
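The same logic can be spelled out with sets, if that helps: a toy sketch with made-up membership lists, nothing more. A sound categorical definition means one category is wholly contained in another, while the purple-dinosaur case is a mere overlap.
```python
# Toy sketch: made-up membership lists, purely to illustrate the logic.
machines  = {"steam engine", "tractor", "electronic Barney doll"}
purposive = {"steam engine", "tractor", "electronic Barney doll", "law", "ritual"}
artifacts = {"steam engine", "tractor", "electronic Barney doll", "statue"}

# Categorical definition: everything describable as a machine is also
# describable as purposive and as an artifact.
print(machines <= purposive and machines <= artifacts)   # True

# Mere overlap: some machines happen to be purple dinosaurs, but the
# category "machine" is not contained in "purple dinosaur".
purple_dinosaurs = {"electronic Barney doll", "plush toy"}
print(machines <= purple_dinosaurs)                       # False: not a definition
print(bool(machines & purple_dinosaurs))                  # True: overlap only
```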
With that in mind, let’s take a closer look at the statement “the universe is a machine.” As pointed out earlier, the concept “machine” implies the concepts “purposive” and “artifact,” so if the universe is a machine, somebody made it to carry out some purpose. Those of my readers who happen to belong to Christianity, Islam, or another religion that envisions the universe as the creation of one or more deities—not all religions make this claim, by the way—will find this conclusion wholly unproblematic. My atheist readers will disagree, of course, and their reaction is the one I want to discuss here. (Notice how “is” functions in the sentence just uttered: “the reaction of the atheists” equals “the reaction I want to discuss.” This is one of the few other uses of “is” that doesn’t tend to generate nonsense.)
In my experience, at least, atheists faced with the argument about the meaning of the word “machine” I’ve presented here pretty reliably respond with something like “It’s not a machine in that sense.” That response takes us straight to the heart of the logical problems with the “is of identity.” In what sense is the universe a machine? Pursue the argument far enough, and unless the atheist storms off in a huff—which admittedly tends to happen more often than not—what you’ll get amounts to “the universe and a machine share certain characteristics in common.” Go further still—and at this point the atheist will almost certainly storm off in a huff—and you’ll discover that the characteristics that the universe is supposed to share with a machine are all things we can’t actually prove one way or another about the universe, such as whether it has a creator or a purpose.
The statement “the universe is a machine,” in other words, doesn’t do what it appears to do. It appears to state a categorical identity; it actually states an unsupported generalization in absolute terms. It takes a mental model abstracted from one corner of human experience and applies it to something unrelated.  In this case, for polemic reasons, it does so in a predictably one-sided way: deductions approved by the person making the statement (“the universe is a machine, therefore it lacks life and consciousness”) are acceptable, while deductions the person making the statement doesn’t like (“the universe is a machine, therefore it was made by someone for some purpose”) get the dismissive response noted above.
This sort of doublethink appears all through the landscape of contemporary nonconversation and nondebate, to be sure, but the problems with the “is of identity” don’t stop with its polemic abuse. Any time you say “this is that,” and mean something other than “this has some features in common with that,” you’ve just fallen into one of the core booby traps hardwired into the structure of human thought.
Human beings think in categories. That’s what made ancient Greek logic, which takes categories as its basic element, so massive a revolution in the history of human thinking: by watching the way that one category includes or excludes another, which is what the Greek logicians did, you can squelch a very large fraction of human stupidities before they get a foothold. What Alfred Korzybski pointed out, in effect, is that there’s a metalogic that the ancient Greeks didn’t get to, and logical theorists since their time haven’t really tackled either: the extremely murky relationship between the categories we think with and the things we experience, which don’t come with category labels spraypainted on them.
Here is a green plant with a woody stem. Is it a tree or a shrub? That depends on exactly where you draw the line between those two categories, and as any botanist can tell you, that’s neither an easy nor an obvious thing. As long as you remember that categories exist within the human mind as convenient handles for us to think with, you can navigate around the difficulties, but when you slip into thinking that the categories are more real than the things they describe, you’re in deep, deep trouble.
It’s not at all surprising that human thought should have such problems built into it. If, as I do, you accept the Darwinian thesis that human beings evolved out of prehuman primates by the normal workings of the laws of evolution, it follows logically that our nervous systems and cognitive structures didn’t evolve for the purpose of understanding the truth about the cosmos; they evolved to assist us in getting food, attracting mates, fending off predators, and a range of similar, intellectually undemanding tasks. If, as many of my theist readers do, you believe that human beings were created by a deity, the yawning chasm between creator and created, between an infinite and a finite intelligence, stands in the way of any claim that human beings can know the unvarnished truth about the cosmos. Neither viewpoint supports the claim that a category created by the human mind is anything but a convenience that helps our very modest mental powers grapple with an ultimately incomprehensible cosmos.
Any time human beings try to make sense of the universe or any part of it, in turn, they have to choose from among the available categories in an attempt to make the object of inquiry fit the capacities of their minds. That’s what the founders of the scientific revolution did in the seventeenth century, by taking the category of “machine” and applying it to the universe to see how well it would fit. That was a perfectly rational choice from within their cultural and intellectual standpoint. The founders of the scientific revolution were Christians to a man, and some of them (for example, Isaac Newton) were devout even by the standards of the time; the idea that the universe had been made by someone for some purpose, after all, wasn’t problematic in the least to people who took it as given that the universe was made by God for the purpose of human salvation. It was also a useful choice in practical terms, because it allowed certain features of the universe—specifically, the behavior of masses in motion—to be accounted for and modeled with a clarity that previous categories hadn’t managed to achieve.
The fact that one narrowly defined aspect of the universe seems to behave like a machine, though, does not prove that the universe is a machine, any more than the fact that one machine happens to look like a purple dinosaur proves that all machines are purple dinosaurs. The success of mechanistic models in explaining the behavior of masses in motion proved that mechanical metaphors are good at fitting some of the observed phenomena of physics into a shape that’s simple enough for human cognition to grasp, and that’s all it proved. To go from that modest fact to the claim that the universe and everything in it are machines involves an intellectual leap of pretty spectacular scale. Part of the reason that leap was taken in the seventeenth century was the religious frame of scientific inquiry at that time, as already mentioned, but there was another factor, too.
It’s a curious fact that mechanistic models of the universe appeared in western European cultures, and became wildly popular there, well before the machines did. In the early seventeenth century, machines played a very modest role in the life of most Europeans; most tasks were done using hand tools powered by human and animal muscle, the way they had been done since the dawn of the agricultural revolution eight millennia or so before. The most complex devices available at the time were pendulum clocks, printing presses, handlooms, and the like—you know, the sort of thing that people these days use instead of machines when they want to get away from technology.
For reasons that historians of ideas are still trying to puzzle out, though, western European thinkers during these same years were obsessed with machines, and with mechanical explanations for the universe. The latter ranged from the plausible to the frankly preposterous—René Descartes, for example, proposed a theory of gravity in which little corkscrew-shaped particles went zooming up from the earth to screw themselves into pieces of matter and yank them down. Until Isaac Newton, furthermore, theories of nature based on mechanical models didn’t actually explain that much, and until the cascade of inventive adaptations of steam power that ended with James Watt’s epochal steam engine nearly a century after Newton, the idea that machines could elbow aside craftspeople using hand tools and animals pulling carts was an unproven hypothesis. Yet a great many people in western Europe believed in the power of the machine as devoutly as their ancestors had believed in the power of the bones of the local saints.
A habit of thought very widespread in today’s culture assumes that technological change happens first and the world of ideas changes in response to it. The facts simply won’t support that claim, though. As the history of mechanistic ideas in science shows clearly, the ideas come first and the technologies follow—and there’s good reason why this should be so. Technologies don’t invent themselves, after all. Somebody has to put in the work to invent them, and then other people have to invest the resources to take them out of the laboratory and give them a role in everyday life. The decisions that drive invention and investment, in turn, are powerfully shaped by cultural forces, and these are by no means as rational as the people influenced by them generally like to think.
People in western Europe and a few of its colonies dreamed of machines, and then created them. They dreamed of a universe reduced to the status of a machine, a universe made totally transparent to the human mind and totally subservient to the human will, and then set out to create it. That latter attempt hasn’t worked out so well, for a variety of reasons, and the rising tide of disasters sketched out in the first part of this week’s post unfolds in large part from the failure of that misbegotten dream. In the next few posts, I want to talk about why that failure was inevitable, and where we might go from here.

The Delusion of Control

Wed, 2015-06-24 15:45
I'm sure most of my readers have heard at least a little of the hullaballoo surrounding the release of Pope Francis’ encyclical on the environment, Laudato Si. It’s been entertaining to watch, not least because so many politicians in the United States who like to use Vatican pronouncements as window dressing for their own agendas have been left scrambling for cover now that the wind from Rome is blowing out of a noticeably different quarter.
Take Rick Santorum, a loudly Catholic Republican who used to be in the US Senate and now spends his time entertaining a variety of faux-conservative venues with his signature flavor of hate speech. Santorum loves to denounce fellow Catholics who disagree with Vatican edicts as “cafeteria Catholics,” and announced a while back that John F. Kennedy’s famous defense of the separation of church and state made him sick to his stomach. In the wake of Laudato Si, care to guess who’s elbowing his way to the head of the cafeteria line? Yes, that would be Santorum, who’s been insisting since the encyclical came out that the Pope is wrong and American Catholics shouldn’t be obliged to listen to him.
What makes all the yelling about Laudato Si a source of wry amusement to me is that it’s not actually a radical document at all. It’s a statement of plain common sense. It should have been obvious all along that treating the air as a gaseous sewer was a really dumb idea, and in particular, that dumping billions upon billions of tons of infrared-absorbing gases into the atmosphere would change its capacity for heat retention in unwelcome ways. It should have been just as obvious that all the other ways we maltreat the only habitable planet we’ve got were guaranteed to end just as badly. That this wasn’t obvious—that huge numbers of people find it impossible to realize that you can only wet your bed so many times before you have to sleep in a damp spot—deserves much more attention than it’s received so far.
It’s really a curious blindness, when you think about it. Since our distant ancestors climbed unsteadily down from the trees of late Pliocene Africa, the capacity to anticipate threats and do something about them has been central to the success of our species. A rustle in the grass might indicate the approach of a leopard, a series of unusually dry seasons might turn the local water hole into undrinkable mud: those of our ancestors who paid attention to such things, and took constructive action in response to them, were more likely to survive and leave offspring than those who shrugged and went on with business as usual. That’s why traditional societies around the world are hedged about with a dizzying assortment of taboos and customs meant to guard against every conceivable source of danger.
Somehow, though, we got from that to our present situation, where substantial majorities across the world’s industrial nations seem unable to notice that something bad can actually happen to them, where thoughtstoppers of the “I’m sure they’ll think of something” variety take the place of thinking about the future, and where, when something bad does happen to someone, the immediate response is to find some way to blame the victim for what happened, so that everyone else can continue to believe that the same thing can’t happen to them. A world where Laudato Si is controversial, not to mention necessary, is a world that’s become dangerously detached from the most basic requirements of collective survival.
For quite some time now, I’ve been wondering just what lies behind the bizarre paralogic with which most people these days turn blank and uncomprehending eyes on their onrushing fate. The process of writing last week’s blog post on the astonishing stupidity of US foreign policy, though, seems to have helped me push through to clarity on the subject. I may be wrong, but I think I’ve figured it out.
Let’s begin with the issue at the center of last week’s post, the really remarkable cluelessness with which US policy toward Russia and China has convinced both nations that they have nothing to gain from cooperating with a US-led global order, and are better off allying with each other and opposing the US instead. US politicians and diplomats made that happen, and the way they did it was set out in detail in a recent and thoughtful article by Paul R. Pillar in the online edition of The National Interest.
Pillar’s article pointed out that the United States has evolved a uniquely counterproductive notion of how negotiation works. Elsewhere on the planet, people understand that when you negotiate, you’re seeking a compromise where you get whatever you most need out of the situation, while the other side gets enough of its own agenda met to be willing to cooperate. To the US, by contrast, negotiation means that the other side complies with US demands, and that’s the end of it. The idea that other countries might have their own interests, and might expect to receive some substantive benefit in exchange for cooperation with the US, has apparently never entered the heads of official Washington—and the absence of that idea has resulted in the cascading failures of US foreign policy in recent years.
It’s only fair to point out that the United States isn’t the only practitioner of this kind of self-defeating behavior. A first-rate example has been unfolding in Europe in recent months—yes, that would be the ongoing non-negotiations between the Greek government and the so-called troika, the coalition of unelected bureaucrats who are trying to force Greece to keep pursuing a failed economic policy at all costs. The attitude of the troika is simple: the only outcome they’re willing to accept is capitulation on the part of the Greek government, and they’re not willing to give anything in return. Every time the Greek government has tried to point out to the troika that negotiation usually involves some degree of give and take, the bureaucrats simply give them a blank look and reiterate their previous demands.
That attitude has had drastic political consequences. It’s already convinced Greeks to elect a radical leftist government in place of the compliant centrists who ruled the country in the recent past. If the leftists fold, the neofascist Golden Dawn party is waiting in the wings. The problem with the troika’s stance is simple: the policies they’re insisting that Greece must accept have never—not once in the history of market economies—produced anything but mass impoverishment and national bankruptcy. The Greeks, among many other people, know this; they know that Greece will not return to prosperity until it defaults on its foreign debts the way Russia did in 1998, and scores of other countries have done as well.
If the troika won’t settle for a negotiated debt-relief program, and the current Greek government won’t default, the Greeks will elect someone else who will, no matter who that someone else happens to be; it’s that, after all, or continue along a course that’s already caused the Greek economy to lose a quarter of its precrisis GDP, and shows no sign of stopping anywhere this side of failed-state status. That this could quite easily hand Greece over to a fascist despot is just one of the potential problems with the troika’s strategy. It’s astonishing that so few people in Europe seem to be able to remember what happened the last time an international political establishment committed itself to the preservation of a failed economic orthodoxy no matter what; those of my readers who don’t know what I’m talking about may want to pick up any good book on the rise of fascism in Europe between the wars.
Let’s step back from specifics, though, and notice the thinking that underlies the dysfunctional behavior in Washington and Brussels alike. In both cases, the people who think they’re in charge have lost track of the fact that Russia, China, and Greece have needs, concerns, and interests of their own, and aren’t simply dolls that the US or EU can pose at will. These other nations can, perhaps, be bullied by threats over the short term, but that’s a strategy with a short shelf life.  Successful diplomacy depends on giving the other guy reasons to want to cooperate with you, while demanding cooperation at gunpoint guarantees that the other guy is going to look for ways to shoot back.
The same sort of thinking in a different context underlies the brutal stupidity of American drone attacks in the Middle East. Some wag in the media pointed out a while back that the US went to war against an enemy 5,000 strong, we’ve killed 10,000 of them, and now there are only 20,000 left. That’s a good summary of the situation; the US drone campaign has been a total failure by every objective measure, having worked out consistently to the benefit of the Muslim extremist groups against which it’s aimed, and yet nobody in official Washington seems capable of noticing this fact.
It’s hard to miss the conclusion, in fact, that the Obama administration thinks that in pursuing its drone-strike program, it’s playing some kind of video game, which the United States can win if it can rack up enough points. Notice the way that every report that a drone has taken out some al-Qaeda leader gets hailed in the media: hey, we nailed a commander, doesn’t that boost our score by five hundred? In the real world, meanwhile, the indiscriminate slaughter of civilians by US drone strikes has become a core factor convincing Muslims around the world that the United States is just as evil as the jihadis claim, and thus sending young men by the thousands to join the jihadi ranks. Has anyone in the Obama administration caught on to this straightforward arithmetic of failure? Surely you jest.
For that matter, I wonder how many of my readers recall the much-ballyhooed “surge” in Afghanistan several years back.  The “surge” was discussed at great length in the US media before it was enacted on Afghan soil; talking heads of every persuasion babbled learnedly about how many troops would be sent, how long they’d stay, and so on. It apparently never occurred to anybody in the Pentagon or the White House that the Taliban could visit websites and read newspapers, and get a pretty good idea of what the US forces in Afghanistan were about to do. That’s exactly what happened, too; the Taliban simply hunkered down for the duration, and popped back up the moment the extra troops went home.
Both these examples of US military failure are driven by the same problem discussed earlier in the context of diplomacy: an inability to recognize that the other side will reliably respond to US actions in ways that further its own agenda, rather than playing along with the US. More broadly, it’s the same failure of thought that leads so many people to assume that the biosphere is somehow obligated to give us all the resources we want and take all the abuse we choose to dump on it, without ever responding in ways that might inconvenience us.
We can sum up all these forms of acquired stupidity in a single sentence: most people these days seem to have lost the ability to grasp that the other side can learn.
The entire concept of learning has been so poisoned by certain bad habits of contemporary thought that it’s probably necessary to pause here. Learning, in particular, isn’t the same thing as rote imitation. If you memorize a set of phrases in a foreign language, for example, that doesn’t mean you’ve learned that language. To learn the language means to grasp the underlying structure, so that you can come up with your own phrases and say whatever you want, not just what you’ve been taught to say.
In the same way, if you memorize a set of disconnected factoids about history, you haven’t learned history. This is something of a loaded topic right now in the US, because recent “reforms” in the American  public school system have replaced learning with rote memorization of disconnected factoids that are then regurgitated for multiple choice tests. This way of handling education penalizes those children who figure out how to learn, since they might well come up with answers that differ from the ones the test expects. That’s one of many ways that US education these days actively discourages learning—but that’s a subject for another post.
To learn is to grasp the underlying structure of a given subject of knowledge, so that the learner can come up with original responses to it. That’s what Russia and China did; they grasped the underlying structure of US diplomacy, figured out that they had nothing to gain by cooperating with that structure, and came up with a creative response, which was to ally against the United States. That’s what Greece is doing, too.  Bit by bit, the Greeks seem to be figuring out the underlying structure of troika policy, which amounts to the systematic looting of southern Europe for the benefit of Germany and a few of its allies, and are trying to come up with a response that doesn’t simply amount to unilateral submission.
That’s also what the jihadis and the Taliban are doing in the face of US military activity. If life hands you lemons, as the saying goes, make lemonade; if the US hands you drone strikes that routinely slaughter noncombatants, you can make very successful propaganda out of it—and if the US hands you a surge, you roll your eyes, hole up in your mountain fastnesses, and wait for the Americans to get bored or distracted, knowing that this won’t take long. That’s how learning works, but that’s something that US planners seem congenitally unable to take into account.
The same analysis, interestingly enough, makes just as much sense when applied to nonhuman nature. As Ervin Laszlo pointed out a long time ago in Introduction to Systems Philosophy, any sufficiently complex system behaves in ways that approximate intelligence.  Consider the way that bacteria respond to antibiotics. Individually, bacteria are as dumb as politicians, but their behavior on the species level shows an eerie similarity to learning; faced with antibiotics, a species of bacteria “tries out” different biochemical approaches until it finds one that sidesteps the antibiotic. In the same way, insects and weeds “try out” different responses to pesticides and herbicides until they find whatever allows them to munch on crops or flourish in the fields no matter how much poison the farmer sprays on them.
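To make the mechanism behind that species-level “learning” concrete, here is a minimal sketch of variation plus selection at work, written in Python. Every number in it is invented purely for illustration; nothing is drawn from any real organism or study. The point is simply that random “tries” plus differential survival are enough to carry a rare resistance trait through an entire population in a handful of generations.

```python
import random

# A minimal, purely illustrative sketch: a bacterial population under
# antibiotic pressure "tries out" resistance through random mutation, and
# selection does the rest. All numbers here are invented for illustration.

random.seed(42)

POP_SIZE = 10_000
MUTATION_RATE = 1e-3        # chance a susceptible offspring carries a new resistance mutation
SURVIVAL_RESISTANT = 0.95   # chance a resistant cell survives the antibiotic
SURVIVAL_SUSCEPTIBLE = 0.05 # chance a susceptible cell survives the antibiotic

def next_generation(resistant, susceptible):
    """One round of antibiotic exposure followed by regrowth to POP_SIZE."""
    # Selection: most susceptible cells die; most resistant cells survive.
    resistant = sum(1 for _ in range(resistant) if random.random() < SURVIVAL_RESISTANT)
    susceptible = sum(1 for _ in range(susceptible) if random.random() < SURVIVAL_SUSCEPTIBLE)
    # Regrowth: survivors repopulate in proportion to their numbers, and a
    # trickle of new mutations appears among the susceptible offspring.
    total = max(resistant + susceptible, 1)
    new_resistant = round(POP_SIZE * resistant / total)
    new_susceptible = POP_SIZE - new_resistant
    mutants = sum(1 for _ in range(new_susceptible) if random.random() < MUTATION_RATE)
    return new_resistant + mutants, new_susceptible - mutants

resistant, susceptible = 0, POP_SIZE
for generation in range(1, 11):
    resistant, susceptible = next_generation(resistant, susceptible)
    print(f"generation {generation:2d}: {100 * resistant / POP_SIZE:5.1f}% resistant")
```

Run it and the share of resistant cells climbs from a fraction of a percent to nearly the whole population within half a dozen rounds of exposure, with no individual cell ever “deciding” anything.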
We can even apply the same logic to the environmental crisis as a whole. Complex systems tend to seek equilibrium, and will respond to anything that pushes them away from equilibrium by pushing back the other way. Any field biologist can show you plenty of examples: if conditions allow more rabbits to be born in a season, for instance, the population of hawks and foxes rises accordingly, reducing the rabbit surplus to a level the ecosystem can support. As humanity has put increasing pressure on the biosphere, the biosphere has begun to push back with increasing force, in an increasing number of ways; is it too much to think of this as a kind of learning, in which the biosphere “tries out” different ways to balance out the abusive behavior of humanity, until it finds one or more that work?
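The same kind of sketch works for the equilibrium-seeking behavior just described. The snippet below runs the classic Lotka-Volterra predator-prey equations with made-up parameters that describe no real ecosystem; it simply shows the dynamic in miniature. The two populations hold steady at their balance point until a sudden surplus of rabbits appears, at which point the fox population rises in response, pulls rabbit numbers back down, and the system settles into cycling around its old equilibrium.

```python
# A minimal sketch of the "pushing back" described above, using the classic
# Lotka-Volterra predator-prey equations with invented, purely illustrative
# parameters (no real field data behind these numbers).

RABBIT_GROWTH = 1.0     # rabbit births per rabbit per unit time
PREDATION = 0.1         # rabbits lost per rabbit per fox per unit time
FOX_EFFICIENCY = 0.075  # fox growth per fox per rabbit per unit time
FOX_DEATH = 1.5         # fox deaths per fox per unit time
DT = 0.01               # time step for semi-implicit Euler integration

rabbits, foxes = 20.0, 10.0   # the equilibrium point for these parameters

for step in range(3001):
    t = step * DT
    if step % 200 == 0:
        print(f"t={t:5.1f}  rabbits={rabbits:5.1f}  foxes={foxes:5.1f}")
    if step == 1000:              # a one-time pulse of extra rabbits
        rabbits += 15.0
    # Update rabbits first, then use the new value for the fox update;
    # this keeps the simple integration scheme well behaved.
    rabbits += (RABBIT_GROWTH * rabbits - PREDATION * rabbits * foxes) * DT
    foxes += (FOX_EFFICIENCY * rabbits * foxes - FOX_DEATH * foxes) * DT
```

Nothing in the model wants anything, yet the printout shows the system answering the disturbance and working its way back toward balance, which is exactly the kind of response the biosphere keeps making to us.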
Now of course it’s long been a commonplace of modern thought that natural systems can’t possibly learn. The notion that nature is static, timeless, and unresponsive, a passive stage on which human beings alone play active roles, is welded into modern thought, unshaken even by the realities of biological evolution or the rising tide of evidence that natural systems are in fact quite able to adapt their way around human meddling. There’s a long and complex history to the notion of passive nature, but that’s a subject for another day; what interests me just now is that since 1990 or so, the governing classes of the United States, and some other Western nations as well, have applied the same frankly delusional logic to everything in the world other than themselves.
“We’re an empire now, and when we act, we create our own reality,” neoconservative guru Karl Rove is credited with saying to reporter Ron Suskind. “We’re history’s actors, and you, all of you, will be left to just study what we do.” That seems to be the thinking that guides the US government these days, on both sides of the supposed partisan divide. Obama says we’re in a recovery, and if the economy fails to act accordingly, why, rooms full of industrious flacks churn out elaborately fudged statistics to erase that unwelcome reality. That history’s self-proclaimed actors might turn out to be just one more set of flotsam awash on history’s vast tides has never entered their darkest dream.
Let’s step back from specifics again, though. What’s the source of this bizarre paralogic—the delusion that leads politicians to think that they create reality, and that everyone and everything else can only fill the roles they’ve been assigned by history’s actors?  I think I know. I think it comes from a simple but remarkably powerful fact, which is that the people in question, along with most people in the privileged classes of the industrial world, spend most of their time, from childhood on, dealing with machines.
We can define a machine as a subset of the universe that’s been deprived of the capacity to learn. The whole point of building a machine is that it does what you want, when you want it, and nothing else. Flip the switch on, and it turns on and goes through whatever rigidly defined set of behaviors it’s been designed to do; flip the switch off, and it stops. It may be fitted with controls, so you can manipulate its behavior in various tightly limited ways; nowadays, especially when computer technology is involved, the set of behaviors assigned to it may be complex enough that an outside observer may be fooled into thinking that there’s learning going on. There’s no inner life behind the facade, though.  It can’t learn, and to the extent that it pretends to learn, what happens is the product of the sort of rote memorization described above as the antithesis of learning.
A machine that learned would be capable of making its own decisions and coming up with a creative response to your actions—and that’s the opposite of what machines are meant to do, because that response might well involve frustrating your intentions so the machine can get what it wants instead. That’s why the trope of machines going to war against human beings has so large a presence in popular culture: it’s exactly because we expect machines not to act like people, not to pursue their own needs and interests, that the thought of machines acting the way we do gets so reliable a frisson of horror.
The habit of thought that treats the rest of the cosmos as a collection of machines, existing only to fulfill whatever purpose they might be assigned by their operators, is another matter entirely. Its origins can be traced back to the dawning of the scientific revolution in the seventeenth century, when a handful of thinkers first began to suggest that the universe might not be a vast organism—as everybody in the western world had presupposed for millennia before then—but might instead be a vast machine. It’s indicative that one immediate and popular response to this idea was to insist that other living things were simply “meat machines” who didn’t actually suffer pain under the vivisector’s knife, but had been designed by God to imitate sounds of pain in order to inspire feelings of pity in human beings.
The delusion of control—the conviction, apparently immune to correction by mere facts, that the world is a machine incapable of doing anything but the things we want it to do—pervades contemporary life in the world’s industrial societies. People in those societies spend so much more time dealing with machines than they do interacting with other people and other living things without a machine interface getting in the way that it’s no wonder this delusion is so widespread. As long as it retains its grip, though, we can expect the industrial world, and especially its privileged classes, to stumble onward from one preventable disaster to another. That’s the inner secret of the delusion of control, after all: those who insist on seeing the world in mechanical terms end up behaving mechanically themselves. Those who deny all other things the ability to learn lose the ability to learn from their own mistakes, and lurch robotically onward along a trajectory that leads straight to the scrapheap of the future.

An Affirming Flame

Wed, 2015-06-17 16:57
According to an assortment of recent news stories, this Thursday, June 18, is the make-or-break date by which a compromise has to be reached between Greece and the EU if a Greek default, with the ensuing risk of a potential Greek exit from the Eurozone, is to be avoided. If that’s more than just media hype, there’s a tremendous historical irony in the fact.  June 18 is after all the 200th anniversary of the Battle of Waterloo, where a previous attempt at European political and economic integration came to grief.
Now of course there are plenty of differences between the two events. In 1815 the preferred instrument of integration was raw military force; in 2015, for a variety of reasons, a variety of less overt forms of political and economic pressure have taken the place of Napoleon’s Grande Armée. The events of 1815 were also much further along the curve of defeat than those of 2015.  Waterloo was the end of the road for France’s dream of pan-European empire, while the current struggles over the Greek debt are taking place at a noticeably earlier milepost along the same road. The faceless EU bureaucrats who are filling Napoleon’s role this time around thus won’t be on their way to Elba for some time yet.
“What discords will drive Europe into that artificial unity—only dry or drying sticks can be tied into a bundle—which is the decadence of every civilization?” William Butler Yeats wrote that in 1936. It was a poignant question but also a highly relevant one, since the discords in question were moving rapidly toward explosion as he penned the last pages of A Vision, where those words appear.  Like most of those who see history in cyclical terms, Yeats recognized that the patterns that recur from age to age  are trends and motifs rather than exact narratives.  The part played by a conqueror in one era can end up in the hands of a heroic failure in the next, for circumstances can define a historical role but not the irreducibly human strengths and foibles of the person who happens to fill it.
Thus it’s not too hard to look at the rising spiral of stresses in the European Union just now and foresee the eventual descent of the continent into a mix of domestic insurgency and authoritarian nationalism, with the oncoming tide of mass migration from Africa and the Middle East adding further pressure to an already explosive mix. Exactly how that will play out over the next century, though, is a very tough question to answer. A century from now, due to raw demography, many countries in Europe will be majority-Muslim nations that look to Mecca for the roots of their faith and culture—but which ones, and how brutal or otherwise will the transition be? That’s impossible to know in advance.
There are plenty of similar examples just now; for the student of historical cycles, 2015 practically defines the phrase “target-rich environment.” Still, I want to focus on something a little different here. Partly, this is because the example I have in mind makes a good opportunity to point out the way that what philosophers call the contingent nature of events—in less highflown language, the sheer cussedness of things—keeps history’s dice constantly rolling. Partly, though, it’s because this particular example is likely to have a substantial impact on the future of everyone reading this blog.
Last year saw a great deal of talk in the media about possible parallels between the current international situation and that of the world precisely a century ago, in the weeks leading up to the outbreak of the First World War.  Mind you, since I contributed to that discussion, I’m hardly in a position to reject the parallels out of hand. Still, the more I’ve observed the current situation, the more I’ve come to think that a different date makes a considerably better match to present conditions. To be precise, instead of a replay of 1914, I think we’re about to see an equivalent of 1939—but not quite the 1939 we know.
Two entirely contingent factors, added to all the other pressures driving toward that conflict, made the Second World War what it was. The first, of course, was the personality of Adolf Hitler. It was probably a safe bet that somebody in Weimar Germany would figure out how to build a bridge between the politically active but fragmented nationalist Right and the massive but politically inert German middle classes, restore Germany to great-power status, and gear up for a second attempt to elbow aside the British Empire. That the man who happened to do these things was an eccentric anti-Semitic ideologue who combined shrewd political instincts, utter military incompetence, and a frankly psychotic faith in his own supposed infallibility, though, was in no way required by the logic of history.
Had Corporal Hitler taken an extra lungful of gas on the Western Front, someone else would likely have filled the same role in the politics of the time. We don’t even have to consider what might have happened if the nation that birthed Frederick the Great and Otto von Bismarck had come up with a third statesman of the same caliber. If the German head of state in 1939 had been merely a capable pragmatist with adequate government and military experience, and guided Germany’s actions by a logic less topsy-turvy than Hitler’s, the trajectory of those years would have been far different.
The second contingent factor that defined the outcome of the great wars of the twentieth century is broader in focus than the quirks of a single personality, but it was just as subject to those vagaries that make hash out of attempts at precise historical prediction. As discussed in an earlier post on this blog, it was by no means certain that America would be Britain’s ally when war finally came. From the Revolution onward, Britain was in many Americans’ eyes the national enemy; as late as the 1930s, when the US Army held its summer exercises, the standard scenario involved a British invasion of US territory.
All along, there was an Anglophile party in American cultural life, and its ascendancy in the years after 1900 played a major role in bringing the United States into two world wars on Britain’s side. Still, there was a considerably more important factor in play, which was a systematic British policy of conciliating the United States. From the American Civil War on, Britain allowed the United States liberties it would never have given any other power. When the United States expanded its influence in Latin America and the Caribbean, Britain allowed itself to be upstaged there; when the United States shook off its isolationism and built a massive blue-water navy, the British even allowed US naval vessels to refuel at British coaling stations during the global voyage of the “Great White Fleet” in 1907-9.
This was partly a reflection of the common cultural heritage that made many British politicians think of the United States as a sort of boisterous younger brother of theirs, and partly a cold-eyed recognition, in the wake of the Civil War, that war between Britain and the United States would almost certainly lead to a US invasion of Canada that Britain was very poorly positioned to counter. Still, there was another issue of major importance. To an extent few people realized at the time, the architecture of European peace after Waterloo depended on political arrangements that kept the German-speaking lands of the European core splintered into a diffuse cloud of statelets too small to threaten any of the major powers.
The great geopolitical fact of the 1860s was the collapse of that cloud into the nation of Germany, under the leadership of the dour northeastern kingdom of Prussia. In 1866, the Prussians pounded the stuffing out of Austria and brought the rest of the German states into a federation; in 1870-1871, the Prussians and their allies did the same thing to France, which was a considerably tougher proposition—this was the same French nation, remember, which brought Europe to its knees in Napoleon’s day—and the federation became the German Empire. Austria was widely considered the third great power in Europe until 1866; until 1870, France was the second; everybody knew that sooner or later the Germans were going to take on great power number one.
British policy toward the United States from 1871 onward was thus tempered by the harsh awareness that Britain could not afford to alienate a rising power who might become an ally, or at least a friendly neutral, when the inevitable war with Germany arrived. Above all, an alliance between Germany and the United States would have been Britain’s death warrant, and everyone in the Foreign Office and the Admiralty in London had to know that. The thought of German submarines operating out of US ports, German and American fleets combining to take on the Royal Navy, and American armies surging into Canada and depriving Britain of a critical source of raw materials and recruits while the British Army was pinned down elsewhere, must have given British planners many sleepless nights.
After 1918, that recognition must have been even more sharply pointed, because US loans and munitions shipments played a massive role in saving the western Allies from collapse in the face of the final German offensive in the spring of 1918, and turned the tide in a war that, until then, had largely gone Germany’s way. During the two decades leading up to 1939, as Germany recovered and rearmed, British governments did everything they could to keep the United States on their side, with results that paid off handsomely when the Second World War finally came.
Let’s imagine, though, an alternative timeline in which the Foreign Office and the Admiralty from 1918 on are staffed by idiots. Let’s further imagine that Parliament is packed with clueless ideologues whose sole conception of foreign policy is that everyone, everywhere, ought to be bludgeoned into compliance with Britain’s edicts, no matter how moronic those happen to be. Let’s say, in particular, that one British government after another conducts its policy toward the United States on the basis of smug self-centered arrogance, and any move the US makes to assert itself on the international stage can count on an angry response from London. The United States launches an aircraft carrier? A threat to world peace, the London Times roars. The United States exerts diplomatic pressure on Mexico, and builds military bases in Panama? British diplomats head for the Caribbean and Latin America to stir up as much opposition to America’s agenda as possible.
Let’s say, furthermore, that in this alternative timeline, Adolf Hitler did indeed take one too many deep breaths on the Western Front, and lies in a military cemetery, one more forgotten casualty of the Great War. In his absence, the German Workers Party remains a fringe group, and the alliance between the nationalist Right and the middle classes is built instead by the Deutsche Volksfreiheitspartei (DVFP), which seizes power in 1934. Ulrich von Hassenstein, the new Chancellor, is a competent insider who knows how to listen to his diplomats and General Staff, and German foreign and military policy under his leadership pursues the goal of restoring Germany to world-power status using considerably less erratic means than those used by von Hassenstein’s equivalent in our timeline.
Come 1939, finally, as rising tensions between Germany and the Anglo-French alliance over Poland’s status move toward war, Chancellor von Hassenstein welcomes US President Charles Lindbergh to Berlin, where the two heads of state sign a galaxy of treaties and trade agreements and talk earnestly to the media about the need to establish a multipolar world order to replace Britain’s global hegemony. A second world war is in the offing, but the shape of that war will be very different from the one that broke out in our version of 1939, and while the United States almost certainly will be among the victors, Britain almost certainly will not.
Does all this sound absurd? Let’s change the names around and see.
Just as the great rivalry of the first half of the twentieth century was fought out between Britain and Germany, the great rivalry of the century’s second half was between the United States and Russia. If nuclear weapons hadn’t been invented, it’s probably a safe bet that at some point the rivalry would have ended in another global war.  As it was, the threat of mutual assured destruction meant that the struggle for global power had to be fought out less directly, in a flurry of proxy wars, sponsored insurgencies, economic warfare, subversion, sabotage, and bare-knuckle diplomacy. In that war, the United States came out on top, and Soviet Russia went the way of Imperial Germany, plunging into the same sort of political and economic chaos that beset the Weimar Republic in its day.
The supreme strategic imperative of the United States in that war was finding ways to drive as deep a wedge as possible between Russia and China, in order to keep them from taking concerted action against the US. That wasn’t all that difficult a task, since the two nations have very little in common and many conflicting interests. Nixon’s 1972 trip to China was arguably the defining moment in the Cold War, the point at which China’s separation from the Soviet bloc became total and Chinese integration with the American economic order began. From that point on, for Russia, it was basically all downhill.
In the aftermath of Russia’s defeat, the same strategic imperative remained, but the conditions of the post-Cold War world made it almost absurdly easy to carry out. All that would have been needed were American policies that gave Russia and China meaningful, concrete reasons to think that their national interests and aspirations would be easier to achieve in cooperation with a US-led global order than in opposition to it. Granting Russia and China the same position of regional influence that the US accords to Germany and Japan as a matter of course probably would have been enough. A little forbearance, a little foreign aid, a little adroit diplomacy, and the United States would have been in the catbird’s seat, with Russia and China glaring suspiciously at each other across their long and problematic mutual border, and bidding against each other for US support in their various disagreements.
But that’s not what happened, of course.
What happened instead was that the US embraced a foreign policy so astonishingly stupid that I’m honestly not sure the English language has adequate resources to describe it. Since 1990, one US administration after another, with the enthusiastic bipartisan support of Congress and the capable assistance of bureaucrats across official Washington from the Pentagon and the State Department on down, has pursued policies guaranteed to force Russia and China to set aside their serious mutual differences and make common cause against us. Every time the US faced a choice between competing policies, it’s consistently chosen the option most likely to convince Russia, China, or both nations at once that they had nothing to gain from further cooperation with American agendas.
What’s more, the US has more recently managed the really quite impressive feat of bringing Iran into rapprochement with the emerging Russo-Chinese alliance. It’s hard to think of another nation on Earth that has fewer grounds for constructive engagement with Russia or China than the Islamic Republic of Iran, but several decades of cluelessly hamfisted American blundering and bullying finally did the job. My American readers can now take pride in the state-of-the-art Russian air defense systems around Tehran, the bustling highways carrying Russian and Iranian products to each other’s markets, and the Russian and Chinese intelligence officers who are doubtless settling into comfortable digs on the north shore of the Persian Gulf, where they can snoop on the daisy chain of US bases along the south shore. After all, a quarter century of US foreign policy made those things happen.
It’s one thing to engage in this kind of serene disregard for reality when you’ve got the political unity, the economic abundance, and the military superiority to back it up. The United States today, like the British Empire in 1939, no longer has those. We’ve got an impressive fleet of aircraft carriers, sure, but Britain had an equally impressive fleet of battleships in 1939, and you’ll notice how much good those did them. Like Britain in 1939, the United States today is perfectly prepared for a kind of war that nobody fights any more, while rival nations less constrained by the psychology of previous investment and less riddled with institutionalized graft are fielding novel weapons systems designed to do end runs around our strengths and focus with surgical precision on our weaknesses.
Meanwhile, inside the baroque carapace of carriers, drones, and all the other high-tech claptrap of an obsolete way of war, the United States is a society in freefall, far worse off than Britain was during its comparatively mild 1930s downturn. Its leaders have forfeited the respect of a growing majority of its citizens; its economy has morphed into a Potemkin-village capitalism in which the manipulation of unpayable IOUs in absurd and rising amounts has all but replaced the actual production of goods and services; its infrastructure is so far fallen into decay that many US counties no longer pave their roads; most Americans these days think of their country’s political institutions as the enemy and its loudly proclaimed ideals as some kind of sick joke—and in both cases, not without reason. The national unity that made victory in two world wars and the Cold War possible went by the boards a long time ago, drowned in a tub by Tea Party conservatives who thought they were getting rid of government and limousine liberals who were going through the motions of sticking it to the Man.
I could go on tracing parallels for some time—in particular, despite a common rhetorical trope of US Russophobes, Vladimir Putin is not an Adolf Hitler but a fair equivalent of the Ulrich von Hassenstein of my alternate-history narrative—but here again, my readers can do the math themselves. The point I want to make is that all the signs suggest we are entering an era of international conflict in which the United States has thrown away nearly all its potential strengths, and handed its enemies advantages they would never have had if our leaders had the brains the gods gave geese. Since nuclear weapons still foreclose the option of major wars between the great powers, the conflict in question will doubtless be fought using the same indirect methods as the Cold War; in fact, it’s already being fought by those means, as the victims of proxy wars in Ukraine, Syria, and Yemen already know. The question in my mind is simply how soon those same methods get applied on American soil.
We thus stand at the beginning of a long, brutal epoch, as unforgiving as the one that dawned in 1939. Those who pin Utopian hopes on the end of American hegemony will get to add disappointment to that already bitter mix, since hegemony remains the same no matter who happens to be perched temporarily in the saddle. (I also wonder how many of the people who think they’ll rejoice at the end of American hegemony have thought through the impact on their hopes of collective betterment, not to mention their own lifestyles, once the 5% of the world’s population who live in the US can no longer claim a quarter or so of the world’s resources and wealth.) If there’s any hope possible at such a time, to my mind, it’s the one W.H. Auden proposed as the conclusion of his bleak and brilliant poem “September 1, 1939”:
Defenceless under the night,
Our world in stupor lies;
Yet, dotted everywhere,
Ironic points of light
Flash out wherever the just
Exchange their messages:
May I, composed like them
Of Eros and of dust,
Beleaguered by the same
Negation and despair,
Show an affirming flame.

The Era of Dissolution

Wed, 2015-06-10 20:06
The last of the five phases of the collapse process we’ve been discussing here in recent posts is the era of dissolution. (For those that haven’t been keeping track, the first four are the eras of pretense, impact, response, and breakdown). I suppose you could call the era of dissolution the Rodney Dangerfield of collapse, though it’s not so much that it gets no respect; it generally doesn’t even get discussed.
To some extent, of course, that’s because a great many of the people who talk about collapse don’t actually believe that it’s going to happen. That lack of belief stands out most clearly in the rhetorical roles assigned to collapse in so much of modern thinking. People who actually believe that a disaster is imminent generally put a lot of time and effort into getting out of its way in one way or another; it’s those who treat it as a scarecrow to elicit predictable emotional reactions from other people, or from themselves, who never quite manage to walk their talk.
Interestingly, the factor that determines the target of scarecrow-tactics of this sort seems to be political in nature. Groups that think they have a chance of manipulating the public into following their notion of good behavior tend to use the scarecrow of collapse to affect other people; for them, collapse is the horrible fate that’s sure to gobble us up if we don’t do whatever it is they want us to do. Those who’ve given up any hope of getting a response from the public, by contrast, turn the scarecrow around and use it on themselves; for them, collapse is a combination of Dante’s Inferno and the Big Rock Candy Mountain, the fantasy setting where the wicked get the walloping they deserve while they themselves get whatever goodies they’ve been unsuccessful at getting  in the here and now.
Then, of course, you get the people for whom collapse is less scarecrow than teddy bear, the thing that allows them to drift off comfortably to sleep in the face of an unwelcome future. It’s been my repeated observation that many of those who insist that humanity will become totally extinct in the very near future fall into this category. Most people, faced with a serious threat to their own lives, will take drastic action to save themselves; faced with a threat to the survival of their family or community, a good many people will take actions so drastic as to put their own lives at risk in an effort to save others they care about. The fact that so many people who insist that the human race is doomed go on to claim that the proper reaction is to sit around feeling very, very sad about it all does not inspire confidence in the seriousness of that prediction—especially when feeling very, very sad seems mostly to function as an excuse to keep enjoying privileged lifestyles for just a little bit longer.
So we have the people for whom collapse is a means of claiming unearned power, the people for whom it’s a blank screen on which to project an assortment of self-regarding fantasies, and the people for whom it’s an excuse to do nothing in the face of a challenging future. All three of those are popular gimmicks with an extremely long track record, and they’ll doubtless all see plenty of use millennia after industrial civilization has taken its place in the list of failed civilizations. The only problem with them is that they don’t happen to provide any useful guidance for those of us who have noticed that collapse isn’t merely a rhetorical gimmick meant to get emotional reactions—that it’s something that actually happens, to actual civilizations, and that it’s already happening to ours.
From the three perspectives already discussed, after all, realistic questions about what will come after the rubble stops bouncing are entirely irrelevant. If you’re trying to use collapse as a boogeyman to scare other people into doing what you tell them, your best option is to combine a vague sense of dread with an assortment of cherrypicked factoids intended to make a worst-case scenario look not nearly bad enough; if you’re trying to use collapse as a source of revenge fantasies where you get what you want and the people you don’t like get what’s coming to them, daydreams of various levels and modes of dampness are far more useful to you than sober assessments; while if you’re trying to use collapse as an excuse to maintain an unsustainable and planet-damaging SUV lifestyle, your best bet is to insist that everyone and everything dies all at once, so nothing will ever matter again to anybody.
On the other hand, there are also those who recognize that collapse happens, that we’re heading toward one, and that it might be useful to talk about what the world might look like on the far side of that long and difficult process. I’ve tried to sketch out a portrait of the postcollapse world in last year’s series of posts here on Dark Age America, and I haven’t yet seen any reason to abandon that portrait of a harsh but livable future, in which a sharply reduced global population returns to agrarian or nomadic lives in those areas of the planet not poisoned by nuclear or chemical wastes or rendered uninhabitable by prolonged drought or the other impacts of climate change, and in which much or most of today’s scientific and technological knowledge is irretrievably lost.
The five phases of collapse discussed in this latest sequence of posts simply describe how we get there—or, more precisely, one of the steps by which we get there. That latter point’s a detail that a good many of my readers, and an even larger fraction of my critics, seem to have misplaced. The five-stage model is a map of how human societies shake off an unsustainable version of business as usual and replace it with something better suited to the realities of the time. It applies to a very wide range of social transformations, reaching in scale from the local to the global and in intensity from the relatively modest to the cataclysmic. To insist that it’s irrelevant because the current example of the species covers more geographical area than any previous example, or has further to fall than most, is like insisting that a law of physics that governs the behavior of marbles and billiard balls must somehow stop working just because you’re trying to do the same thing with bowling balls.
A difference of scale is not a difference of kind. Differences of scale have their own implications, which we’ll discuss a little later on in this post, but the complex curve of decline is recognizably the same in small things as in big ones, in the most as in the least catastrophic examples. That’s why I’ve used a relatively modest example—the collapse of the economic system of 1920s America and the resulting Great Depression—and an example from the midrange—the collapse of the French monarchy and the descent of 18th-century Europe into the maelstrom of the Napoleonic Wars—to provide convenient outlines for something toward the upper end of the scale—the decline and fall of modern industrial civilization and the coming of a deindustrial dark age. Let’s return to those examples, and see how the thread of collapse winds to its end.
As we saw in last week’s thrilling episode, the breakdown stage of the Great Depression came when the newly inaugurated Roosevelt administration completely redefined the US currency system. Up to that time, US dollar bills were in effect receipts for gold held in banks; after that time, those receipts could no longer be exchanged for gold, and the gold held by the US government became little more than a public relations gimmick. That action succeeded in stopping the ghastly credit crunch that shuttered every bank and most businesses in the US in the spring of 1933.
Roosevelt’s policies didn’t get rid of the broader economic dysfunction the 1929 crash had kickstarted. That was inherent in the industrial system itself, and remains a massive issue today, though its effects were papered over for a while by a series of temporary military, political, and economic factors that briefly enabled the United States to prosper at the expense of the rest of the world. The basic issue is simply that replacing human labor with machines powered by fossil fuel results in unemployment, and no law of nature or economics requires that new jobs can be found or created to replace the ones that are eliminated by mechanization. The history of the industrial age has been powerfully shaped by a whole series of attempts to ignore, evade, or paper over that relentless arithmetic.
Until 1940, the Roosevelt administration had no more luck with that project than the governments of most other nations. It wasn’t until the Second World War made the lesson inescapable that anyone realized that the only way to provide full employment in an industrial society was to produce far more goods than consumers could consume, and let the military and a variety of other gimmicks take up the slack. That was a temporary expedient, given the stark limitations in the resource base needed to support the mass production of useless goods, but in 1940, and even more so in 1950, few people recognized that and fewer cared. It’s our bad luck to be living at the time when that particular bill is coming due.
The first lesson to learn from the history of collapse, then, is that the breakdown phase doesn’t necessarily solve all the problems that brought it about. It doesn’t even necessarily take away every dysfunctional feature of the status quo. What it does with fair reliability is eliminate enough of the existing order of things that the problems being caused by that order decline to a manageable level. The more deeply rooted the problematic features of the status quo are in the structure of society and daily life, the harder it will be to change them, and the more likely other features are to be changed: in the example just given, it was much easier to break the effective link between the US currency and gold, and expand the money supply enough to get the economy out of cardiac arrest, than it was to break a link between mechanization and unemployment that’s hardwired into the basic logic of industrialism.
What this implies in turn is that it’s entirely possible for one collapse to cycle through the five stages we’ve explored, and then to have the era of dissolution morph straight into a new era of pretense in which the fact that all society’s problems haven’t been solved is one of the central things nobody with any connection to the centers of power wants to discuss. If the Second World War, the massive expansion of the petroleum economy, the invention of suburbia, the Cold War, and a flurry of other events hadn’t ushered in the immensely wasteful but temporarily prosperous boomtime of late 20th century America, there might well have been another vast speculative bubble in the mid- to late 1940s, resulting in another crash, another depression, and so on. This is after all what we’ve seen over the last twenty years: the tech stock bubble and bust, the housing bubble and bust, the fracking bubble and bust, each one hammering the economy further down the slope of decline.
With that in mind, let’s turn to our second example, the French Revolution. This is particularly fascinating since the aftermath of that particular era of breakdown saw a nominal return to the conditions of the era of pretense. After Napoleon’s final defeat in 1815, the Allied powers found an heir of the house of Bourbon and plopped him onto the French throne as Louis XVIII, to well-coached shouts of “Vive le Roi!” On paper, nothing had changed.
In reality, everything had changed, and the monarchy of post-Napoleonic France had roots about as deep and sturdy as the democracy of post-Saddam Iraq. Louis XVIII was clever enough to recognize this, and so managed to end his reign in the traditional fashion, feet first from natural causes. His heir Charles X was nothing like so clever, and got chucked off the throne after six years on it by another revolution in 1830. King Louis-Philippe went the same way in 1848—the French people were getting very good at revolution by that point. There followed a Republic, an Empire headed by Napoleon’s nephew, and finally another Republic which lasted out the century. All in all, French politics in the 19th century was the sort of thing you’d expect to see in an unusually excitable banana republic.
The lesson to learn from this example is that it’s very easy, and very common, for a society in the dissolution phase of collapse to insist that nothing has changed and pretend to turn back the clock. Depending on just how traumatic the collapse has been, everybody involved may play along with the charade, the way everyone in Rome nodded and smiled when Augustus Caesar pretended to uphold the legal forms of the defunct Roman Republic, and their descendants did exactly the same thing centuries later when Theodoric the Ostrogoth pretended to uphold the legal forms of the defunct Roman Empire. Those who recognize the charade as charade and play along without losing track of the realities, like Louis XVIII, can quite often navigate such times successfully; those who mistake charade for reality, like Charles X, are cruising for a bruising and normally get it in short order.
Combine these two lessons and you’ll get what I suspect will turn out to be a tolerably good sketch of the American future. Whatever comes out of the impact, response, and breakdown phases of the crisis looming ahead of the United States just now—whether it’s a fragmentary mess of successor states, a redefined nation beginning to recover from a period of personal rule by some successful demagogue, or, just possibly, a battered and weary republic facing a long trudge back to its foundational principles—it seems very likely that everyone involved will do their level best to insist that nothing has really changed. If the current constitution has been abolished, it may be officially reinstated with much fanfare; there may be new elections, and some shuffling semblance of the two-party system may well come lurching out of the crypts for one or two more turns on the stage.
None of that will matter. The nation will have changed decisively in ways we can only begin to envision at this point, and the forms of twentieth-century American politics will cover a reality that has undergone drastic transformations, just as the forms of nineteenth-century French monarchy did. In due time, by some combination of legal and extralegal means, the forms will be changed to reflect the new realities, and the territory we now call the United States of America—which will almost certainly have a different name, and may well be divided into several different and competing nations by then—will be as prepared to face the next round of turmoil as it’s going to get.
Yes, there will be a next round of turmoil. That’s the thing that most people miss when thinking about the decline and fall of a civilization: it’s not a single event, or even a single linear process. It’s a whole series of cascading events that vary drastically in their importance, geographical scope, and body count. That’s true of every process of historic change.
It was true even of so simple an event as the 1929 crash and Great Depression: 1929 saw the crash, 1930 the suckers’ rally, 1931 the first wave of European bank failures, 1932 the unraveling of the US banking system, and so on until bombs falling on Pearl Harbor ushered in a different era. It was even more true of the French Revolution: between 1789 and 1815 France basically didn’t have a single year without dramatic events and drastic changes of one kind or another, and the echoes of the Revolution kept things stirred up for decades to come. Check out the fall of civilizations and you’ll see the same thing unfolding on a truly vast scale, with crisis after crisis along an arc centuries in length.
The process that’s going on around us is the decline and fall of industrial civilization. Everything we think of as normal and natural, modern and progressive, solid and inescapable is going to melt away into nothingness in the years, decades, and centuries ahead, to be replaced first by the very different but predictable institutions of a dark age, and then by the new and wholly unfamiliar forms of the successor societies of the far future. There’s nothing inevitable about the way we do things in today’s industrial world; our political arrangements, our economic practices, our social institutions, our cultural habits, our sciences and our technologies all unfold from industrial civilization’s distinctive and profoundly idiosyncratic worldview. So does the central flaw in the entire baroque edifice, our lethally muddleheaded inability to understand our inescapable dependence on the biosphere that supports our lives. All that is going away in the time before us—but it won’t go away suddenly, or all at once.
Here in the United States, we’re facing one of the larger downward jolts in that prolonged process, the end of American global empire and of the robust economic benefits that the machinery of empire pumps from the periphery to the imperial center. Until recently, the five per cent of us who lived here got to enjoy a quarter of the world’s energy supply and raw materials and a third of its manufactured products. Those figures have already decreased noticeably, with consequences that are ringing through every corner of our society; in the years to come they’re going to decrease much further still, most likely to something like a five per cent share of the world’s wealth or even a little less. That’s going to impact every aspect of our lives in ways that very few Americans have even begun to think about.
All of that is taking place in a broader context, to be sure. Other countries will have their own trajectories through the arc of industrial civilization’s decline and fall, and some of those trajectories will be considerably less harsh in the short term than ours. In the long run, the human population of the globe is going to decline sharply; the population bubble that’s causing so many destructive effects just now will be followed in due time by a population bust, in which those four guys on horseback will doubtless play their usual roles. In the long run, furthermore, the vast majority of today’s technologies are going to go away as the resource base needed to support them gets used up, or stops being available due to other bottlenecks. Those are givens—but the long run is not the only scale that matters.
It’s not at all surprising that the foreshocks of that immense change are driving the kind of flight to fantasy criticized in the opening paragraphs of this essay. That’s business as usual when empires go down; pick up a good cultural history of the decline and fall of any empire in the last two millennia or so and you’ll find plenty of colorful prophecies of universal destruction. I’d like to encourage my readers, though, to step back from those fantasies—entertaining as they are—and try to orient themselves instead to the actual shape of the future ahead of us. That shape’s not only a good deal less gaseous than the current offerings of the Apocalypse of the Month Club (internet edition), it also offers an opportunity to do something about the future—a point we’ll be discussing further in posts to come.

The Era of Breakdown

Wed, 2015-06-03 16:49
The fourth of the stages in the sequence of collapse we’ve been discussing is the era of breakdown. (For those who haven’t been keeping track, the first three phases are the eras of pretense, impact, and response; the final phase, which we’ll be discussing next week, is the era of dissolution.) The era of breakdown is the phase that gets most of the press, and thus inevitably no other stage has attracted anything like the crop of misperceptions, misunderstandings, and flat-out hokum that this one has.
The era of breakdown is the point along the curve of collapse at which business as usual finally comes to an end. That’s where the confusion comes in. It’s one of the central articles of faith in pretty much every human society that business as usual functions as a bulwark against chaos, a defense against whatever problems the society might face. That’s exactly where the difficulty slips in, because in pretty much every human society, what counts as business as usual—the established institutions and familiar activities on which everyone relies day by day—is the most important cause of the problems the society faces, and the primary cause of collapse is thus quite simply that societies inevitably attempt to solve their problems by doing all the things that make their problems worse.
The phase of breakdown is the point at which this exercise in futility finally grinds to a halt. The three previous phases are all attempts to avoid breakdown: in the phase of pretense, by making believe that the problems don’t exist; in the phase of impact, by making believe that the problems will go away if only everyone doubles down on whatever’s causing them; and in the phase of response, by making believe that changing something other than the things that are causing the problems will fix the problems. Finally, after everything else has been tried, the institutions and activities that define business as usual either fall apart or are forcibly torn down, and then—and only then—it becomes possible for a society to do something about its problems.
It’s important not to mistake the possibility of constructive action for the inevitability of a solution. The collapse of business as usual in the breakdown phase doesn’t solve a society’s problems; it doesn’t even prevent those problems from being made worse by bad choices. It merely removes the primary obstacle to a solution, which is the wholly fictitious aura of inevitability that surrounds the core institutions and activities that are responsible for the problems. Once people in a society realize that no law of God or nature requires them to maintain a failed status quo, they can then choose to dismantle whatever fragments of business as usual haven’t yet fallen down of their own weight.
That’s a more important action than it might seem at first glance. It doesn’t just put an end to the principal cause of the society’s problems. It also frees up resources that have been locked up in the struggle to keep business as usual going at all costs, and those newly freed resources very often make it possible for a society in crisis to transform itself drastically in a remarkably short period of time. Whether those transformations are for good or ill, or as usually happens, a mixture of the two, is another matter, and one I’ll address a little further on.
Stories in the media, some recent, some recently reprinted, happen to have brought up a couple of first-rate examples of the way that resources get locked up in unproductive activities during the twilight years of a failing society. A California newspaper, for example, recently mentioned that Elon Musk’s large and much-ballyhooed fortune is almost entirely a product of government subsidies. Musk is a smart guy; he obviously realized a good long time ago that federal and state subsidies for technology were where the money was at, and he’s constructed an industrial empire funded by US taxpayers to the tune of many billions of dollars. None of his publicly traded firms has ever made a profit, and as long as the subsidies keep flowing, none of them ever has to; between an overflowing feed trough of government largesse and the longstanding eagerness of fools to be parted from their money by way of the stock market, he’s pretty much set for life.
This is business as usual in today’s America. An article from 2013 pointed out, along the same lines, that the profits made by the five largest US banks were almost exactly equal to the amount of taxpayer money those same five banks got from the government. Like Elon Musk, the banks in question have figured out where the money is, and have gone after it with their usual verve; the revolving door that allows men in suits to shuttle back and forth between those same banks and the financial end of the US government doesn’t exactly hinder that process. It’s lucrative, it’s legal, and the mere fact that it’s bankrupting the real economy of goods and services in order to further enrich an already glutted minority of kleptocrats is nothing anyone in the citadels of power worries about.
A useful light on a different side of the same process comes from an editorial (in PDF) which claims that something like half of all current scientific papers are unreliable junk. Is this the utterance of an archdruid, or some other wild-eyed critic of science? No, it comes from the editor of Lancet, one of the two or three most reputable medical journals on the planet. The managing editor of The New England Journal of Medicine, which has a comparable ranking to Lancet, expressed much the same opinion of the shoddy experimental design, dubious analysis, and blatant conflicts of interest that pervade contemporary scientific research.
Notice that what’s happening here affects the flow of information in the same way that misplaced government subsidies affect the flow of investment. The functioning of the scientific process, like that of the market, depends on the presupposition that everyone who takes part abides by certain rules. When those rules are flouted, individual actors profit, but they do so at the expense of the whole system: the results of scientific research are distorted so that (for example) pharmaceutical firms can profit from drugs that don’t actually have the benefits claimed for them, just as the behavior of the market is distorted so that (for example) banks that would otherwise struggle for survival, and would certainly not be able to pay their CEOs gargantuan bonuses, can continue on their merry way.
The costs imposed by these actions are real, and they fall on all other participants in science and the economy respectively. Scientists these days, especially but not only in such blatantly corrupt fields as pharmaceutical research, face a lose-lose choice between basing their own investigations on invalid studies, on the one hand, and having to distrust any experimental results they don’t replicate themselves, on the other. Meanwhile the consumers of the products of scientific research—yes, that would be all of us—have to contend with the fact that we have no way of knowing whether any given claim about the result of research is the product of valid science or not. Similarly, the federal subsidies that direct investment toward politically savvy entrepreneurs like Elon Musk, and politically well-connected banks such as Goldman Sachs, and away from less parasitic and more productive options, distort the entire economic system by preventing the normal workings of the market from weeding out nonviable projects and firms, and rewarding the more viable ones.
Turn to the  historical examples we’ve been following for the last three weeks, and distortions of the same kind are impossible to miss. In the US economy before and during the stock market crash of 1929 and its long and brutal aftermath, a legal and financial system dominated by a handful of very rich men saw to it that the bulk of the nation’s wealth flowed uphill, out of productive economic activities and into speculative ventures increasingly detached from the productive economy. When the markets imploded, in turn, the same people did their level best to see to it that their lifestyles weren’t affected even though everyone else’s was. The resulting collapse in consumer expenditures played a huge role in driving the cascading collapse of the US economy that, by the spring of 1933, had shuttered every consumer bank in the nation and driven joblessness and impoverishment to record highs.
That’s what Franklin Roosevelt fixed. It’s always amused me that the people who criticize FDR—and of course there’s plenty to criticize in a figure who, aside from his far greater success as a wartime head of state, can best be characterized as America’s answer to Mussolini—always talk about the very mixed record of the economic policies of his second term. They rarely bother to mention the Hundred Days, in which FDR stopped a massive credit collapse in its tracks. The Hundred Days and their aftermath are the part of FDR’s presidency that mattered most; it was in that brief period that he slapped shock paddles on an economy in cardiac arrest and got a pulse going, by violating most of the rules that had guided the economy up to that time. That casual attitude toward economic dogma is one of the two things his critics have never been able to forgive; the other is that it worked.
In the same way, France before, during, and immediately after the Revolution was for all practical purposes a medieval state that had somehow staggered its way to the brink of the nineteenth century. The various revolutionary governments that followed one another in quick succession after 1789 made some badly needed changes, but it was left to Napoléon Bonaparte to drag France by the scruff of its collective neck out of the late Middle Ages. Napoléon has plenty of critics—and of course there’s plenty to criticize in a figure who was basically what Mussolini wanted to be when he grew up—but the man’s domestic policies were by and large inspired. To name only two of his most important changes, he replaced the sprawling provinces of medieval France with a system of smaller and geographically meaningful départements, and abolished the entire body of existing French law in favor of a newly created legal system, the Code Napoléon. When he was overthrown, those stayed; in fact, a great many other countries in Europe and elsewhere proceeded to adopt the Code Napoléon in place of their existing legal systems. There were several reasons for this, but one of the most important was that the new Code simply made that much more sense.

Both men were able to accomplish what they did, in turn, because abolishing the political, economic, and cultural distortions imposed on their respective countries by a fossilized status quo freed up all the resources that had been locked up in maintaining those distortions. Slapping a range of legal barriers and taxes on the more egregious forms of speculative excess—another of the major achievements of the Roosevelt era—drove enough wealth back into the productive economy to lay the foundations of America’s postwar boom; in the same way, tipping a galaxy of feudal customs into history’s compost bin transformed France from the economic basket case it was in 1789 to the conqueror of Europe twenty years later, and the successful and innovative economic and cultural powerhouse it became during most of the nineteenth century thereafter.

That’s one of the advantages of revolutionary change. By breaking down existing institutions and the encrusted layers of economic parasitism that inevitably build up around them over time, it reliably breaks loose an abundance of resources that were not available in the prerevolutionary period. Here again, it’s crucial to remember that the availability of resources doesn’t guarantee that they’ll be used wisely; they may be thrown away on absurdities of one kind or another. Nor, even more critically, does it mean that the same abundance of resources will be available indefinitely. The surge of additional resources made available by catabolizing old and corrupt systems is a temporary jackpot, not a permanent state of affairs. That said, when you combine the collapse of fossilized institutions that stand in the way of change, and a sudden rush of previously unavailable resources of various kinds, quite a range of possibilities previously closed to a society suddenly come open.

Applying this same pattern to the crisis of modern industrial civilization, though, requires attention to certain inescapable but highly unwelcome realities. In 1789, the problem faced by France was the need to get rid of a thousand years of fossilized political, economic, and social institutions at a time when the coming of the industrial age had made them hopelessly dysfunctional. In 1929, the problem faced by the United States was the need to pry the dead hand of an equally dysfunctional economic orthodoxy off the throat of the nation so that its economy would actually function again. In both cases, the era of breakdown was catalyzed by a talented despot, and was followed, after an interval of chaos and war, by a period of relative prosperity.

We may well get the despot this time around, too, not to mention the chaos and war, but the period of prosperity is probably quite another matter. The problem we face today, in the United States and more broadly throughout the world’s industrial societies, is that all the institutions of industrial civilization presuppose limitless economic growth, but the conditions that provided the basis for continued economic growth simply aren’t there any more. The 300-year joyride of industrialism was made possible by vast and cheaply extractable reserves of highly concentrated fossil fuels and other natural resources, on the one hand, and a biosphere sufficiently undamaged that it could soak up the wastes of human industry without imposing burdens on the economy, on the other. We no longer have either of those requirements.

With every passing year, more and more of the world’s total economic output has to be diverted from other activities to keep fossil fuels and other resources flowing into the industrial world’s power plants, factories, and fuel tanks; with every passing year, in turn, more and more of the world’s total economic output has to be diverted from other activities to deal with the rising costs of climate change and other ecological disruptions. These are the two jaws of the trap sketched out more than forty years ago in the pages of The Limits to Growth, still the most accurate (and thus inevitably the most savagely denounced) map of the predicament we face. The consequences of that trap can be summed up neatly: on a finite planet, after a certain point—the point of diminishing returns, which we’ve already passed—the costs of growth rise faster than the benefits, and finally force the global economy to its knees.

The task ahead of us is thus in some ways the opposite of the one that France faced in the aftermath of 1789. Instead of replacing a sclerotic and failing medieval economy with one better suited to a new era of industrial expansion, we need to replace a sclerotic and failing industrial economy with one better suited to a new era of deindustrial contraction. That’s a tall order, no question, and it’s not something that can be achieved easily, or in a single leap. In all probability, the industrial world will have to pass through the whole sequence of phases we’ve been discussing several times before things finally bottom out in the deindustrial dark ages to come.

Still, I’m going to shock my fans and critics alike here by pointing out that there’s actually some reason to think that positive change on more than an individual level will be possible as the industrial world slams facefirst into the limits to growth. Two things give me that measured sense of hope. The first is the sheer scale of the resources locked up in today’s spectacularly dysfunctional political, economic, and social institutions, which will become available for other uses when those institutions come apart. The $83 billion a year currently being poured down the oversized rathole of the five biggest US banks, just for starters, could pay for a lot of solar water heaters, training programs for organic farmers, and other things that could actually do some good.

Throw in the resources being chucked into all of the other attempts currently under way to prop up a failing system, and you’ve got quite the jackpot that could, in an era of breakdown, be put to work doing things worthwhile. It’s by no means certain, as already noted, that these resources will go to the best possible use, but it’s all but certain that they’ll go to something less stunningly pointless than, say, handing Elon Musk his next billion dollars.

The second thing that gives me a measured sense of hope is at once subtler and far more profound. These days, despite a practically endless barrage of rhetoric to the contrary, the great majority of Americans are getting fewer and fewer benefits from the industrial system, and are being forced to pay more and more of its costs, so that a relatively small fraction of the population can monopolize an ever-increasing fraction of the national wealth and contribute less and less in exchange. What’s more, a growing number of Americans are aware of this fact. The traditional schism of a collapsing society into a dominant minority and an internal proletariat, to use Arnold Toynbee’s terms, is a massive and accelerating social reality in the United States today.

As that schism widens, and more and more Americans are forced into the Third World poverty that’s among the unmentionable realities of public life in today’s United States, several changes of great importance are taking place. The first, of course, is precisely that a great many Americans are perforce learning to live with less—not in the playacting style popular just now on the faux-green end of the privileged classes, but really, seriously living with much less, because that’s all there is. That’s a huge shift and a necessary one, since the absurd extravagance many Americans consider to be a normal lifestyle is among the most important things that will be landing in history’s compost heap in the not too distant future.
At the same time, the collective consensus that keeps the hopelessly dysfunctional institutions of today’s status quo glued in place is already coming apart, and can be expected to dissolve completely in the years ahead. What sort of consensus will replace it, after the inevitable interval of chaos and struggle, is anybody’s guess at this point—though it’s vanishingly unlikely to have anything to do with the current political fantasies of left and right. It’s just possible, given luck and a great deal of hard work, that whatever new system gets cobbled together during the breakdown phase of our present crisis will embody at least some of the values that will be needed to get our species back into some kind of balance with the biosphere on which our lives depend. A future post will discuss how that might be accomplished—after, that is, we explore the last phase of the collapse process: the era of dissolution, which will be the theme of next week’s post.

The Era of Response

Wed, 2015-05-27 17:21
The third stage of the process of collapse, following what I’ve called the eras of pretense and impact, is the era of response. It’s easy to misunderstand what this involves, because both of the previous eras have their own kinds of response to whatever is driving the collapse; it’s just that those kinds of response are more precisely nonresponses, attempts to make the crisis go away without addressing any of the things that are making it happen.
If you want a first-rate example of the standard nonresponse of the era of pretense, you’ll find one in the sunny streets of Miami, Florida, right now. As a result of global climate change, sea level has gone up and the Gulf Stream has slowed down. One consequence is that these days, whenever Miami gets a high tide combined with a stiff onshore wind, salt water comes boiling up through the storm sewers of the city all over the low-lying parts of town. The response of the Florida state government has been to issue an order to all state employees that they’re not allowed to utter the phrase “climate change.”
That sort of thing is standard practice in an astonishing range of subjects in America these days. Consider the roles that the essentially nonexistent recovery from the housing-bubble crash of 2008-9 has played in political rhetoric since that time. The current inmate of the White House has been insisting through most of two terms that happy days are here again, and the usual reams of doctored statistics have been churned out in an effort to convince people who know better that they’re just imagining that something is wrong with the economy. We can expect to hear that same claim made in increasingly loud and confident tones right up until the day the bottom finally drops out.
With the end of the era of pretense and the arrival of the era of impact comes a distinct shift in the standard mode of nonresponse, which can be used quite neatly to time the transition from one era to another. Where the nonresponses of the era of pretense insist that there’s nothing wrong and nobody has to do anything outside the realm of business as usual, the nonresponses of the era of impact claim just as forcefully that whatever’s gone wrong is a temporary difficulty and everything will be fine if we all unite to do even more of whatever activity defines business as usual. That this normally amounts to doing more of whatever made the crisis happen in the first place, and thus reliably makes things worse, is just one of the little ironies history has to offer.
What unites the era of pretense with the era of impact is the unshaken belief that in the final analysis, there’s nothing essentially wrong with the existing order of things. Whatever little difficulties may show up from time to time may be ignored as irrelevant or talked out of existence, or they may have to be shoved aside by some concerted effort, but it’s inconceivable to most people in these two eras that the existing order of things is itself the source of society’s problems, and has to be changed in some way that goes beyond the cosmetic dimension. When the inconceivable becomes inescapable, in turn, the second phase gives way to the third, and the era of response has arrived.
This doesn’t mean that everyone comes to grips with the real issues, and buckles down to the hard work that will be needed to rebuild society on a sounder footing. Winston Churchill is said to have noted with his customary wry humor that the American people can be counted on to do the right thing, once they have exhausted every other possibility. He was of course quite correct, but the same rule can be applied with equal validity to every other nation this side of Utopia, too. The era of response, in practice, generally consists of a desperate attempt to find something that will solve the crisis du jour, other than the one thing that everyone knows will solve the crisis du jour but nobody wants to do.
Let’s return to the two examples we’ve been following so far, the outbreak of the Great Depression and the coming of the French Revolution. In the aftermath of the 1929 stock market crash, once the initial impact was over and the “sucker’s rally” of early 1930 had come and gone, the federal government and the various power centers and pressure groups that struggled for influence within its capacious frame were united in pursuit of a single goal: finding a way to restore prosperity without doing either of the things that had to be done in order to restore prosperity.  That task occupied the best minds in the US elite from the summer of 1930 straight through until April of 1933, and the mere fact that their attempts to accomplish this impossibility proved to be a wretched failure shouldn’t blind anyone to the Herculean efforts that were involved in the attempt.
The first of the two things that had to be tackled in order to restore prosperity was to do something about the drastic imbalance in the distribution of income in the United States. As noted in previous posts, an economy dependent on consumer expenditures can’t thrive unless consumers have plenty of money to spend, and in the United States in the late 1920s, they didn’t—well, except for the very modest number of those who belonged to the narrow circles of the well-to-do. It’s not often recalled these days just how ghastly the slums of urban America were in 1929, or how many rural Americans lived in squalid one-room shacks of the sort you pretty much have to travel to the Third World to see these days. Labor unions enjoyed next to no legal protection and strikes were routinely broken in 1920s America; concepts such as a minimum wage, sick pay, and health benefits didn’t exist, and the legal system was slanted savagely against the poor.
You can’t build prosperity in a consumer society when a good half of your citizenry can’t afford more than the basic necessities of life. That’s the predicament that America found clamped to the tender parts of its economic anatomy at the end of the 1920s. In that decade, as in our time, the temporary solution was to inflate a vast speculative bubble, under the endearing delusion that this would flood the economy with enough unearned cash to make the lack of earned income moot. That worked over the short term and then blew up spectacularly, since a speculative bubble is simply a Ponzi scheme that the legal authorities refuse to prosecute as such, and inevitably ends the same way.
There were, of course, effective solutions to the problem of inadequate consumer income. They were exactly those measures that were taken once the era of response gave way to the era of breakdown; everyone knew what they were, and nobody with access to political or economic power was willing to see them put into effect, because those measures would require a modest decline in the relative wealth and political dominance of the rich as compared to everyone else. Thus, as usually happens, they were postponed until the arrival of the era of breakdown made it impossible to avoid them any longer.
The second thing that had to be changed in order to restore prosperity was even more explosive, and I’m quite certain that some of my readers will screech like banshees the moment I mention it. The United States in 1929 had a precious metal-backed currency in the most literal sense of the term. Paper bills in those days were quite literally receipts for a certain quantity of gold—roughly 1.5 grams to the dollar, for much of the time the US spent on the gold standard. That sort of arrangement was standard in most of the world’s industrial nations; it was backed by a dogmatic orthodoxy all but universal among respectable economists; and it was strangling the US economy.
It’s fashionable among certain sects on the economic fringes these days to look back on the era of the gold standard as a kind of economic Utopia in which there were no booms and busts, just a warm sunny landscape of stability and prosperity until the wicked witches of the Federal Reserve came along and spoiled it all. That claim flies in the face of economic history. During the entire period that the United States was on the gold standard, from 1873 to 1933, the US economy was a moonscape cratered by more than a dozen significant depressions. There’s a reason for that, and it’s relevant to our current situation—in a backhanded manner, admittedly.
Money, let us please remember, is not wealth. It’s a system of arbitrary tokens that represent real wealth—that is, actual, nonfinancial goods and services. Every society produces a certain amount of real wealth each year, and those societies that use money thus need to have enough money in circulation to more or less correspond to the annual supply of real wealth. That sounds simple; in practice, though, it’s anything but. Nowadays, for example, the amount of real wealth being produced in the United States each year is contracting steadily as more and more of the nation’s economic output has to be diverted into the task of keeping it supplied with fossil fuels. That’s happening, in turn, because of the limits to growth—the awkward but inescapable reality that you can’t extract infinite resources, or dump limitless wastes, on a finite planet.
The gimmick currently being used to keep fossil fuel extraction funded and cover the costs of the rising impact of environmental disruptions, without cutting into a culture of extravagance that only cheap abundant fossil fuel and a mostly intact biosphere can support, is to increase the money supply ad infinitum. That’s become the bedrock of US economic policy since the 2008-9 crash. It’s not a gimmick with a long shelf life; as the mismatch between real wealth and the money supply balloons, distortions and discontinuities are surging out through the crawlspaces of our economic life, and crisis is the most likely outcome.
In the United States in the first half or so of the twentieth century, by contrast, the amount of real wealth being produced each year soared, largely because of the steady increases in fossil fuel energy being applied to every sphere of life. While the nation was on the gold standard, though, the total supply of money could only grow as fast as gold could be mined out of the ground, which wasn’t even close to fast enough. So you had more goods and services being produced than there was money to pay for them; people who wanted goods and services couldn’t buy them because there wasn’t enough money to go around; businesses that wanted to expand and hire workers were unable to do so for the same reason. The result was that moonscape of economic disasters I mentioned a moment ago.
The necessary response at that time was to go off the gold standard. Nobody in power wanted to do this, partly because of the dogmatic economic orthodoxy noted earlier, and partly because a money shortage paid substantial benefits to those who had guaranteed access to money. The rentier class—those people who lived off income from their investments—could count on stable or falling prices as long as the gold standard stayed in place, and the mere fact that the same stable or falling prices meant low wages, massive unemployment, and widespread destitution troubled them not at all. Since the rentier class included the vast majority of the US economic and political elite, in turn, going off the gold standard was unthinkable until it became unavoidable.
The period of the French Revolution from the fall of the Bastille in 1789 to the election of the National Convention in 1792 was an era of the same kind, though driven by different forces. Here the great problem was how to replace the Old Regime—not just the French monarchy, but the entire lumbering mass of political, economic, and social laws, customs, forms, and institutions that France had inherited from the Middle Ages and never quite gotten around to adapting to drastically changed conditions—with something that would actually work. It’s among the more interesting features of the resulting era of response that nearly every detail differed from the American example just outlined, and yet the results were remarkably similar.
Thus the leaders of the National Assembly who suddenly became the new rulers of France in the summer of 1789 had no desire whatsoever to retain the traditional economic arrangements that gave France’s former elites their stranglehold on an oversized share of the nation’s wealth. The abolition of manorial rights that summer, together with the explosive rural uprisings against feudal landlords and their chateaux in the wake of the Bastille’s fall, gutted the feudal system and left most of its former beneficiaries the choice between fleeing into exile and trying to find some way to make ends meet in a society that had no particular market for used aristocrats. The problem faced by the National Assembly wasn’t that of prying the dead fingers of a failed system off the nation’s throat; it was that of trying to find some other basis for national unity and effective government.
It’s a surprisingly difficult challenge. Those of my readers who know their way around current events will already have guessed that an attempt was made to establish a copy of whatever system was most fashionable among liberals at the time, and that this attempt turned out to be an abject failure. What’s more, they’ll have been quite correct. The National Assembly moved to establish a constitutional monarchy along British lines, bring in British economic institutions, and the like; it was all very popular among liberal circles in France and, naturally, in Britain as well, and it flopped. Those who recall the outcome of the attempt to turn Iraq into a nice pseudo-American democracy in the wake of the US invasion will have a tolerably good sense of how the project unraveled.
One of the unwelcome but reliable facts of history is that democracy doesn’t transplant well. It thrives only where it grows up naturally, out of the civil institutions and social habits of a people; when liberal intellectuals try to impose it on a nation that hasn’t evolved the necessary foundations for it, the results are pretty much always a disaster. The latter was the situation in France at the time of the Revolution. What happened thereafter is what almost always happens to a failed democratic experiment: a period of chaos, followed by the rise of a talented despot who’s smart and ruthless enough to impose order on a chaotic situation and allow new, pragmatic institutions to emerge to replace those destroyed by clueless democratic idealists. In many cases, though by no means all, those pragmatic institutions have ended up providing a bridge to a future democracy, but that’s another matter.
Here again, those of my readers who have been paying attention to current events already know this; the collapse of the Soviet Union was followed in classic form by a failed democracy, a period of chaos, and the rise of a talented despot. It’s a curious detail of history that the despots in question are often rather short. Russia has had the great good fortune to find, as its despot du jour, a canny realist who has successfully brought it back from the brink of collapse and reestablished it as a major power with a body count considerably smaller than usual. France was rather less fortunate; the despot it found, Napoleon Bonaparte, turned out to be a megalomaniac with an Alexander the Great complex who proceeded to plunge Europe into a quarter century of cataclysmic war. Mind you, things could have been even worse; when Germany ended up in a similar situation, what it got was Adolf Hitler.
Charismatic strongmen are a standard endpoint for the era of response, but they properly belong to the era that follows, the era of breakdown, which will be discussed next week. What I want to explore here is how an era of response might work out in the future immediately before us, as the United States topples from its increasingly unsteady imperial perch and industrial civilization as a whole slams facefirst into the limits to growth. The examples just cited outline the two most common patterns by which the era of response works itself out. In the first pattern, the old elite retains its grip on power, and fumbles around with increasing desperation for a response to the crisis. In the second, the old elite is shoved aside, and the new holders of power are left floundering in a political vacuum.
We could see either pattern in the United States. For what it’s worth, I suspect the latter is the more likely option; the spreading crisis of legitimacy that grips the country these days is exactly the sort of thing you saw in France before the Revolution, and in any number of other countries in the few decades just prior to revolutionary political and social change. Every time a government tries to cope with a crisis by claiming that it doesn’t exist, every time some member of the well-to-do tries to dismiss, by flinging abuse at critics, the collective burdens that their culture of executive kleptocracy imposes on the country, every time institutions that claim to uphold the rule of law defend the rule of entrenched privilege instead, the United States takes another step closer to the revolutionary abyss.
I use that last word advisedly. It’s a common superstition in every troubled age that any change must be for the better—that the overthrow of a bad system must by definition lead to the establishment of a better one. This simply isn’t true. The vast majority of revolutions have established governments that were far more abusive than the ones they replaced. The exceptions have generally been those that brought about a social upheaval without wrecking the political system: where, for example, an election rather than a coup d’etat or a mass rising put the revolutionaries in power, and the political institutions of an earlier time remained in place with only such reshaping as new necessities required.
We could still see that sort of transformation as the United States sees the end of its age of empire and has to find its way back to a less arrogant and extravagant way of functioning in the world. I don’t think it’s likely, but I think it’s possible, and it would probably be a good deal less destructive than the other alternative. It’s worth remembering, though, that history is under no obligation to give us the future we think we want.

The Era of Impact

Wed, 2015-05-20 15:03
Of all the wistful superstitions that cluster around the concept of the future in contemporary popular culture, the most enduring has to be the notion that somehow, sooner or later, something will happen to shake the majority out of its complacency and get it to take seriously the crisis of our age. Week after week, I field comments and emails that presuppose that belief. People want to know how soon I think the shock of awakening will finally hit, or wonder whether this or that event will do the trick, or simply insist that the moment has to come sooner or later.
To all such inquiries and expostulations I have no scrap of comfort to offer. Quite the contrary, what history shows is that a sudden awakening to the realities of a difficult situation is far and away the least likely result of what I’ve called the era of impact, the second of the five stages of collapse. (The first, for those who missed last week’s post, is the era of pretense; the remaining three, which will be covered in the coming weeks, are the eras of response, breakdown, and dissolution.)
The era of impact is the point at which it becomes clear to most people that something has gone wrong with the most basic narratives of a society—not just a little bit wrong, in the sort of way that requires a little tinkering here and there, but really, massively, spectacularly wrong. It arrives when an asset class that was supposed to keep rising in price forever stops rising, does its Wile E. Coyote moment of hang time, and then drops like a stone. It shows up when an apparently entrenched political system, bristling with soldiers and secret police, implodes in a matter of days or weeks and is replaced by a provisional government whose leaders look just as stunned as everyone else. It comes whenever a state of affairs that was assumed to be permanent runs into serious trouble—but somehow it never seems to succeed in getting people to notice just how temporary that state of affairs always was.
Since history is the best guide we’ve got to how such events work out in the real world, I want to take a couple of examples of the kind just outlined and explore them in a little more detail. The stock market bubble of the 1920s makes a good case study on a relatively small scale. In the years leading up to the crash of 1929, stock values in the US stock market quietly disconnected themselves from the economic fundamentals and began what was, for the time, an epic climb into la-la land. There were important if unmentionable reasons for that airy detachment from reality; the most significant was the increasingly distorted distribution of income in 1920s America, which put more and more of the national wealth in the hands of fewer and fewer people and thus gutted the national economy.
It’s one of the repeated lessons of economic history that money in the hands of the rich does much less good for the economy as a whole than money in the hands of the working classes and the poor. The reasoning here is as simple as it is inescapable. Industrial economies survive and thrive on consumer expenditures, but consumer expenditures are limited by the ability of consumers to buy the things they want and need. As money is diverted away from the lower end of the economic pyramid, you get demand destruction—the process by which those who can’t afford to buy things stop buying them—and consumer expenditures fall off. The rich, by contrast, divert a large share of their income out of the consumer economy into investments; the richer they get, the more of the national wealth ends up in investments rather than consumer expenditures; and as consumer expenditures falter, and investments linked to the consumer economy falter in turn, more and more money ends up in illiquid speculative vehicles that are disconnected from the productive economy and do nothing to stimulate demand.
That’s what happened in the 1920s. All through the decade in the US, the rich got richer and the poor got screwed, speculation took the place of productive investment throughout the US economy, and the well-to-do wallowed in the wretched excess chronicled in F. Scott Fitzgerald’s The Great Gatsby while most other people struggled to get by. The whole decade was a classic era of pretense, crowned by the delusional insistence—splashed all over the media of the time—that everyone in the US could invest in the stock market and, since the market was of course going to keep on rising forever, everyone in the US would thus inevitably become rich.
It’s interesting to note that there were people who saw straight through the nonsense and tried to warn their fellow Americans about the inevitable consequences. They were denounced six ways from Sunday by all right-thinking people, in language identical to that used more recently on those of us who’ve had the effrontery to point out that an infinite supply of oil can’t be extracted from a finite planet.  The people who insisted that the soaring stock values of the late 1920s were the product of one of history’s great speculative bubbles were dead right; they had all the facts and figures on their side, not to mention plain common sense; but nobody wanted to hear it.
When the stock market peaked just before the Labor Day weekend in 1929 and started trending down, therefore, the immediate response of all right-thinking people was to insist at the top of their lungs that nothing of the sort was happening, that the market was simply catching its breath before its next great upward leap, and so on. Each new downward lurch was met by a new round of claims along these lines, louder, more dogmatic, and more strident than the one that preceded it, and nasty personal attacks on anyone who didn’t support the delusional consensus filled the media of the time.
People were still saying those things when the bottom dropped out of the market.
Tuesday, October 29, 1929 can reasonably be taken as the point at which the era of pretense gave way once and for all to the era of impact. That’s not because it was the first day of the crash—there had been ghastly slumps on the previous Thursday and Monday, on the heels of two months of less drastic but still seriously ugly declines—but because, after that day, the pundits and the media pretty much stopped pretending that nothing was wrong. Mind you, next to nobody was willing to talk about what exactly had gone wrong, or why it had gone wrong, but the pretense that the good fairy of capitalism had promised Americans happy days forever was out the window once and for all.
It’s crucial to note, though, that what followed this realization was the immediate and all but universal insistence that happy days would soon be back if only everyone did the right thing. It’s even more crucial to note that what nearly everyone identified as “the right thing”—running right out and buying lots of stocks—was a really bad idea that bankrupted many of those who did it, and didn’t help the imploding US economy at all.
It’s probably necessary to talk about this in a little more detail, since it’s been an article of blind faith in the United States for many decades now that it’s always a good idea to buy and hold stocks. (I suspect that stockbrokers have had a good deal to do with the promulgation of this notion.) It’s been claimed that someone who bought stocks in 1929 at the peak of the bubble, and then held onto them, would have ended up in the black eventually, and for certain values of “eventually,” this is quite true—but it took the Dow Jones industrial average until the mid-1950s to return to its 1929 high, and so for a quarter of a century our investor would have been underwater on his stock purchases.
What’s more, the Dow isn’t necessarily a good measure of stocks generally; many of the darlings of the market in the 1920s either went bankrupt in the Depression or never again returned to their 1929 valuations. Nor did the surge of money into stocks in the wake of the 1929 crash stave off the Great Depression, or do much of anything else other than provide a great example of the folly of throwing good money after bad. The moral to this story? In an era of impact, the advice you hear from everyone around you may not be in your best interest.
That same moral can be shown just as clearly in the second example I have in mind, the French Revolution. We talked briefly in last week’s post about the way that the French monarchy and aristocracy blinded themselves to the convulsive social and economic changes that were pushing France closer and closer to a collective explosion on the grand scale, and pursued business as usual long past the point at which business as usual was anything but a recipe for disaster. Even when the struggle between the Crown and the aristocracy forced Louis XVI to convene the États-Généraux—the rarely-held national parliament of France, which had powers more or less equivalent to a constitutional convention in the US—next to nobody expected anything but long rounds of political horse-trading from which some modest shifts in the balance of power might result.
That was before the summer of 1789. On June 17, the deputies of the Third Estate—the representatives of the commoners—declared themselves a National Assembly and staged what amounted to a coup d’etat; on July 14, faced with the threat of a military response from the monarchy, the Parisian mob seized the Bastille, kickstarting a wave of revolt across the country that put government and military facilities in the hands of the revolutionary National Guard and broke the back of the feudal system; on August 4, the National Assembly abolished all feudal rights and legal distinctions between the classes. Over less than two months, a political and social system that had been welded firmly in place for a thousand years all came crashing to the ground.
Those two months marked the end of the era of pretense and the arrival of the era of impact. The immediate response, with a modest number of exceptions among the aristocracy and the inner circles of the monarchy’s supporters, was frantic cheering and an insistence that everything would soon settle into a wonderful new age of peace, prosperity, and liberty. All the overblown dreams of the philosophes about a future age governed by reason were trotted out and treated as self-evident fact. Of course that’s not what happened; once it was firmly in power, the National Assembly used its unchecked authority as abusively as the monarchy had once done; factional struggles spun out of control, and before long mob rule and the guillotine were among the basic facts of life in Revolutionary France. 
Among the most common symptoms of an era of impact, in other words, is the rise of what we may as well call “crackpot optimism”—the enthusiastic and all but universal insistence, in the teeth of the evidence, that the end of business as usual will turn out to be the door to a wonderful new future. In the wake of the 1929 stock market crash, people were urged to pile back into the market in the belief that this would cause the economy to boom again even more spectacularly than before, and most of the people who followed this advice proceeded to lose their shirts. In the wake of the revolution of 1789, likewise, people across France were encouraged to join with their fellow citizens in building the shining new utopia of reason, and a great many of those who followed that advice ended up decapitated or, a little later, dying of gunshot or disease in the brutal era of pan-European warfare that extended almost without a break from the cannonade of Valmy in 1792 to the battle of Waterloo in 1815.
And the present example? That’s a question worth exploring, if only for the utterly pragmatic reason that most of my readers are going to get to see it up close and personal.
That the United States and the industrial world generally are deep in an era of pretense is, I think, pretty much beyond question at this point. We’ve got political authorities, global bankers, and a galaxy of pundits insisting at the top of their lungs that nothing is wrong, everything is fine, and we’ll be on our way to the next great era of prosperity if we just keep pursuing a set of boneheaded policies that have never—not once in the entire span of human history—brought prosperity to the countries that pursued them. We’ve got shelves full of books for sale in upscale bookstores insisting, in the strident language usual to such times, that life is wonderful in this best of all possible worlds, and it’s going to get better forever because, like, we have technology, dude! Across the landscape of the cultural mainstream, you’ll find no shortage of cheerleaders insisting at the top of their lungs that everything’s going to be fine, that even though they said ten years ago that we only have ten years to do something before disaster hits, why, we still have ten years before disaster hits, and when ten more years pass by, why, you can be sure that the same people will be insisting that we have ten more.
This is the classic rhetoric of an era of pretense. Over the last few years, though, it’s seemed to me that the voices of crackpot optimism have gotten more shrill, the diatribes more fact-free, and the logic even shoddier than it was in Bjorn Lomborg’s day, which is saying something. We’ve reached the point that state governments are making it a crime to report on water quality and forbidding officials from using such unwelcome phrases as “climate change.” That’s not the action of people who are confident in their beliefs; it’s the action of a bunch of overgrown children frantically clenching their eyes shut, stuffing their fingers in their ears, and shouting “La, la, la, I can’t hear you.”
That, in turn, suggests that the transition to the era of impact may be fairly close. Exactly when it’s likely to arrive is a complex question, and exactly what’s going to land the blow that will crack the crackpot optimism and make it impossible to ignore the arrival of real trouble is an even more complex one. In 1929, those who hadn’t bought into the bubble could be perfectly sure—and in fact, a good many of them were perfectly sure—that the usual mechanism that brings bubbles to a catastrophic end was about to terminate the boom of the 1920s with extreme prejudice, as indeed it did. In the last decades of the French monarchy, it was by no means clear exactly what sequence of events would bring the Ancien Régime crashing down, but such thoughtful observers as Talleyrand knew that something of the sort was likely to follow the crisis of legitimacy then under way.
The problem with trying to predict the trigger that will bring our current situation to a sudden stop is that we’re in such a target-rich environment. Looking over the potential candidates for the sudden shock that will stick a fork in the well-roasted corpse of business as usual, I’m reminded of the old board game Clue. Will Mr. Boddy’s killer turn out to be Colonel Mustard in the library with a lead pipe, Professor Plum in the conservatory with a candlestick, or Miss Scarlet in the dining room with a rope?
In much the same sense, we’ve got a global economy burdened to the breaking point with more than a quadrillion dollars of unpayable debt; we’ve got a global political system coming apart at the seams as the United States slips toward the usual fate of empires and its rivals circle warily, waiting for the kill; we’ve got a domestic political system here in the US entering a classic prerevolutionary condition under the impact of a textbook crisis of legitimacy; we’ve got a global climate that’s hammered by our rank stupidity in treating the atmosphere as a gaseous sewer for our wastes; we’ve got a global fossil fuel industry that’s frantically trying to pretend that scraping the bottom of the barrel means that the barrel is full, and the list goes on. It’s as though Colonel Mustard, Professor Plum, Miss Scarlet, and the rest of them all ganged up on Mr. Boddy at once, and only the most careful autopsy will be able to determine which of them actually dealt the fatal blow.
In the midst of all this uncertainty, there are three things that can, I think, be said for certain about the end of the current era of pretense and the coming of the era of impact. The first is that it’s going to happen. When something is unsustainable, it’s a pretty safe bet that it won’t be sustained indefinitely, and a society that keeps on embracing policies that swap short-term gains for long-term problems will sooner or later end up awash in the consequences of those policies. Timing such transitions is difficult at best; it’s an old adage among stock traders that the market can stay irrational longer than you can stay solvent. Still, the points made above—especially the increasingly shrill tone of the defenders of the existing order—suggest to me that the era of impact may be here within a decade or so at the outside.
The second thing that can be said for certain about the coming era of impact is that it’s not the end of the world. Apocalyptic fantasies are common and popular in eras of pretense, and for good reason; fixating on the supposed imminence of the Second Coming, human extinction, or what have you, is a great way to distract yourself from the real crisis that’s breathing down your neck. If the real crisis in question is partly or wholly a result of your own actions, while the apocalyptic fantasy can be blamed on someone or something else, that adds a further attraction to the fantasy.
The end of industrial civilization will be a long, bitter, painful cascade of conflicts, disasters, and accelerating decline in which a vast number of people are going to die before they otherwise would, and a great many things of value will be lost forever. That’s true of any falling civilization, and the misguided decisions of the last forty years have pretty much guaranteed that the current example is going to have an extra helping of all these unwelcome things. I’ve discussed at length, in earlier posts in the Dark Age America sequence here and in other sequences as well, why the sort of apocalyptic sudden stop beloved of Hollywood scriptwriters is the least likely outcome of the predicament of our time; still, insisting on the imminence and inevitability of some such game-ending event will no doubt be as popular as usual in the years immediately ahead.
The third thing that I think can be said for certain about the coming era of impact, though, is the one that counts. If it follows the usual pattern, as I expect it to do, once the crisis hits there will be serious, authoritative, respectable figures telling everyone exactly what they need to do to bring an end to the troubles and get the United States and the world back on track to renewed peace and prosperity. Taking these pronouncements seriously and following their directions will be extremely popular, and it will almost certainly also be a recipe for unmitigated disaster. If forewarned is forearmed, as the saying has it, this is a piece of firepower to keep handy as the era of pretense winds down. In next week’s post, we’ll talk about comparable weaponry relating to the third stage of collapse—the era of response.

The Era of Pretense

Wed, 2015-05-13 17:00
I've mentioned in previous posts here on The Archdruid Report the educational value of the comments I receive from readers in the wake of each week’s essay. My post two weeks ago on the death of the internet was unusually productive along those lines. One of the comments I got in response to that post gave me the theme for last week’s essay, but there was at least one other comment calling for the same treatment. Like the one that sparked last week’s post, it appeared on one of the many other internet forums on which The Archdruid Report appears, and it unintentionally pointed up a common and crucial failure of imagination that shapes, or rather misshapes, the conventional wisdom about our future.
Curiously enough, the point that set off the commenter in question was the same one that incensed the author of the denunciation mentioned in last week’s post: my suggestion in passing that fifty years from now, most Americans may not have access to electricity or running water. The commenter pointed out angrily that I’d claimed that the twilight of industrial civilization would be a ragged arc of decline over one to three centuries. Now, he claimed, I was saying that it was going to take place in the next fifty years, and this apparently convinced him that everything I said ought to be dismissed out of hand.
I run into this sort of confusion all the time. If I suggest that the decline and fall of a civilization usually takes several centuries, I get accused of inconsistency if I then note that one of the sharper downturns included in that process may be imminent.  If I point out that the United States is likely within a decade or two of serious economic and political turmoil, driven partly by the implosion of its faltering global hegemony and partly by a massive crisis of legitimacy that’s all but dissolved the tacit contract between the existing order of US society and the masses who passively support it, I get accused once again of inconsistency if I then say that whatever comes out the far side of that crisis—whether it’s a battered and bruised United States or a patchwork of successor states—will then face a couple of centuries of further decline and disintegration before the deindustrial dark age bottoms out.
Now of course there’s nothing inconsistent about any of these statements. The decline and fall of a civilization isn’t a single event, or even a single linear process; it’s a complex fractal reality composed of many different events on many different scales in space and time. If it takes one to three centuries, as usual, those centuries are going to be taken up by an uneven drumbeat of wars, crises, natural disasters, and assorted breakdowns on a variety of time frames with an assortment of local, regional, national, or global effects. The collapse of US global hegemony is one of those events; the unraveling of the economic and technological framework that currently provides most Americans with electricity and running water is another, but neither of those is anything like the whole picture.
It’s probably also necessary to point out that any of my readers who think that being deprived of electricity and running water is the most drastic kind of collapse imaginable have, as the saying goes, another think coming. Right now, in our oh-so-modern world, there are billions of people who get by without regular access to electricity and running water, and most of them aren’t living under dark age conditions. A century and a half ago, when railroads, telegraphs, steamships, and mechanical printing presses were driving one of history’s great transformations of transport and information technology, next to nobody had electricity or running water in their homes. The technologies of 1865 are not dark age technologies; in fact, the gap between 1865 technologies and dark age technologies is considerably greater, by most metrics, than the gap between 1865 technologies and the ones we use today.
Furthermore, whether or not Americans have access to running water and electricity may not have as much to say about the future of industrial society everywhere in the world as the conventional wisdom would suggest.  I know that some of my American readers will be shocked out of their socks to hear this, but the United States is not the whole world. It’s not even the center of the world. If the United States implodes over the next two decades, leaving behind a series of bankrupt failed states to squabble over its territory and the little that remains of its once-lavish resource base, that process will be a great source of gaudy and gruesome stories for the news media of the world’s other continents, but it won’t affect the lives of the readers of those stories much more than equivalent events in Africa and the Middle East affect the lives of Americans today.
As it happens, over the next one to three centuries, the benefits of industrial civilization are going to go away for everyone. (The costs will be around a good deal longer—in the case of the nuclear wastes we’re so casually heaping up for our descendants, a good quarter of a million years, but those and their effects are rather more localized than some of today’s apocalyptic rhetoric likes to suggest.) The reasoning here is straightforward. White’s Law, one of the fundamental principles of human ecology, states that economic development is a function of energy per capita; the immense treasure trove of concentrated energy embodied in fossil fuels, and that alone, made possible the sky-high levels of energy per capita that gave the world’s industrial nations their brief era of exuberance; as fossil fuels deplete, and remaining reserves require higher and higher energy inputs to extract, the levels of energy per capita the industrial nations are used to having will go away forever.
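For readers who prefer their principles in symbolic form, the textbook summary of White’s Law is usually written as a simple product (a rough sketch of the standard rendering, not a quotation from White himself):

$$C \;\propto\; E \times T$$

where C stands for the degree of cultural and economic development, E for the energy harnessed per capita per year, and T for the efficiency of the technology that puts that energy to work. Read that way, the argument of the paragraph above amounts to the observation that as E declines with fossil fuel depletion, no plausible improvement in T keeps the product anywhere near its present value.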
It’s important to be clear about this. Fossil fuels aren’t simply one energy source among others; in terms of concentration, usefulness, and fungibility—that is, the ability to be turned into any other form of energy that might be required—they’re in a category all by themselves. Repeated claims that fossil fuels can be replaced with nuclear power, renewable energy resources, or what have you sound very good on paper, but every attempt to put those claims to the test so far has either gone belly up in short order, or become a classic subsidy dumpster surviving purely on a diet of government funds and mandates.
Three centuries ago, the earth’s fossil fuel reserves were the largest single deposit of concentrated energy in this part of the universe; now we’ve burnt through nearly all the easily accessible reserves, and we’re scrambling to keep the tottering edifice of industrial society going by burning through the dregs that remain. As those run out, the remaining energy resources—almost all of them renewables—will certainly sustain a variety of human societies, and some of those will be able to achieve a fairly high level of complexity and maintain some kinds of advanced technologies. The kind of absurd extravagance that passes for a normal standard of living among the more privileged inmates of the industrial nations is another matter, and as the fossil fuel age sunsets out, it will end forever.
The fractal trajectory of decline and fall mentioned earlier in this post is simply the way this equation works out on the day-to-day scale of ordinary history. Still, those of us who happen to be living through a part of that trajectory might reasonably be curious about how it’s likely to unfold in our lifetimes. I’ve discussed in a previous series of posts, and in my book Decline and Fall: The End of Empire and the Future of Democracy in 21st Century America, how the end of US global hegemony is likely to unfold, but as already noted, that’s only a small portion of the broader picture. Is a broader view possible?
Fortunately history, the core resource I’ve been using to try to make sense of our future, has plenty to say about the broad patterns that unfold when civilizations decline and fall. Now of course I know all I have to do is mention that history might be relevant to our present predicament, and a vast chorus of voices across the North American continent and around the world will bellow at rooftop volume, “But it’s different this time!” With apologies to my regular readers, who’ve heard this before, it’s probably necessary to confront that weary thoughtstopper again before we proceed.
As I’ve noted before, claims that it’s different this time are right where it doesn’t matter and wrong where it counts. Predictions made on the basis of history—and not just by me—have consistently anticipated events over the last decade or so far more accurately than predictions based on the assumption that history doesn’t matter. How many times, dear reader, have you heard someone insist that industrial civilization is going to crash to ruin in the next six months, and then watched those six months roll merrily by without any sign of the predicted crash? For that matter, how many times have you heard someone insist that this or that policy that’s never worked any other time that it’s been tried, or this or that piece of technological vaporware that’s been the subject of failed promises for decades, will inevitably put industrial society back on its alleged trajectory to the stars—and how many times has the policy or the vaporware been quietly shelved, and something else promoted using the identical rhetoric, when it turned out not to perform as advertised?
It’s been a source of wry amusement to me to watch the same weary, dreary, repeatedly failed claims of imminent apocalypse and inevitable progress being rehashed year after year, varying only in the fine details of the cataclysm du jour and the techno-savior du jour, while the future nobody wants to talk about is busily taking shape around us. Decline and fall isn’t something that will happen sometime in the conveniently distant future; it’s happening right now in the United States and around the world. The amusement, though, is tempered with a sense of familiarity, because the period in which decline is under way but nobody wants to admit that fact is one of the recurring features of the history of decline.
There are, very generally speaking, five broad phases in the decline and fall of a civilization. I know it’s customary in historical literature to find nice dull labels for such things, but I’m in a contrary mood as I write this, so I’ll give them unfashionably colorful names: the eras of pretense, impact, response, breakdown, and dissolution. Each of these is complex enough that it’ll need a discussion of its own; this week, we’ll talk about the era of pretense, which is the one we’re in right now.
Eras of pretense are by no means limited to the decline and fall of civilizations. They occur whenever political, economic, or social arrangements no longer work, but the immediate costs of admitting that those arrangements don’t work loom considerably larger in the collective imagination than the future costs of leaving those arrangements in place. It’s a curious but consistent wrinkle of human psychology that this happens even if those future costs soar right off the scale of frightfulness and lethality; if the people who would have to pay the immediate costs don’t want to do so, in fact, they will reliably and cheerfully pursue policies that lead straight to their own total bankruptcy or violent extermination, and never let themselves notice where they’re headed.
Speculative bubbles are a great setting in which to watch eras of pretense in full flower. In the late phases of a bubble, when it’s clear to anyone who has two spare neurons to rub together that the boom du jour is cobbled together of equal parts delusion and chicanery, the people who are most likely to lose their shirts in the crash are the first to insist at the top of their lungs that the bubble isn’t a bubble and their investments are guaranteed to keep on increasing in value forever. Those of my readers who got the chance to watch some of their acquaintances go broke in the real estate bust of 2008-9, as I did, will have heard this sort of self-deception at full roar; those who missed the opportunity can make up for the omission by checking out the ongoing torrent of claims that the soon-to-be-late fracking bubble is really a massive energy revolution that will make America wealthy and strong again.
The history of revolutions offers another helpful glimpse at eras of pretense. France in the decades before 1789, to cite a conveniently well-documented example, was full of people who had every reason to realize that the current state of affairs was hopelessly unsustainable and would have to change. The things about French politics and economics that had to change, though, were precisely those things that the French monarchy and aristocracy were unwilling to change, because any such reforms would have cost them privileges they’d had since time out of mind and were unwilling to relinquish.
Louis XV, who finished up his long and troubled reign a supreme realist, is said to have muttered “Après moi, le déluge”—“Once I’m gone, this sucker’s going down” may not be a literal translation, but it catches the flavor of the utterance—but that degree of clarity was rare in his generation, and all but absent in that of his feckless successor. Thus the courtiers and aristocrats of the Old Regime amused themselves at the nation’s expense, dabbled in avant-garde thought, and kept their eyes tightly closed to the consequences of their evasions of looming reality, while the last opportunities to excuse themselves from a one-way trip to visit the guillotine and spare France the cataclysms of the Terror and the Napoleonic wars slipped silently away.
That’s the bitter irony of eras of pretense. Under most circumstances, they’re the last period when it would be possible to do anything constructive on the large scale about the crisis looming immediately ahead, but the mass evasion of reality that frames the collective thinking of the time stands squarely in the way of any such constructive action. In the era of pretense before a speculative bust, people who could have quietly cashed in their positions and pocketed their gains double down on their investments, and guarantee that they’ll be ruined once the market stops being liquid. In the era of pretense before a revolution, in the same way, those people and classes that have the most to lose reliably take exactly those actions that ensure that they will in fact lose everything. If history has a sense of humor, this is one of the places that it appears in its most savage form.
The same points are true, in turn, of the eras of pretense that precede the downfall of a civilization. In a good many cases, where too few original sources survive, the age of pretense has to be inferred from archeological remains. We don’t know what motives inspired the ancient Mayans to build their biggest pyramids in the years immediately before the Terminal Classic period toppled over into a savage political and demographic collapse, but it’s hard to imagine any such project being set in motion without the usual evasions of an era of pretense being involved. Where detailed records of dead civilizations survive, though, the sort of rhetorical handwaving common to bubbles before the bust and decaying regimes on the brink of revolution shows up with knobs on. Thus the panegyrics of the Roman imperial court waxed ever more lyrical and bombastic about Rome’s invincibility and her civilizing mission to the nations as the Empire stumbled deeper into its terminal crisis, echoing any number of other court poets in any number of civilizations in their final hours.
For that matter, a glance through classical Rome’s literary remains turns up the remarkable fact that those of her essayists and philosophers who expressed worries about her survival wrote, almost without exception, during the Republic and the early Empire; the closer the fall of Rome actually came, the more certainty Roman authors expressed that the Empire was eternal and the latest round of troubles was just one more temporary bump on the road to peace and prosperity. It took the outsider’s vision of Augustine of Hippo to proclaim that Rome really was falling—and even that could only be heard once the Visigoths sacked Rome and the era of pretense gave way to the age of impact.
The present case is simply one more example to add to an already lengthy list. In the last years of the nineteenth century, it was common for politicians, pundits, and mass media in the United States, the British empire, and other industrial nations to discuss the possibility that the advanced civilization of the time might be headed for the common fate of nations in due time. The intellectual history of the twentieth century is, among other things, a chronicle of how that discussion was shoved to the margins of our collective discourse, just as the ecological history of the same century is among other things a chronicle of how the worries of the previous era became the realities of the one we’re in today. The closer we’ve moved toward the era of impact, that is, the more unacceptable it has become for anyone in public life to point out that the problems of the age are not just superficial.
Listen to the pablum that passes for political discussion in Washington DC or the mainstream US media these days, or the even more vacuous noises being made by party flacks as the country stumbles wearily toward yet another presidential election. That the American dream of upward mobility has become an American nightmare of accelerating impoverishment outside the narrowing circle of the kleptocratic rich, that corruption and casual disregard for the rule of law are commonplace in political institutions from local to Federal levels, that our medical industry charges more than any other nation’s and still provides the worst health care in the industrial world, that our schools no longer teach anything but contempt for learning, that the national infrastructure and built environment are plunging toward Third World conditions at an ever-quickening pace, that a brutal and feckless foreign policy embraced by both major parties is alienating our allies while forcing our enemies to set aside their mutual rivalries and make common cause against us: these are among the issues that matter, but they’re not the issues you’ll hear discussed as the latest gaggle of carefully airbrushed candidates go through their carefully scripted elect-me routines on their way to the 2016 election.
If history teaches anything, though, it’s that eras of pretense eventually give way to eras of impact. That doesn’t mean that the pretense will go away—long after Alaric the Visigoth sacked Rome, for example, there were still plenty of rhetors trotting out the same tired clichés about Roman invincibility—but it does mean that a significant number of people will stop finding the pretense relevant to their own lives. How that happens in other historical examples, and how it might happen in our own time, will be the theme of next week’s post.