AODA Blog

The Externality Trap, or, How Progress Commits Suicide

Wed, 2015-02-25 19:14
I've commented more than once in these essays about the cooperative dimension of writing:  the way that even the most solitary of writers inevitably takes part in what Mortimer Adler used to call the Great Conversation, the flow of ideas and insights across the centuries that’s responsible for most of what we call culture. Sometimes that conversation takes place second- or third-hand—for example, when ideas from two old books collide in an author’s mind and give rise to a third book, which will eventually carry the fusion to someone else further down the stream of time—but sometimes it’s far more direct.
Last week’s post here brought an example of the latter kind. My attempt to cut through the ambiguities surrounding that slippery word “progress” sparked a lively discussion on the comments page of my blog about just exactly what counted as progress, what factors made one change “progressive” while another was denied that label. In the midst of it all, one of my readers—tip of the archdruidical hat to Jonathan—proposed an unexpected definition:  what makes a change qualify as progress, he suggested, is that it increases the externalization of costs. 
I’ve been thinking about that definition since Jonathan proposed it, and it seems to me that it points up a crucial and mostly unrecognized dimension of the crisis of our time. To make sense of it, though, it’s going to be necessary to delve briefly into economic jargon.
Economists use the term “externalities” to refer to the costs of an economic activity that aren’t paid by either party in an exchange, but are pushed off onto somebody else. You won’t hear a lot of talk about externalities these days; in many circles, it’s considered impolite to mention them, but they’re a pervasive presence in contemporary life, and play a very large role in some of the most intractable problems of our age. Some of those problems were discussed by Garrett Hardin in his famous essay on the tragedy of the commons, and more recently by Elinor Ostrom in her studies of how that tragedy can be avoided; still, I’m not sure how often it’s recognized that the phenomenon they discussed applies not just to commons systems, but to societies as a whole—especially to societies like ours.
An example may be useful here. Let’s imagine a blivet factory, which turns out three-prong, two-slot blivets in pallet loads for customers. The blivet-making process, like manufacturing of every other kind, produces waste as well as blivets, and we’ll assume for the sake of the example that blivet waste is moderately toxic and causes health problems in people who ingest it. The blivet factory produces one barrel of blivet waste for every pallet load of blivets it ships. The cheapest option for dealing with the waste, and thus the option that economists favor, is to dump it into the river that flows past the factory.
Notice what happens as a result of this choice. The blivet manufacturer has maximized his own benefit from the manufacturing process, by avoiding the expense of finding some other way to deal with all those barrels of blivet waste. His customers also benefit, because blivets cost less than they would if the cost of waste disposal was factored into the price. On the other hand, the costs of dealing with the blivet waste don’t vanish like so much twinkle dust; they are imposed on the people downstream who get their drinking water from the river, or from aquifers that receive water from the river, and who suffer from health problems because there’s blivet waste in their water. The blivet manufacturer is externalizing the cost of waste disposal; his increased profits are being paid for at a remove by the increased health care costs of everyone downstream.
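To make that arithmetic explicit, here is a minimal sketch in Python. It is my own illustration rather than part of the original example, and every dollar figure in it is an arbitrary assumption, chosen only to show how the cost moves rather than vanishes:

    # Toy accounting for one pallet load of blivets. Every figure is an
    # arbitrary assumption for illustration only.
    production_cost = 300.0   # what the manufacturer pays to make the pallet
    disposal_cost = 50.0      # proper disposal of the barrel of blivet waste
    downstream_harm = 200.0   # health costs borne downstream if the barrel is dumped

    price_with_disposal = production_cost + disposal_cost   # cost internalized
    price_with_dumping = production_cost                    # cost externalized

    print(f"price if waste is disposed of properly: ${price_with_disposal:.2f}")
    print(f"price if waste is dumped in the river:  ${price_with_dumping:.2f}")
    # The harm doesn't vanish; it's paid, at a remove, by the people
    # downstream rather than by the manufacturer or his customers.
    print(f"total social cost when the waste is dumped: ${production_cost + downstream_harm:.2f}")

Note, too, that under these assumptions the dumping manufacturer undercuts an honest competitor by fifty dollars a pallet; that detail will matter shortly.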
That’s how externalities work. Back in the days when people actually talked about the downsides of economic growth, there was a lot of discussion of how to handle externalities, and not just on the leftward end of the spectrum.  I recall a thoughtful book titled TANSTAAFL—that’s an acronym, for those who don’t know their Heinlein, for “There Ain’t No Such Thing As A Free Lunch”—which argued, on solid libertarian-conservative grounds, that the environment could best be preserved by making sure that everyone paid full sticker price for the externalities they generated. Today’s crop of pseudoconservatives, of course, turned their back on all this a long time ago, and insist at the top of their lungs on their allegedly God-given right to externalize as many costs as they possibly can.  This is all the more ironic in that most pseudoconservatives claim to worship a God who said some very specific things about “what ye do to the least of these,” but that’s a subject for a different post.
Economic life in the industrial world these days can be described, without too much inaccuracy, as an arrangement set up to allow a privileged minority to externalize nearly all their costs onto the rest of society while pocketing as much of the benefits as possible. That’s come in for a certain amount of discussion in recent years, but I’m not sure how many of the people who’ve participated in those discussions have given any thought to the role that technological progress plays in facilitating the internalization of benefits and the externalization of costs that drive today’s increasingly inegalitarian societies. Here again, an example will be helpful.
Before the invention of blivet-making machinery, let’s say, blivets were made by old-fashioned blivet makers, who hammered them out on iron blivet anvils in shops that were to be found in every town and village. Like other handicrafts, blivet-making was a living rather than a ticket to wealth; blivet makers invested their own time and muscular effort in their craft, and turned out enough in the way of blivets to meet the demand. Notice also the effect on the production of blivet waste. Since blivets were being made one at a time rather than in pallet loads, the total amount of waste was smaller; the conditions of handicraft production also meant that blivet makers and their families were more likely to be exposed to the blivet waste than anyone else, and so had an incentive to invest the extra effort and expense to dispose of it properly. Since blivet makers were ordinary craftspeople rather than millionaires, furthermore, they weren’t as likely to be able to buy exemption from local health laws.
The invention of the mechanical blivet press changed that picture completely.  Since one blivet press could do as much work as fifty blivet makers, the income that would have gone to those fifty blivet makers and their families went instead to one factory owner and his stockholders, with as small a share as possible set aside for the wage laborers who operate the blivet press. The factory owner and stockholders had no incentive to pay for the proper disposal of the blivet waste, either—quite the contrary, since having to meet the disposal costs cut into their profit, buying off local governments was much cheaper, and if the harmful effects of blivet waste were known, you can bet that the owner and shareholders all lived well upstream from the factory. 
Notice also that a blivet manufacturer who paid a living wage to his workers and covered the costs of proper waste disposal would have to charge a higher price for blivets than one who did neither, and thus would be driven out of business by his more ruthless competitor. Externalities aren’t simply made possible by technological progress, in other words; they’re the inevitable result of technological progress in a market economy, because externalizing the costs of production is in most cases the most effective way to outcompete rival firms, and the firm that succeeds in externalizing the largest share of its costs is the most likely to prosper and survive.
Each further step in the progress of blivet manufacturing, in turn, tightened the same screw another turn. Today, to finish up the metaphor, the entire global supply of blivets is made in a dozen factories in distant Slobbovia, where sweatshop labor under ghastly working conditions and the utter absence of environmental regulations make the business of blivet fabrication more profitable than anywhere else. The blivets are as shoddily made as possible; the entire blivet supply chain from the open-pit mines worked by slave labor that provide the raw materials to the big box stores with part-time, poorly paid staff selling blivetronic technology to the masses is a human and environmental disaster.  Every possible cost has been externalized, so that the two multinational corporations that dominate the global blivet industry can maintain their profit margins and pay absurdly high salaries to their CEOs.
That in itself is bad enough, but let’s broaden the focus to include the whole systems in which blivet fabrication takes place: the economy as a whole, society as a whole, and the biosphere as a whole. The impact of technology on blivet fabrication in a market economy has predictable and well understood consequences for each of these whole systems, which can be summed up precisely in the language we’ve already used. In order to maximize its own profitability and return on shareholder investment, the blivet industry externalizes costs in every available direction. Since nobody else wants to bear those costs, either, most of them end up being passed on to the whole systems just named, because the economy, society, and the biosphere have no voice in today’s economic decisions.
Like the costs of dealing with blivet waste, though, the other externalized costs of blivet manufacture don’t go away just because they’re externalized. As externalities increase, they tend to degrade the whole systems onto which they’re dumped—the economy, society, and the biosphere. This is where the trap closes tight, because blivet manufacturing exists within those whole systems, and can’t be carried out unless all three systems are sufficiently intact to function in their usual way. As those systems degrade, their ability to function degrades also, and eventually one or more of them breaks down—the economy plunges into a depression, the society disintegrates into anarchy or totalitarianism, the biosphere shifts abruptly into a new mode that lacks adequate rainfall for crops—and the manufacture of blivets stops because the whole system that once supported it has stopped doing so.
Notice how this works out from the perspective of someone who’s benefiting from the externalization of costs by the blivet industry—the executives and stockholders in a blivet corporation, let’s say. As far as they’re concerned, until very late in the process, everything is fine and dandy: each new round of technological improvements in blivet fabrication increases their profits, and if each such step in the onward march of progress also means that working class jobs are eliminated or offshored, democratic institutions implode, toxic waste builds up in the food chain, or what have you, hey, that’s not their problem—and after all, that’s just the normal creative destruction of capitalism, right?
That sort of insouciance is easy for at least three reasons. First, the impacts of externalities on whole systems can pop up a very long way from the blivet factories.  Second, in a market economy, everyone else is externalizing their costs as enthusiastically as the blivet industry, and so it’s easy for blivet manufacturers (and everyone else) to insist that whatever’s going wrong is not their fault.  Third, and most crucially, whole systems as stable and enduring as economies, societies, and biospheres can absorb a lot of damage before they tip over into instability. The process of externalization of costs can thus run for a very long time, and become entrenched as a basic economic habit, long before it becomes clear to anyone that continuing along the same route is a recipe for disaster.
Even when externalized costs have begun to take a visible toll on the economy, society, and the biosphere, furthermore, any attempt to reverse course faces nearly insurmountable obstacles. Those who profit from the existing order of things can be counted on to fight tooth and nail for the right to keep externalizing their costs: after all, they have to pay the full price for any reduction in their ability to externalize costs, while the benefits created by not imposing those costs on whole systems are shared among all participants in the economy, society, and the biosphere respectively. Nor is it necessarily easy to trace back the causes of any given whole-system disruption to specific externalities benefiting specific people or industries. It’s rather like loading hanging weights onto a chain; sooner or later, as the amount of weight hung on the chain goes up, the chain is going to break, but the link that breaks may be far from the last weight that pushed things over the edge, and every other weight on the chain made its own contribution to the end result.
A society that’s approaching collapse because too many externalized costs have been loaded onto the whole systems that support it thus shows certain highly distinctive symptoms. Things are going wrong with the economy, society, and the biosphere, but nobody seems to be able to figure out why; the measurements economists use to determine prosperity show contradictory results, with those that measure the profitability of individual corporations and industries giving much better readings than those that measure the performance of whole systems; the rich are convinced that everything is fine, while outside the narrowing circles of wealth and privilege, people talk in low voices about the rising spiral of problems that beset them from every side. If this doesn’t sound familiar to you, dear reader, you probably need to get out more.
At this point it may be helpful to sum up the argument I’ve developed here:
a) Every increase in technological complexity tends also to increase the opportunities for externalizing the costs of economic activity;
b) Market forces make the externalization of costs mandatory rather than optional, since economic actors that fail to externalize costs will tend to be outcompeted by those that do;
c) In a market economy, as all economic actors attempt to externalize as many costs as possible, externalized costs will tend to be passed on preferentially and progressively to whole systems such as the economy, society, and the biosphere, which provide necessary support for economic activity but have no voice in economic decisions;
d) Given unlimited increases in technological complexity, there is no necessary limit to the loading of externalized costs onto whole systems short of systemic collapse;
e) Unlimited increases in technological complexity in a market economy thus necessarily lead to the progressive degradation of the whole systems that support economic activity;
f) Technological progress in a market economy is therefore self-terminating, and ends in collapse.
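Since steps a) through f) amount to a claim about system dynamics, the trap can also be shown in motion. The following Python sketch is my own toy model of the argument, not anything from the essay itself; the firms, the imitation rule, and all the numbers are arbitrary assumptions, chosen only to make the feedback loop visible: the firm that externalizes the most wins the market, its competitors imitate it, and the dumped costs degrade the shared whole system until production becomes impossible.

    # Toy simulation of the externality trap. Every parameter is an
    # arbitrary illustrative assumption.

    def run(generations=30, base_cost=10.0, system_health=100.0):
        # Each firm is represented solely by the fraction of its costs
        # it externalizes onto the shared whole system.
        firms = [0.0, 0.2, 0.4, 0.6, 0.8]
        for gen in range(1, generations + 1):
            # A firm's price reflects only the share of cost it actually pays.
            prices = [base_cost * (1.0 - e) for e in firms]
            winner = firms[prices.index(min(prices))]  # cheapest firm takes the market
            dumped = base_cost * winner                # its externalized costs land on the whole system
            system_health -= dumped                    # ...which degrades accordingly
            print(f"generation {gen:2d}: winner externalizes {winner:.0%}, "
                  f"system health {system_health:6.1f}")
            if system_health <= 0:
                print("The whole system has broken down; blivet production halts.")
                break
            # Survivors imitate the winner, ratcheting externalization upward.
            firms = [min(1.0, e + 0.5 * (winner - e) + 0.05) for e in firms]

    run()

Run with these made-up numbers, the model breaks down after about a dozen generations. The exact figure is meaningless; the point is that at no step does any individual firm have an incentive to stop, which is precisely the shape of the trap just described.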
Now of course there are plenty of arguments that could be deployed against this modest proposal. For example, it could be argued that progress doesn’t have to generate a rising tide of externalities. The difficulty with this argument is that externalization of costs isn’t an accidental side effect of technology but an essential aspect—it’s not a bug, it’s a feature. Every technology is a means of externalizing some cost that would otherwise be borne by a human body. Even something as simple as a hammer takes the wear and tear that would otherwise affect the heel of your hand, let’s say, and transfers it to something else: directly, to the hammer; indirectly, to the biosphere, by way of the trees that had to be cut down to make the charcoal to smelt the iron, the plants that were shoveled aside to get the ore, and so on.
For reasons that are ultimately thermodynamic in nature, the more complex a technology becomes, the more costs it generates. To outcompete a simpler technology, therefore, each more complex technology has to externalize a significant proportion of those additional costs. In the case of such contemporary hypercomplex technosystems as the internet, the process of externalizing costs has gone so far, through so many tangled interrelationships, that it’s remarkably difficult to figure out exactly who’s paying for how much of the gargantuan inputs needed to keep the thing running. This lack of transparency feeds the illusion that large systems are cheaper than small ones, by making externalities of scale look like economies of scale.
It might be argued instead that a sufficiently stringent regulatory environment, forcing economic actors to absorb all the costs of their activities instead of externalizing them onto others, would be able to stop the degradation of whole systems while still allowing technological progress to continue. The difficulty here is that increased externalization of costs is what makes progress profitable. As just noted, all other things being equal, a complex technology will on average be more expensive in real terms than a simpler technology, for the simple fact that each additional increment of complexity has to be paid for by an investment of energy and other forms of real capital.
Strip complex technologies of the subsidies that transfer some of their costs to the government, the perverse regulations that transfer some of their costs to the rest of the economy, the bad habits of environmental abuse and neglect that transfer some of their costs to the biosphere, and so on, and pretty soon you’re looking at hard economic limits to technological complexity, as people forced to pay the full sticker price for complex technologies maximize their benefits by choosing simpler, more affordable options instead. A regulatory environment sufficiently strict to keep technology from accelerating to collapse would thus bring technological progress to a halt by making it unprofitable.
Notice, however, the flipside of the same argument: a society that chose to stop progressing technologically could maintain itself indefinitely, so long as its technologies weren’t dependent on nonrenewable resources or the like. The costs imposed by a stable technology on the economy, society, and the biosphere would be more or less stable, rather than increasing over time, and it would therefore be much easier to figure out how to balance out the negative effects of those externalities and maintain the whole system in a steady state.  Societies that treated technological progress as an option rather than a requirement, and recognized the downsides to increasing complexity, could also choose to reduce complexity in one area in order to increase it in another, and so on—or they could just raise a monument to the age of progress, and go do something else instead.
The logic suggested here requires a comprehensive rethinking of most of the contemporary world’s notions about technology, progress, and the good society. We’ll begin that discussion in future posts—after, that is, we discuss a second dimension of progress that came out of last week’s discussion.

What Progress Means

Wed, 2015-02-18 18:06
Last week’s post here on The Archdruid Report appears to have hit a nerve. That didn’t come as any sort of a surprise, admittedly.  It’s one thing to point out that going back to the simpler and less energy-intensive technologies of earlier eras could help extract us from the corner into which industrial society has been busily painting itself in recent decades; it’s quite another to point out that doing this can also be great fun, more so than anything that comes out of today’s fashionable technologies, and in a good many cases the results include an objectively better quality of life as well.
That’s not one of the canned speeches that opponents of progress are supposed to make. According to the folk mythology of modern industrial culture, since progress always makes things better, the foes of whatever gets labeled as progress are supposed to put on hair shirts and insist that everyone has to suffer virtuously from a lack of progress, for some reason based on sentimental superstition. The Pygmalion effect being what it is, it’s not hard to find opponents of progress who say what they’re expected to say, and thus fulfill their assigned role in contemporary culture, which is to stand there in their hair shirts bravely protesting until the steamroller of progress rolls right over them.
The grip of that particular bit of folk mythology on the collective imagination of our time is tight enough that when somebody brings up some other reason to oppose “progress”—we’ll get into the ambiguities behind that familiar label in a moment—a great many people quite literally can’t absorb what’s actually being said, and respond instead to the canned speeches they expect to hear. Thus I had several people attempt to dispute the comments on last week’s post, castigating my readers with varying degrees of wrath and profanity for thinking that they had to sacrifice the delights of today’s technology and go creeping mournfully back to the unsatisfying lifestyles of an earlier day.
That was all the more ironic in that none of the readers who were commenting on the post were saying anything of the kind. Most of them were enthusiastically talking about how much more durable, practical, repairable, enjoyable, affordable, and user-friendly older technologies are compared to the disposable plastic trash that fills the stores these days. They were discussing how much more fun it is to embrace the delights of outdated technologies than it would be to go creeping mournfully back—or forward, if you prefer—to the unsatisfying lifestyles of the present time. That heresy is far more than the alleged open-mindedness and intellectual diversity of our age is willing to tolerate, so it’s not surprising that some people tried to pretend that nothing of the sort had been said at all. What was surprising to me, and pleasantly so, was the number of readers who were ready to don the party clothes of some earlier time and join in the Butlerian carnival.
There are subtleties to the project of deliberate technological regress that may not be obvious at first glance, though, and it seems sensible to discuss those here before we proceed.  It’s important, to begin with, to remember that when talking heads these days babble about technology in the singular, as a uniform, monolithic thing that progresses according to some relentless internal logic of its own, they’re spouting balderdash.  In the real world, there’s no such monolith; instead, there are technologies in the plural, a great many of them, clustered more or less loosely in technological suites which may or may not have any direct relation to one another.
An example might be useful here. Consider the technologies necessary to build a steel-framed bicycle. The metal parts require the particular suite of technologies we use to smelt ores, combine the resulting metals into useful alloys, and machine and weld those into shapes that fit together to make a bicycle. The tires, inner tubes, brake pads, seat cushion, handlebar grips, and paint require a different suite of technologies drawing on various branches of applied organic chemistry, and a few other suites also have a place:  for example, the one that’s needed to make and apply lubricants. The suites that make a bicycle have other uses; if you can build a bicycle, as Orville and Wilbur Wright demonstrated, you can also build an aircraft, and a variety of other interesting machines as well; that said, there are other technologies—say, the ones needed to manufacture medicines, or precision optics, or electronics—that require very different technological suites. You can have everything you need to build a bicycle and still be unable to make a telescope or a radio receiver, and vice versa.
Strictly speaking, therefore, nothing requires the project of deliberate technological regress to move in lockstep to the technologies of a specific past date and stay there. It would be wholly possible to dump certain items of modern technology while keeping others. It would be just as possible to replace one modern technological suite with an older equivalent from one decade, another with an equivalent from a different decade, and so on. Imagine, for example, a future America in which solar water heaters (worked out by 1920) and passive solar architecture (mostly developed in the 1960s and 1970s) were standard household features, canal boats (dating from before 1800) and tall ships (ditto) were the primary means of bulk transport, shortwave radio (developed in the early 20th century) was the standard long-range communications medium, ultralight aircraft (largely developed in the 1980s) were still in use, and engineers crunched numbers using slide rules (perfected around 1880).
There’s no reason why such a pastiche of technologies from different eras couldn’t work. We know this because what passes for modern technology is a pastiche of the same kind, in which (for example) cars whose basic design dates from the 1890s are gussied up with onboard computers invented a century later. Much of modern technology, in fact, is old technology with a new coat of paint and a few electronic gimmicks tacked on, and it’s old technology that originated in many different eras, too. Part of what differentiates modern technology from older equivalents, in other words, is mere fashion. Another part, though, moves into more explosive territory.
In the conversation that followed last week’s post, one of my readers—tip of the archdruid’s hat to Cathy—recounted the story of the one and only class on advertising she took at college. The teacher invited a well-known advertising executive to come in and talk about the business, and one of the points he brought up was the marketing of disposable razors. The old-fashioned steel safety razor, the guy admitted cheerfully, was a much better product: it was more durable, less expensive, and gave a better shave than disposable razors. Unfortunately, it didn’t make the kind of profits for the razor industry that the latter wanted, and so the job of the advertising company was to convince shavers that they really wanted to spend more money on a worse product instead.
I know it may startle some people to hear a luxuriantly bearded archdruid talk about shaving, but I do have a certain amount of experience with the process—though admittedly it’s been a while. The executive was quite correct: an old-fashioned safety razor gives better shaves than a disposable. What’s more, an old-fashioned safety razor combined with a shaving brush, a cake of shaving soap, a mug and a bit of hot water from the teakettle produces a shaving experience that’s vastly better, in every sense, than what you’ll get from squirting cold chemical-laced foam out of a disposable can and then scraping your face with a disposable razor; the older method, furthermore, takes no more time, costs much less on a per-shave basis, and has a drastically smaller ecological footprint to boot.
Notice also the difference in the scale and complexity of the technological suites needed to maintain these two ways of shaving. To shave with a safety razor and shaving soap, you need the metallurgical suite that produces razors and razor blades, the very simple household-chemistry suite that produces soap, the ability to make pottery and brushes, and some way to heat water. To shave with a disposable razor and a can of squirt-on shaving foam, you need fossil fuels for plastic feedstocks, chemical plants to manufacture the plastic and the foam, the whole range of technologies needed to manufacture and fill the pressurized can, and so on—all so that you can count on getting an inferior shave at a higher price, and the razor industry can boost its quarterly profits.
That’s a small and arguably silly example of a vast and far from silly issue. These days, when you see the words “new and improved” on a product, rather more often than not, the only thing that’s been improved is the bottom line of the company that’s trying to sell it to you. When you hear equivalent claims about some technology that’s being marketed to society as a whole, rather than sold to you personally, the same rule applies at least as often. That’s one of the things that drove the enthusiastic conversations on this blog’s comment page last week, as readers came out of hiding to confess that they, too, had stopped using this or that piece of cutting-edge, up-to-date, hypermodern trash, and replaced it with some sturdy, elegant, user-friendly device from an earlier decade which works better and lacks the downsides of the newer item.
What, after all, defines a change as “progress”? There’s a wilderness of ambiguities hidden in that apparently simple word. The popular notion of progress presupposes that there’s an inherent dynamic to history, that things change, or tend to change, or at the very least ought to change, from worse to better over time.  That presupposition then gets flipped around into the even more dubious claim that just because something’s new, it must be better than whatever it replaced. Move from there to specific examples, and all of a sudden it’s necessary to deal with competing claims—if there are two hot new technologies on the market, is option A more progressive than option B, or vice versa? The answer, of course, is that whichever of them manages to elbow the other aside will be retroactively awarded the coveted title of the next step in the march of progress.
That was exactly the process by which the appropriate tech of the 1970s was shoved aside and buried in the memory hole of our culture. In its heyday, appropriate tech was as cutting-edge and progressive as anything you care to name, a rapidly advancing field pushed forward by brilliant young engineers and innovative startups, and it saw itself (and presented itself to the world) as the wave of the future. In the wake of the Reagan-Thatcher counterrevolution of the 1980s, though, it was retroactively stripped of its erstwhile status as an icon of progress and consigned to the dustbin of the past. Technologies that had been lauded in the media as brilliantly innovative in 1978 were thus being condemned in the same media as Luddite throwbacks by 1988. If that abrupt act of redefinition reminds any of my readers of the way history got rewritten in George Orwell’s 1984—“Oceania has never been allied with Eurasia” and the like—well, let’s just say the parallel was noticed at the time, too.
The same process on a much smaller scale can be traced with equal clarity in the replacement of the safety razor and shaving soap with the disposable razor and squirt-can shaving foam. In what sense is the latter, which wastes more resources and generates more trash in the process of giving users a worse shave at a higher price, more progressive than the former? Merely the fact that it’s been awarded that title by advertising and the media. If razor companies could make more money by reintroducing the Roman habit of scraping beard hairs off the face with a chunk of pumice, no doubt that would quickly be proclaimed as the last word in cutting-edge, up-to-date hypermodernity, too.
Behind the mythological image of the relentless and inevitable forward march of technology-in-the-singular in the grand cause of progress, in other words, lies a murky underworld of crass commercial motives and no-holds-barred struggles over which of the available technologies will get the funding and marketing that will define it as the next great step in progress. That’s as true of major technological programs as it is of shaving supplies. Some of my readers are old enough, as I am, to remember when supersonic airliners and undersea habitats were the next great steps in progress, until all of a sudden they weren’t.  We may not be all that far from the point at which space travel and nuclear power will go the way of Sealab and the Concorde.
In today’s industrial societies, we don’t talk about that. It’s practically taboo these days to mention the long, long list of waves of the future that abruptly stalled and rolled back out to sea without delivering on their promoters’ overblown promises. Remind people that the same rhetoric currently being used to prop up faith in space travel, nuclear power, or any of today’s other venerated icons of the religion of progress was lavished just as thickly on these earlier failures, and you can pretty much expect to have that comment shouted down as an irrelevancy if the other people in the conversation don’t simply turn their backs and pretend that they never heard you say anything at all.
They have to do something of the sort, because the alternative is to admit that what we call “progress” isn’t the impersonal, unstoppable force of nature that industrial culture’s ideology insists it must be. Pay attention to the grand technological projects that failed, compare them with those that are failing now, and it’s impossible to keep ignoring certain crucial if hugely unpopular points. To begin with, technological progress is a function of collective choices—do we fund Sealab or the Apollo program? Supersonic transports or urban light rail? Energy conservation and appropriate tech or an endless series of wars in the Middle East? No impersonal force makes those decisions; individuals and institutions make them, and then use the rhetoric of impersonal progress to cloak the political and financial agendas that guide the decision-making process.
What’s more, even if the industrial world chooses to invest its resources in a project, the laws of physics and economics determine whether the project is going to work. The Concorde is the poster child here, a technological success but an economic flop that never even managed to cover its operating costs. Like nuclear power, it was only viable given huge and continuing government subsidies, and since the strategic benefits Britain and France got from having Concordes in the air were nothing like so great as those they got from having an independent source of raw material for nuclear weapons, it’s not hard to see why the subsidies went where they did.
That is to say, when something is being lauded as the next great step forward in the glorious march of progress leading humanity to a better world, those who haven’t drunk themselves tipsy on folk mythology need to keep four things in mind. The first is that the next great step forward in the glorious march of progress (etc.) might not actually work when it’s brought down out of the billowing clouds of overheated rhetoric into the cold hard world of everyday life. The second is that even if it works, the next great step forward (etc.) may be a white elephant in economic terms, and survive only so long as it gets propped up by subsidies. The third is that even if it does make economic sense, the next great step (etc.) may be an inferior product, and do a less effective job of meeting human needs than whatever it’s supposed to replace. The fourth is that when it comes right down to it, to label something as the next great (etc.) is just a sales pitch, an overblown and increasingly trite way of saying “Buy this product!”
Those necessary critiques, in turn, are all implicit in the project of deliberate technological regress. Get past the thoughtstopping rhetoric that insists “you can’t turn back the clock”—to rephrase a comment of G.K. Chesterton’s, most people turn back the clock every fall, so that’s hardly a valid objection—and it becomes hard not to notice that “progress” is just a label for whatever choices happen to have been made by governments and corporations, with or without input from the rest of us. If we don’t like the choices that have been made for us in the name of progress, in turn, we can choose something else.
Now of course it’s possible to stuff that sort of thinking back into the straitjacket of progress, and claim that progress is chugging along just fine, and all we have to do is get it back on the proper track, or what have you. This is a very common sort of argument, and one that’s been used over and over again by critics of this or that candidate for the next (etc.). The problem with that argument, as I see it, is that it may occasionally win battles but it pretty consistently loses the war; by failing to challenge the folk mythology of progress and the agendas that are enshrined by that mythology, it guarantees that no matter what technology or policy or program gets put into place, it’ll end up leading to the same place as all the others before it, because it questions the means but forgets to question the goals.
That’s the trap hardwired into the contemporary faith in progress. Once you buy into the notion that the specific choices made by industrial societies over the last three centuries or so are something more than the projects that happened to win out in the struggle for wealth and power, once you let yourself believe that there’s a teleology to it all—that there’s some objectively definable goal called “progress” that all these choices did a better or worse job of furthering—you’ve just made it much harder to ask where this thing called “progress” is going. The word “progress,” remember, means going further in the same direction, and it’s precisely questions about the direction that industrial society is going that most need to be asked.
I’d like to suggest, in fact, that going further in the direction we’ve been going isn’t a particularly bright idea just now.  It isn’t even necessary to point to the more obviously self-destructive dimensions of business as usual. Look at any trend that affects your life right now, however global or local that trend may be, and extrapolate it out in a straight line indefinitely; that’s what going further in the same direction means. If that appeals to you, dear reader, then you’re certainly welcome to it.  I have to say it doesn’t do much for me.
It’s only from within the folk mythology of progress that we have no choice but to accept the endless prolongation of current trends. Right now, as individuals, we can choose to shrug and walk away from the latest hypermodern trash, and do something else instead. Later on, on the far side of the crisis of our time, it may be possible to take the same logic further, and make deliberate technological regress a recognized policy option for organizations, communities, and whole nations—but that will depend on whether individuals do the thing first, and demonstrate to everyone else that it’s a workable option. In next week’s post, we’ll talk more about where that strategy might lead.

The Butlerian Carnival

Wed, 2015-02-11 16:29
Over the last week or so, I’ve heard from a remarkable number of people who feel that a major crisis is in the offing. The people in question don’t know each other, many of them have even less contact with the mass media than I do, and the sense they’ve tried to express to me is inchoate enough that they’ve been left fumbling for words, but they all end up reaching for the same metaphors: that something in the air just now seems reminiscent of the American colonies in 1775, France in 1789, America in 1860, Europe in 1914, or the world in 1939: a sense of being poised on the brink of convulsive change, with the sound of gunfire and marching boots coming ever more clearly from the dimly seen abyss ahead.
It’s not an unreasonable feeling, all things considered. In Washington DC, Obama’s flunkies are beating the war drums over Ukraine, threatening to send shipments of allegedly “defensive” weapons to join the mercenaries and military advisors we’ve already not-so-covertly got over there. Russian officials have responded to American saber-rattling by stating flatly that a US decision to arm Kiev will be the signal for all-out war. The current Ukrainian regime, installed by a US-sponsored coup and backed by NATO, means to Russia precisely what a hostile Canadian government installed by a Chinese-sponsored coup and backed by the People’s Liberation Army would mean to the United States; if Obama’s trademark cluelessness leads him to ignore that far from minor point and decide that the Russians are bluffing, we could be facing a European war within weeks.
Head south and west from the fighting around Donetsk, and another flashpoint is heating up toward an explosion of its own just now. Yes, that would be Greece, where the new Syriza government has refused to back down from the promises that got it into office: promises that center on the rejection of the so-called “austerity” policies that have all but destroyed the Greek economy since they were imposed in 2009.  This shouldn’t be news to anyone; those same policies, though they’ve been praised to the skies by neoliberal economists for decades now as a guaranteed ticket to prosperity, have had precisely the opposite effect in every single country where they’ve been put in place.
Despite that track record of unbroken failure, the EU—in particular, Germany, which has benefited handsomely from the gutting of southern European economies—continues to insist that Greece must accept what amounts to a perpetual state of debt peonage. The Greek defense minister noted in response in a recent speech that if Europe isn’t willing to cut a deal, other nations might well do so. He’s quite correct; it’s probably a safe bet that cold-eyed men in Moscow and Beijing are busy right now figuring out how best to step through the window of opportunity the EU is flinging open for them. If they do so—well, I’ll leave it to my readers to consider how the US is likely to respond to the threat of Russian air and naval bases in Greece, which would be capable of projecting power anywhere in the eastern and central Mediterranean basin. Here again, war is a likely outcome; I hope that the Greek government is braced for an attempt at regime change.
That is to say, the decline and fall of industrial civilization is proceeding in the normal way, at pretty much the normal pace. The thermodynamic foundations tipped over into decline first, as stocks of cheap abundant fossil fuels depleted steadily and the gap had to be filled by costly and much less abundant replacements, driving down net energy; the economy went next, as more and more real wealth had to be pulled out of all other economic activities to keep the energy supply more or less steady, until demand destruction cut in and made that increasingly frantic effort moot; now a global political and military superstructure dependent on cheap abundant fossil fuels, and on the economic arrangement that all of that surplus energy made possible, is cracking at the seams.
One feature of times like these is that the number of people who can have an influence on the immediate outcome declines steadily as crisis approaches. In the years leading up to 1914, for example, a vast number of people contributed to the rising spiral of conflict between the aging British Empire and its German rival, but the closer war came, the narrower the circle of decision-makers became, until a handful of politicians in Germany, France, and Britain had the fate of Europe in their hands. A few more bad decisions, and the situation was no longer under anybody’s control; thereafter, the only option left was to let the juggernaut of the First World War roll mindlessly onward to its conclusion.
In the same way, as recently as the 1980s, many people in the United States and elsewhere had some influence on how the industrial age would end; unfortunately most of them backed politicians who cashed in the resources that could have built a better future on one last round of absurd extravagance, and a whole landscape of possibilities went by the boards. Step by step, as the United States backed itself further and further into a morass of short-term gimmicks with ghastly long-term consequences, the number of people who have had any influence on the trajectory we’re on has narrowed steadily, and as we approach what may turn out to be the defining crisis of our time, a handful of politicians in a handful of capitals are left to make the last decisions that can shape the situation in any way at all, before the tanks begin to roll and the fighter-bombers rise up from their runways.
Out here on the fringes of the collective conversation of our time, where archdruids lurk and heresies get uttered, the opportunity to shape events as they happen is a very rare thing. Our role, rather, is to set agendas for the future, to take ideas that are unthinkable in the mainstream today and prepare them for their future role as the conventional wisdom of eras that haven’t dawned yet. Every phrase on the lips of today’s practical men of affairs, after all, was once a crazy notion taken seriously only by the lunatic fringe—yes, that includes democracy, free-market capitalism, and all the other shibboleths of our age. 
With that in mind, while we wait to see whether today’s practical men of affairs stumble into war the way they did in 1914, I propose to shift gears and talk about something else—something that may seem whimsical, even pointless, in the light of the grim martial realities just discussed. It’s neither whimsical nor pointless, as it happens, but the implications may take a little while to dawn even on those of my readers who’ve been following the last few years of discussions most closely. Let’s begin with a handful of data points.
Item: Britain’s largest bookseller recently noted that sales of the Kindle e-book reader have dropped like a rock in recent months, while sales of old-fashioned printed books are up. Here in the more gizmocentric USA, e-books retain more of their erstwhile popularity, but the bloom is off the rose; among the young and hip, it’s not hard at all to find people who got rid of their book collections in a rush of enthusiasm when e-books came out, regretted the action after it was too late, and now are slowly restocking their bookshelves while their e-book readers collect cobwebs or, at best, find use as a convenience for travel and the like.
Item: more generally, a good many of the hottest new trends in popular culture aren’t new trends at all—they’re old trends revived, in many cases, by people who weren’t even alive to see them the first time around. Kurt B. Reighley’s lively guide The United States of Americana was the first, and remains the best, introduction to the phenomenon, one that embraces everything from burlesque shows and homebrewed bitters to backyard chickens and the revival of Victorian martial arts. One pervasive thread that runs through the wild diversity of this emerging subculture is the simple recognition that many of these older things are better, in straightforwardly measurable senses, than their shiny modern mass-marketed not-quite-equivalents.
Item: within that subculture, a small but steadily growing number of people have taken the principle to its logical extreme and adopted the lifestyles and furnishings of an earlier decade wholesale in their personal lives. The 1950s are a common target, and so far as I know, adopters of 1950s culture are the furthest along the process of turning into a community, but other decades are increasingly finding the same kind of welcome among those less than impressed by what today’s society has on offer. Meanwhile, the reenactment scene has expanded spectacularly in recent years from the standard hearty fare of Civil War regiments and the neo-medievalism of the Society for Creative Anachronism to embrace almost any historical period you care to name. These aren’t merely dress-up games; go to a buckskinner’s rendezvous or an outdoor SCA event, for example, and you’re as likely as not to see handspinners turning wool into yarn with drop spindles, a blacksmith or two laboring over a portable forge, and the like.
Other examples of the same broad phenomenon could be added to the list, but these will do for now. I’m well aware, of course, that most people—even most of my readers—will have dismissed the things just listed as bizarre personal eccentricities, right up there with the goldfish-swallowing and flagpole-sitting of an earlier era. I’d encourage those of my readers who had that reaction to stop, take a second look, and tease out the mental automatisms that make that dismissal so automatic a part of today’s conventional wisdom. Once that’s done, a third look might well be in order, because the phenomenon sketched out here marks a shift of immense importance for our future.
For well over two centuries now, since it first emerged as the crackpot belief system of a handful of intellectuals on the outer fringes of their culture, the modern ideology of progress has taken it as given that new things were by definition better than whatever they replaced.  That assumption stands at the heart of contemporary industrial civilization’s childlike trust in the irreversible cumulative march of progress toward a future among the stars. Finding ways to defend that belief even when it obviously wasn’t true—when the latest, shiniest products of progress turned out to be worse in every meaningful sense than the older products they elbowed out of the way—was among the great growth industries of the 20th century; even so, there were plenty of cases where progress really did seem to measure up to its billing. Given the steady increases of energy per capita in the world’s industrial nations over the last century or so, that was a predictable outcome.
The difficulty, of course, is that the number of cases where new things really are better than what they replace has been shrinking steadily in recent decades, while the number of cases where old products are quite simply better than their current equivalents—easier to use, more effective, more comfortable, less prone to break, less burdened with unwanted side effects and awkward features, and so on—has been steadily rising. Back behind the myth of progress, like the little man behind the curtain in The Wizard of Oz, stand two unpalatable and usually unmentioned realities. The first is that profits, not progress, determine which products get marketed and which get roundfiled; the second is that making a cheaper, shoddier product and using advertising gimmicks to sell it anyway has been the standard marketing strategy across a vast range of American businesses for years now.
More generally, believers in progress used to take it for granted that progress would sooner or later bring about a world where everyone would live exciting, fulfilling lives brimful of miracle products and marvelous experiences. You still hear that sort of talk from the faithful now and then these days, but it’s coming to sound a lot like all that talk about the glorious worker’s paradise of the future did right around the time the Iron Curtain came down for good. In both cases, the future that was promised didn’t have much in common with the one that actually showed up. The one we got doesn’t have some of the nastier features of the one the former Soviet Union and its satellites produced—well, not yet, at least—but the glorious consumer’s paradise described in such lavish terms a few decades back got lost on the way to the spaceport, and what we got instead was a bleak landscape of decaying infrastructure, abandoned factories, prostituted media, and steadily declining standards of living for everyone outside the narrowing circle of the privileged, with the remnants of our once-vital democratic institutions hanging above it all like rotting scarecrows silhouetted against a darkening sky.
In place of those exciting, fulfilling lives mentioned above, furthermore, we got the monotony and stress of long commutes, cubicle farms, and would-you-like-fries-with-that for the slowly shrinking fraction of our population who can find a job at all. The Onion, with its usual flair for packaging unpalatable realities in the form of deadpan humor, nailed it a few days ago with a faux health-news article announcing that the best thing office workers could do for their health is stand up at their desk, leave the office, and never go back. Joke or not, it’s not bad advice; if you have a full-time job in today’s America, the average medieval peasant had a less stressful job environment and more days off than you do; he also kept a larger fraction of the product of his labor than you’ll ever see.
Then, of course, if you’re like most Americans, you’ll numb yourself once you get home by flopping down on the sofa and spending most of your remaining waking hours staring at little colored pictures on a glass screen. It’s remarkable how many people get confused about what this action really entails. They insist that they’re experiencing distant places, traveling in worlds of pure imagination, and so on through the whole litany of self-glorifying drivel the mass media likes to employ in its own praise. Let us please be real: when you watch a program about the Amazon rain forest, you’re not experiencing the Amazon rain forest; you’re experiencing colored pictures on a screen, and you’re only getting as much of the experience as fits through the narrow lens of a video camera and the even narrower filter of the production process. The difference between experiencing something and watching it on TV or the internet, that is to say, is precisely the same as the difference between making love and watching pornography; in each case, the latter is a very poor substitute for the real thing.
For most people in today’s America, in other words, the closest approach to the glorious consumer’s paradise of the future they can expect to get is eight hours a day, five days a week of mindless, monotonous work under the constant pressure of management efficiency experts, if they’re lucky enough to get a job at all, with anything up to a couple of additional hours commuting and any off-book hours the employer happens to choose to demand from them thrown into the deal, in order to get a paycheck that buys a little less each month—inflation is under control, the government insists, but prices somehow keep going up—of products that get more cheaply made, more likely to be riddled with defects, and more likely to pose a serious threat to the health and well-being of their users, with every passing year. Then they can go home and numb their nervous systems with those little colored pictures on the screen, showing them bland little snippets of experiences they will never have, wedged in there between the advertising.
That’s the world that progress has made. That’s the shining future that resulted from all those centuries of scientific research and technological tinkering, all the genius and hard work and sacrifice that have gone into the project of progress. Of course there’s more to the consequences of progress than that; progress has saved quite a few children from infectious diseases, and laced the environment with so many toxic wastes that childhood cancer, all but unheard of in 1850, is a routine event today; it’s made impressive contributions to human welfare, while flooding the atmosphere with greenhouse gases that will soon make far more impressive contributions to human suffering and death—well, I could go on along these lines for quite a while. True believers in the ideology of perpetual progress like to insist that all the good things ought to be credited to progress while all the bad things ought to be blamed on something else, but that’s not so plausible an article of faith as it once was, and it bids fair to become a great deal less common as the downsides of progress become more and more difficult to ignore.
The data points I noted earlier in this week’s post, I’ve come to believe, are symptoms of that change, the first stirrings of wind that tell of the storm to come. People searching for a better way of living than the one our society offers these days are turning to the actual past, rather than to some imaginary future, in that quest. That’s the immense shift I mentioned earlier. What makes it even more momentous is that by and large, it’s not being done in the sort of grim Puritanical spirit of humorless renunciation that today’s popular culture expects from those who want something other than what the consumer economy has on offer. It’s being done, rather, in a spirit of celebration.
One of my readers responded to my post  two weeks ago on deliberate technological regress by suggesting that I was proposing a Butlerian jihad of sorts. (Those of my readers who don’t get the reference should pick up a copy of Frank Herbert’s iconic SF novel Dune and read it.) I demurred, for two reasons. First, the Butlerian jihad in Herbert’s novel was a revolt against computer technology, and I see no need for that; once the falling cost of human labor intersects the rising cost of energy and technology, and it becomes cheaper to hire file clerks and accountants than to maintain the gargantuan industrial machine that keeps computer technology available, computers will go away, or linger as a legacy technology for a narrowing range of special purposes until the hardware finally burns out.
The second reason, though, is the more important. I’m not a fan of jihads, or of holy wars of any flavor; history shows all too well that when you mix politics and violence with religion, any actual religious content vanishes away, leaving its castoff garments to cover the naked rule of force and fraud. If you want people to embrace a new way of looking at things, furthermore, violence, threats, and abusive language don’t work, and it’s even less effective to offer that new way as a ticket to virtuous misery, along the lines of the Puritanical spirit noted above. That’s why so much of the green-lifestyle propaganda of the last thirty years has done so little good—so much of it has been pitched as a way to suffer self-righteously for the good of Gaia, and while that approach appeals to a certain number of wannabe martyrs, that’s not a large enough fraction of the population to matter.
The people who are ditching their Kindles and savoring books as physical objects, brewing their own beer and resurrecting other old arts and crafts, reformatting their lives in the modes of a past decade, or spending their spare time reconnecting with the customs and technologies of an earlier time—these people aren’t doing any of those things out of some passion for self-denial. They’re doing them because these things bring them delights that the shoddy mass-produced lifestyles of the consumer economy can’t match. What these first stirrings suggest to me is that the way forward isn’t a Butlerian jihad, but a Butlerian carnival—a sensuous celebration of the living world outside the cubicle farms and the glass screens, which will inevitably draw most of its raw materials from eras, technologies, and customs of the past, which don’t require the extravagant energy and resource inputs that the modern consumer economy demands, and so will be better suited to a future defined by scarce energy and resources.
The Butlerian carnival isn’t the only way to approach the deliberate technological regression we need to carry out in the decades ahead, but it’s an important one. In upcoming posts, I’ll talk more about how this and other avenues to the same goal might be used to get through the mess immediately ahead, and start laying foundations for a future on the far side of the crises of our time.

As Night Closes In

Wed, 2015-02-04 17:47
I was saddened to learn a few days ago, via a phone call from a fellow author, that William R. Catton Jr. died early last month, just short of his 89th birthday. Some of my readers will have no idea who he was; others may dimly recall that I’ve mentioned him and his most important book, Overshoot, repeatedly in these essays. Those who’ve taken the time to read the book just named may be wondering why none of the sites in the peak oil blogosphere has put up an obituary, or even noted the man’s passing. I don’t happen to know the answer to that last question, though I have my suspicions.
I encountered Overshoot for the first time in a college bookstore in Bellingham, Washington in 1983. Red letters on a stark yellow spine spelled out the title, a word I already knew from my classes in ecology and systems theory; I pulled it off the shelf, and found the future staring me in the face. This is what’s on the front cover below the title:
carrying capacity: maximum permanently supportable load.
cornucopian myth: euphoric belief in limitless resources.
drawdown: stealing resources from the future.
cargoism: delusion that technology will always save us from
overshoot: growth beyond an area’s carrying capacity, leading to
crash: die-off.
If you want to know where I got the core ideas I’ve been exploring in these essays for the last eight-going-on-nine years, in other words, now you know. I still have that copy of Overshoot; it’s sitting on the desk in front of me right now, reminding me yet again just how many chances we had to turn away from the bleak future that’s closing in around us now, like the night at the end of a long day.
Plenty of books in the 1970s and early 1980s applied the lessons of ecology to the future of industrial civilization and picked up at least part of the bad news that results. Overshoot was arguably the best of the lot, but it was pretty much guaranteed to land even deeper in the memory hole than the others. The difficulty was that Catton’s book didn’t pander to the standard mythologies that still beset any attempt to make sense of the predicament we’ve made for ourselves; it provided no encouragement to what he called cargoism, the claim that technological progress will inevitably allow us to have our planet and eat it too, without falling off the other side of the balance into the sort of apocalyptic daydreams that Hollywood loves to make into bad movies. Instead, in calm, crisp, thoughtful prose, he explained how industrial civilization was cutting its own throat, how far past the point of no return we’d already gone, and what had to be done in order to salvage anything from the approaching wreck.
As I noted in a post here in 2011, I had the chance to meet Catton at an ASPO conference, and tried to give him some idea of how much his book had meant to me. I did my best not to act like a fourteen-year-old fan meeting a rock star, but I’m by no means sure that I succeeded. We talked for fifteen minutes over dinner; he was very gracious; then things moved on, each of us left the conference to carry on with our lives, and now he’s gone. As the old song says, that’s the way it goes.
There’s much more that could be said about William Catton, but that task should probably be left for someone who knew the man as a teacher, a scholar, and a human being. I didn’t; except for that one fifteen-minute conversation, I knew him solely as the mind behind one of the books that helped me make sense of the world, and then kept me going on the long desert journey through the Reagan era, when most of those who claimed to be environmentalists over the previous decade cashed in their ideals and waved around the cornucopian myth as their excuse for that act. Thus I’m simply going to urge all of my readers who haven’t yet read Overshoot to do so as soon as possible, even if they have to crawl on their bare hands and knees over abandoned fracking equipment to get a copy. Having said that, I’d like to go on to the sort of tribute I think he would have appreciated most: an attempt to take certain of his ideas a little further than he did.
The core of Overshoot, which is also the core of the entire world of appropriate technology and green alternatives that got shot through the head and shoved into an unmarked grave in the Reagan years, is the recognition that the principles of ecology apply to industrial society just as much as they do to other communities of living things. It’s odd, all things considered, that this is such a controversial proposal. Most of us have no trouble grasping the fact that the law of gravity affects human beings the same way it affects rocks; most of us understand that other laws of nature really do apply to us; but quite a few of us seem to be incapable of extending that same sensible reasoning to one particular set of laws, the ones that govern how communities of living things relate to their environments.
If people treated gravity the way they treat ecology, you could visit a news website any day of the week and read someone insisting with a straight face that while it’s true that rocks fall down when dropped, human beings don’t—no, no, they fall straight up into the sky, and anyone who thinks otherwise is so obviously wrong that there’s no point even discussing the matter. That degree of absurdity appears every single day in the American media, and in ordinary conversations as well, whenever ecological issues come up. Suggest that a finite planet must by definition contain a finite amount of fossil fuels, that dumping billions of tons of gaseous trash into the air every single year for centuries might change the way that the atmosphere retains heat, or that the law of diminishing returns might apply to technology the way it applies to everything else, and you can pretty much count on being shouted down by those who, for all practical purposes, might as well believe that the world is flat.
Still, as part of the ongoing voyage into the unspeakable in which this blog is currently engaged, I’d like to propose that, in fact, human societies are as subject to the laws of ecology as they are to every other dimension of natural law. That act of intellectual heresy implies certain conclusions that are acutely unwelcome in most circles just now; still, as my regular readers will have noticed long since, that’s just one of the services this blog offers.
Let’s start with the basics. Every ecosystem, in thermodynamic terms, is a process by which relatively concentrated energy is dispersed into diffuse background heat. Here on Earth, at least, the concentrated energy mostly comes from the Sun, in the form of solar radiation—there are a few ecosystems, in deep oceans and underground, that get their energy from chemical reactions driven by the Earth’s internal heat instead. Ilya Prigogine showed some decades back that the flow of energy through a system of this sort tends to increase the complexity of the system; Jeremy England, an MIT physicist, has recently shown that the same process accounts neatly for the origin of life itself. The steady flow of energy from source to sink is the foundation on which everything else rests.
The complexity of the system, in turn, is limited by the rate at which energy flows through the system, and this in turn depends on the difference in concentration between the energy that enters the system, on the one hand, and the background into which waste heat diffuses when it leaves the system, on the other. That shouldn’t be a difficult concept to grasp. Not only is it basic thermodynamics, it’s basic physics—it’s precisely equivalent, in fact, to pointing out that the rate at which water flows through any section of a stream depends on the difference in height between the place where the water flows into that section and the place where it flows out.
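For readers who like numbers with their analogies, here’s a minimal sketch in Python of the hydraulic version of that point, using the standard formula for the power of falling water, P = ρgQΔh; the function name is mine, and the flow rate and height drops are invented values chosen purely for illustration:

    RHO_WATER = 1000.0  # density of water, kg per cubic meter
    G = 9.81            # gravitational acceleration, meters per second squared

    def hydraulic_power_watts(flow_m3_per_s, head_drop_m):
        # Maximum power available from water falling through a given height:
        # P = rho * g * Q * delta-h
        return RHO_WATER * G * flow_m3_per_s * head_drop_m

    # The same two cubic meters per second of flow, across three different drops:
    for drop_m in (1.0, 5.0, 10.0):
        print(f"{drop_m:4.1f} m drop -> {hydraulic_power_watts(2.0, drop_m):8.0f} W")

Ten times the drop, ten times the power; shrink the difference in height toward zero and the water can do essentially no work, no matter how much of it is moving. Swap energy concentration for height and you have the ecological case.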
Simple as it is, it’s a point that an astonishing number of people—including some who are scientifically literate—routinely miss. A while back on this blog, for example, I noted that one of the core reasons you can’t power a modern industrial civilization on solar energy is that sunlight is relatively diffuse as an energy source, compared to the extremely concentrated energy we get from fossil fuels. I still field rants from people insisting that this is utter hogwash, since photons have exactly the same amount of energy they did when they left the Sun, and so the energy they carry must be just as concentrated as ever. You’ll notice, though, that if this were the only variable that mattered, Neptune would be just as warm as Mercury, since each of the photons hitting the one planet packs on average the same energetic punch as those that hit the other.
It’s hard to think of a better example of the blindness to whole systems that’s pandemic in today’s geek culture. Obviously, the difference between the temperatures of Neptune and Mercury isn’t a function of the energy of individual photons hitting the two worlds; it’s a function of differing concentrations of photons—the number of them, let’s say, hitting a square meter of each planet’s surface. This is also one of the two figures that matter when we’re talking about solar energy here on Earth. The other? That’s the background heat into which waste energy disperses when the system, eco- or solar, is done with it. On the broadest scale, that’s deep space, but ecosystems don’t funnel their waste heat straight into orbit, you know. Rather, they diffuse it into the ambient temperature at whatever height above or below sea level, and whatever latitude closer or further from the equator, they happen to be—and since that’s heated by the Sun, too, the difference between input and output concentrations isn’t very substantial.
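Anyone who wants to check the Mercury-and-Neptune point can do the arithmetic at home; it’s nothing more exotic than the inverse square law. Here’s a quick sketch in Python, using standard reference values for the Sun’s total output and the two planets’ average distances from it:

    import math

    SOLAR_LUMINOSITY = 3.846e26  # total power output of the Sun, in watts
    AU = 1.496e11                # meters in one astronomical unit

    def solar_flux(distance_au):
        # Sunlight spreads over a sphere of radius d, so the watts crossing
        # each square meter fall off as 1/d^2.
        d = distance_au * AU
        return SOLAR_LUMINOSITY / (4.0 * math.pi * d ** 2)

    mercury = solar_flux(0.387)   # roughly 9,100 W per square meter
    neptune = solar_flux(30.07)   # roughly 1.5 W per square meter
    print(f"Mercury: {mercury:7.1f} W/m^2")
    print(f"Neptune: {neptune:7.1f} W/m^2")
    print(f"Ratio:   {mercury / neptune:7.0f} to 1")

Same photons, but something like a six-thousandfold difference in how many of them arrive on each square meter, and that, not the energy of the individual photon, is what sets the two planets’ temperatures apart.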
Nature has done astonishing things with that very modest difference in concentration. People who insist that photosynthesis is horribly inefficient, and of course we can improve its efficiency, are missing a crucial point: something like half the energy that reaches the leaves of a green plant from the Sun is put to work lifting water up from the roots by an ingenious form of evaporative pumping, in which water sucked out through the leaf pores as vapor draws up more water through a network of tiny tubes in the plant’s stems. Another few per cent goes into the manufacture of sugars by photosynthesis, and a variety of minor processes, such as the chemical reactions that ripen fruit, also depend to some extent on light or heat from the Sun; all told, a green plant is probably about as efficient in its total use of solar energy as the laws of thermodynamics will permit. 
What’s more, the Earth’s ecosystems take the energy that flows through the green engines of plant life and put it to work in an extraordinary diversity of ways. The water pumped into the sky by what botanists call evapotranspiration—that’s the evaporative pumping I mentioned a moment ago—plays critical roles in local, regional, and global water cycles. The production of sugars to store solar energy in chemical form kicks off an even more intricate set of changes, as the plant’s cells are eaten by something, which is eaten by something, and so on through the lively but precise dance of the food web. Eventually all the energy the original plant scooped up from the Sun turns into diffuse waste heat and permeates slowly up through the atmosphere to its ultimate destiny warming some corner of deep space a bit above absolute zero, but by the time it gets there, it’s usually had quite a ride.
That said, there are hard upper limits to the complexity of the ecosystem that these intricate processes can support. You can see that clearly enough by comparing a tropical rain forest to a polar tundra. The two environments may have approximately equal amounts of precipitation over the course of a year; they may have an equally rich or poor supply of nutrients in the soil; even so, the tropical rain forest can easily support fifteen or twenty thousand species of plants and animals, and the tundra will be lucky to support a few hundred. Why? The same reason Mercury is warmer than Neptune: the rate at which photons from the sun arrive in each place per square meter of surface.
Near the equator, the Sun’s rays fall almost vertically.  Close to the poles, since the Earth is round, the Sun’s rays come in at a shallow angle, and thus are spread out over more surface area. The ambient temperature’s quite a bit warmer in the rain forest than it is on the tundra, but because the vast heat engine we call the atmosphere pumps heat from the equator to the poles, the difference in ambient temperature is not as great as the difference in solar input per square meter. Thus ecosystems near the equator have a greater difference in energy concentration between input and output than those near the poles, and the complexity of the two systems varies accordingly.
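The geometry is easy enough to put into numbers. A minimal sketch, which ignores the atmosphere and the seasons and considers only local noon at the equinox, so that the angle of the incoming rays is set by latitude alone:

    import math

    SOLAR_CONSTANT = 1361.0  # W per square meter at the top of the atmosphere

    def noon_equinox_flux(latitude_deg):
        # At noon on the equinox, a beam that covers one square meter at the
        # equator spreads over 1/cos(latitude) square meters of ground.
        return SOLAR_CONSTANT * math.cos(math.radians(latitude_deg))

    for lat in (0, 30, 60, 80):
        print(f"latitude {lat:2d} -> {noon_equinox_flux(lat):6.0f} W/m^2")
    # 0 -> 1361, 30 -> 1179, 60 -> 681, 80 -> 236

At sixty degrees of latitude the same sunlight covers twice the ground it does at the equator, and the energy available to each square meter of tundra drops by half before the long polar winters even enter the picture.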
All this should be common knowledge. Of course it isn’t, because the industrial world’s notions of education consistently ignore what William Catton called “the processes that matter”—that is, the fundamental laws of ecology that frame our existence on this planet—and approach a great many of those subjects that do make it into the curriculum in ways that encourage the most embarrassing sort of ignorance about the natural processes that keep us all alive. Down the road a bit, we’ll be discussing that in much more detail. For now, though, I want to take the points just made and apply them systematically, in much the way Catton did, to the predicament of industrial civilization.
A human society is an ecosystem.  Like any other ecosystem, it depends for its existence on flows of energy, and as with any other ecosystem, the upper limit on its complexity depends ultimately on the difference in concentration between the energy that enters it and the background into which its waste heat disperses. (This last point is a corollary of White’s Law, one of the fundamental principles of human ecology, which holds that a society’s economic development is directly proportional to its consumption of energy per capita.)  Until the beginning of the industrial revolution, that upper limit was not much higher than the upper limit of complexity in other ecosystems, since human ecosystems drew most of their energy from the same source as nonhuman ones: sunlight falling on green plants.  As human societies figured out how to tap other flows of solar energy—windpower to drive windmills and send ships coursing over the seas, water power to turn mills, and so on—that upper limit crept higher, but not dramatically so.
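For those who like their laws written out, the White’s Law just cited is usually given schematically as

    C = E × T

where C is the degree of a society’s development, E the energy harnessed per capita per year, and T the efficiency of the tools that put that energy to work; hold T roughly constant, and development rises and falls with energy per capita.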
The discoveries that made it possible to turn fossil fuels into mechanical energy transformed that equation completely. The geological processes that stockpiled half a billion years of sunlight into coal, oil, and natural gas boosted the concentration of the energy inputs available to industrial societies by an almost unimaginable factor, without warming the ambient temperature of the planet more than a few degrees, and the huge differentials in energy concentration that resulted drove an equally unimaginable increase in complexity. Choose any measure of complexity you wish—number of discrete occupational categories, average number of human beings involved in the production, distribution, and consumption of any given good or service, or what have you—and in the wake of the industrial revolution, it soared right off the charts. Thermodynamically, that’s exactly what you’d expect.
The difference in energy concentration between input and output, it bears repeating, defines the upper limit of complexity. Other variables determine whether or not the system in question will achieve that upper limit. In the ecosystems we call human societies, knowledge is one of those other variables. If you have a highly concentrated energy source and don’t yet know how to use it efficiently, your society isn’t going to become as complex as it otherwise could. Over the three centuries of industrialization, as a result, the production of useful knowledge was a winning strategy, since it allowed industrial societies to rise steadily toward the upper limit of complexity defined by the concentration differential. The limit was never reached—the law of diminishing returns saw to that—and so, inevitably, industrial societies ended up believing that knowledge all by itself was capable of increasing the complexity of the human ecosystem. Since there’s no upper limit to knowledge, in turn, that belief system drove what Catton called the cornucopian myth, the delusion that there would always be enough resources if only the stock of knowledge increased quickly enough.
That belief only seemed to work, though, as long as the concentration differential between energy inputs and the background remained very high. Once easily accessible fossil fuels started to become scarce, and more and more energy and other resources had to be invested in the extraction of what remained, problems started to crop up. Tar sands and oil shales in their natural form are not as concentrated an energy source as light sweet crude—once they’re refined, sure, the differences are minimal, but a whole system analysis of energy concentration has to start at the moment each energy source enters the system. Take a cubic yard of tar sand fresh from the pit mine, with the sand still in it, or a cubic yard of oil shale with the oil still trapped in the rock, and you’ve simply got less energy per unit volume than you do if you’ve got a cubic yard of light sweet crude fresh from the well, or even a cubic yard of good permeable sandstone with light sweet crude oozing out of every pore.
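A back-of-envelope calculation shows the scale of the difference. The figures in this Python sketch are round, illustrative numbers of the sort commonly quoted for Alberta-type oil sands; treat them as assumptions for the sake of the arithmetic, not as field data. I’ve used cubic meters rather than cubic yards to keep the units simple:

    CRUDE_MJ_PER_L = 38.0        # assumed energy density of light sweet crude
    BARREL_L = 159.0             # liters in one barrel
    SAND_T_PER_BARREL = 2.0      # assumed tonnes of mined sand per barrel of bitumen
    SAND_DENSITY_T_M3 = 2.0      # assumed tonnes of oil sand per cubic meter

    # A handy coincidence of metric units: MJ per liter equals GJ per cubic meter.
    crude_gj_per_m3 = CRUDE_MJ_PER_L
    barrel_gj = CRUDE_MJ_PER_L * BARREL_L / 1000.0            # ~6 GJ in a barrel
    sand_m3_per_barrel = SAND_T_PER_BARREL / SAND_DENSITY_T_M3
    sand_gj_per_m3 = barrel_gj / sand_m3_per_barrel

    print(f"light sweet crude: {crude_gj_per_m3:5.1f} GJ per cubic meter")
    print(f"raw tar sand:      {sand_gj_per_m3:5.1f} GJ per cubic meter, before processing")

Six gigajoules per cubic meter as against thirty-eight or so, and that’s before a single joule has been spent steaming the bitumen loose from the sand; that processing energy has to come out of the total before anything else gets done.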
It’s an article of faith in contemporary culture that such differences don’t matter, but that’s just another aspect of our cornucopian myth. The energy needed to get the sand out of the tar sands or the oil out of the shale oil has to come from somewhere, and that energy, in turn, is not available for other uses. The result, however you slice it conceptually, is that the upper limit of complexity begins moving down. That sounds abstract, but it adds up to a great deal of very concrete misery, because as already noted, the complexity of a society determines such things as the number of different occupational specialties it can support, the number of employees who are involved in the production and distribution of a given good or service, and so on. There’s a useful phrase for a sustained contraction in the usual measures of complexity in a human ecosystem: “economic depression.”
The economic troubles that are shaking the industrial world more and more often these days, in other words, are symptoms of a disastrous mismatch between the level of complexity that our remaining concentration differential can support, and the level of complexity that our preferred ideologies insist we ought to have. As those two things collide, there’s no question which of them is going to win. Adding to our total stock of knowledge won’t change that result, since knowledge is a necessary condition for economic expansion but not a sufficient one: if the upper limit of complexity set by the laws of thermodynamics drops below the level that your knowledge base would otherwise support, further additions to the knowledge base simply mean that there will be a growing number of things that people know how to do in theory, but that nobody has the resources to do in practice.
Knowledge, in other words, is not a magic wand, a surrogate messiah, or a source of miracles. It can open the way to exploiting energy more efficiently than otherwise, and it can figure out how to use energy resources that were not previously being used at all, but it can’t conjure energy out of thin air. Even if the energy resources are there, for that matter, if other factors prevent them from being used, the knowledge of how they might be used offers no consolation—quite the contrary.
That latter point, I think, sums up the tragedy of William Catton’s career. He knew, and could explain with great clarity, why industrialism would bring about its own downfall, and what could be done to salvage something from its wreck. That knowledge, however, was not enough to make things happen; only a few people ever listened, most of them promptly plugged their ears and started chanting “La, la, la, I can’t hear you” once Reagan made that fashionable, and the actions that might have spared all of us a vast amount of misery never happened. When I spoke to him in 2011, he was perfectly aware that his life’s work had done essentially nothing to turn industrial society aside from its rush toward the abyss. That’s got to be a bitter thing to contemplate in your final hours, and I hope his thoughts were on something else last month as the night closed in at last.

The One Way Forward

Wed, 2015-01-28 16:19
All things considered, 2015 just isn’t shaping up to be a good year for believers in business as usual. Since last week’s post here on The Archdruid Report, the anti-austerity party Syriza has swept the Greek elections, to the enthusiastic cheers of similar parties all over Europe and the discomfiture of the Brussels hierarchy. The latter have no one to blame for this turn of events but themselves; for more than a decade now, EU policies have effectively put sheltering banks and bondholders from the healthy discipline of the market ahead of all other considerations, including the economic survival of entire nations. It should be no surprise to anyone that this wasn’t an approach with a long shelf life.
Meanwhile, the fracking bust continues unabated. The number of drilling rigs at work in American oilfields continues to drop vertically from week to week, layoffs in the nation’s various oil patches are picking up speed, and the price of oil remains down at levels that make further fracking a welcome mat for the local bankruptcy judge. Those media pundits who are still talking the fracking industry’s book keep insisting that the dropping price of oil proves that they were right and those dratted heretics who talk of peak oil must be wrong, but somehow those pundits never get around to explaining why iron ore, copper, and most other major commodities are dropping in price even faster than crude oil, nor why demand for petroleum products here in the US has been declining steadily as well.
The fact of the matter is that an industrial economy built to run on cheap conventional oil can’t run on expensive oil for long without running itself into the ground. Since 2008, the world’s industrial nations have tried to make up the difference by flooding their economies with cheap credit, in the hope that this would somehow make up for the sharply increased amounts of real wealth that have had to be diverted from other purposes into the struggle to keep liquid fuels flowing at their peak levels. Now, though, the laws of economics have called their bluff; the wheels are coming off one national economy after another, and the price of oil (and all those other commodities) has dropped to levels that won’t cover the costs of fracked oil, tar sands, and the like, because all those frantic attempts to externalize the costs of energy production just meant that the whole global economy took the hit.
Now of course this isn’t how governments and the media are spinning the emerging crisis. For that matter, there’s no shortage of people outside the corridors of power, or for that matter of punditry, who ignore the general collapse of commodity prices, fixate on oil outside of the broader context of resource depletion in general, and insist that the change in the price of oil must be an act of economic warfare, or what have you. It’s a logic that readers of this blog will have seen deployed many times in the past: whatever happens, it must have been decided and carried out by human beings. An astonishing number of people these days seem unable to imagine the possibility that such wholly impersonal factors as the laws of economics, geology, and thermodynamics could make things happen all by themselves.
The problem we face now is precisely that the unimaginable is now our reality. For just that little bit too long, too many people have insisted that we didn’t need to worry about the absurdity of pursuing limitless growth on a finite and fragile planet, that “they’ll think of something,” or that chattering on internet forums about this or that or the other piece of technological vaporware was doing something concrete about our species’ imminent collision with the limits to growth. For just that little bit too long, not enough people were willing to do anything that mattered, and now impersonal factors have climbed into the driver’s seat, having mugged all seven billion of us and shoved us into the trunk.
As I noted in last week’s post, that puts hard limits on what can be done in the short term. In all probability, at this stage of the game, each of us will be meeting the oncoming wave of crisis with whatever preparations we’ve made, however substantial or insubstantial those happen to be. I’m aware that a certain subset of my readers are unhappy with that suggestion, but that can’t be helped; the future is under no obligation to wait patiently while we get ready for it. A few years back, when I posted an essay here whose title sums up the strategy I’ve been proposing, I probably should have put more stress on the most important word in that slogan: now. Still, that’s gone wherever might-have-beens spend their time. 
That doesn’t mean the world is about to end. It means that in all probability, beginning at some point this year and continuing for several years after that, most of my readers will be busy coping with the multiple impacts of a thumping economic crisis on their own lives and those of their families, friends, communities, and employers, at a time when political systems over much of the industrial world have frozen up into gridlock, the simmering wars in the Middle East and much of the Third World seem more than usually likely to boil over, and the twilight of the Pax Americana is pushing both the US government and its enemies into an ever greater degree of brinksmanship. Exactly how that’s going to play out is anyone’s guess, but no matter what happens, it’s unlikely to be pretty.
While we get ready for the first shocks to hit, though, it’s worth talking a little bit about what comes afterwards.  No matter how long a train of financial dominoes the collapse of the fracking bubble sets toppling, the last one will fall eventually, and within a few years things will have found a “new normal,” however far down the slope of contraction that turns out to be. No matter how many proxy wars, coups d’etat, covert actions, and manufactured insurgencies get launched by the United States or its global rivals in their struggle for supremacy, most of the places touched by that conflict will see a few years at most of actual warfare or the equivalent, with periods of relative peace before and after. The other driving forces of collapse act in much the same way; collapse is a fractal process, not a linear one.
Thus there’s something on the far side of crisis besides more of the same. The discussion I’d like to start at this point centers on what might be worth doing once the various masses of economic, political, and military rubble stop bouncing. It’s not too early to begin planning for that. If nothing else, it will give readers of this blog something to think about while standing in bread lines or hiding in the basement while riot police and insurgents duke it out in the streets. That benefit aside, the sooner we start thinking about the options that will be available once relative stability returns, the better chance we’ll have of being ready to implement them, in our own lives or on a broader scale, when that time comes.
One of the interesting consequences of crisis, for that matter, is that what was unthinkable before a really substantial crisis may not be unthinkable afterwards. Read Barbara Tuchman’s brilliant The Proud Tower and you’ll see how many of the unquestioned certainties of 1914 were rotting in history’s compost bucket by the time 1945 rolled around, and how many ideas that had been on the outermost fringes before the First World War had become plain common sense after the Second. It’s a common phenomenon, and I intend to get ahead of the curve here by proposing, as raw material for reflection if nothing else, something that’s utterly unthinkable today but may well be a matter of necessity ten or twenty or forty years from now.
What do I have in mind? Intentional technological regression as a matter of public policy.
Imagine, for a moment, that an industrial nation were to downshift its technological infrastructure to roughly what it was in 1950. That would involve a drastic decrease in energy consumption per capita, both directly—people used a lot less energy of all kinds in 1950—and indirectly—goods and services took much less energy to produce then, too. It would involve equally sharp decreases in the per capita consumption of most resources. It would also involve a sharp increase in jobs for the working classes—a great many things currently done by robots were done by human beings in those days, and so there were a great many more paychecks going out of a Friday to pay for the goods and services that ordinary consumers buy. Since a steady flow of paychecks to the working classes is one of the major things that keep an economy stable and thriving, this has certain obvious advantages, but we can leave those alone for now.
Now of course the project just proposed would involve certain changes from the way we do things. Air travel in the 1950s was extremely expensive—the well-to-do in those days were called “the jet set,” because that’s who could afford tickets—and so everyone else had to put up with fast, reliable, energy-efficient railroads when they needed to get from place to place. Computers were rare and expensive, which meant once again that more people got hired to do jobs, and also meant that when you called a utility or a business, your chance of getting a human being who could help you with whatever problem you might have was considerably higher than it is today.
Lacking the internet, people had to make do instead with their choice of scores of AM and shortwave radio stations, thousands of general and specialized print periodicals, and full-service bookstores and local libraries bursting at the seams with books—in America, at least, the 1950s were the golden age of the public library, and most small towns had collections you can’t always find in big cities these days. Oh, and the folks who like looking at pictures of people with their clothes off, and who play a large and usually unmentioned role in paying for the internet today, had to settle for naughty magazines, mail-order houses that shipped their products in plain brown wrappers, and tacky stores in the wrong end of town. (For what it’s worth, this didn’t seem to inconvenience them any.)
As previously noted, I’m quite aware that such a project is utterly unthinkable today, and we’ll get to the superstitious horror that lies behind that reaction in a bit. First, though, let’s talk about the obvious objections. Would it be possible? Of course. Much of it could be done by simple changes in the tax code. Right now, in the United States, a galaxy of perverse regulatory incentives penalize employers for hiring people and reward them for replacing employees with machines. Change those so that spending money on wages, salaries and benefits up to a certain comfortable threshold makes more financial sense for employers than using the money to automate, and you’re halfway there already. 
A revision in trade policy would do most of the rest of what’s needed.  What’s jokingly called “free trade,” despite the faith-based claims of economists, benefits the rich at everyone else’s expense, and would best be replaced by sensible tariffs to support domestic production against the sort of predatory export-driven mercantilism that dominates the global economy these days. Add to that high tariffs on technology imports, and strip any technology beyond the 1950 level of the lavish subsidies that fatten the profit margins of the welfare-queen corporations in the Fortune 500, and you’re basically there.
What makes the concept of technological regression so intriguing, and so workable, is that it doesn’t require anything new to be developed. We already know how 1950 technology worked, what its energy and resource needs were, and what the upsides and downsides of adopting it would be; abundant records and a certain fraction of the population who still remember how it worked make that easy. Thus it would be an easy thing to pencil out exactly what would be needed, what the costs and benefits would be, and how to minimize the former and maximize the latter; the sort of blind guesses and arbitrary assumptions that have to go into deploying a brand new technology need not apply.
So much for the first objection. Would there be downsides to deliberate technological regression? Of course. Every technology and every set of policy options has its downsides.  A common delusion these days claims, in effect, that it’s unfair to take the downsides of new technologies or the corresponding upsides of old ones into consideration when deciding whether to replace an older technology with a newer one. An even more common delusion claims that you’re not supposed to decide at all; once a new technology shows up, you’re supposed to run bleating after it like everyone else, without asking any questions at all.
Current technology has immense downsides. Future technologies are going to have them, too—it’s only in sales brochures and science fiction stories, remember, that any technology is without them. Thus the mere fact that 1950 technology has problematic features, too, is not a valid reason to dismiss technological retrogression. The question that needs to be asked, however unthinkable it might be, is whether, all things considered, it’s wiser to accept the downsides of 1950 technology in order to have a working technological suite that can function on much smaller per capita inputs of energy and resources, and thus have a much better chance of getting through the age of limits ahead, than to cling to today’s far more extravagant and brittle technological infrastructure.
It’s probably also necessary to talk about a particular piece of paralogic that comes up reliably any time somebody suggests technological regression: the notion that if you return to an older technology, you have to take the social practices and cultural mores of its heyday as well. I fielded a good many such comments last year when I suggested steam-powered Victorian technology powered by solar energy as a form the ecotechnics of the future might take. An astonishing number of people seemed unable to imagine that it was possible to have such a technology without also reintroducing Victorian habits such as child labor and sexual prudery. Silly as that claim is, it has deep roots in the modern imagination.
No doubt, as a result of those deep roots, there will be plenty of people who respond to the proposal just made by insisting that the social practices and cultural mores of 1950 were awful, and claiming that those habits can’t be separated from the technologies I’m discussing. I could point out in response that 1950 didn’t have a single set of social practices and cultural mores; even in the United States, a drive from Greenwich Village to rural Pennsylvania in 1950 would have met with remarkable cultural diversity among people using the same technology. 
The point could be made even more strongly by noting that the same technology was in use that year in Paris, Djakarta, Buenos Aires, Tokyo, Tangiers, Novosibirsk, Guadalajara, and Lagos, and the social practices and cultural mores of 1950s middle America didn’t follow the technology around to these distinctly diverse settings, you know. Pointing that out, though, will likely be wasted breath. To true believers in the religion of progress, the past is the bubbling pit of eternal damnation from which the surrogate messiah of progress is perpetually saving us, and the future is the radiant heaven into whose portals the faithful hope to enter in good time. Most people these days are no more willing to question those dubious classifications than a medieval peasant would be to question the miraculous powers that supposedly emanated from the bones of St. Ethelfrith.
Nothing, but nothing, stirs up shuddering superstitious horror in the minds of the cultural mainstream these days as effectively as the thought of, heaven help us, “going back.” Even if the technology of an earlier day is better suited to a future of energy and resource scarcity than the infrastructure we’ve got now, even if the technology of an earlier day actually does a better job of many things than what we’ve got today, “we can’t go back!” is the anguished cry of the masses. They’ve been so thoroughly bamboozled by the propagandists of progress that they never stop to think that, why, yes, they can, and there are valid reasons why they might even decide that it’s the best option open to them.
There’s a very rich irony in the fact that alternative and avant-garde circles tend to be even more obsessively fixated on the dogma of linear progress than the supposedly more conformist masses. That’s one of the sneakiest features of the myth of progress; when people get dissatisfied with the status quo, the myth convinces them that the only option they’ve got is to do exactly what everyone else is doing, and just take it a little further than anyone else has gotten yet. What starts off as rebellion thus gets coopted into perfect conformity, and society continues to march mindlessly along its current trajectory, like lemmings in a Disney nature film, without ever asking the obvious questions about what might be waiting at the far end.
That’s the thing about progress; all the word means is “continued movement in the same direction.” If the direction was a bad idea to start with, or if it’s passed the point at which it still made sense, continuing to trudge blindly onward into the gathering dark may not be the best idea in the world. Break out of that mental straitjacket, and the range of possible futures broadens out immeasurably.
It may be, for example, that technological regression to the level of 1950 turns out to be impossible to maintain over the long term. If the technologies of 1920  can be supported on the modest energy supply we can count on getting from renewable sources, for example, something like a 1920 technological suite might be maintained over the long term, without further regression. It might turn out instead that something like the solar steampower I mentioned earlier, an ecotechnic equivalent of 1880 technology, might be the most complex technology that can be supported on a renewable basis. It might be the case, for that matter, that something like the technological infrastructure the United States had in 1820, with windmills and water wheels as the prime movers of industry, canalboats as the core domestic transport technology, and most of the population working on small family farms to support very modest towns and cities, is the fallback level that can be sustained indefinitely.
Does that last option seem unbearably depressing? Compare it to another very likely scenario—what will happen if the world’s industrial societies gamble their survival on a great leap forward to some unproven energy source, which doesn’t live up to its billing, and leaves billions of people twisting in the wind without any working technological infrastructure at all—and you may find that it has its good points. If you’ve driven down a dead end alley and are sitting there with the front grill hard against a brick wall, it bears remembering, shouting “We can’t go back!” isn’t exactly a useful habit. In such a situation—and I’d like to suggest that that’s a fair metaphor for the situation we’re in right now—going back, retracing the route as far back as necessary, is the one way forward.

The Mariner's Rule

Wed, 2015-01-21 16:54
One of the things my readers ask me most often, in response to this blog’s exploration of the ongoing decline and impending fall of modern industrial civilization, is what I suggest people ought to do about it all. It’s a valid question, and it deserves a serious answer.
Now of course not everyone who asks the question is interested in the answers I have to offer. A great many people, for example, are only interested in answers that will allow them to keep on enjoying the absurd extravagance that passed, not too long ago, for an ordinary lifestyle among the industrial world’s privileged classes, and is becoming just a little bit less ordinary with every year that slips by.  To such people I have nothing to say. Those lifestyles were only possible because the world’s industrial nations burnt through half a billion years of stored sunlight in a few short centuries, and gave most of the benefits of that orgy of consumption to a relatively small fraction of their population; now that easily accessible reserves of fossil fuels are running short, the party’s over. 
Yes, I’m quite aware that that’s a controversial statement. I field heated denunciations on a regular basis insisting that it just ain’t so, that solar energy or fission or perpetual motion or something will allow the industrial world’s privileged classes to have their planet and eat it too. Printer’s ink being unfashionable these days, a great many electrons have been inconvenienced on the internet to proclaim that this or that technology must surely allow the comfortable to remain comfortable, no matter what the laws of physics, geology, or economics have to say.  Now of course the only alternative energy sources that have been able to stay in business even in a time of sky-high oil prices are those that can count on gargantuan government subsidies to pay their operating expenses; equally, the alternatives receive an even more gigantic “energy subsidy” from fossil fuels, which make them look much more economical than they otherwise would.  Such reflections carry no weight with those whose sense of entitlement makes living with less unthinkable.
I’m glad to say that there are a fair number of people who’ve gotten past that unproductive attitude, who have grasped the severity of the crisis of our time and are ready to accept unwelcome change in order to secure a livable future for our descendants. They want to know how we can pull modern civilization out of its current power dive and perpetuate it into the centuries ahead. I have no answers for them, either, because that’s not an option at this stage of the game; we’re long past the point at which decline and fall can be avoided, or even ameliorated on any large scale.
A decade ago, a team headed by Robert Hirsch and funded by the Department of Energy released a study outlining what would have to be done in order to transition away from fossil fuels before they transitioned away from us. What they found, to sketch out too briefly the findings of a long and carefully worded study, is that in order to avoid massive disruption, the transition would have to begin twenty years before conventional petroleum production reached its peak and began to decline. There’s a certain irony in the fact that 2005, the year this study was published, was also the year when conventional petroleum production peaked; the transition would thus have had to begin in 1985—right about the time, that is, that the Reagan administration in the US and its clones overseas were scrapping the promising steps toward just such a transition.
A transition that got under way in 2005, in other words, would have been too late, and given the political climate, it probably would have been too little as well. Even so, it would have been a much better outcome than the one we got, in which most of us have spent the last ten years insisting that we don’t have to worry about depleting oilfields because fracking was going to save us all. At this point, thirty years after the point at which we would have had to get started, it’s all very well to talk about some sort of grand transition to sustainability, but the time when such a thing would have been possible came and went decades ago. We could have chosen that path, but we didn’t, and insisting thirty years after the fact that we’ve changed our minds and want a different future than the one we chose isn’t likely to make any kind of difference that matters.
So what options does that leave? In the minds of a great many people, at least in the United States, the choice that apparently comes first to mind involves buying farmland in some isolated rural area and setting up a homestead in the traditional style. Many of the people who talk enthusiastically about this option, to be sure, have never grown anything more demanding than a potted petunia, know nothing about the complex and demanding arts of farming and livestock raising, and aren’t in anything like the sort of robust physical condition needed to handle the unremitting hard work of raising food without benefit of fossil fuels; thus it’s a safe guess that in most of these cases, heading out to the country is simply a comforting daydream that serves to distract attention from the increasingly bleak prospects so many people are facing in the age of unraveling upon us.
There’s a long history behind such daydreams. Since colonial times, the lure of the frontier has played a huge role in the American imagination, providing any number of colorful inkblots onto which fantasies of a better life could be projected. Those of my readers who are old enough to remember the aftermath of the Sixties counterculture, when a great many young people followed that dream to an assortment of hastily created rural communes, will also recall the head-on collision between middle-class fantasies of entitlement and the hard realities of rural subsistence farming that generally resulted. Some of the communes survived, though many more did not; so far as I know, none of the surviving ones made it without a long and difficult period of readjustment in which romantic notions of easy living in the lap of nature got chucked in favor of a more realistic awareness of just how little in the way of goods and services a bunch of untrained ex-suburbanites can actually produce by their own labor.
In theory, that process of reassessment is still open. In practice, just at the moment, I’m far from sure it’s an option for anyone who’s not already traveled far along that road. The decline and fall of modern industrial civilization, it bears repeating, is not poised somewhere off in the indefinite future, waiting patiently for us to get ready for it before it puts in an appearance; it’s already happening at the usual pace, and the points I’ve raised in posts here over the last few weeks suggest that the downward slope is probably going to get a lot steeper in the near future. As the collapse of the fracking bubble ripples out through the financial sphere, most of us are going to be scrambling to adapt, and the chances of getting everything lined up in time to move to rural property, get the necessary equipment and supplies to start farming, and get past the worst of the learning curve before crunch time arrives are not good.
If you’re already on a rural farm, in other words, by all means pursue the strategy that put you there. If your plans to get the necessary property, equipment, and skills are well advanced at this point, you may still be able to make it, but you’d probably better get a move on. On the other hand, dear reader, if your rural retreat is still off there in the realm of daydreams and good intentions, it’s almost certainly too late to do much about it, and where you are right now is probably where you’ll be when the onrushing waves of crisis come surging up and break over your head.
That being the case, are there any options left other than hiding under the bed and hoping that the end will be relatively painless? As it happens, there are.
The point that has to be understood to make sense of those options is that in the real world, as distinct from Hollywood-style disaster fantasies, the end of a civilization follows the famous rule attributed to William Gibson: “The future is already here, it’s just not evenly distributed yet.”  Put another way, the impacts of decline and fall aren’t uniform; they vary in intensity over space and time, and they impact particular systems of a falling civilization at different times and in different ways.  If you’re in the wrong place at the wrong time, and depend on the wrong systems to support you, your chances aren’t good, but the places, times, and systems that take the brunt of the collapse aren’t random. To some extent, those can be anticipated, and some of them can also be avoided.
Here’s an obvious example. Right now, if your livelihood depends on the fracking industry, the tar sands industry, or any of the subsidiary industries that feed into those, your chances of getting through 2015 with your income intact are pretty minimal.  People in those industries who got to witness earlier booms and busts know this, and a good many of them are paying off their debts, settling any unfinished business they might have, and making sure they can cover a tank of gas or a plane ticket to get back home when the bottom falls out. People in those industries who don’t have that experience to guide them, and are convinced that nothing bad can actually happen to them, are not doing these things, and are likely to end up in a world of hurt when their turn comes.
They’re not the only ones who would benefit right now from taking such steps. A very large part of the US banking and finance industry has been flying high on bloated profits from an assortment of fracking-related scams, ranging from junk bonds through derivatives to exotic financial fauna such as volumetric production payments. Now that the goose that laid the golden eggs is bobbing feet upwards in a pond of used fracking fluid, the good times are coming to a sudden stop, and that means sharply reduced income for those junior bankers, brokers, and salespeople who can keep their jobs, and even more sharply reduced prospects for those who don’t.
They’ve got plenty of company on the chopping block.  The entire retail sector in the US is already in trouble, with big-box stores struggling for survival and shopping malls being abandoned, and the sharp economic downturn we can expect as the fracking bust unfolds will likely turn that decline into freefall, varying in intensity by region and a galaxy of other factors. Those who brace themselves for a hard landing now are a good deal more likely to make it than those who don’t, and those who have the chance to jump to something more stable now would be well advised to make the leap.
That’s one example; here’s another. I’ve written here in some detail about how anthropogenic climate change will wallop North America in the centuries ahead of us. One thing that’s been learned from the last few years of climate vagaries is that North America, at least, is shifting in exactly the way paleoclimatic data would suggest—more or less the same way it did during warm periods over the last ten or twenty million years. The short form is that the Southwest and mountain West are getting baked to a crackly crunch under savage droughts; the eastern Great Plains, Midwest, and most of the South are being hit by a wildly unstable climate, with bone-dry dry years alternating with exceptionally soggy wet ones; while the Appalachians and points eastward have been getting unsteady temperatures but reliable rainfall. Line up your choice of subsistence strategies next to those climate shifts, and if you still have the time and resources to relocate, you have some idea where to go.
All this presumes, of course, that what we’re facing has much more in common with the crises faced by other civilizations on their way to history’s compost heap than it does with the apocalyptic fantasies so often retailed these days as visions of the immediate future. I expect to field a flurry of claims that it just ain’t so, that everything I’ve just said is wasted breath because some vast and terrible whatsit will shortly descend on the whole world and squash us like bugs. I can utter that prediction with perfect confidence, because I’ve been fielding such claims over and over again since long before this blog got started. All the dates by which the world was surely going to end have rolled past without incident, and the inevitable cataclysms have pulled one no-show after another, but the shrill insistence that something of the sort really will happen this time around has shown no sign of letting up. Nor will it, since the unacceptable alternative consists of taking responsibility for doing something about the future.
Now of course I’ve already pointed out that there’s not much that can be done about the future on the largest scale. As the fracking bubble implodes, the global economy shudders, the climate destabilizes, and a dozen other measures of imminent crisis head toward the red zone on the gauge, it’s far too late in the day for much more than crisis management on a local and individual level. Even so, crisis management is a considerably more useful response than sitting on the sofa daydreaming about the grandiose project that’s certain to save us or the grandiose cataclysm that’s certain to annihilate us—though these latter options are admittedly much more comfortable in the short term.
What’s more, there’s no shortage of examples in relatively recent history to guide the sort of crisis management I have in mind. The tsunami of discontinuities that’s rolling toward us out of the deep waters of the future may be larger than the waves that hit the Western world with the coming of the First World War in 1914, the Great Depression in 1929, or the Second World War in 1939, but from the perspective of the individual, the difference isn’t as vast as it might seem. In fact, I’d encourage my readers to visit their local public libraries and pick up books about the lived experience of those earlier traumas. I’d also encourage those with elderly relatives who still remember the Second World War to sit down with them over a couple of cups of whatever beverage seems appropriate, and ask about what it was like on a day-by-day basis to watch their ordinary peacetime world unravel into chaos.
I’ve had the advantage of taking part in such conversations, and I’ve also done a great deal of reading about historical crises that have passed below the horizon of living memory. There are plenty of lessons to be gained from such sources, and one of the most important also used to be standard aboard sailing ships in the days before steam power. Sailors in those days had to go scrambling up the rigging at all hours and in all weathers to set, reef, or furl sails; it was not an easy job—imagine yourself up in the rigging of a tall ship in the middle of a howling storm at night, clinging to tarred ropes and slick wood and trying to get a mass of wet, heavy, wind-whipped canvas to behave, while below you the ship rolls from side to side and swings you out over a raging ocean and back again. If you slip and you’re lucky, you land on deck with a pretty good chance of breaking bones or worse; if you slip and you’re not lucky, you plunge straight down into churning black water and are never seen again.
The rule that sailors learned and followed in those days was simple: “One hand for yourself, one hand for the ship.” Every chore that had to be done up there in the rigging could be done by a gang of sailors who each lent one hand to the effort, so the other could cling for dear life to the nearest rope or ratline. Those tasks that couldn’t be done that way, such as hauling on ropes, took place down on the deck—the rigging was designed with that in mind. There were emergencies where that rule didn’t apply, and even with the rule in place there were sailors who fell from the rigging to their deaths, but as a general principle it worked tolerably well.
I’d like to propose that the same rule might be worth pursuing in the crisis of our age. In the years to come, a great many of us will face the same kind of scramble for survival that so many others faced in the catastrophes of the early 20th century. Some of us won’t make it, and some will have to face the ghastly choice between sheer survival and everything else they value in life. Not everyone, though, will land in one or the other of those categories, and many of those who manage to stay out of them will have the chance to direct time and energy toward the broader picture.
Exactly what projects might fall into that latter category will differ from one person to another, for reasons that are irreducibly personal. I’m sure there are plenty of things that would motivate you to action in desperate times, dear reader, that would leave me cold, and of course the reverse is also true—and in times of crisis, of the kind we’re discussing, it’s personal factors of that sort that make the difference, not abstract considerations of the sort we might debate here. I’ll be discussing a few of the options in upcoming posts, but I’d also encourage readers of this blog to reflect on the question themselves: in the wreck of industrial civilization, what are you willing to make an effort to accomplish, to defend, or to preserve?
In thinking about that, I’d encourage my readers to consider the traumatic years of the early 20th century as a model for what’s approaching us. Those who were alive when the first great wave of dissolution hit in 1914 weren’t facing forty years of continuous cataclysm; as noted here repeatedly, collapse is a fractal process, and unfolds in real time as a sequence of crises of various kinds separated by intervals of relative calm in which some level of recovery is possible. It’s pretty clear that the first round of trouble here in the United States, at least, will be a major economic crisis; at some point not too far down the road, the yawning gap between our senile political class and the impoverished and disaffected masses promises the collapse of politics as usual and a descent into domestic insurgency or one of the other standard patterns by which former democracies destroy themselves; as already noted, there are plenty of other things bearing down on us—but after an interval, things will stabilize again.
Then it’ll be time to sort through the wreckage, see what’s been saved and what can be recovered, and go on from there. First, though, we have a troubled time to get through.

March of the Squirrels

Wed, 2015-01-14 16:03
Prediction is a difficult business at the best of times, but the difficulties seem to change from one era to another. Just now, at least for me, the biggest challenge is staying in front of the headlines. So far, the crash of 2015 is running precisely to spec. Smaller companies in the energy sector are being hammered by the plunging price of oil, while the banking industry insists that it’s not in trouble—those of my readers who recall identical expressions of misplaced confidence on the part of bankers in news stories just before the 2008 real estate crash will know just how seriously to take such claims.
The shiny new distractions disguised as energy breakthroughs I mentioned here two weeks ago have also started to show up. A glossy puff piece touting ocean thermal energy conversion (OTEC), a white-elephant technology which was tested back in the 1970s and shown to be hopelessly uneconomical, shared space in the cornucopian end of the blogosphere over the last week with an equally disingenuous puff piece touting yet another rehash of nuclear fission as the answer to our energy woes. (Like every fission technology, of course, this one will be safe, clean, and affordable until someone actually tries to build it.)
No doubt there will shortly be other promoters scrambling for whatever government subsidies and private investment funds might be available for whatever revolutionary new energy breakthrough (ahem) will take the place of hydrofractured shales as America’s favorite reason to do nothing. I admit to a certain feeling of disappointment, though, in the sheer lack of imagination displayed so far in that competition. OTEC and molten-salt fission reactors were already being lauded as America’s energy salvation back when I was in high school: my junior year, I think it was, energy was the topic du jour for the local high school debate league, and we discussed those technologies at length. So did plenty of more qualified people, which is why both of them—and quite a few other superficially plausible technologies—never made it off the drawing board.
Something else came in for discussion that same year, and it’s a story with more than a little relevance to the current situation. A team from another school in the south Seattle suburbs had a brainstorm, did some frantic research right before a big debate tournament, and showed up with data claiming to prove that legions of squirrels running in squirrel cages, powering little generators, could produce America’s electricity. Since no one else happened to have thought of that gimmick, none of the other teams had evidence to refute them, and they swept the tournament. By the next tournament, of course, everyone else had crunched the numbers and proceeded to stomp the squirrel promoters, but for years to come the phrase “squirrel case” saw use in local debate circles as the standard term for a crackpot proposal backed with seemingly plausible data.
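For the curious, the sort of number-crunching that stomped the squirrel promoters takes only a few lines. Here’s a rough reconstruction in Python; every figure in it is my own ballpark assumption, not anything from the original tournament:

```python
# Back-of-envelope check on the squirrel case. All figures below are
# rough assumptions chosen for illustration, not measured data.

SQUIRREL_WATTS = 1.0      # generous sustained output per caged squirrel
US_DEMAND_TWH = 4_100     # approximate annual US electricity use, mid-2010s
HOURS_PER_YEAR = 8_766

avg_demand_watts = US_DEMAND_TWH * 1e12 / HOURS_PER_YEAR
squirrels_needed = avg_demand_watts / SQUIRREL_WATTS

print(f"Average US electric demand: {avg_demand_watts / 1e9:,.0f} GW")
print(f"Squirrels required: {squirrels_needed:,.0f}")
print(f"Squirrels per American: {squirrels_needed / 320e6:,.0f}")
```

Call it half a trillion squirrels, some fifteen hundred for every American, and that’s before you account for the far larger amount of food energy each squirrel would have to burn to deliver its watt, which makes the whole scheme a net energy sink even on paper.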
The OTEC plants and molten-salt reactors currently being hawked via the media are squirrel cases in exactly the same sense; they sound plausible as long as you don’t actually crunch the numbers and see whether they’re economically and thermodynamically viable. The same thing was true of the fracking bubble that’s messily imploding around us right now, not to mention the ethanol and biodiesel projects, the hydrogen economy, and the various other glittery excuses that have occupied so much useless space in the collective conversation of our time. So, it has to be said, do the more enthusiastic claims being made for renewable energy just now.
Don’t get me wrong, I’m a great fan of renewable energy. When extracting fossil carbon from the earth stops being economically viable—a point that may arrive a good deal sooner than many people expect—renewables are what we’ll have left, and the modest but real energy inputs that can be gotten from renewable sources when they don’t receive energy subsidies from fossil fuels could make things significantly better for our descendants. The fact remains that in the absence of subsidies from fossil fuels, renewables won’t support the absurdly extravagant energy consumption that props up what passes for an ordinary middle class lifestyle in the industrial world these days.
That’s the pterodactyl in the ointment, the awkward detail that most people even in the greenest of green circles don’t want to discuss. Force the issue into a conversation, and one of the more common responses you’ll get is the exasperated outburst “But there has to be something.” Now of course this simply isn’t true; no law of nature, no special providence, no parade of marching squirrels assures us that we can go ahead and use as much energy as we want in the serene assurance that more will always be waiting for us. It’s hard to think of a more absurd delusion, and the fact that a great many people making such claims insist on their superior rationality and pragmatism just adds icing to the cake.
Let’s go ahead and say it in so many words: there doesn’t have to be a replacement for fossil fuels. In point of fact, there’s good reason to think that no such replacement exists anywhere in the small corner of the universe accessible to us, and once fossil fuels are gone, the rest of human history will be spent in a world that doesn’t have the kind of lavish energy resources we’re used to having. Concentrations of energy, like all other natural resources, follow what’s known as the power law, the rule—applicable across an astonishingly broad spectrum of phenomena—that whatever’s ten times as concentrated is approximately ten times as rare. At the dawn of the industrial age, the reserves of fossil fuel in the Earth’s crust were the richest trove of stored energy on the planet, and of course fossil fuel extraction focused on the richest and most easily accessible prizes first, just as quickly as they could be found.
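For those who like their rules of thumb in notation, the relation can be written as a simple inverse power law. The exponent of exactly one is a simplification of my own, chosen to match the shorthand just given rather than any particular survey of ore grades or oil fields:

\[
N(\geq c) \propto \frac{1}{c}
\]

where N(≥c) counts the deposits with concentration c or better; demand a grade ten times higher, and roughly a tenth as many deposits qualify. The richest prizes, in other words, were always the rarest.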
Those are gone now. Since 2005, when conventional petroleum production peaked worldwide, the industrial world has been engaged in what amounts to a frantic game of make-believe, pretending that scraping the bottom of the oil barrel proves that the barrel is still full. Every half-baked scheme for producing liquid fuels got flooded with as much cheap credit as its promoters could squander. Some of those—biodiesel and cellulosic ethanol come to mind—turned out to be money pits so abysmal that even a tide of freshly printed money couldn’t do much more than gurgle on the way down; others—above all, shale fracking and tar sand mining—were able to maintain a pretense of profitability for a while, with government subsidies, junk bonds, loans from clueless banks, and round after round of economic stimulus from central banks over much of the world serving to prop up industries that, in the final analysis, were never economically viable in the first place.
The collapse in the price of oil that began last June put paid to that era of make-believe. The causes of the oil crash are complex, but back of them all, I suggest, is a straightforward bit of economics that almost everyone’s been trying to avoid for a decade now.  To maintain economic production at any given level, the global economy has to produce enough real wealth—not, please note, enough money, but enough actual goods and services—to cover resource extraction, the manufacture and replacement of the whole stock of nonfinancial capital goods, and whatever level of economic waste is considered socially and politically necessary. If the amount of real wealth needed to keep extracting resources at a given rate goes up steeply, the rest of the economy won’t escape the consequences: somewhere or other, something has to give.
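A toy budget makes the shape of that squeeze easy to see. Here’s a minimal sketch; every figure in it is invented for illustration, and the units are arbitrary lumps of real wealth rather than dollars:

```python
# Toy annual budget for an economy, in arbitrary units of real wealth.
# All figures are invented for illustration.

TOTAL_OUTPUT = 100.0      # all goods and services actually produced
CAPITAL_UPKEEP = 30.0     # replacing and maintaining nonfinancial capital
MANDATED_WASTE = 20.0     # waste considered socially and politically necessary

for extraction_cost in (10.0, 20.0, 30.0, 40.0):
    remainder = TOTAL_OUTPUT - extraction_cost - CAPITAL_UPKEEP - MANDATED_WASTE
    print(f"extraction cost {extraction_cost:>4} -> left for everything else: {remainder}")
```

Quadruple the real cost of keeping resources flowing, and the share left for everything else drops from forty units to ten.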
The economic history of the last decade is precisely the story of what gave in what order, or to put it another way, how the industrial world threw everything in sight under the bus to keep liquid fuel production around its 2005 peak. Infrastructure was abandoned to malign neglect, the last of the industrial world’s factory jobs got offshored to Third World sweatshops, standards of living for most people dropped steadily—well, you can fill in the blanks as well as I can. Consumption remained relatively high only because central banks flooded the global economy with limitless cheap credit, while the US government filled the gap between soaring government expenditures and flat or shrinking tax receipts by the simple expedient of having the Fed print enough money each month to cover the federal deficit. All these things were justified by the presupposition that the global economy was just going through a temporary rough patch, and normal growth would return any day now.
But normal growth has not returned. It’s not going to return, either, because it was only “normal” in an era when cheap abundant fossil fuels greased the wheels of every kind of economic activity. As I noted in a blog post here back in 2007, the inevitable consequence of soaring oil prices is what economists call demand destruction: less formally, the process by which people who can’t afford oil stop using it, bringing the price back down. Since what’s driving the price of oil up isn’t merely market factors, but the hard geological realities of depletion, not everyone who got forced out of the market when the price was high can get back into it when the price is low—gas at $2 a gallon doesn’t matter if your job scavenging abandoned houses doesn’t pay enough for you to cover the costs of a car, and let’s not even talk about how much longer the local government can afford to maintain streets in driveable condition.
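The ratchet built into that process can be sketched in a few lines of code. The prices and thresholds below are arbitrary; the point is the asymmetry, not the numbers:

```python
# A minimal sketch of the demand-destruction ratchet. Arbitrary numbers.

# Each consumer can pay up to some maximum price, and starts out with a car.
consumers = [{"max_price": p, "has_car": True} for p in range(40, 160, 10)]

def demand(price):
    """Buyers must both afford the price and still own a car."""
    return sum(1 for c in consumers if c["has_car"] and c["max_price"] >= price)

print("demand at $50 before the spike:", demand(50))

# A spike to $100 forces out everyone who can't pay it; in this sketch,
# being forced out costs them the car that would let them return later.
for c in consumers:
    if c["max_price"] < 100:
        c["has_car"] = False

print("demand at $50 after the spike: ", demand(50))
```

Bring the price back down afterwards and demand doesn’t come back with it; the buyers who were forced out have lost the means to return.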
Demand destruction sounds very abstract. In practice, though, it’s all too brutally concrete: a rising tide of job losses, business failures, slumping standards of living, cutbacks to every kind of government service at every level, and so on down the litany of decline that’s become part of everyday life in the industrial world over the last decade—leaving aside, that is, the privileged few who have been sheltered from those changes so far. Unless I miss my guess, we’re going to see those same changes shift into overdrive in the months and years ahead. The attempt to boost the world out of its deepening slump by flooding the planet with cheap credit has failed; the global economy is choking on a supersized meal of unpayable IOUs and failed investments; stock markets and other venues for the exchange of paper wealth are so thoroughly gimmicked that they’ve become completely detached from the real economy of goods and services, and the real economy is headed south in a hurry.
Those unwelcome realities are going to constrain any attempt by the readers of this blog to follow up on the proposal I made in last week’s post, and take constructive action in the face of the crisis that’s now upon us. The energy situation here in the US could have been helped substantially if conservation measures and homescale renewables had received any kind of significant support from the oh-so-allegedly-green Democratic party, back when it still had enough clout in Congress to matter; the economic situation would be nowhere near as dire if governments and central banks had bitten the bullet and dealt with the crisis of our time in 2008 or thereafter, rather than papering things over with economic policies that assumed that enough money could negate the laws of physics and geology. At this point, it’s much too late for any sort of collective action on either of those fronts—and of course the political will needed to do anything meaningful about either one went missing in action at the end of the 1970s and hasn’t been seen since.
Thus all of us will have to cope with a world in which the cost of energy suffers from drastic and economically devastating swings, and the sort of localized infrastructure that could cushion the impact of those swings wasn’t built in time. All of us will also have to cope with a global economy in disarray, in which bank failures, currency crises, credit shortages, and crisis measures imposed by government fiat will take the place of the familiar workings of a market economy. Those are baked into the cake at this point, and what individuals, families, and community groups will be able to do in the years ahead will be constrained by the limits those transformations impose.
Those of my readers who still have a steady income and a home they expect to be able to keep would still be well advised to doublecheck their insulation and weatherstripping, install solar water heating and other homescale renewable energy technologies, and turn the back lawn into a vegetable garden with room for a chicken coop, if by any chance they haven’t taken these sensible steps already.  A great many of my readers don’t have such options, and at this point, it may be a long time before such options are readily available again. This is crunch time, folks; unless I’m very much mistaken, we’re on the brink of a historical inflection point like the ones in 1789 and 1914, one of the watersheds of time after which nothing will ever be the same again.
There’s still much that can be done in other spheres, and I’ll be discussing some of those things in upcoming posts. In terms of energy and the economy, though, I suspect that for a lot of us, the preparations we’re going to be able to make are the ones we’ve already made, and a great many people whose plans depend on having a stable income and its associated perks and privileges may find themselves scrambling for options when the unraveling of the economy leaves them without one. Those of my readers who have been putting off the big changes that might make them more secure in hard times may be facing the hard decision of making those changes now, in a hurry, or facing the crisis of our age in the location and situation they’re in right now. Those who’ve gone ahead and made the changes—well, you know as well as I do that it’s time to review your plans, doublecheck the details, batten down the hatches and get ready to weather the storm.
One of the entertainments to be expected as the year draws on and the crisis bears down on us all, though, is a profusion of squirrel cases of the sort discussed toward the beginning of this essay. It’s an interesting regularity of history that the closer to disaster a society in decline becomes, the more grandiose, triumphalist, and detached from the grubby realities its fantasies generally get. I’m thinking here of the essay on military affairs from the last years of the Roman world that’s crammed full of hopelessly unworkable war machines, and of the final, gargantuan round of Mayan pyramids built on the eve of the lowland classic collapse. The habit of doubling down in the face of self-induced catastrophe seems to be deeply engrained in the human psyche, and I don’t doubt for a moment that we’ll see some world-class examples of the phenomenon in the years immediately ahead.
That said, the squirrel cases mentioned earlier—the OTEC and molten-salt fission proposals—suffer from a disappointing lack of imagination. If our society is going to indulge in delusional daydreams as it topples over the edge of crisis, couldn’t we at least see some proposals that haven’t been rehashed since I was in high school?  I can only think of one such daydream that has the hallucinatory quality our current circumstances deserve; yes, that would be the proposal, being made quite seriously in the future-oriented media just now, that we can solve all our energy problems by mining helium-3 on the Moon and shipping it to Earth to fuel fusion power plants we have absolutely no idea how to build yet. As faith-based cheerleading for vaporware, which is of course what those claims are, they set a very high standard—but it’s a standard that will doubtless be reached and exceeded in due time.
That said, I think the media may need some help launching the march of the squirrels just mentioned, and the readers of this blog proved a good long time ago that they have more than enough imagination to meet that pressing need.
Therefore I’m delighted to announce a new contest here on The Archdruid Report, the Great Squirrel Case Challenge of 2015. The goal is to come up with the most absurd new energy technology you can think of, and write either the giddily dishonest corporate press release or the absurdly sycophantic media article announcing it to the world. If you or a friend can Photoshop an image or two of your proposed nonsolution to the world’s energy needs, that’s all the better. Post your press release or media article on your blog if you have one; if you don’t, you can get one for free from Blogspot or Wordpress. Post a link to your piece in the comments section of this blog.
Entries must be posted here by February 28, 2015.  Two winners—one picked by me, the other by vote of the registered members of the Green Wizards forum—will receive signed complimentary copies of my forthcoming book After Progress. I can’t speak for the forum, which will doubtless have its own criteria, but I’ll be looking for a winsome combination of sheer absurdity with the sort of glossy corporate presentation that frames so many absurd statements these days. (Hint: it’s not against the rules to imitate real press releases and media articles.)
As for the wonderful new energy breakthrough you’ll be lauding so uncritically, why, that’s up to you. Biodiesel plants using investment bankers as their primary feedstock? A vast crank hooked to the Moon, running a global system of belts and pulleys? An undertaking of great energy profit, to misquote the famous ad from the South Sea Bubble, but no one to know what it is? Let your imagination run wild; no matter how giddy you get, as the failure of the fracking bubble becomes impossible to ignore, the mass media and a great many of our fellow hominids will go much further along the track of the marching squirrels than you will.
********************

In not unrelated news, I’m delighted to report that the second volume of stories to come out of this blog’s 2014 Space Bats challenge is now available in ebook formats, and will shortly be out in print as well. After Oil 2: The Years of Crisis features a dozen original short stories set in the near future, as industrial civilization slams face first into the limits to growth. Those of my readers who followed the original contest already know that this is a first-rate collection of deindustrial SF; as for the rest of you—why, you’re in for a treat. Click here to order a copy.

A Camp Amid the Ruins

Wed, 2015-01-07 18:35
Well, the Fates were apparently listening last week. As I write this, stock markets around the world are lurching through what might just be the opening moves of the Crash of 2015, whipsawed by further plunges in the price of oil and a range of other bad economic news; amid a flurry of layoffs and dropping rig counts, the first bankruptcy in the fracking industry has been announced, with more on their way; gunfire in Paris serves up a brutal reminder that the rising spiral of political violence I traced in last week’s post is by no means limited to North American soil.  The cheerleaders of business as usual in the media are still insisting at the top of their lungs that America’s new era of energy independence is on its way; those of my readers who recall the final days of the housing bubble that burst in 2008, or the tech-stock bubble that popped in 2000, will recognize a familiar tone in the bluster.
It’s entirely possible, to be sure, that central banks and governments will be able to jerry-rig another round of temporary supports for the fraying architecture of the global economy, and postpone a crash—or at least drag out the agony a bit longer. It’s equally possible that other dimensions of the crisis of our age can be forestalled or postponed by drastic actions here and now.  That said, whether the process is fast or slow, whether the crunch hits now or a bit further down the road, the form of technic society I’ve termed abundance industrialism is on its way out through history’s exit turnstile, and an entire world of institutions and activities familiar to all of us is going with it.
It doesn’t require any particular genius or prescience to grasp this, merely the willingness to recognize that if something is unsustainable, sooner or later it won’t be sustained. Of course that’s the sticking point, because what can’t be sustained at this point is the collection of wildly extravagant energy- and resource-intensive habits that used to pass for a normal lifestyle in the world’s industrial nations, and has recently become just a little less normal than it used to be. Those lifestyles, and most of what goes with them, only existed in the first place because a handful of the world’s nations burned through half a billion years of fossil sunlight in a few short centuries, and stripped the planet of most of its other concentrated resource stocks into the bargain.
That’s the unpalatable reality of the industrial era. Despite the rhetoric of universal betterment that was brandished about so enthusiastically by the propagandists of the industrial order, there were never enough of any of the necessary resources to make that possible for more than a small fraction of the world’s population, or for more than a handful of generations. Nearly all the members of our species who lived outside the industrial nations, and a tolerably large number who resided within them, were expected to carry most of the costs of reckless resource extraction and ecosystem disruption while receiving few if any of the benefits. They’ll have plenty of company shortly: abundance industrialism is winding down, but its consequences are not, and people around the world for centuries and millennia to come will have to deal with the depleted and damaged planet our actions have left them.
That’s a bitter pill to swallow, and the likely aftermath of the industrial age won’t do anything to improve the taste. Over the last six months or so, I’ve drawn on the downside trajectories of other failed civilizations to sketch out how that aftermath will probably play out here in North America: the disintegration of familiar political and economic structures, the rise of warband culture, the collapse of public order, and the failure of cultural continuity, all against a backdrop of rapid and unpredictable climate change, rising seas, and the appearance of chemical and radiological dead zones created by some of industrial civilization’s more clueless habits. It’s an ugly picture, and the only excuse I have for that unwelcome fact is that falling civilizations look like that.
The question that remains, though, is what we’re going to do about it all.
I should say up front that by “we” I don’t mean some suitably photogenic collection of Hollywood heroes and heroines who just happen to have limitless resources and a bag of improbable inventions at their disposal. I don’t mean a US government that has somehow shaken off the senility that affects all great powers in their last days and is prepared to fling everything it has into the quest for a sustainable future. Nor do I mean a coterie of gray-skinned aliens from Zeta Reticuli, square-jawed rapists out of Ayn Rand novels, or some other source of allegedly superior beings who can be counted upon to come swaggering onto the scene to bail us out of the consequences of our own stupidity. They aren’t part of this conversation; the only people who are, just now, are the writer and the readers of this blog.
Within those limits, the question I’ve posed may seem preposterous. I grant that for a phenomenon that practically defines the far edges of the internet—a venue for lengthy and ornately written essays about wildly unpopular subjects by a clergyman from a small and distinctly eccentric fringe religion—The Archdruid Report has a preposterously large readership, and one that somehow manages to find room for a remarkably diverse and talented range of people, bridging some of the ideological and social barriers that divide  industrial society into so many armed and uncommunicative camps. Even so, the regular readership of this blog could probably all sit down at once in a football stadium and still leave room for the hot dog vendors. Am I seriously suggesting that this modest and disorganized a group can somehow rise up and take meaningful action in the face of so vast a process as the fall of a civilization?
One of the things that gives that question an ironic flavor is that quite a few people are making what amounts to the same claim in even more grandiose terms than mine. I’m thinking here of the various proposals for a Great Transition of one kind or another being hawked at various points along the social and political spectrum these days. I suspect we’re going to be hearing a lot more from those in the months and years immediately ahead, as the collapse of the fracking bubble forces people to find some other excuse for insisting that they can have their planet and eat it too.
Part of the motivation behind the grand plans just mentioned is straightforwardly financial. One part of what drove the fracking bubble along the classic trajectory—up with the rocket, down with the stick—was a panicked conviction on the part of a great many people that some way had to be found to keep industrial society’s fuel tanks somewhere on the near side of that unwelcome letter E. Another part of it, though, was the recognition on the part of a somewhat smaller but more pragmatic group of people that the panicked conviction in question could be turned into a sales pitch. Fracking wasn’t the only thing that got put to work in the time-honored process of proving Ben Franklin’s proverb about a fool and his money; fuel ethanol, biodiesel, and large-scale wind power also had their promoters, and sucked up their share of government subsidies and private investment.
Now that fracking is falling by the wayside, there’ll likely be a wild scramble to replace it in the public eye as the wave of the energy future. The nuclear industry will doubtless be in there; nuclear power is one of the most durable subsidy dumpsters in modern economic life, and the nuclear industry has had to become highly skilled at slurping from the government teat, since nuclear power isn’t economically viable otherwise. It’s worth recalling that no nation on earth has been able to create or maintain a nuclear power program without massive ongoing government subsidies. No doubt we’ll get plenty of cheerleading for fusion, satellite-based solar power, and other bits of high-end vaporware, too.
Still, I suspect the next big energy bubble is probably going to come from the green end of things. Over the last few years, there’s been no shortage of claims that renewable resources can pick right up where fossil fuels leave off and keep the lifestyles of today’s privileged middle classes intact. Those claims tend to be long on enthusiasm and cooked numbers and short on meaningful assessment, but then that same habit didn’t slow the fracking boom any; we can expect to see a renewed flurry of claims that solar power must be sustainable because the sticker price has gone down, and similar logical non sequiturs. (By the same logic, the internet must be sustainable if you can pay your monthly ISP bill by selling cute kitten photos on eBay.  In both cases, the sprawling and almost entirely fossil-fueled infrastructure of mines, factories, supply chains, power grids, and the like, has been left out of the equation, as though those don’t have to be accounted for: typical of the blindness to whole systems that pervades so much of contemporary culture.)
It’s not enough for an energy technology to be green, in other words; it also has to work.  It’s probably safe to assume that that point is going to be finessed over and over again, in a galaxy of inventive ways, as the fracking bubble goes wherever popped financial bubbles go when they die. The point that next to nobody wants to confront is the one made toward the beginning of this week’s post: if something is unsustainable, sooner or later it won’t be sustained—and what’s unsustainable in this case isn’t simply fossil fuel production and consumption, it’s the lifestyles that were made possible by the immensely abundant and highly concentrated energy supply we got from fossil fuels.
You can’t be part of the solution if your lifestyle is part of the problem. I know that those words are guaranteed to make the environmental equivalent of limousine liberals gasp and clutch their pearls or their Gucci ties, take your pick, but there it is; it really is as simple as that. There are at least two reasons why that maxim needs to be taken seriously. On the one hand, if you’re clinging to an unsustainable lifestyle in the teeth of increasingly strong economic and environmental headwinds, you’re not likely to be able to spare the money, the free time, or any of the other resources you would need to contribute to a solution; on the other, if you’re emotionally and financially invested in keeping an unsustainable lifestyle, you’re likely to put preserving that lifestyle ahead of things that arguably matter more, like leaving a livable planet for future generations.
Is the act of letting go of unsustainable lifestyles the only thing that needs to be done? Of course not, and in the posts immediately ahead I plan on talking at length about some of the other options. I’d like to suggest, though, that it’s the touchstone or, if you will, the boundary that divides those choices that might actually do some good from those that are pretty much guaranteed to do no good at all. That’s useful when considering the choices before us as individuals; it’s at least as useful, if not more so, when considering the collective options we’ll be facing in the months and years ahead, among them the flurry of campaigns, movements, and organizations that are already gearing up to exploit the crisis of our time in one way or another—and with one agenda or another.
An acronym I introduced a while back in these posts might well be worth revisiting here: LESS, which stands for “Less Energy, Stuff, and Stimulation.” That’s a convenient summary of the changes that have to be made to move from today’s unsustainable lifestyles to ways of living that will be viable when today’s habits of absurd extravagance are fading memories. It’s worth taking a moment to unpack the acronym a little further, and see what it implies.
“Less energy” might seem self-evident, but there’s more involved here than just turning off unneeded lights and weatherstripping your windows and doors—though those are admittedly good places to start. A huge fraction of the energy consumed by a modern industrial society gets used indirectly to produce, supply, and transport goods and services; an allegedly “green” technological device that’s made from petroleum-based plastics and exotic metals taken from an open-pit mine in a Third World country, then shipped halfway around the planet to the air-conditioned shopping mall where you bought it, can easily have a carbon footprint substantially bigger than some simpler item that does the same thing in a less immediately efficient way. The blindness to whole systems mentioned earlier has to be overcome in order to make any kind of meaningful sense of energy issues: a point I’ll be discussing further in an upcoming post here.
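To make that concrete, here’s a deliberately crude comparison of two ways of doing the same job. Every number in it is an assumption invented for the example, not a measured lifecycle figure:

```python
# Crude whole-system comparison of two ways to do the same job.
# All numbers are invented for illustration only (kg of CO2).

def footprint_per_year(manufacture, shipping, use_per_year, lifespan_years):
    """Total lifecycle emissions divided by years of service."""
    return (manufacture + shipping + use_per_year * lifespan_years) / lifespan_years

gadget = footprint_per_year(manufacture=120, shipping=20,
                            use_per_year=2, lifespan_years=3)   # "green" device
simple = footprint_per_year(manufacture=10, shipping=1,
                            use_per_year=10, lifespan_years=20) # durable, less efficient

print(f"high-tech gadget: {gadget:.1f} kg CO2 per year of service")
print(f"simple item:      {simple:.1f} kg CO2 per year of service")
```

With these made-up figures, the efficient gadget runs up more than four times the carbon per year of service of the simple item it replaces; the mines, factories, and container ships are where the real bill accumulates.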
“Less stuff” is equally straightforward on the surface, equally subtle in its ramifications. Now of course it’s hardly irrelevant that ours is the first civilization in the history of the planet to have to create an entire industry of storage facilities to store the personal possessions that won’t fit into history’s biggest homes. That said, “stuff” includes a great deal more than the contents of your closets and storage lockers. It also includes infrastructure—the almost unimaginably vast assortment of technological systems on which the privileged classes of the industrial world rely for most of the activities of their daily lives. That infrastructure was only made possible by the deluge of cheap abundant energy our species briefly accessed from fossil fuels; as what’s left of the world’s fossil fuel supply moves deeper into depletion, the infrastructure that it created has been caught in an accelerating spiral of deferred maintenance and malign neglect; the less dependent you are on what remains, the less vulnerable you are to further systems degradation, and the more of what’s left can go to those who actually need it.
“Less stimulation” may seem like the least important part of the acronym, but in many ways it’s the most crucial point of all. These days most people in the industrial world flood their nervous systems with a torrent of electronic noise.  Much of this is quite openly intended to manipulate their thoughts and feelings by economic and political interests; a great deal more has that effect, if only by drowning out any channel of communication that doesn’t conform to the increasingly narrow intellectual tunnel vision of late industrial society. If you’ve ever noticed how much of what passes for thinking these days amounts to the mindless regurgitation of sound bites from the media, dear reader, that’s why. What comes through the media—any media—is inevitably prechewed and predigested according to someone else’s agenda; those who are interested in thinking their own thoughts and making their own decisions, rather than bleating in perfect unison with the rest of the herd, might want to keep this in mind.
It probably needs to be said that very few of us are in a position to go whole hog with LESS—though it’s also relevant that some of us, and quite possibly a great many of us, will end up doing so willy-nilly if the economic contraction at the end of the fracking bubble turns out to be as serious as some current figures suggest. Outside of that grim possibility, “less” doesn’t have to mean “none at all”—certainly not at first; for those who aren’t caught in the crash, at least, there may yet be time to make a gradual transition toward a future of scarce energy and scarce resources. Still, I’d like to suggest that any proposed response to the crisis of our time that doesn’t start with LESS simply isn’t serious.
As already noted, I expect to see a great many nonserious proposals in the months and years ahead. Those who put maintaining their comfortable lifestyles ahead of other goals will doubtless have no trouble coming up with enthusiastic rhetoric and canned numbers to support their case; certainly the promoters and cheerleaders of the soon-to-be-late fracking bubble had no difficulty at all on that score. Not too far in the future, something or other will have been anointed as the shiny new technological wonder that will save us all, or more precisely, that will give the privileged classes of the industrial world a new set of excuses for clinging to some semblance of their current lifestyles for a little while longer. Mention the growing list of things that have previously occupied that hallowed but inevitably temporary status, and you can count on either busy silence or a flustered explanation why it really is different this time.
There may not be that many of us who get past the nonserious proposals, ask the necessary but unwelcome questions about the technosavior du jour, and embrace LESS while there’s still time to do so a step at a time. I’m convinced, though, that those who manage these things are going to be the ones who make a difference in the shape the future will have on the far side of the crisis years ahead. Let go of the futile struggle to sustain the unsustainable, take the time and money and other resources that might be wasted in that cause and do something less foredoomed with them, and there’s a lot that can still be done, even in the confused and calamitous time that’s breaking over us right now. In the posts immediately ahead, as already mentioned, I’ll discuss some of the options; no doubt many of my readers will be able to think of options of their own, for that matter.
I’ve noted before more than once that the collapse of industrial society isn’t something located off in the nearer or further future; it’s something that got under way a good many years ago, has been accelerating around us for decades, and is simply hitting one of the rougher patches of the normal process of decline and fall just now. Most of the nonserious proposals just referred to start from the insistence that that can’t happen. Comforting in the short term, that insistence is a rich source of disaster and misery from any longer perspective, and the sooner each of us gets over it and starts to survey the wreckage around us, the better. Then we can make camp in the ruins, light a fire, get some soup heating in a salvaged iron pot, and begin to talk about where we can go from here.

The Cold Wet Mackerel of Reality

Wed, 2014-12-31 15:54
To misuse a bit of prose from Charles Dickens, it was neither the best of times nor the worst of times, but I know very few people who will object when, a few hours from now, 2014 gets dragged off to the glue factory.  This has not been a good year for most people in the United States. By all accounts, this year’s holiday season was an economic flop of considerable scale—the US media, which normally treats cheery corporate press releases with the same enforced credulity that Pravda used to give to pronouncements from the Politburo in the Soviet era, has had to admit that Black Friday sales were down hard this year, even counting the internet—and plenty of Americans outside the echo chambers of the media have very good reasons to worry about what 2015 will bring.
  Mind you, cheerleading of a distinctly Pravda-esque variety can still be heard from those pundits and politicians who are still convinced that people can be talked into ignoring their own experience if they can only be force-fed enough spin-doctored malarkey. That sort of enthusiasm for the glorious capitalist banker’s paradise has plenty of company just now; I’m thinking in particular of the steady drumbeat of articles and essays in the US mass media wondering aloud why so many Americans haven’t noticed the economic recovery of the last four years, and are still behaving as though there’s a recession on.
Of course there’s an explanation for that awkward fact, which is that the recovery in question never happened—outside, that is, of the abstract otherworld of numerical jugglery and spin doctoring that passes for official economic statistics these days. For most Americans, the last four years have been a bleak era of soaring expenses, shrinking incomes and benefits, rising economic insecurity, and increasingly frequent and bitter struggles with dysfunctional institutions that no longer bother even to pretend to serve the public good. That’s the reality people in the United States face when they get out of bed each morning, but it’s not a reality that’s welcome in the American mass media, so endless ingenuity has been expended in explaining why so many people in the US these days haven’t noticed the alleged economic recovery that’s allegedly burgeoning all around them.
I expect to see a good deal more of this sort of twaddle in the weeks immediately ahead, as those mass media pundits who haven’t yet trotted out their predictions for the new year get around to that annual task. For that matter, it’s doubtless safe to assume that out here on the fringes where archdruids lurk, there will be plenty of predictions of a different kind or, rather, several different kinds. There will be another round of claims that this is the year when the global economy will seize up suddenly and leave us all to starve in the dark; there will be another round of claims that this is the year when this or that or the other hot new technology will come swooping in to save the day and let the middle classes maintain their privileged lifestyles; there will be—well, those of my readers who have been following the blogosphere for any length of time can fill in the blanks themselves.
I’ve noted in previous years just how many of these latter predictions get rehashed every single January in the serene conviction that nobody will notice how often they’ve flopped before. Popular though that habit may be, it seems counterproductive to me, since—at least in theory—predictions of the sort we’re discussing are intended to be something more than light entertainment. With this in mind, I’d like to engage in the annual ritual of glancing back over the predictions I posted here at the beginning of the year now ending, and see how well I did. Here’s what I said:
“My prediction for 2014, in turn, is that we’ll see more of the same:  another year, that is, of uneven but continued downward movement along the same arc of decline and fall, while official statistics here in the United States will be doctored even more extravagantly than before to manufacture a paper image of prosperity. The number of Americans trying to survive without a job will continue to increase, the effective standard of living for most of the population will continue to decline, and what used to count as the framework of ordinary life in this country will go on unraveling a thread at a time. Even so, the dollar, the Euro, the stock market, and the Super Bowl will still be functioning as 2015 begins; there will still be gas in the gas pumps and food on grocery store shelves, though fewer people will be able to afford to buy either one.
“The fracking bubble has more than lived up to last year’s expectations, filling the mass media with vast amounts of meretricious handwaving about the coming era of abundance:  the same talk, for all practical purposes, that surrounded the equally delusional claims made for the housing bubble, the tech bubble, and so on all the way back to the Dutch tulip bubble of 1637. That rhetoric will prove just as dishonest as its predecessors, and the supposed new era of prosperity will come tumbling back down to earth once the bubble pops, taking a good chunk of the American economy with it. Will that happen in 2014? That’s almost impossible to know in advance. Timing the collapse of a bubble is one of the trickiest jobs in economic life; no less a mind than Isaac Newton’s was caught flatfooted by the collapse of the South Sea Bubble in 1720, and the current bubble is far more opaque. My guess is that the collapse will come toward the end of 2014, but it could have another year or so to run first.
“It’s probably a safe bet that weather-related disasters will continue to increase in number and severity. If we get a whopper on the scale of Katrina or Sandy, watch the Federal response; it’s certain to fall short of meeting the needs of the survivors and their communities, but the degree to which it falls short will be a useful measure of just how brittle and weak the national government has become. One of these years—just possibly this year, far more likely later on—that weakness is going to become one of the crucial political facts of our time, and responses to major domestic disasters are among the few good measures we’ll have of how close we are to the inevitable crisis.
“Meanwhile, what won’t happen is at least as important as what will. Despite plenty of enthusiastic pronouncements and no shortage of infomercials disguised as meaningful journalism, there will be no grand breakthroughs on the energy front. Liquid fuels—that is to say, petroleum plus anything else that can be thrown into a gas tank—will keep on being produced at something close to 2013’s rates, though the fraction of the total supply that comes from expensive alternative fuels with lower net energy and higher production costs will continue to rise, tightening a noose around the neck of every other kind of economic activity. Renewables will remain as dependent on government subsidies as they’ve been all along, nuclear power will remain dead in the water, fusion will remain a pipe dream, and more exotic items such as algal biodiesel will continue to soak up their quotas of investment dollars before going belly up in the usual way. Once the fracking bubble starts losing air, expect something else to be scooped up hurriedly by the media and waved around to buttress the claim that peak oil won’t happen, doesn’t matter, and so on; any of my readers who happen to guess correctly what that will be, and manage their investments accordingly, may just make a great deal of money.
“Sudden world-ending catastrophes will also be in short supply in 2014, though talk about them will be anything but...Both the grandiose breakthroughs that never happen and the equally gaudy catastrophes that never happen will thus continue to fill their current role as excuses not to think about, much less do anything about, what’s actually happening around us right now—the long ragged decline and fall of industrial civilization that I’ve called the Long Descent. Given the popularity of both these evasive moves, we can safely assume that one more thing won’t happen in 2014:  any meaningful collective response to the rising spiral of crises that’s shredding our societies and our future. As before, anything useful that’s going to happen will be the work of individuals, families, and community groups, using the resources on hand to cope with local conditions.”
As I write these words, the US media is still parroting the fantasy of a fracking-driven “Saudi America” with a mindless repetitiveness that puts broken records to shame, and so the next shiny distraction disguised as a marvelous new energy breakthrough hasn’t yet been trotted out for the usual round of carefully choreographed oohs and aahs. Other than that, once again, I think it’s fair to say I called it. Continuing economic decline, check; a fracking bubble heading toward a world-class bust, check; climate-related disasters on the rise, with government interventions doing less and less to help those affected, check; and a continuing shortage of game-changing breakthroughs, world-ending catastrophes, and meaningful collective responses to the crisis of our age, check-check-check. If this were a bingo game, I’d be walking up to the front of the room with a big smile on my face.
Now of course a case could be made that I’m cheating. After all, it doesn’t take any particular insight to point out that continuing trends tend to continue, or to choose trends that are pretty clearly ongoing and predict that they’ll keep on going for another year. While this is true, it’s also part of the point I’ve been trying to make here for getting on for nine years now:  in the real world, by and large, history is what happened when you weren’t looking. Under some circumstances, sudden jarring discontinuities can hit societies like a cold wet mackerel across the face, but close attention to the decade or so before things changed routinely shows that the discontinuity itself was the product of long-established trends, and could have been anticipated if anyone had been willing to do so.
That’s a particularly relevant issue just now, because the sort of long-established trends that can lead to sudden jarring discontinuities have been more and more evident in the United States in recent years, and one of the things that made 2014 so wretched for everyone outside the narrowing circle of the privileged well-to-do is precisely that several of those trends seem to be moving toward a flashpoint. I’d like to sketch out a couple of examples, because my predictions for 2015 will center on them.
The first and most obvious is the headlong collapse of the fracking bubble, which I discussed at some length in a post earlier this month. For most of the last decade, Wall Street has been using the fracking industry in all the same ways it used the real estate industry in the runup to the 2008 crash, churning out what we still laughably call “securities” on the back of a rapidly inflating speculative bubble. As the slumping price of oil kicks the props out from under the fracking boom, the vast majority of that paper—the junk bonds issued by fracking-industry firms, the securitized loans those same firms used to make up for the fact that they lost money every single quarter, the chopped and packaged shale leases, the volumetric production agreements, and all the rest of it—will revert to its actual value, which in most cases approximates pretty closely to zero.
It’s important in this context to remember that those highly insecure securities haven’t been cooped up in the financial equivalent of the dog pound where they belong; quite the contrary, they’ve gone roaming all over the neighborhood, leaving an assortment of messes behind. Banks, investment firms, pension funds, university endowments, and many other institutions in the US and abroad snapped this stuff up in gargantuan amounts, because it offered something like what used to count as a normal rate of return on investment. As a result, as the fracking boom goes belly up, it’s not just firms in the fracking industry that will be joining it in that undignified position. In the real estate bust, a great many businesses and institutions that seemingly had nothing to do with real estate found themselves in deep financial trouble; in the fracking bust, we can count on the same thing happening—and a great deal of the resulting bankruptcies, defaults, and assorted financial chaos will likely hit in 2015.
Thus one of the entertainments 2015 has in store for us is a thumping economic crisis here in the US, and in every other country that depends on our economy for its bread and butter. The scale of the crash depends on how many people bet how much of their financial future on the fantasy of an endless frack-propelled boom, but my guess is it’ll be somewhere around the scale of the 2008 real estate bust.
It probably has to be said that this doesn’t work out to the kind of fast-crash fantasy that sees the global economy grind to a sudden stop in a matter of weeks, leaving supermarket shelves bare and so on. The events of the 2008 crash proved, if there was ever any doubt on that score, that the governments of the world are willing to do whatever it takes to keep economic activity going, and if bailing out their pals in the big banks is what’s needed, hey, that’s all in a day’s work. Now of course bailing out the big banks won’t stop the bankruptcies, the layoffs, the steep cuts to pensions, the slashing of local and state government services, and the rest of it, any more than the same thing did in the wake of the 2008 crisis, but it does guarantee that the perfect storms and worst case scenarios beloved of a certain category of collapsitarian thinkers will remain imaginative fictions.
Something else that’s baked into the baby new year’s birthday cake at this point is a rising spiral of political unrest here in the United States. The mass protests over the extrajudicial executions of nonwhite Americans by police were pretty much inevitable, as pressures on the American underclass have been building toward an explosion for decades now.  There’s a certain bleak amusement to be had from watching financially comfortable white Americans come up with reasons to insist that this can’t possibly be the case, or for that matter, from hearing them contrive ways to evade the awkward fact that American police seem to have much less difficulty subduing belligerent suspects in nonlethal ways when the skins of the suspects in question are white.
Behind the killings and the protests, though, lies an explosive tangle that nobody on either side of the picket lines seems willing to address. Morale in many police departments across the United States resembles nothing so much as morale among American enlisted men in Vietnam in the last years of US involvement; after decades of budget cuts, grandstanding politicians, bungled reforms, an imploding criminal justice system, and ongoing blowback from misguided economic and social policies, a great many police officers feel that they’re caught between an enemy they can’t defeat and a political leadership that’s more than willing to throw them to the wolves for personal advantage. That the “enemy” they think they’re fighting is indistinguishable from the people they’re supposed to be protecting just adds to the list of troubling parallels.
In Vietnam, collapsing morale led to war crimes, “fragging” of officers by their own men, and worried reports to the Pentagon warning of the possibility of armed mutinies among US troops.  We haven’t yet gotten to the fragging stage this time, though the response of New York police to Mayor de Blasio suggests that we’re closer to that than most people think.  The routine extrajudicial execution of nonwhite suspects—there are scores if not hundreds of such executions a year—is the My Lai of our era, one of the few warnings that gets through the Five O’Clock Follies of the media to let the rest of us know that the guys on the front lines are cracking under the strain.
The final bitter irony here is that the federal government has been busily worsening the situation by encouraging the militarization of police departments across the United States, to the extent of equipping them with armored personnel carriers and other pieces of hardware that don’t have any possible use in ordinary policing. This is one of a good many data points that has me convinced that the US government is frantically gearing up to fight a major domestic insurgency. What’s more, they’re almost certainly going to get one. For decades now, since the post-Soviet “color revolutions,” the US has been funding and directing mass movements and rebellions against governments we don’t like, with Syria and Ukraine the two most recent beneficiaries of that strategy. We’ve made a lot of enemies in the process; it’s a safe bet that some of those enemies are eager to give us a heaping dose of our own medicine, and there are certainly nations with the means, motive, and opportunity to do just that.
Will an American insurgency funded by one or more hostile foreign powers get under way in 2015? I don’t think so, though I’m prepared to be wrong. More likely, I think, is another year of rising tensions, political gridlock, scattered gunfire, and rhetoric heated to the point of incandescence, while the various players in the game get into position for actual conflict:  the sort of thing the United States last saw in the second half of the 1850s, as sectional tensions built toward the bloody opening rounds of the Civil War.  One sign to watch for is the first outbreaks of organized violence—not just the shooting of one individual by another, but paramilitary assaults by armed groups—equivalent, more or less, to the fighting in “bleeding Kansas” that did so much to help make the Civil War inevitable.
Another thing to watch for, along the same lines, is the glorification of revolutionary violence by people who haven’t previously taken that plunge. To some extent, that’s already happening. I’m thinking here especially of a recent essay by Rebecca Solnit, which starts off with lavish praise of the French Revolution: “It’s popular to say that the experiment failed,” she says, “but that’s too narrow an interpretation. France never again regressed to an absolutist monarchy”—a statement that will surprise anyone who’s heard of Napoleon, Louis XVIII, or Napoleon III. In holding up the French Revolution as a model for today’s radicals, Ms. Solnit also doesn’t happen to mention the Terror, the tyranny of the Directorate, the Napoleonic wars, or any of the other problematic outcomes of the uprising in question. That sort of selective historical memory is common in times like these, and has played a very large role in setting the stage for some of history’s most brutal tragedies.
Meanwhile, back behind these foreground events, the broader trends this blog has been tracking since its outset are moving relentlessly on their own trajectories. The world’s finite supplies of petroleum, along with most other resources on which industrial civilization depends for survival, are depleting further with each day that passes; the ecological consequences of treating the atmosphere as an aerial sewer for the output of our tailpipes and smokestacks, along with all the other frankly brainless ways our civilization maltreats the biosphere that sustains us all, build apace. Caught between these two jaws of a tightening vise, industrial civilization has entered the rising spiral of crisis about which so many environmental scientists tried to warn the world back in the 1970s, and only a very small minority of people out on the fringes of our collective discourse has shown the least willingness to recognize the mess we’re in and to start changing their own lives in response: the foundation, it bears repeating, of any constructive response to the crisis of our era. 
I’ve heard quite a few people insist hopefully that since 2014 was so bad, 2015 has to be better. I’m sorry to say, though, that I don’t see much likelihood of that, at least here in the US. Quite the contrary, I think that when people recall 2015, they may just think of it as the year in which America got slapped across its collective face with the cold wet mackerel of reality. Come New Year’s Day of 2016, I expect to find the dollar, the Euro, the stock market, and the Super Bowl still functioning, gas in the pumps and products for sale on the grocery store shelves—but the nation in which these things exist will have passed through a traumatic and crisis-ridden year, and the chances of avoiding an even more difficult 2016 don’t seem good just now. Still, we’ll see.

Waiting for the Sunrise

Wed, 2014-12-24 17:05
By the time many of my readers get to this week’s essay here on The Archdruid Report, it will be Christmas Day. Here in America, that means that we’re finally most of the way through one of the year’s gaudiest orgies of pure vulgar greed, the holiday shopping season, which strikes me as rather an odd way to celebrate the birth of someone whose teachings so resolutely critiqued the mindless pursuit of material goodies. If, as that same person pointed out, it’s impossible to serve both God and Mammon, the demon of wealth, it’s pretty clear which of those two personages most Americans—including no small number who claim to be Christians—really consider the reason for the season.
  A long time before that stable in Bethlehem received its most famous tenants, though, the same day was being celebrated across much of the northern temperate zone. The reason has to do with one of those details everyone knew before the invention of electric lighting and few people remember now, the movement of the apparent point of sunrise along the eastern horizon during the year. Before the printing press made calendars ubiquitous, that was a standard way of gauging the changing seasons: the point of sunrise swings from southeast to northeast as winter in the northern hemisphere gives way to summer and from northeast back to southeast as summer gives way again to winter, and if you have a way to track the apparent motion, you can follow the yearly cycle with a fair degree of precision.
This movement is like the swing of a pendulum: it’s very fast in the middle of the arc, and slows to a dead stop on either end. That makes the spring and fall equinoxes easy to identify—if you have a couple of megaliths lined up just right, for example, the shadow of one will fall right on the foot of the other on the days of the equinoxes, and a little to one side or the other one day before or after—but the summer and winter solstices are a different matter. At those times of year, the sun seems to grind to a halt around the 17th of June or December, you wait for about a week, and then finally the sun comes up a little further south on June 25th or a little further north on December 25th, and you know for a fact that the wheel of the seasons is still turning.
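For those who like to check this sort of thing with arithmetic rather than megaliths, here's a minimal sketch in Python that shows how fast the sunrise point moves at different seasons. The 45-degree latitude and the rule-of-thumb declination formula are illustrative assumptions, not precision astronomy.

    import math

    LATITUDE = 45.0  # degrees north; an arbitrary mid-latitude for the example

    def sunrise_azimuth(day_of_year):
        # Rule-of-thumb approximation for solar declination, in degrees
        decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
        # At sunrise, cos(azimuth) = sin(declination) / cos(latitude),
        # with azimuth measured eastward from due north
        cos_az = math.sin(math.radians(decl)) / math.cos(math.radians(LATITUDE))
        return math.degrees(math.acos(cos_az))

    for label, day in (("near the equinox", 79), ("near the solstice", 172)):
        shift = abs(sunrise_azimuth(day + 1) - sunrise_azimuth(day))
        print(f"{label}: sunrise point moves about {shift:.3f} degrees per day")

Run it and you'll find the sunrise point shifting by something like half a degree of horizon per day around the equinoxes, and by only a few thousandths of a degree around the solstices, which is exactly why the old skywatchers had to wait until around the 25th to be sure the sun had turned.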
That’s why Christmas is when it is. I’ve read, though I don’t have the reference handy, that shepherds in the Levant back in the day kept watch over their flocks in the fields in late summer, not in December, and so—if the New Testament narrative is to be believed—Jesus was something like four months old when the first Christmas rolled around. As far as I know, nobody knows exactly how the present date got put in place, but I suspect the old solar symbolism had a lot to do with it; in those days, the Christian church was less prone to the rigid literalism that’s become common in recent centuries, and also quite aware of seasonal and astronomical cycles—consider the complicated rules for setting the date of Easter, in which movements of the sun and moon both play a part.
I’ve been thinking quite a bit about such things as the holiday shopping season stumbles toward its end and a troubled, weary, and distracted nation prepares to bid a hearty good riddance to 2014. Of course Druids generally think about such things; the seasonal cycle has had an important role in our traditions since those were revived in the eighteenth century. Even so, it’s been more on my mind than usual.  In particular, as I think about the shape of things in the world right now, what keeps coming to mind is the image of the old loremasters, waiting in the darkness at the end of a cold winter’s night to see the sunrise begin swinging back in the direction of spring.
Those of my readers who see such an image as hopelessly out of place just now have, I grant, quite a bit of evidence on their side. Most notably, the coming of 2015 marks a full decade since production of conventional petroleum worldwide hit its all-time peak and began to decline. Those who were around in the early days of the peak oil scene, as I was, will doubtless recall how often and eagerly the more optimistic members of that scene insisted that once the peak arrived, political and business interests everywhere would be forced to come to terms with the end of the age of cheap abundant energy. Once that harsh but necessary awakening took place, they argued, the transition to sustainable societies capable of living within the Earth’s annual budget of sunlight would finally get under way.
Of course that’s not what happened.  Instead, political and business interests responded to the peak by redefining what counts as crude oil, pouring just about any flammable liquid they could find into the world’s fuel tank—ethanol, vegetable oil, natural gas liquids, “dilbit” (diluted bitumen) from tar sands, you name it—while scraping the bottom of the barrel for petroleum via hydrofracturing, ultradeep offshore wells, and other extreme extraction methods. All of those require much higher inputs of fossil fuel energy per barrel produced than conventional crude does, so that a growing fraction of the world’s fossil fuel supply has had to be burned just to produce more fossil fuel. Did any whisper of this far from minor difficulty find its way into the cheery charts of “all liquids” and the extravagantly rose-colored projections of future production? Did, for example, any of the official agencies tasked with tracking fossil fuel production consider subtracting an estimate for barrels of oil equivalent used in extraction from the production figures, so that we would have at least a rough idea of the world’s net petroleum production?  Surely you jest.
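Just to make the arithmetic concrete, here's the sort of rough calculation I have in mind. It's a minimal sketch; the production volumes and energy-return figures are purely illustrative assumptions, not numbers from any official source.

    def net_production(gross_barrels_per_day, eroei):
        # Roughly 1/EROEI of each barrel's energy was burned to produce it;
        # only the remainder reaches the rest of the economy as net supply
        return gross_barrels_per_day * (1.0 - 1.0 / eroei)

    conventional = net_production(60e6, eroei=20)  # assumed legacy conventional crude
    extreme = net_production(15e6, eroei=5)        # assumed fracked oil, dilbit, and the like

    print(f"gross 'all liquids': {75e6 / 1e6:.0f} million barrels/day")
    print(f"net supply: {(conventional + extreme) / 1e6:.0f} million barrels/day")

Even with made-up numbers, the point stands out clearly enough: as the mix shifts toward extreme extraction methods, gross production can hold steady or even rise while the net energy actually available to the rest of the economy quietly declines.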
The need to redirect an appreciable fraction of the world’s fossil fuel supply into fossil fuel production, in turn, had significant economic costs. Those were shown by the simultaneous presence of prolonged economic dysfunction and sky-high oil prices: a combination, please note, that last appeared during the energy crises of the 1970s, and should have served as a warning sign that something similar was afoot. Instead of paying attention, political and business interests around the world treated the abrupt fraying of the economy as a puzzling anomaly to be drowned in a vat of cheap credit—when, that is, they didn’t treat it as a public relations problem that could be solved by proclaiming a recovery that didn’t happen to exist. Economic imbalances accordingly spun out of control; paper wealth flowed away from those who actually produce goods and services into the hands of those who manipulate fiscal abstractions; the global economy was whipsawed by convulsive fiscal crises in 2008 and 2009, and shows every sign of plunging into a comparable round of turmoil right now.
I wish I could say that the alternative energy side of the equation had responded to any of this in a way that might point toward a better future, but no such luck. With embarrassingly few exceptions, the things that got funding, or even any significant amount of discussion, were the sorts of overpriced white-elephant systems that only make economic sense in the presence of lavish government subsidies, and are utterly dependent on a technostructure that’s only viable given exactly the sort of cheap abundant fossil fuels that those systems are theoretically going to replace. Grid-tied photovoltaic systems, gargantuan wind turbines, and vast centralized solar-thermal facilities soaked up the attention and the funding, while simple, affordable, thoroughly proven technologies such as solar water heating got another decade of malign neglect. As for using less—the necessary foundation for anything approaching a sustainable future—that remained utterly taboo in polite company.
Back in 2005, a then-famous study done for the Department of Energy by a team headed by Robert Hirsch showed that to get through declining oil supplies without massive crisis, preparations for the descent would have to begin twenty years before the peak arrived. Since the peak of conventional crude oil production had already arrived in 2005, this warning was perhaps a little tardy, but a crash program focusing on conservation and the conversion of energy-intensive infrastructure to less vulnerable technologies might still have done much. Instead, we collectively wasted another decade on daydreams—and all the while, week after dreary week, the mainstream media has kept up a steady drumbeat of articles claiming to prove that this or that or the other thing has disproved peak oil. Given all this, is there any reason to expect anything other than a continuation of the same dysfunctional behavior, with the blind leading the blind until they all tumble together down the long bitter slope ahead?
As it happens, I think there is.
Part of it, oddly enough, is the steady drumbeat of articles just referred to, each claiming to have disproved peak oil once and for all. The last time the subject was shouted down, in the early 1980s, there wasn’t that kind of ongoing barrage; after a few blandly confident denunciations, the subject just got dropped from the media so hard it would have left a dent on a battleship’s armored deck, and was consigned thereafter to a memory hole straight out of George Orwell. Presumably that was the intention this time, too, but something has shifted.  In the early 1980s, when the media started spouting the same sort of cornucopian drivel they’re engaged in this time, the vast majority of the people who claimed to be concerned about energy and the environment trotted along after them with scarcely a dissenting bleat. That hasn’t happened in the present case; if I may indulge in a bit of very edgy irony here, this is one of the very few ways in which it really is different this time.
It’s worth glancing back over how that difference unfolded. To be sure, the brief heyday during which media reports took the end of the age of cheap abundant energy seriously stopped abruptly when puffing up the fracking bubble became the order of the day; the aforementioned drumbeat of alleged disproofs got going; those of us who kept on talking about peak oil started getting pressure from mainstream (that is, corporate-funded) environmentalists to drop the subject, get on board with the climate change bandwagon, and join them in the self-defeating rut that’s kept the environmental movement from accomplishing anything worthwhile for the last thirty years. In response, a certain number of bloggers and speakers who had been involved in peak oil discussions did in fact drop the subject, and those peak oil organizations that had committed themselves to a grant-funded organizational model fell on hard times. A fair number of us stayed the course, though.  Far more significantly, so did a very substantial portion of our audience.
That latter point is the thing that I find most encouraging. Over the last decade, in the teeth of constant propaganda from the mass media and a giddy assortment of other sources, the number of people in the United States and elsewhere who are aware of the ongoing decline of industrial society, who recognize the impossibility of infinite growth on a finite planet, and who are willing to make changes in their own lives in response to these things, somehow failed to dwindle away to near-irrelevance, as it did the last time around. If anything—though I don’t have hard statistics to back this perception, just a scattering of suggestive proxy measurements—that number seems to have increased.
When I speak to audiences about catabolic collapse and the twilight of the industrial age these days, for example, I don’t get anything like as many blank looks or casual dismissals as those concepts routinely fielded even a few years ago. Books on peak oil and related topics, mine among them, remain steady sellers, and stats on this blog have zigzagged unevenly but relentlessly upwards over the years, regularly topping 300,000 page views a month this autumn. Less quantifiable but more telling, at least to me, are the shifts I’ve watched in people I know. Some who used to reject the whole idea of imminent and ongoing decline with scornful laughter have slowly come around to rueful admissions that, well, maybe we really are in trouble; others, starting from the same place, now denounce any such notion with the sort of brittle rage that you normally see in people who are losing the ability to make themselves keep believing the dogma they’ve committed themselves to defending.
Even more telling are the young people I meet who have sized up the future with cold eyes, and walked away from the officially approved options spread before them like so many snares by a society whose easy promises a great many of them no longer believe.  Each year that passes brings me more encounters with people in their late teens and twenties who have recognized that the rules that shaped their parents’ and grandparents’ lives don’t work any more, that most of the jobs they have been promised either don’t exist or won’t exist for much longer, that a college education these days is a one-way ticket to decades of debt peonage, and that most of the other  institutions that claim to be there to help them don’t have their best interests in mind.  They’re picking up crafts and skilled trades, living with their parents or with groups of other young people, and learning to get by on less, because the price of doing otherwise is more than they’re willing to pay.
More broadly, more and more people seem to be turning their backs on the American dream, or more precisely on the bleak waking nightmare into which the American dream has metastasized over the last few decades. A growing number of people have walked away from the job market and found ways to support themselves outside a mainstream economy that’s increasingly stacked against them. Even among those who are still in the belly of the beast, the sort of unthinking trust in business as usual that used to be tolerably common straight through American society is increasingly rare these days. Outside the narrowing circle of those who benefit from the existing order of society, a crisis of legitimacy is in the making, and it’s not simply the current US political system that’s facing the brunt of that crisis—it’s the entire crumbling edifice of American collective life.
That crisis of legitimacy won’t necessarily lead to better things. It could all too easily head in directions no sane person would wish to go. I’ve written here more than once about the possibility that the abject and ongoing failure of constructive leadership in contemporary America could lay the groundwork for the rise of something closely akin to the fascist regimes of Depression-era Europe, as people desperate for an alternative to the Republicratic consensus frozen into place inside the Washington DC beltway turn to a charismatic demagogue who promises to break the gridlock and bring change. Things could also go in even more destructive directions; a nation that ships tens of thousands of its young people in uniform to an assortment of Middle Eastern countries, teaches them all the latest trends in  counterinsurgency warfare, and then dumps them back home in a collapsing economy without the benefits they were promised, has only itself to blame if some of them end up applying their skills in the service of a domestic insurgency against the present US government.
Those possibilities are real, and so are a galaxy of other potential outcomes that are considerably worse than what exists in America today. That said, constructive change is also a possibility. The absurd extravagances that most Americans still think of as an ordinary standard of living were always destined to be a short-term phenomenon, and we’re decades past the point at which a descent from those giddy heights could have been made without massive disruptions; no possible combination of political, social, economic, and environmental transformations at this point can change those unwelcome facts. Even so, there’s much worth doing that can still be done. We can at least stop making things worse than they have to be; we can begin shifting, individually and collectively, to technologies and social forms that will still make sense in a world of tightly constrained energy and resource supplies; we can preserve things of value to the near, middle, and far future that might otherwise be lost; we might, given luck and hard work, be able to revive enough of the moribund traditions of American democracy and voluntary association to provide an alternative down the road to the naked rule of force and fraud.
None of that will be easy, but then all the easy options went whistling down the wind a long time ago. No doubt there will still be voices insisting that Americans can have the lifestyles to which they think they’re entitled if only this, or that, or the other thing were to happen; no doubt the collapse of the fracking bubble will be followed by some equally gaudy and dishonest set of cargo-cult rhetoric meant to convince the rubes that happy days will shortly be here again, just as soon as billions of dollars we don’t happen to have are poured down whatever the next rathole du jour happens to be. If enough of us ignore those claims and do what must be done—and “enough” in this context does not need to equal a majority, or even a large minority, of Americans—there’s still much of value that can be accomplished in the time before us.
To return to the metaphor that opened this post, that first slight shift of sunrise north along the horizon from the solstice point, faint as it is, is a reminder that winter doesn’t last forever, even though the coldest nights and the worst of the winter storms come after that point is past. In the same way, bleak as the immediate prospects may be, there can still be a future worth having on the far side of the crisis of our age, and our actions here and now can further the immense task of bringing such a future into being. In the new year, as I continue the current series of posts on the American future, I plan on talking at quite some length about some of the things that can be done and some of the possibilities that those actions might bring within reach.
And with that, I would like to wish my Christian readers a very merry Christmas; my readers of other faiths, a blessed holiday season; and all my readers, a happy New Year.

Déjà Vu All Over Again

Wed, 2014-12-17 17:28
Over the last few weeks, a number of regular readers of The Archdruid Report have asked me what I think about the recent plunge in the price of oil and the apparent end of the fracking bubble. That interest seems to be fairly widespread, and has attracted many of the usual narratives; the  blogosphere is full of claims that the Saudis crashed the price of oil to break the US fracking industry, or that Obama got the Saudis to crash the price of oil to punish the Russians, or what have you.
  I suspect, for my part, that what’s going on is considerably more important. To start with, oil isn’t the only thing that’s in steep decline. Many other major commodities—coal, iron ore, and copper among them—have registered comparable declines over the course of the last few months. I have no doubt that the Saudi government has its own reasons for keeping their own oil production at full tilt even though the price is crashing, but they don’t control the price of those other commodities, or the pace of commercial shipping—another thing that has dropped steeply in recent months.
What’s going on, rather, is something that a number of us in the peak oil scene have been warning about for a while now. Since most of the world’s economies run on petroleum products, the steep oil prices of the last few years have taken a hefty bite out of all economic activities.  The consequences of that were papered over for a while by frantic central bank activities, but they’ve finally begun to come home to roost in what’s politely called “demand destruction”—in less opaque terms, the process by which those who can no longer afford goods or services stop buying them.
That, in turn, reminded me of the last time prolonged demand destruction collided with a boom in high-priced oil production, and sent me chasing after a book I read almost three decades ago. A few days ago, accordingly,  the excellent interlibrary loan service we have here in Maryland brought me a hefty 1985 hardback by financial journalist Philip Zweig, with the engaging title Belly Up: The Collapse of the Penn Square Bank. Some of my readers may never have heard of the Penn Square Bank; others may be scratching their heads, trying to figure out why the name sounds vaguely familiar. Those of my readers who belong to either category may want to listen up, because the same story seems to be repeating itself right now on an even larger scale.
The tale begins in the middle years of the 1970s, when oil prices shot up to unprecedented levels, and reserves of oil and natural gas that hadn’t been profitable before suddenly looked like winning bets. The deep strata of Oklahoma’s Anadarko basin were ground zero for what many people thought was a new era in natural gas production, especially when a handful of deep wells started bringing in impressive volumes of gas. The only missing ingredient was cash, and plenty of it, to pay for the drilling and hardware. That’s where the Penn Square Bank came into the picture.
The Penn Square Bank was founded in 1960. At that time, as a consequence of hard-earned suspicions about big banks dating back to the Populist era, Oklahoma state banking laws prohibited banks from owning more than one branch, and so there were hundreds of little one-branch banks scattered across the state, making a modest return from home mortgages, auto loans, and the like. That’s what Penn Square was; it had been organized by the developer of the Penn Square shopping mall, in the northern suburbs of Oklahoma City, to provide an additional draw to retailers and customers. There it sat, in between a tobacconist and Shelley’s Tall Girl’s Shop, doing ordinary retail banking, until 1975.
In that year it was bought by a group of investors headed by B.P. “Beep” Jennings, an Oklahoma City banker who had been passed over for promotion at one of the big banks in town. Jennings pretty clearly wanted to prove that he could run with the big dogs; he was an excellent salesman, but not particularly talented at the number-crunching details that make for long-term success in banking, and he proceeded to demonstrate his strengths and weaknesses in an unforgettable manner. He took the little shopping mall bank and transformed it into a big player in the Oklahoma oil and gas market, which was poised—or so a chorus of industry voices insisted—on the brink of one of history’s great energy booms.
Now of course this involved certain difficulties, which had to be overcome. A small shopping center bank doesn’t necessarily have the financial resources to become a big player in a major oil and gas market, for example. Fortunately for Beep Jennings, one of the grand innovations that has made modern banking what it is today had already occurred; by his time, loans were no longer seen as money that was collected from depositors and loaned out to qualified borrowers, in the expectation that it would be repaid with interest. Rather, loans were (and are) assets, which could (and can) be sold, for cash, to other banks. This is what Penn Square did, and since their loans charged a competitive interest rate and thus promised competitive profits, they were eagerly snapped up by Chase Manhattan, Continental Illinois, Seattle First, and a great many other large and allegedly sophisticated banks. So Penn Square Bank started issuing loans to Oklahoma oil and gas entrepreneurs, a flotilla of other banks around the country proceeded to fund those loans, and to all intents and purposes, the energy boom began.
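The mechanics deserve a moment's attention, because they explain how a shopping-mall bank could punch so far above its weight. Here's a deliberately stripped-down sketch of the originate-and-sell cycle just described; every figure in it is invented for illustration.

    def churn_loans(working_capital, loan_size, origination_spread, rounds):
        # The originating bank needs only enough cash to fund each loan briefly;
        # selling the loan upstream replenishes the till for the next round
        assert working_capital >= loan_size, "can't fund even a single loan"
        total_originated = 0.0
        fee_income = 0.0
        for _ in range(rounds):
            total_originated += loan_size
            fee_income += loan_size * origination_spread
        return total_originated, fee_income

    originated, fees = churn_loans(20e6, loan_size=10e6, origination_spread=0.01, rounds=200)
    print(f"loans originated: ${originated / 1e9:.1f} billion on $20 million of capital")
    print(f"fee income booked up front: ${fees / 1e6:.0f} million")

Notice what never appears in the loop: any step in which someone checks whether the borrowers can repay. The originating bank's income depends only on volume, and multiply numbers like these by the enthusiasm of the upstream banks and you have yourself an energy boom.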
At least that’s what it looked like. There was a great deal of drilling going on, certainly; the economists insisted that the price of oil and gas would just keep on rising; the local and national media promptly started featuring giddily enthusiastic stories about the stunning upside opportunities in the booming Oklahoma oil and gas business. What’s more, Oklahoma oil and gas entrepreneurs were spending money like nobody’s business, and not just on drilling leases, steel pipe, and the other hardware of the trade. Lear jets, vacation condos in fashionable resorts, and such lower-priced symbols of nouveau richesse as overpriced alligator-hide cowboy boots were much in evidence; so was the kind of high-rolling crassness that only the Sunbelt seems to inspire. Habitués of the Oklahoma oilie scene used to reminisce about one party where one of the attendees stood at the door with a stack of crisp $100 bills in his hand and asked every woman who entered how much she wanted for her clothes: every stitch, then and there, piled up in the entry. Prices varied, but apparently none of them turned down the offer.
It’s only fair to admit that there were a few small clouds marring the otherwise sunny vistas of the late 1970s Oklahoma oil scene. One of them was the difficulty the banks buying loans from Penn Square—the so-called “upstream” banks—had in getting Penn Square to forward all the necessary documents on those loans. Since their banks were making loads of money off the transactions, the people in charge at the upstream banks were unwilling to make a fuss about it, and so their processing staff just had to put up with such minor little paperwork problems as missing or contradictory statements concerning collateral, payments of interest and principal, and so on. 
Mind you, some of the people in charge at those upstream banks seem to have had distinctly personal reasons for not wanting to make a fuss about those minor little paperwork problems. They were getting very large loans from Penn Square on very good terms, entering into partnerships with Penn Square’s favorite oilmen, and in at least some cases attending the clothing-optional parties just mentioned. No one else in the upstream banks seems to have been rude enough to ask too many questions about these activities; those who wondered aloud about them were told, hey, that’s just the way Oklahoma oilmen do business, and after all, the banks were making loads of money off the boom.
All in all, the future looked golden just then. In 1979, the Iranian revolution drove the price of oil up even further; in 1980, Jimmy Carter’s troubled presidency—with its indecisive but significant support for alternative energy and, God help us all, conservation—was steamrollered by Reagan’s massively funded and media-backed candidacy. As the new president took office in January of 1981, promising “morning in America,” the Penn Square bankers, their upstream counterparts, their clients in the Oklahoma oil and gas industry, and everyone else associated with the boom felt confident that happy days were there to stay. After all, the economists insisted that the price of oil and gas would just keep rising for decades to come, the most business-friendly and environment-hostile administration in living memory was comfortably ensconced in the White House; and investors were literally begging to be allowed to get a foot in the door in the Oklahoma boom. What could possibly go wrong?
Then, in 1981, without any fuss at all, the price of oil and natural gas peaked and began to decline.
In retrospect, it’s not difficult to see what happened, though a lot of people since then have put a lot of effort into leaving the lessons of those years unlearnt.  Energy is so central to a modern economy that when the price of energy goes up, every other sector of the economy ends up taking a hit. The rising price of energy functions, in effect, as a hidden tax on all economic activity outside the energy sector, and sends imbalances cascading through every part of the economy. As a result, other economic sectors cut their expenditures on energy as far as they can, either by conservation measures or by such tried and true processes as shedding jobs, cutting production, or going out of business. All this had predictable effects on the price of oil and gas, even though very few people predicted them.
As oil and gas prices slumped, investors started backing away from fossil fuel investments, including the Oklahoma boom. Upstream banks, in turn, started to have second thoughts about the spectacular sums of money they’d poured into Penn Square Bank loans. For the first time since the boom began, hard questions—the sort of questions that, in theory, investors and bankers are supposed to ask as a matter of course when people ask them for money—finally got asked. That’s when the problems began in earnest, because a great many of those questions didn’t have any good answers.
It took until July 5, 1982 for the boom to turn definitively into a bust. That’s the day that  federal bank regulators, after several years of inconclusive fumbling and a month or so of increasing panic, finally shut down the Penn Square Bank. What they discovered, as they dug through the mass of fragmentary, inaccurate, and nonexistent paperwork, was that Penn Square had basically been lending money to anybody in the oil and gas industry who wanted some, without taking the trouble to find out if the borrowers would ever be able to repay it. When payments became a problem, Penn Square obligingly loaned out the money to make their payments, and dealt with loans that went bad by loaning deadbeat borrowers even more money, so they could clear their debts and maintain their lifestyles.
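It's worth pausing over how fast that practice digs a hole. A toy model, with a loan size and interest rate picked out of the air purely for illustration, shows what happens when a bank "keeps a loan current" by lending the borrower the payments:

    def evergreen_loan(principal, annual_rate, years):
        balance = principal
        for year in range(1, years + 1):
            interest = balance * annual_rate
            # The borrower pays nothing; the bank lends the interest back,
            # so the loan looks current while the balance compounds
            balance += interest
            print(f"year {year}: balance ${balance:,.0f}, payment history spotless")
        return balance

    evergreen_loan(1_000_000, annual_rate=0.15, years=5)

After five years the balance has more than doubled, the books show a flawless payment history, and the borrower has never paid a dime.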
The oil and gas boom had in fact been nothing of the kind, as a good many of the firms that had been out there producing oil and gas had been losing money all along.  Rather, it was a Ponzi scheme facilitated by delusional lending practices.  All those Lear jets, vacation condos, alligator-skin cowboy boots, heaps of slightly used women’s clothing, and the rest of it? They were paid for by money from investors and upstream banks, some of it via the Penn Square Bank, the rest from other banks and investors. The vast majority of the money was long gone; the resulting crash brought half a dozen major banks to their knees, and plunged Oklahoma and the rest of the US oil belt into a savage recession that gripped the region for most of a decade.
That was the story chronicled in Zweig’s book, which I reread  over a few quiet evenings last week. Do any of the details seem familiar to you? If not, dear reader, you need to get out more.
As far as I know, the fracking bubble that’s now well into its denouement didn’t have a single ineptly run bank at its center, as the Oklahoma oil and gas bubble did. Most of the other details of that earlier fiasco, though, were present and accounted for. Sky-high fuel prices, check; reserves unprofitable at earlier prices that suddenly looked like a winning deal, check; a media frenzy that oversold the upside and completely ignored the possibility of a downside, check; vast torrents of money and credit from banks and investors too dazzled by the thought of easy riches to ask the obvious questions, check; a flurry of drilling companies that lost money every single quarter but managed to stay in business by heaping up mountains of unpayable debt, check. Pretty much every square on the bingo card marked “economic debacle” has been filled in with a pen dipped in fracking fluid.
Now of course a debacle of the Penn Square variety requires at least one other thing, which is a banking industry so fixated on this quarter’s profits that it can lose track of the minor little fact that lending money to people who can’t pay it back isn’t a business strategy with a long shelf life. I hope none of my readers are under the illusion that this is lacking just now. With interest rates stuck around zero and people and institutions that live off their investments frantically hunting for what used to count as a normal rate of return, the same culture of short-term thinking and financial idiocy that ran the global economy into the ground in the 2008 real estate crash remains firmly in place, glued there by the refusal of the Obama administration and its equivalents elsewhere to prosecute even the most egregious cases of fraud and malfeasance.
Now that the downturn in oil prices is under way, and panic selling of energy-related junk bonds and lower grades of unconventional crude oil has begun in earnest, it seems likely that we’ll learn just how profitable the fracking fad of the last few years actually was. My working guess, which is admittedly an outsider’s view based on limited data and historical parallels, is that it was a money-losing operation from the beginning, and looked prosperous—as the Oklahoma boom did—only because it attracted a flood of investment money from people and institutions who were swept up in the craze. If I’m right, the spike in domestic US oil production due to fracking was never more than an artifact of fiscal irresponsibility in the first place, and could not have been sustained no matter what. Still, we’ll see.
The more immediate question is just how much damage the turmoil now under way will do to a US and global economy that have never recovered from the body blow inflicted on them by the real estate bubble that burst in 2008. Much depends on exactly who sank how much money into fracking-related investments, and just how catastrophically those investments come unraveled.  It’s possible that the result could be just a common or garden variety recession; it’s possible that it could be quite a bit more. When the tide goes out, as Warren Buffett has commented, you find out who’s been swimming naked, and just how far the resulting lack of coverage will extend is a question of no small importance.
At least three economic sectors outside the fossil fuel industry, as I see it, stand to suffer even if all we get is an ordinary downturn. The first, of course, is the financial sector. A vast amount of money was loaned to the fracking industry; another vast amount—I don’t propose to guess how it compares to the first one—was accounted for by issuing junk bonds, and there was also plenty of ingenious financial architecture of the sort common in the housing boom. Those are going to lose most or all of their value in the months and years ahead. No doubt the US government will bail out its pals in the really big banks again, but there’s likely to be a great deal of turmoil anyway, and midsized and smaller players may crash and burn in a big way. One way or another, it promises to be entertaining.
The second sector I expect to take a hit is the renewable energy sector.  In the 1980s, as prices of oil and natural gas plunged, they took most of the then-burgeoning solar and wind industries with them. There were major cultural shifts at the same time that helped feed the abandonment of renewable energy, but the sheer impact of cheap oil and natural gas needs to be taken into account. If, as seems likely, we can expect several years of lower energy prices, and several years of the kind of economic downdraft that makes access to credit for renewable-energy projects a real challenge, a great many firms in the green sector will struggle for survival, and some won’t make it.
Those renewable-energy firms that pull through will find a substantial demand for their services further down the road, once the recent talk about Saudi America finds its proper home in the museum of popular delusions next to perpetual motion machines and Piltdown Man, and the US has to face a future without the imaginary hundred-year reserve of fracked natural gas politicians were gabbling about not that long ago. Still, it’s going to take some nimble footwork to get there; my guess is that those firms that get ready to do without government subsidies and tax credits, and look for ways to sell low-cost homescale systems in an era of disintegrating energy infrastructure, will do much better than those that cling to the hope of government subsidies and big corporate contracts.
The third sector I expect to land hard this time around is the academic sector. Yes, I know, it’s not fashionable to talk of the nation’s colleges and universities as an economic sector, but let’s please be real; in today’s economy, the academic industry functions mostly as a sales office for predatory loans, which are pushed on unwary consumers using deceptive marketing practices. The vast majority of people who are attending US universities these days, after all, will not prosper as a result; in fact, they will never recover financially from the burden of their student loans, since the modest average increase in income that will come to those graduates who actually manage to find jobs will be dwarfed by the monthly debt service they’ll have to pay for decades after graduation.
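The arithmetic behind that claim is easy enough to check. Here's a minimal sketch using the standard amortization formula for monthly debt service; the loan balance, interest rate, term, and income premium are assumptions picked for illustration, not statistics.

    def monthly_payment(principal, annual_rate, years):
        # Standard amortization formula: payment = P * r / (1 - (1 + r) ** -n)
        r = annual_rate / 12.0
        n = years * 12
        return principal * r / (1.0 - (1.0 + r) ** -n)

    debt_service = monthly_payment(60_000, annual_rate=0.065, years=20)
    income_premium = 400.0  # assumed extra monthly income from holding the degree

    print(f"monthly debt service: ${debt_service:.2f}")  # roughly $447
    print(f"assumed income premium: ${income_premium:.2f}")
    print(f"net monthly change: ${income_premium - debt_service:+.2f}")

With those assumptions the graduate comes out some fifty dollars behind every month for twenty years, and that's before taxes take their share of the extra income.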
One of the core reasons why the academic industry has become so vulnerable to a crash is that most colleges and universities rely on income from their investments to pay their operating expenses, and income from investments has taken a double hit in the last decade. First, the collapse of interest rates to near-zero (and in some cases, below-zero) levels has hammered returns across the spectrum of investment vehicles. As a result, colleges and universities have increasingly put their money into risky investments that promise what used to be ordinary returns, and this drove the second half of the equation; in the wake of the 2008 real estate crash, many colleges and universities suffered massive losses of endowment funds, and most of these losses have never been made good.
Did the nation’s colleges and universities stay clear of the fracking bubble?  That would have required, I think, far more prudence and independent thinking than the academic industry has shown of late. Those institutions that had the common sense to get out of fossil fuels for ecological reasons may end up reaping a surprising benefit; the rest, well, here again we’ll have to wait and see. My working guess, which is once again an outsider’s guess based on limited data and historical parallels, is that a great many institutions tried to bail themselves out from the impact of the real estate bust by doubling down on fracking. If that’s what happened, the looming crisis in American higher education—a crisis driven partly by the predatory loan practices mentioned earlier, partly by the jaw-dropping inflation in the price of a college education in recent decades, and partly by rampant overbuilding of academic programs—will be hitting shortly, and some very big names in the academic industry may not survive the impact.
As Yogi Berra liked to point out, it’s hard to make predictions, especially about the future. Still, it looks as though we may be in the opening stages of a really ugly fiscal crisis, and I’d encourage my readers to take that possibility seriously and act accordingly.

Dark Age America: The Sharp Edge of the Shell

Wed, 2014-12-10 17:41
One of the interesting features of blogging about the twilight of science and technology these days is that there’s rarely any need to wait long for a cogent example. One that came my way not long ago via a reader of this blog—tip of the archdruidical hat to Eric S.—shows that not even a science icon can get away with asking questions about the rising tide of financial corruption and dogmatic ideology that’s drowning the scientific enterprise in our time.
Many of my readers will recall Bill Nye the Science Guy, the star of a television program on science in the 1990s and still a vocal and entertaining proponent of science education. In a recent interview, Nye was asked why he doesn’t support the happy-go-lucky attitude toward dumping genetically modified organisms into the environment that’s standard in the United States and a few other countries these days. His answer  is that their impact on ecosystems is a significant issue that hasn’t been adequately addressed. Those who know their way around today’s pseudoskeptic scene won’t be surprised by the reaction from one of Discover Magazine’s bloggers: a tar and feathers party, more or less, full of the standard GMO industry talking points and little else.
Nye’s point, as it happens, is as sensible as it is scientific: ecosystems are complex wholes that can be thrown out of balance by relatively subtle shifts, and since human beings depend for their survival and prosperity on the products of natural ecosystems, avoiding unnecessary disruption to those systems is arguably a good idea. This eminently rational sort of thinking, though, is not welcomed in corporate boardrooms just now.  In the case under discussion, it’s particularly unwelcome in the boardrooms of  corporations heavily invested in genetic modification, which have a straightforward if shortsighted financial interest in flooding the biosphere with as many GMOs as they can sell.
Thus it’s reasonable that Monsanto et al. would scream bloody murder in response to Nye’s comment. What interests me is that so many believers in science should do the same, and not only in this one case. Last I checked, “what makes the biggest profit for industry must be true” isn’t considered a rule of scientific reasoning, but that sort of thinking is remarkably common in what passes for skepticism these days. To cite an additional example, it’s surely not accidental that there’s a 1.00 correlation between the health care modalities that make money for the medical and pharmaceutical industries and the health care modalities that the current crop of soi-disant skeptics consider rational and science-based, and an equal 1.00 correlation between those modalities that don’t make money for the medical and pharmaceutical industries and those that today’s skeptics dismiss as superstitious quackery.
To some extent, this is likely a product of what’s called “astroturfing,” the manufacture of artificial grassroots movements to support the agendas of an industrial sector or a political faction. The internet, with its cult of anonymity and its less than endearing habit of letting every discussion plunge to the lowest common denominator of bullying and abuse, was tailor-made for that sort of activity; it’s pretty much an open secret at this point, or so I’m told by the net-savvy, that most significant industries these days maintain staffs of paid flacks who spend their working hours searching the internet for venues to push messages favorable to their employers and challenge opposing views. Given the widespread lack of enthusiasm for GMOs, Monsanto and its competitors would have to be idiots to neglect such an obvious and commonly used marketing tactic.
Still, there’s more going on here than ordinary media manipulation in the hot pursuit of profits. There are plenty of people who have no financial stake in the GMO industry who defend it fiercely from even the least whisper of criticism, just as there are plenty of people who denounce alternative medicine in ferocious terms even though they don’t happen to make money from the medical-pharmaceutical industrial complex. I’ve discussed in previous posts here, and in a forthcoming book, the way that faith in progress was pressed into service as a substitute for religious belief during the nineteenth century, and continues to fill that role for many people today. It’s not a transformation that did science any good, but its implications as industrial civilization tips over into decline and fall are considerably worse than the ones I’ve explored in previous essays. I want to talk about those implications here, because they have a great deal to say about the future of science and technology in the deindustrializing world of the near future.
It’s important, in order to make sense of those implications, to grasp that science and technology function as social phenomena, and fill social roles, in ways that have more than a little in common with the intellectual activities of civilizations of the past. That doesn’t mean, as some postmodern theorists have argued, that science and technology are purely social phenomena; both of them have to take the natural world into account, and so have an important dimension that transcends the social. That said, the social dimension also exists, and since human beings are social mammals, that dimension has an immense impact on the way that science and technology function in this or any other human society.
From a social standpoint, it’s thus not actually all that relevant that the scientists and engineers of contemporary industrial society can accomplish things with matter and energy that weren’t within the capacities of Babylonian astrologer-priests, Hindu gurus, Chinese literati, or village elders in precontact New Guinea. Each of these groups has been assigned a particular social role, the role of interpreter of Nature, by its respective society, and each of them is accorded substantial privileges for fulfilling the requirements of that role. It’s therefore possible to draw precise and pointed comparisons between the different bodies of people filling that very common social role in different societies.
The exercise is worth doing, not least because it helps sort out the far from meaningless distinction between the aspects of modern science and technology that unfold from their considerable capacities for doing things with matter and energy, and the aspects that unfold from the normal dynamics of social privilege.  What’s more, since modern science and technology weren’t around in previous eras of decline and fall but privileged intellectual castes certainly were, recognizing the common features that unite today’s scientists, engineers, and promoters of scientific and technological progress with equivalent groups in past civilizations makes it a good deal easier to anticipate the fate of science and technology in the decades and centuries to come.
A specific example will be more useful here than any number of generalizations, so let’s consider the fate of philosophy in the waning years of the Roman world. The extraordinary intellectual adventure we call classical philosophy began in the Greek colonial cities of Ionia around 585 BCE, when Thales of Miletus first proposed a logical rather than a mythical explanation for the universe, and proceeded through three broad stages from there. The first stage, that of the so-called Presocratics, focused on the natural world, and the questions it asked and tried to answer can more or less be summed up as “What exists?”  Its failures and equivocal successes led the second stage, which extended from Socrates through Plato and Aristotle to the Old Academy and its rivals, to focus their attention on different questions, which can be summed up just as neatly as “How can we know what exists?”
That was an immensely fruitful shift in focus. It led to the creation of classical logic—one of the great achievements of the human mind—and it also drove the transformations that turned mathematics from an assortment of rules of thumb into an architecture of logical proofs, and thus laid the foundations on which Newtonian physics and other quantitative sciences eventually built.  Like every other great intellectual adventure of our species, though, it never managed to fulfill all the hopes that had been loaded onto it; the philosopher’s dream of a human society made wholly subject to reason turned out to be just as unreachable as the scientist’s dream of a universe made wholly subject to the human will. As that failure became impossible to ignore, classical philosophy shifted focus again, to a series of questions and attempted answers that amounted to “given what we know about what exists, how should we live?”
That’s the question that drove the last great age of classical philosophy, the age of the Epicureans, the Stoics, and the Neoplatonists, the three philosophical schools I discussed a few months back as constructive personal responses to the fall of our civilization. At first, these and other schools carried on lively and far-reaching debates, but as the Roman world stumbled toward its end under the burden of its own unsolved problems, the philosophers closed ranks; debates continued, but they focused more and more tightly on narrow technical issues within individual schools. What’s more, the schools themselves closed ranks; pure Stoic, Aristotelian, and Epicurean philosophy gradually dropped out of fashion, and by the fourth century CE, a Neoplatonism enriched with bits and pieces of all the other schools stood effectively alone, the last school standing in the long struggle Thales kicked off ten centuries before.
Now I have to confess to a strong personal partiality for the Neoplatonists. It was from Plotinus and Proclus, respectively the first and last great figures in the classical tradition, that I first grasped why philosophy matters and what it can accomplish, and for all its problems—like every philosophical account of the world, it has some—Neoplatonism still makes intuitive sense to me in a way that few other philosophies do. What’s more, the men and women who defended classical Neoplatonism in its final years were people of great intellectual and personal dignity, committed to proclaiming the truth as they knew it in the face of intolerance and persecution that ended up costing no few of them their lives.
The awkward fact remains that classical philosophy, like modern science, functioned as a social phenomenon and filled certain social roles. The intellectual power of the final Neoplatonist synthesis and the personal virtues of its last proponents have to be balanced against its blind support of a deeply troubled social order; in all the long history of classical philosophy, it never seems to have occurred to anyone that debates about the nature of justice might reasonably address, say, the ethics of slavery. While a stonecutter like Socrates could take an active role in philosophical debate in Athens in the fifth century BCE, furthermore, the institutionalization of philosophy meant that by the last years of classical Neoplatonism, its practice was restricted to those with ample income and leisure, and its values inevitably became more and more closely tied to the social class of its practitioners.
That’s the thing that drove the ferocious rejection of philosophy by the underclass of the age, the slaves and urban poor who made up the vast majority of the population throughout the Roman empire, and who received little if any benefit from the intellectual achievements of their society. To them, the subtleties of Neoplatonist thought were irrelevant to the increasingly difficult realities of life on the lower end of the social pyramid in a brutally hierarchical and increasingly dysfunctional world. That’s an important reason why so many of them turned for solace to a new religious movement from the eastern fringes of the empire, a despised sect that claimed that God had been born on earth as a mere carpenter’s son and communicated through his life and death a way of salvation that privileged the poor and downtrodden above the rich and well-educated.
It was as a social phenomenon, filling certain social roles, that Christianity attracted persecution from the imperial government, and it was in response to Christianity’s significance as a social phenomenon that the imperial government executed an about-face under Constantine and took the new religion under its protection. Like plenty of autocrats before and since, Constantine clearly grasped that the real threat to his position and power came from other members of his own class—in his case, the patrician elite of the Roman world—and saw that he could undercut those threats and counter potential rivals through an alliance of convenience with the leaders of the underclass. That’s the political subtext of the Edict of Milan, which legalized Christianity throughout the empire and brought it imperial patronage.
The patrician class of late Roman times, like its equivalent today, exercised power through a system of interlocking institutions from which outsiders were carefully excluded, and it maintained a prickly independence from the central government. By the fourth century, tensions between the bureaucratic imperial state and the patrician class, with its local power bases and local loyalties, were rising toward a flashpoint. The rise of Christianity thus gave Constantine and his successors an extraordinary opportunity. Most of the institutions that undergirded patrician power were linked to Pagan religion; local senates, temple priesthoods, philosophical schools, and other elements of elite culture normally involved duties drawn from the traditional faith. A religious pretext to strike at those institutions must have seemed as good as any other, and the Christian underclass offered one other useful feature: mobs capable of horrific acts of violence against prominent defenders of the patrician order.
That was why, for example, a Christian mob in 415 CE dragged the Neoplatonist philosopher Hypatia from her chariot as she rode home from her teaching gig at the Academy in Alexandria, cudgeled her to death, cut the flesh from her bones with sharpened oyster shells—the cheap pocket knives of the day—and burned the bloody gobbets to ashes. What doomed Hypatia was not only her defense of the old philosophical traditions, but also her connection to Alexandria’s patrician class; her ghastly fate was as much the vengeance of the underclass against the elite as it was an act of religious persecution. She was far from the only victim of violence driven by those paired motives, either. It was as a result of such pressures that, by the time the emperor Justinian ordered the last academies closed in 529 CE, the classical philosophical tradition was essentially dead.
That’s the sort of thing that happens when an intellectual tradition becomes too closely affiliated with the institutions, ideologies, and interests of a social elite. If the elite falls, so does the tradition—and if it becomes advantageous for anyone else to target the elite, the tradition can be a convenient target, especially if it’s succeeded in alienating most of the population outside the elite in question.
Modern science is extremely vulnerable to such a turn of events. There was a time when the benefits of scientific research and technological development routinely reached the poor as well as the privileged, but that time has long since passed; these days, the benefits of research and development move up the social ladder, while the costs and negative consequences move down. Nearly all the jobs eliminated by automation, globalization, and the computer revolution, for example, were filled from the bottom end of the job market. In the same way, changes in US health care in recent decades have benefited the privileged while subjecting most others to substandard care at prices so high that medical bills are the leading cause of bankruptcy in the US today.
It’s all very well for the promoters of progress to gabble on about science as the key to humanity’s destiny; the poor know that the destiny thus marketed isn’t for them.  To the poor, progress means fewer jobs with lower pay and worse conditions, more surveillance and impersonal violence carried out by governments that show less and less interest in paying even lip service to the concept of civil rights, a rising tide of illnesses caused by environmental degradation and industrial effluents, and glimpses from afar of an endless stream of lavishly advertised tech-derived trinkets, perks and privileges that they will never have. Between the poor and any appreciation for modern science stands a wall made of failed schools, defunded libraries, denied opportunities, and the systematic use of science and technology to benefit other people at their expense. Such a wall, it probably bears noting, makes a good surface against which to sharpen oyster shells.
It seems improbable that anything significant will be done to change this picture until it’s far too late for such changes to have any meaningful effect. Barring dramatic transformations in the distribution of wealth, the conduct of public education, the funding for such basic social amenities as public libraries, and a great deal more, the underclass of the modern industrial world can be expected to grow more and more disenchanted with science as a social phenomenon in our culture, and to turn instead—as their equivalents in the Roman world and so many other civilizations did—to some tradition from the fringes that places itself in stark opposition to everything modern scientific culture stands for. Once that process gets under way, it’s simply a matter of waiting until the corporate elite that funds science, defines its values, and manipulates it for PR purposes, becomes sufficiently vulnerable that some other power center decides to take it out, using institutional science as a convenient point of attack.
Saving anything from the resulting wreck will be a tall order. Still, the same historical parallel discussed above offers some degree of hope. The narrowing focus of classical philosophy in its last years meant, among other things, that a substantial body of knowledge that had once been part of the philosophical movement was no longer identified with it by the time the cudgels and shells came out, and much of it was promptly adopted by Christian clerics and monastics as useful for the Church. That’s how classical astronomy, music theory, and agronomy, among other things, found their way into the educational repertoire of Christian monasteries and nunneries in the dark ages. What’s more, once the power of the patrician class was broken, a carefully sanitized version of Neoplatonist philosophy found its way into Christianity; in some denominations, it’s still a living presence today.
That may well happen again. Certainly today’s defenders of science are doing their best to shove a range of scientific viewpoints out the door; the denunciation meted out to Bill Nye for bringing basic concepts from ecology into a discussion where they were highly relevant is par for the course these days. There’s an interesting pattern to which sciences get this treatment and which don’t: the ones being flung aside are those that focus on observation of natural systems rather than control of artificial ones, and beyond that, any science that raises doubts about the possibility or desirability of infinite technological expansion can expect to find itself shivering in the dark outside in very short order. (This latter point applies to other fields of intellectual endeavor as well; half the angry denunciations of philosophy you’ll hear these days from figures such as Neil deGrasse Tyson, I’m convinced, come out of the simple fact that the claims of modern science to know objective truths about nature won’t stand up to fifteen minutes of competent philosophical analysis.)
Thus it’s entirely possible that observational sciences, if they can squeeze through the bottleneck imposed by the loss of funding and prestige, will be able to find a new home in whatever intellectual tradition replaces modern scientific rationalism in the deindustrial future. It’s at least as likely that such dissident sciences as ecology, which has always raised challenging questions about the fantasies of the manipulative sciences, may find themselves eagerly embraced by a future intellectual culture that has no trouble at all recognizing the futility of those fantasies. That said, it’s still going to take some hard work to preserve what’s been learnt in those fields—and it’s also going to take more than the usual amount of prudence and plain dumb luck not to get caught up in the conflict when the sharp edge of the shell gets turned on modern science.

Dark Age America: The Fragmentation of Technology

Wed, 2014-12-03 16:21
It was probably inevitable that last week’s discussion of the way that contemporary science is offering itself up as a sacrifice on the altar of corporate greed and institutional arrogance would bring me a flurry of responses insisting that I must hate science. This is all the more ironic in that the shoddy logic involved in that claim also undergirded George W. Bush’s famous and fatuous insistence that the Muslim world is riled at the United States because “they hate our freedom.”
In point of fact, the animosity felt by many Muslims toward the United States is based on specific grievances concerning specific acts of US foreign policy. Whether or not those grievances are justified is a matter I don’t propose to get into here; the point that’s relevant to the current discussion is that the grievances exist, they relate to identifiable actions on the part of the US government, and insisting that the animosity in question is aimed at an abstraction instead is simply one of the ways that Bush, or for that matter his equally feckless successor, has tried to sidestep any discussion of the means, ends, and cascading failures of US policy toward the Middle East and the rest of the Muslim world.
In the same way, it’s very convenient to insist that people who ask hard questions about the way that contemporary science has whored itself out to economic and political interests, or who have noticed gaps between the claims about reality made by the voices of the scientific mainstream and their own lived experience of the world, just hate science. That evasive strategy makes it easy to brush aside questions about the more problematic dimensions of science as currently practiced. This isn’t a strategy with a long shelf life; responding to a rising spiral of problems by insisting that the problems don’t exist and denouncing those who demur is one of history’s all-time bad choices, but intellectuals in falling civilizations all too often try to shore up the crumbling foundations of their social prestige and privilege via that foredoomed approach.
Central to the entire strategy is a bit of obfuscation that treats “science” as a monolithic unity, rather than the complex and rather ramshackle grab-bag of fields of study, methods of inquiry, and theories about how different departments of nature appear to work that it actually is. There’s no particular correlation between, let’s say, the claims made for the latest heavily marketed and dubiously researched pharmaceutical, on the one hand, and the facts of astronomy, evolutionary biology, or agronomy on the other; and someone can quite readily find it impossible to place blind faith in the pharmaceutical and the doctor who’s pushing it on her, while enjoying long nights observing the heavens through a telescope, delighting in the elegant prose and even more elegant logic of Darwin’s The Origin of Species, or running controlled experiments in her backyard on the effectiveness of compost as a soil amendment. To say that such a person “hates science” is to descend from meaningful discourse to thought-stopping noise.
The habit of insisting that science is a single package, take it or leave it, is paralleled by the equivalent and equally specious insistence that there is this single thing called “technology,” that objecting to any single component of that alleged unity amounts to rejecting all of it, and that you’re not allowed to pick and choose among technologies—you have to take all of it or reject it all. I field this sort of nonsense all the time. It so happens, for example, that I have no interest in owning a cell phone, never got around to playing video games, and have a sufficiently intense fondness for books printed on actual paper that I’ve never given more than a passing thought to the current fad for e-books.
I rarely mention these facts to those who don’t already know them, because it’s a foregone conclusion that if I do so, someone will ask me whether I hate technology.  Au contraire, I’m fond of slide rules, love rail travel, cherish an as yet unfulfilled ambition to get deep into letterpress printing, and have an Extra class amateur radio license; all these things entail enthusiastic involvement with specific technologies, and indeed affection for them; but if I mention these points in response to the claim that I must hate technology, the responses I get range from baffled incomprehension to angry dismissal.
“Technology,” in the mind of those who make such claims, clearly doesn’t mean what the dictionary says it means.  To some extent, of course, it amounts to whatever an assortment of corporate and political marketing firms want you to buy this week, but there’s more to it than that. Like the word “science,” “technology” has become a buzzword freighted with a vast cargo of emotional, cultural, and (whisper this) political meanings.  It’s so densely entangled with passionately felt emotions, vast and vague abstractions, and frankly mythic imagery that many of those who use the word can’t explain what they mean by it, and get angry if you ask them to try.
The flattening out of the vast diversity of technologies, in the plural, into a single monolithic shape guarded by unreasoning emotions would be problematic under any conditions. When a civilization that depends on the breakneck exploitation of nonrenewable resources is running up against the unyielding limits of a finite planet, with resource depletion and pollution in a neck-and-neck race to see which one gets to bring the industrial project to an end first, it’s a recipe for disaster. A sane response to the predicament of our time would have to start by identifying the technological suites that will still be viable in a resource-constrained and pollution-damaged environment, and then shift as much vital infrastructure to those as possible with the sharply limited resources we have left. Our collective thinking about technology is so muddled by unexamined emotions, though, that it doesn’t matter how obviously necessary such a project might be: it remains unthinkable.
Willy-nilly, though, the imaginary monolith of “technology” is going to crumble, because different technologies have wildly varying resource requirements, and they vary just as drastically in terms of their importance to the existing order of society. As resource depletion and economic contraction tighten their grip on the industrial world, the stock of existing and proposed technologies faces triage in a continuum defined by two axes—the utility of the technology, on the one hand, and its cost in real (i.e., nonfinancial) terms on the other. A chart may help show how this works.

 This is a very simplified representation of the frame in which decisions about technology are made. Every kind of utility from the demands of bare survival to the whims of fashion is lumped in together and measured on the vertical axis, and every kind of nonfinancial cost from energy and materials straight through to such intangibles as opportunity cost is lumped in together and measured on the horizontal axis. In an actual analysis, of course, these variables would be broken out and considered separately; the point of a more schematic view of the frame, like this one, is that it allows the basic concepts to be grasped more easily.
The vertical and horizontal lines that intersect in the middle of the graph are similarly abstractions from a complex reality. The horizontal line represents the boundary between those technologies which have enough utility to be worth building and maintaining, which are above the line, and those which have too little utility to be worth the trouble, which are below it. The vertical line represents the boundary between those technologies which are affordable and those that are not. In the real world, those aren’t sharp boundaries but zones of transition, with complex feedback loops weaving back and forth among them, but again, this is a broad conceptual model.
The intersection of the lines divides the whole range of technology into four categories, which I’ve somewhat unoriginally marked with the first four letters of the alphabet. Category A consists of things that are both affordable and useful, such as indoor plumbing. Category B consists of things that are affordable but useless, such as electrically heated underwear for chickens. Category C consists of things that are useful but unaffordable, such as worldwide 30-minute pizza delivery from low earth orbit. Category D, rounding out the set, consists of things that are neither useful nor affordable, such as—well, I’ll let my readers come up with their own nominees here.
Now of course the horizontal and vertical lines aren’t fixed; they change position from one society to another, from one historical period to another, and indeed from one community, family, or individual to another. (To me, for example, cell phones belong in category B, right next to the electrically heated chicken underwear; other people would doubtless put them somewhere else on the chart.) Every society, though, has a broad general consensus about what goes in which category, which is heavily influenced by but by no means entirely controlled by the society’s political class. That consensus is what guides its collective decisions about funding or defunding technologies.
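For readers who find a few lines of code easier to parse than a chart, here is a minimal sketch of the quadrant logic just described, written in Python. Every name, score, and threshold in it is invented for illustration; as noted above, the real dividing lines are fuzzy zones that shift from one society, era, and household to the next.

```python
# A toy rendering of the utility/cost chart. The thresholds and scores
# are arbitrary stand-ins for the fuzzy, shifting boundaries described
# in the text.

def categorize(utility, cost, utility_threshold=5.0, cost_threshold=5.0):
    """Assign a technology to one of the four quadrants.

    A: useful and affordable      B: affordable but not useful
    C: useful but unaffordable    D: neither useful nor affordable
    """
    useful = utility >= utility_threshold
    affordable = cost <= cost_threshold
    if useful and affordable:
        return "A"
    if affordable:
        return "B"
    if useful:
        return "C"
    return "D"

# Sample entries: (name, utility score, real cost); all values made up.
technologies = [
    ("indoor plumbing", 9, 3),
    ("electrically heated chicken underwear", 1, 2),
    ("orbital 30-minute pizza delivery", 8, 12),
]

for name, utility, cost in technologies:
    print(f"{name}: category {categorize(utility, cost)}")
```

Moving the thresholds models the movement of the lines: raise the cost threshold, as the torrent of cheap fossil fuel energy effectively did, and entries tumble from category C into category A; lower it again, and out they go.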

With the coming of the industrial revolution, both of the lines shifted substantially from their previous position, as shown in the second chart. Obviously, the torrent of cheap abundant energy gave the world’s industrial nations access to an unparalleled wealth of resources, and this pushed the dividing line between what was affordable and what was unaffordable quite a ways over toward the right hand side of the chart. A great many things that had been desirable but unaffordable to previous civilizations swung over from category C into category A as fossil fuels came on line. This has been discussed at great length here and elsewhere in the peak oil blogosphere.
Less obviously, the dividing line between what was useful and what was useless also shifted quite a bit toward the bottom of the chart, moving a great many things from category B into category A. To follow this, it’s necessary to grasp the concept of technological suites. A technological suite is a set of interdependent technologies that work together to achieve a common purpose. Think of the relationship between cars and petroleum drilling, computer chips and the clean-room filtration systems required for their manufacture, or commercial airliners and ground control radar. What connects each pair of technologies is that they belong to the same technological suite. If you want to have the suite, you must either have all the elements of the suite in place, or be ready to replace any absent element with something else that can serve the same purpose.
For the purpose of our present analysis, we can sort out the component technologies of a technological suite into three very rough categories. There are interface technologies, which are the things with which the end user interacts—in the three examples just listed, those would be private cars, personal computers, and commercial flights to wherever you happen to be going. There are support technologies, which are needed to produce, maintain, and operate the interface technologies; they make up far and away the majority of technologies in a technological suite—consider the extraordinary range of technologies it takes to manufacture a car from raw materials, maintain it, fuel it, provide it with roads on which to drive, and so on. Some interface technologies and most support technologies can be replaced with other technologies as needed, but some of both categories can’t; we can call those that can’t be replaced bottleneck technologies, for reasons that will become clear shortly.
What makes this relevant to the charts we’ve been examining is that most support technologies have no value aside from the technological suites to which they belong and the interface technologies they serve. Without commercial air travel, for example, most of the specialized technologies found at airports are unnecessary. Thus a great many things that once belonged in category B—say, automated baggage carousels—shifted into category A with the emergence of the technological suite that gave them utility. Category A thus ballooned with the coming of industrialization, and it kept getting bigger as long as energy and resource use per capita in the industrial nations kept on increasing.
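Since the relationship between suites, support technologies, and bottlenecks does a good deal of work in what follows, here is a rough sketch of that logic in the same vein; the suite, its components, and the choice of bottleneck are invented examples, not claims about actual aviation engineering.

```python
# A sketch of suite/bottleneck logic with made-up components. Losing a
# bottleneck technology kills its whole suite, and support technologies
# that no longer serve any viable suite lose their utility in turn.

suites = {
    "commercial air travel": {
        "components": {"airliners", "jet fuel refining",
                       "ground control radar", "baggage carousels"},
        "bottlenecks": {"jet fuel refining"},  # assumed irreplaceable
    },
}

# Support technologies with no value outside the suites they serve.
support_only = {"ground control radar", "baggage carousels"}

def viable(suite, unavailable):
    """A suite survives only if none of its bottlenecks are lost."""
    return not (suite["bottlenecks"] & unavailable)

def stranded_support(suites, unavailable):
    """Support technologies left with no viable suite to serve."""
    still_used = set()
    for suite in suites.values():
        if viable(suite, unavailable):
            still_used |= suite["components"]
    return support_only - still_used

print(stranded_support(suites, unavailable={"jet fuel refining"}))
# Prints both support technologies: still buildable, but now useless.
```

Run across hundreds of interlocking suites, that cascade is what the shifting lines on the charts are trying to summarize.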
Once energy and resource use per capita peak and begin their decline, though, a different reality comes into play, leading over time to the situation shown in the third chart.

 As cheap abundant energy runs short, and it and all its products become expensive, scarce, or both, the vertical line slides inexorably toward the left. That’s obvious enough. Less obviously, the horizontal line also slides upwards. The reason, here again, is the interrelationship of individual technologies into technological suites. If commercial air travel stops being economically viable, the support technologies that belong to that suite are no longer needed. Even if they’re affordable enough to stay on the left hand side of the vertical line, the technologies needed to run automated baggage carousels thus no longer have enough utility to keep them above the horizontal line, and down they drop into category B.
That’s one way that a technology can drop out of use. It’s just as possible, of course, for something that would still have ample utility to cost too much in terms of real wealth to be an option in a contracting society, and slide across the border into category C. Finally, it’s possible for something to do both at once—to become useless and unaffordable at something like the same time, as economic contraction strips away both the ability to pay for the technology and the ability to make use of it.
It’s also possible for a technology that remains affordable, and participates in a technological suite that’s still capable of meeting genuine needs, to tumble out of category A into one of the others. This can happen because the costs of different technologies differ qualitatively, and not just quantitatively. If you need small amounts of niobium for the manufacture of blivets, and the handful of niobium mines around the world stop production—whether this happens because the ore has run out, or for some other reason, environmental, political, economic, cultural, or what have you—you aren’t going to be able to make blivets any more. That’s one kind of difficulty if it’s possible to replace blivets with something else, or substitute some other rare element for the niobium; it’s quite another, and much more challenging, if blivets made with niobium are the only thing that will work for certain purposes, or the only thing that makes those purposes economically viable.
It’s habitual in modern economics to insist that such bottlenecks don’t exist, because there’s always a viable alternative. That sort of thinking made a certain degree of sense back when energy per capita was still rising, because the standard way to get around material shortages for a century now has been to throw more energy, more technology, and more complexity into the mix. That’s how low-grade taconite ores with scarcely a trace of iron in them have become the mainstay of today’s iron and steel industry; all you have to do is add fantastic amounts of cheap energy, soaring technological complexity, and an assortment of supply and resource chains reaching around the world and then some, and diminishing ore quality is no problem at all.
It’s when you don’t have access to as much cheap energy, technological complexity, and baroque supply chains as you want that this sort of logic becomes impossible to sustain. Once this point is reached, bottlenecks become an inescapable feature of life. The bottlenecks, as already suggested, don’t have to be technological in nature—a bottleneck technology essential to a given technological suite can be perfectly feasible, and still out of reach for other reasons—but whatever generates them, they throw a wild card into the process of technological decline that shapes the last years of a civilization on its way out, and the first few centuries of the dark age that follows.
The crucial point to keep in mind here is that one bottleneck technology, if it becomes inaccessible for any reason, can render an entire technological suite useless, and compromise other technological suites that depend on the one directly affected. Consider the twilight of ceramics in the late Roman empire. Rome’s ceramic industry operated on as close to an industrial scale as you can get without torrents of cheap abundant energy; regional factories, sited where high-quality clay existed, produced ceramic goods in vast amounts and distributed them over Roman roads and sea lanes to the far corners of the empire and beyond it. The technological suite that supported Roman dishes and roof tiles thus included transport technologies, and those turned out to be the bottleneck: as long-distance transport went away, the huge ceramic factories could no longer market their products and shut down, taking with them every element of their technological suite that couldn’t be repurposed in a hurry.
The same process affected many other technologies that played a significant role in the Roman world, and for that matter in the decline and fall of every other civilization in history. The end result can best be described as technological fragmentation: what had been a more or less integrated whole system of technology, composed of many technological suites working together more or less smoothly, becomes a jumble of disconnected technological suites, nearly all of them drastically simplified compared to their pre-decline state, and many of them jerry-rigged to make use of still-viable fragments of technological suites whose other parts didn’t survive their encounter with one bottleneck or another.  In places where circumstances permit, relatively advanced technological suites can remain in working order long after the civilization that created them has perished—consider the medieval cities that got their water from carefully maintained Roman aqueducts a millennium after Rome’s fall—while other systems operate at far simpler levels, and other regions and communities get by with much simpler technological suites.
All this has immediate practical importance for those who happen to live in a civilization that’s skidding down the curve of its decline and fall—ours, for example. In such a time, as noted above, one critical task is to identify the technological suites that will still be viable in the aftermath of the decline, and shift as much vital infrastructure as possible over to depend on those suites rather than on those that won’t survive the decline. In terms of the charts above, that involves identifying those technological suites that will still be in category A when the lines stop shifting up and to the left, figuring out how to work around any bottleneck technologies that might otherwise cripple them, and getting the necessary knowledge into circulation among those who might be able to use it, so that access to information doesn’t become a bottleneck of its own.
That sort of analysis, triage, and salvage is among the most necessary tasks of our time, especially for those who want to see viable technologies survive the end of our civilization, and it’s being actively hindered by the insistence that the only possible positive attitude toward technology is sheer blind faith. For connoisseurs of irony, it’s hard to think of a more intriguing spectacle. The impacts of that irony on the future, though, are complex, and will be the subject of several upcoming posts here.

Dark Age America: The Suicide of Science

Wed, 2014-11-26 18:53
Last week’s discussion of facts and values was not as much of a diversion from the main theme of the current sequence of posts here on The Archdruid Report as it may have seemed.  Every human society likes to think that its core cultural and intellectual projects, whatever those happen to be, are the be-all and end-all of human existence. As each society rounds out its trajectory through time with the normal process of decline and fall, in turn, its intellectuals face the dismaying experience of watching those projects fail, and betray the hopes so fondly confided to them.
It’s important not to underestimate the shattering force of this experience. The plays of Euripides offer cogent testimony of the despair felt by ancient Greek thinkers as their grand project of reducing the world to rational order dissolved in a chaos of competing ideologies and brutal warfare. Fast forward most of a millennium, and Augustine’s The City of God anatomized the comparable despair of Roman intellectuals at the failure of their dream of a civilized world at peace under the rule of law. 
Skip another millennium and a bit, and the collapse of the imagined unity of Christendom into a welter of contending sects and warring nationalities had a similar impact on cultural productions of all kinds as the Middle Ages gave way to the era of the Reformation. No doubt when people a millennium or so from now assess the legacies of the twenty-first century, they’ll have no trouble tracing a similar tone of despair in our arts and literature, driven by the failure of science and technology to live up to the messianic fantasies of perpetual progress that have been loaded onto them since Francis Bacon’s time.
I’ve already discussed, in previous essays here, some of the reasons why such projects so reliably fail. To begin with, of course, the grand designs of intellectuals in a mature society normally presuppose access to the kind and scale of resources that such a society supplies to its more privileged inmates.  When the resource needs of an intellectual project can no longer be met, it doesn’t matter how useful it would be if it could be pursued further, much less how closely aligned it might happen to be to somebody’s notion of the meaning and purpose of human existence.
Furthermore, as a society begins its one-way trip down the steep and slippery chute labeled “Decline and Fall,” and its ability to find and distribute resources starts to falter, its priorities necessarily shift. Triage becomes the order of the day, and projects that might ordinarily get funding end up  out of luck so that more immediate needs can get as much of the available resource base as possible. A society’s core intellectual projects tend to face this fate a good deal sooner than other, more pragmatic concerns; when the barbarians are at the gates, one might say, funds that might otherwise be used to pay for schools of philosophy tend to get spent hiring soldiers instead.
Modern science, the core intellectual project of the contemporary industrial world, and technological complexification, its core cultural project, are as subject to these same two vulnerabilities as were the corresponding projects of other civilizations. Yes, I’m aware that this is a controversial claim, but I’d argue that it follows necessarily from the nature of both projects. Scientific research, like most things in life, is subject to the law of diminishing returns; what this means in practice is that the more research has been done in any field, the greater an investment is needed on average to make the next round of discoveries. Consider the difference between the absurdly cheap hardware that was used in the late 19th century to detect the electron and the fantastically expensive facility that had to be built to detect the Higgs boson; that’s the sort of shift in the cost-benefit ratio of research that I have in mind.
A civilization with ample resources and a thriving economy can afford to ignore the rising cost of research, and gamble that new discoveries will be valuable enough to cover the costs. A civilization facing resource shortages and economic contraction can’t. If the cost of new discoveries in particle physics continues to rise along the same curve that gave us the Higgs boson’s multibillion-Euro price tag, for example, the next round of experiments, or the one after that, could easily rise to the point that in an era of resource depletion, economic turmoil, and environmental payback, no consortium of nations on the planet will be able to spare the resources for the project. Even if the resources could theoretically be spared, furthermore, there will be many other projects begging for them, and it’s far from certain that another round of research into particle physics would be the best available option.
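To put that squeeze in concrete terms, here is a toy calculation along the same lines; the numbers are frankly made up, and nothing hangs on them except the shape of the two curves: research costs compounding upward against a budget that has begun to shrink.

```python
# A toy model of diminishing returns meeting economic contraction.
# All figures are invented; only the crossover behavior matters.

def last_affordable_round(first_cost, cost_growth, budget, budget_growth):
    """Count funding rounds until the next experiment exceeds the budget."""
    rounds = 0
    cost = first_cost
    while cost <= budget:
        rounds += 1
        cost *= cost_growth      # each discovery costs more than the last
        budget *= budget_growth  # a contracting economy shrinks the budget
    return rounds

# Costs rising 50% per round against a budget shrinking 5% per round:
print(last_affordable_round(first_cost=1.0, cost_growth=1.5,
                            budget=100.0, budget_growth=0.95))
# With these numbers, the twelfth round is the one nobody can pay for.
```

However the particular numbers are juggled, the crossover comes; the only question is when.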
The project of technological complexification is even more vulnerable to the same effect. Though true believers in progress like to think of new technologies as replacements for older ones, it’s actually more common for new technologies to be layered over existing ones. Consider, as one example out of many, the US transportation grid, in which airlanes, freeways, railroads, local roads, and navigable waterways are all still in use, reflecting most of the history of transport on this continent from colonial times to the present. The more recent the transport mode, by and large, the more expensive it is to maintain and operate, and the exotic new transportation schemes floated in recent years are no exception to that rule.
Now factor in economic contraction and resource shortages. The most complex and expensive parts of the technostructure tend also to be the most prestigious and politically influential, and so the logical strategy of a phased withdrawal from unaffordable complexity—for example, shutting down airports and using the proceeds to make good some of the impact of decades of malign neglect on the nation’s rail network—is rarely if ever a politically viable option. As contraction accelerates, the available resources come to be distributed by way of a political free-for-all in which rational strategies for the future play no significant role. In such a setting, will new technological projects be able to get the kind of ample funding they’ve gotten in the past? Let’s be charitable and simply say that this isn’t likely.
Thus the end of the age of fossil-fueled extravagance means the coming of a period in which science and technology will have a very hard row to hoe, with each existing or proposed project having to compete for a slice of a shrinking pie of resources against many other equally urgent needs. That in itself would be a huge challenge. What makes it much worse is that many scientists, technologists, and their supporters in the lay community are currently behaving in ways that all but guarantee that when the resources are divided up, science and technology will draw the short sticks.
It has to be remembered that science and technology are social enterprises. They don’t happen by themselves in some sort of abstract space insulated from the grubby realities of human collective life. Laboratories, institutes, and university departments are social constructs, funded and supported by the wider society. That funding and support doesn’t happen by accident; it exists because the wider society believes that the labors of scientists and engineers will further its own collective goals and projects.
Historically speaking, it’s only in exceptional circumstances that something like scientific research gets as large a cut of a society’s total budget as it does today. As recently as a century ago, the sciences received only a tiny fraction of the support they currently get; a modest number of university positions with limited resources provided most of what institutional backing the sciences got, and technological progress was largely a matter of individual inventors pursuing projects on their own nickel in their off hours—consider the Wright brothers, who carried out the research that led to the first successful airplane in between waiting on customers in their bicycle shop, and without benefit of research grants.
The transformation of scientific research and technological progress from the part-time activity of an enthusiastic fringe culture to its present role as a massively funded institutional process took place over the course of the twentieth century. Plenty of things drove that transformation, but among the critical factors were the successful efforts of scientists, engineers, and the patrons and publicists of science and technology to make a case for science and technology as forces for good in society, producing benefits that would someday be extended to all. In the boomtimes that followed the Second World War, it was arguably easier to make that case than it had ever been before, but it took a great deal of work—not merely propaganda, but actual changes in the way that scientists and engineers interacted with the public and met their concerns—to overcome the public wariness toward science and technology that made the mad scientist such a stock figure in the popular media of the time.
These days, the economic largesse that made it possible for the latest products of industry to reach most American households is increasingly a fading memory, and that’s made life a good deal more difficult for those who argue for science and technology as forces for good. Still, there’s another factor, which is the increasing failure of institutional science and technology to make that case in any way that matters.
Here’s a homely example. I have a friend who suffered from severe asthma. She was on four different asthma medications, each accompanied by its own bevy of nasty side effects, which more or less kept the asthma under control without curing it. After many years of this, she happened to learn that another health problem she had was associated with a dietary allergy, cut the offending food out of her diet, and was startled and delighted to find that her asthma cleared up as well.
After a year with no asthma symptoms, she went to her physician, who expressed surprise that she hadn’t had to come in for asthma treatment in the meantime. She explained what had happened. The doctor admitted that the role of that allergy as a cause of severe asthma was well known. When she asked the doctor why she hadn’t been told this, so she could make an informed decision, the only response she got was, and I quote, “We prefer to medicate for that condition.”
Most of the people I know have at least one such story to tell about their interactions with the medical industry, in which the convenience and profit of the industry took precedence over the well-being of the patient; no few have simply stopped going to physicians, since the side effects from the medications they received have been reliably worse than the illness they had when they went in. Since today’s mainstream medical industry makes so much of its scientific basis, the growing public unease with medicine splashes over onto science in general. For that matter, whenever some technology seems to be harming people, it’s a safe bet that somebody in a lab coat with a prestigious title will appear on the media insisting that everything’s all right; some of the time, the person in the lab coat is right, but it’s happened often enough that everything was not all right that the trust once reposed in scientific experts is getting noticeably threadbare these days.
Public trust in scientists has taken a beating for several other reasons as well. I’ve discussed in previous posts here the way that the vagaries of scientific opinion concerning climate change have been erased from our collective memory by one side in the current climate debate.  It’s probably necessary for me to reiterate here that I find the arguments for disastrous anthropogenic climate change far stronger than the arguments against it, and have discussed the likely consequences of our civilization’s maltreatment of the atmosphere repeatedly on this blog and in my books; the fact remains that in my teen years, in the 1970s and 1980s, scientific opinion was still sharply divided on the subject of future climates, and a significant number of experts believed that the descent into a new ice age was likely.
I’ve taken the time to find and post here the covers of some of the books I read in those days. The authors were by no means nonentities. Nigel Calder was a highly respected science writer and media personality. E.C. Pielou is still one of the most respected Canadian ecologists, and the book of hers shown here, After the Ice Age, is a brilliant ecological study that deserves close attention from anyone interested in how ecosystems respond to sudden climatic warming. Windsor Chorlton, the author of Ice Ages, occupied a less exalted station in the food chain of science writers, but all the volumes in the Planet Earth series were written in consultation with acknowledged experts and summarized the state of the art in the earth sciences at the time of publication.
Since certain science fiction writers have been among the most vitriolic figures denouncing those who remember the warnings of an imminent ice age, I’ve also posted covers of two of my favorite science fiction novels from those days, which were both set in an ice age future. My younger readers may not remember Robert Silverberg and Poul Anderson; those who do will know that both of them were serious SF writers who paid close attention to the scientific thought of their time, and wrote about futures defined by an ice age at the time when this was still a legitimate scientific extrapolation.
These books exist.  I still own copies of most of them, and any of my readers who takes the time to find one will discover, in each nonfiction volume, a thoughtfully developed argument suggesting that the earth would soon descend into a new ice age, and in each of the novels, a lively story set in a future shaped by the new ice age in question. Those arguments turned out to be wrong, no question; they were made by qualified experts, at a time when the evidence concerning climate change was a good deal more equivocal than it’s become since that time, and the more complete evidence that was gathered later settled the matter; but the arguments and the books existed, many people alive today know that they existed, and when scientists associated with climate activism insist that they didn’t, the result is a body blow to public trust in science.

It’s far from the only example of the same kind. Many of my readers will remember the days when all cholesterol was bad and polyunsaturated fats were good for you. Most of my readers will recall drugs that were introduced to the market with loud assurances of safety and efficacy, and then withdrawn in a hurry when those assurances turned out to be dead wrong. Those readers who are old enough may even remember when continental drift was being denounced as the last word in pseudoscience, a bit of history that a number of science writers these days claim never happened. Support for science depends on trust in scientists, and that’s become increasingly hard to maintain at a time when it’s unpleasantly easy to point to straightforward falsifications of the kind just outlined.
On top of all this, there’s the impact of the atheist movement on public debates concerning science. I hasten to say that I know quite a few atheists, and the great majority of them are decent, compassionate people who have no trouble accepting the fact that their beliefs aren’t shared by everyone around them. Unfortunately, the atheists who have managed to seize the public limelight too rarely merit description in those terms.  Most of my readers will be wearily familiar with the sneering bullies who so often claim to speak for atheism these days; I can promise you that as the head of a small religious organization in a minority faith, I get to hear from them far too often for my taste.
Mind you, there’s a certain wry amusement in the way that the resulting disputes are playing out in contemporary culture. Even diehard atheists have begun to notice that whenever Richard Dawkins opens his mouth, a dozen people decide to give religion a second chance. Still, the dubious behavior of the “angry atheist” crowd affects the subject of this post at least as powerfully as it does the field of popular religion. A great many of today’s atheists claim the support of scientific materialism for their beliefs, and no small number of the most prominent figures in the atheist movement hold down day jobs as scientists or science educators. In the popular mind, as a result, these people, their beliefs, and their behavior are quite generally conflated with science as a whole.
The implications of all these factors are best explored by way of a simple thought experiment. Let’s say, dear reader, that you’re an ordinary American citizen. Over the last month, you’ve heard one scientific expert insist that the latest fashionable heart drug is safe and effective, while three of your drinking buddies have told you in detail about the ghastly side effects it gave them. You’ve heard another scientific expert denounce acupuncture as crackpot pseudoscience, while your Uncle Henry, who messed up his back in Iraq, got more relief from three visits to an acupuncturist than he got from six years of conventional treatment. You’ve heard still another scientific expert claim yet again that no qualified scientist ever said back in the 1970s that the world was headed for a new ice age, and you read the same books I did when you were in high school and know that the expert is either misinformed or lying. Finally, you’ve been on the receiving end of yet another diatribe by yet another atheist of the sneering-bully type mentioned earlier, who vilified your personal religious beliefs in terms that would probably count as hate speech in most other contexts, and used an assortment of claims about science to justify his views and excuse his behavior.
Given all this, will you vote for a candidate who says that you have to accept a cut in your standard of living in order to keep research laboratories and university science departments fully funded?
No, I didn’t think so.
In miniature, that’s the crisis faced by science as we move into the endgame of industrial civilization, just as comparable crises challenged Greek philosophy, Roman jurisprudence, and medieval theology in the endgames of their own societies. When a society assigns one of its core intellectual or cultural projects to a community of specialists, those specialists need to think, hard, about the way that  their words and actions will come across to those outside that community. That’s important enough when the society is still in a phase of expansion; when it tips over its historic peak and begins the long road down, it becomes an absolute necessity—but it’s a necessity that, very often, the specialists in question never get around to recognizing until it’s far too late.
Thus it’s unlikely that science as a living tradition will be able to survive in its current institutional framework as the Long Descent picks up speed around us. It’s by no means certain that it will survive at all. The abstract conviction that science is humanity’s best hope for the future, even if it were more broadly held than it is, offers little protection against the consequences of popular revulsion driven by the corruptions, falsifications, and abusive behaviors sketched out above. What Oswald Spengler called the Second Religiosity, the resurgence of religion in the declining years of a culture, could have taken many forms in the historical trajectory of industrial society; at this point I think it’s all too likely to contain a very large dollop of hostility toward science and complex technology. How the scientific method and the core scientific discoveries of the last few centuries might be preserved in the face of that hostility will be discussed in a future post.

Facts, Values, and Dark Beer

Wed, 2014-11-19 19:40
Over the last eight and a half years, since I first began writing essays on The Archdruid Report, I’ve fielded a great many questions about what motivates this blog’s project. Some of those questions have been abusive, and some of them have been clueless; some of them have been thoughtful enough to deserve an answer, either in the comments or as a blog post in its own right. Last week brought one of that last category. It came from one of my European readers, Ervino Cus, and it read as follows:
“All considered (the amount of weapons—personal and of MD—around today; the population numbers; the environmental pollution; the level of lawlessness we are about to face; the difficulty to have a secure form of life in the coming years; etc.) plus the ‘low’ technical level of possible development of the future societies (I mean: no more space flight? no more scientific discovery about the ultimate structure of the Universe? no genetic engineering to modify the human genome?) the question I ask to myself is: why bother?
“Seriously: why one should wish to plan for his/her long term survival in the future that await us? Why, when all goes belly up, don't join the first warlord band available and go off with a bang, pillaging and raping till one drops dead?
“If the possibilities for a new stable civilization are very low, and it's very probable that such a civilization, even if created, will NEVER be able to reach even the technical level of today, not to mention to surpass it, why one should want to try to survive some more years in a situation that becomes every day less bright, without ANY possibilities to get better in his/her lifetime, and with, as the best objective, only some low-tech rural/feudal state waaay along the way?
“Dunno you, but for me the idea that this is the last stop for the technological civilization, that things as a syncrothron or a manned space flight are doomed and never to repeat, and that the max at which we, as a species and as individuals, can aspire from now on is to have a good harvest and to ‘enjoy’ the same level of knowledge of the structure of the Universe of our flock of sheeps, doesen't makes for a good enough incentive to want to live more, or to give a darn if anybody other lives on.
“Apologies if my word could seem blunt (and for my far than good English: I'm Italian), but, as Dante said:
“Considerate la vostra semenza:
fatti non foste a viver come bruti,
ma per seguir virtute e canoscenza.” (Inferno - Canto XXVI - vv. 112-120)
“If our future is not this (and unfortunately I too agree with you that at this point the things seems irreversibles) I, for one, don't see any reason to be anymore compelled by any moral imperative... :-(
“PS: Yes, I know, I pose some absolutes: that a high-tech/scientific civilization is the only kind of civilization that enpowers us to gain any form of ‘real’ knowledge of the Universe, that this knowledge is a ‘plus’ and that a life made only of ‘birth-reproduction-death’ is a life of no more ‘meaning’ than the one of an a plant.
“Cheers, Ervino.”
It’s a common enough question, though rarely expressed as clearly or as starkly as this. As it happens, there’s an answer to it, or rather an entire family of answers, but the best way there is to start by considering the presuppositions behind it.  Those aren’t adequately summarized by Ervino’s list of ‘absolutes’—the latter are simply restatements of his basic argument.
What Ervino is suggesting, rather, presupposes that scientific and technological progress are the only reasons for human existence. Lacking those—lacking space travel, cyclotrons, ‘real’ knowledge about the universe, and the rest—our existence is a waste of time and we might as well just lie down and die or, as he suggests, run riot in anarchic excess until death makes the whole thing moot. What’s more, only the promise of a better future gives any justification for moral behavior—consider his comment about not feeling compelled by any moral imperative if no better future is in sight.
Those of my readers who recall the discussion of progress as a surrogate religion in last year’s posts here will find this sort of thinking very familiar, because the values being imputed to space travel, cyclotrons et al. are precisely those that used to be assigned to more blatantly theological concepts such as God and eternal life. Still, I want to pose a more basic question: is this claim—that the meaning and purpose of human existence and the justification of morality can only be found in scientific and technological progress—based on evidence? Are there, for example, double-blinded, controlled studies by qualified experts that confirm this claim?
Of course not. Ervino’s claim is a value judgment, not a statement of fact.  The distinction between facts and values was mentioned in last week’s post, but probably needs to be sketched out here as well; to summarize a complex issue somewhat too simply, facts are the things that depend on the properties of perceived objects rather than perceiving subjects. Imagine, dear reader, that you and I were sitting in the same living room, and I got a bottle of beer out of the fridge and passed it around.  Provided that everyone present had normally functioning senses and no reason to prevaricate, we’d be able to agree on certain facts about the bottle: its size, shape, color, weight, temperature, and so on. Those are facts.
Now let’s suppose I got two glasses, poured half the beer into each glass, handed one to you and took the other for myself. Let’s further suppose that the beer is an imperial stout, and you can’t stand dark beer. I take a sip and say, “Oh, man, that’s good.” You take a sip, make a face, and say, “Ick. That’s awful.” If I were to say, “No, that’s not true—it’s delicious,” I’d be talking nonsense of a very specific kind: the nonsense that pops up reliably whenever someone tries to treat a value as though it’s a fact.
“Delicious” is a value judgment, and like every value judgment, it depends on the properties of perceiving subjects rather than perceived objects. That’s true of all values without exception, including those considerably more important than those involved in assessing the taste of beer. To say “this is good” or “this is bad” is to invite the question “according to whose values?”—which is to say, every value implies a valuer, just as every judgment implies a judge.
Now of course it’s remarkably common these days for people to insist that their values are objective truths, and values that differ from theirs objective falsehoods. That’s a very appealing sort of nonsense, but it’s still nonsense. Consider the claim often made by such people that if values are subjective, that would make all values, no matter how repugnant, equal to one another. Equal in what sense? Why, equal in value—and of course there the entire claim falls to pieces, because “equal in value” invites the question already noted, “according to whose values?” If a given set of values is repugnant to you, then pointing out that someone else thinks differently about those values doesn’t make them less repugnant to you.  All it means is that if you want to talk other people into sharing those values, you have to offer good reasons, and not simply insist at the top of your lungs that you’re right and they’re wrong.
To say that values depend on the properties of perceiving subjects rather than perceived objects does not mean that values are wholly arbitrary, after all. It’s possible to compare different values to one another, and to decide that one set of values is better than another. In point of fact, people do this all the time, just as they compare different claims of fact to one another and decide that one is more accurate than another. The scientific method itself is simply a relatively rigorous way to handle this latter task: if fact X is true, then fact Y would also be true; is it? In the same way, though contemporary industrial culture tends to pay far too little attention to this, there’s an ethical method that works along the same lines: if value X is good, then value Y would also be good; is it?
Again, we do this sort of thing all the time. Consider, for example, why it is that most people nowadays reject the racist claim that some arbitrarily defined assortment of ethnicities—say, “the white race”—is superior to all others, and ought to have rights and privileges that are denied to everyone else. One reason why such claims are rejected is that they conflict with other values, such as fairness and justice, that most people consider to be important; another is that the history of racial intolerance shows that people who hold the values associated with racism are much more likely than others to engage in activities, such as herding their neighbors into concentration camps, which most people find morally repugnant. That’s the ethical method in practice.
With all this in mind, let’s go back to Ervino’s claims. He proposes that in all the extraordinary richness of human life, out of all its potentials for love, learning, reflection, and delight, the only thing that can count as a source of meaning is the accumulation of “‘real’ knowledge of the Universe,” defined more precisely as the specific kind of quantitative knowledge about the behavior of matter and energy that the physical sciences of the world’s industrial societies currently pursue. That’s his value judgment on human life. Of course he has the right to make that judgment; he would be equally within his rights to insist that the point of life is to see how many orgasms he can rack up over the course of his existence; and it’s by no means obvious why one of these ambitions is any more absurd than the other.
Curiosity, after all, is a biological drive, one that human beings share in a high degree with most other primates. Sexual desire is another such drive, rather more widely shared among living things. Grant that the fulfillment of some such drive can be seen as the purpose of life, why not another? For that matter, why not more than one, or some combination of biological drives and the many other incentives that are capable of motivating human beings?
For quite a few centuries now, though, it’s been fashionable for thinkers in the Western world to finesse such issues, and insist that some biological drives are “noble” while others are “base,” “animal,” or what have you. Here again, we have value judgments masquerading as statements of fact, with a hearty dollop of class prejudice mixed in—for “base,” “animal,” etc., you could as well put “peasant,” which is of course the literal opposite of “noble.” That’s the sort of thinking that appears in the bit of Dante that Ervino included in his comment. His English is better than my Italian, and I’m not enough of a poet to translate anything but the raw meaning of Dante’s verse, but this is roughly what the verses say:
“Consider your lineage;
You were not born to live as animals,
But to seek virtue and knowledge.”
It’s a very conventional sentiment. The remarkable thing about this passage, though, is that Dante was not proposing the sentiment as a model for others to follow. Rather, this least conventional of poets put those words in the mouth of Ulysses, who appears in this passage of the Inferno as a damned soul frying in the eighth circle of Hell. Dante has it that after the events of Homer’s poem, Ulysses was so deeply in love with endless voyaging that he put to sea again, and these are the words with which he urged his second crew to sail beyond all known seas—a voyage which took them straight to a miserable death, and sent Ulysses himself tumbling down to eternal damnation.
This intensely equivocal frame story is typical of Dante, who delineated as well as any poet ever has the many ways that greatness turns into hubris, that useful Greek concept best translated as the overweening pride of the doomed. The project of scientific and technological progress is at least as vulnerable to that fate as any of the acts that earned the damned their places in Dante’s poem. That project might fail irrevocably if industrial society comes crashing down and no future society is ever able to pursue the same narrowly defined objectives that ours has valued. In that case—at least in the parochial sense just sketched out—progress is over. Still, there’s at least one more way the same project would come to a screeching and permanent halt: if it succeeds.
Let’s imagine, for instance, that the fantasies of our scientific cornucopians are right and the march of progress continues on its way, unhindered by resource shortages or destabilized biospheres. Let’s also imagine that right now, some brilliant young physicist in Mumbai is working out the details of the long-awaited Unified Field Theory. It sees print next year; there are furious debates; the next decade goes into experimental tests of the theory, which prove that it’s correct. The relationship of all four basic forces of the cosmos—the strong force, the weak force, electromagnetism, and gravity—is explained clearly once and for all. With that in place, the rest of physical science falls into place step by step over the next century or so, and humanity learns the answers to all the questions that science can pose.
It’s only in the imagination of true believers in the Singularity, please note, that everything becomes possible once that happens. Many of the greatest achievements of science can be summed up in the words “you can’t do that”; the discovery of the laws of thermodynamics closed the door once and for all on perpetual motion, just as the theory of relativity put a full stop to the hope of limitless velocity. (“186,282 miles per second: it’s not just a good idea, it’s the law.”) Once the sciences finish their work, the technologists will have to scramble to catch up with them, and so for a while, at least, there will be no shortage of novel toys to amuse those who like such things; but sooner or later, all of what Ervino calls “‘real’ knowledge about the Universe” will have been learnt; at some point after that, every viable technology will have been refined to the highest degree of efficiency that physical law allows.
What then? The project of scientific and technological progress will be over. No one will ever again be able to discover a brand new, previously unimagined truth about the universe, in any but the most trivial sense—“this star has 1.000000000000000000006978 times the mass of that other star,” or the like—and variations in technology will be reduced to shifts in what’s fashionable at any given time. If the ongoing quest for replicable quantifiable knowledge about the physical properties of nature is the only thing that makes human life worth living, everyone alive at that point arguably ought to fly their hovercars at top speed into the nearest concrete abutment and end it all.
One way or another, that is, the project of scientific and technological progress is self-terminating. If this suggests to you, dear reader, that treating it as the be-all and end-all of human existence may not be the smartest choice, well, yes, that’s what it suggests to me as well. Does that make it worthless? Of course not. It should hardly be necessary to point out that “the only thing important in life” and “not important at all” aren’t the only two options available in discussions of this kind.
I’d like to suggest, along these lines, that human life sorts itself out most straightforwardly into an assortment of separate spheres, each of which deals with certain aspects of the extraordinary range of possibilities open to each of us. The sciences comprise one of those spheres, with each individual science a subsphere within it; the arts are a separate sphere, similarly subdivided; politics, religion, and sexuality are among the other spheres. None of these spheres contains more than a fraction of the whole rich landscape of human existence. Which of them is the most important? That’s a value judgment, and thus can only be made by an individual, from his or her own irreducibly individual point of view.
We’ve begun to realize—well, at least some of us have—that authority in one of these spheres isn’t transferable. When a religious leader, let’s say, makes pronouncements about science, those have no more authority than they would if they came from any other more or less clueless layperson, and a scientist who makes pronouncements about religion is subject to exactly the same rule. The same distinction applies with equal force between any two spheres, and as often as not between subspheres of a single sphere as well:  plenty of scientists make fools of themselves, for example, when they try to lay down the law about sciences they haven’t studied.
Claiming that one such sphere is the only thing that makes human life worthwhile is an error of the same kind. If Ervino feels that scientific and technological progress is the only thing that makes his own personal life worth living, that’s his call, and presumably he has reasons for it. If he tries to say that that’s true for me, he’s wrong—there are plenty of things that make my life worth living—and if he’s trying to make the same claim for every human being who will ever live, that strikes me as a profoundly impoverished view of the richness of human possibility. Insisting that scientific and technological progress is the only human activity that differentiates human existence from that of a plant isn’t much better. Dante’s Divina Commedia, to cite the obvious example, is neither a scientific paper nor a technological invention; does that mean that it belongs in the same category as the noise made by hogs grunting in the mud?
Dante Alighieri lived in a troubled age in which scientific and technological progress was nearly absent and warfare, injustice, famine, pestilence, and the collapse of widely held beliefs about the world were matters of common experience. From that arguably unpromising raw material, he brewed one of the great achievements of human culture. It may well be that the next few centuries will be far from optimal for scientific and technological progress; it may well be that the most important thing that can be done by people who value science and technology is to figure out what can be preserved through the difficult times ahead, and do their best to see that these things reach the waiting hands of the future. If life hands you a dark age, one might say, it’s probably not a good time to brew lite beer, but there are plenty of other things you can still brew, bottle, and drink.
As for me—well, all things considered, I find that being alive beats the stuffing out of the alternative, and that’s true even though I live in a troubled age in which scientific and technological progress shows every sign of grinding to a halt in the near future, and in which warfare, injustice, famine, pestilence, and the collapse of widely held beliefs are matters of common experience. The notion that life has to justify itself to me seems, if I may be frank, faintly silly, and so does the comparable claim that I have to justify my existence to it, or to anyone else. Here I am; I did not make the world; quite the contrary, the world made me, and put me in the irreducibly personal situation in which I find myself. Given that I’m here, where and when I happen to be, there are any number of things that I can choose to do, or not do; and it so happens that one of the things I choose to do is to prepare, and help others prepare, for the long decline of industrial civilization and the coming of the dark age that will follow it.
And with that, dear reader, I return you to your regularly scheduled discussion of decline and fall on The Archdruid Report.

Dark Age America: The Hoard of the Nibelungs

Wed, 2014-11-12 15:46
Of all the differences that separate the feudal economy sketched out in last week’s post from the market economy most of us inhabit today, the one that tends to throw people for a loop most effectively is the near-total absence of money in everyday medieval life. Money is so central to current notions of economics that getting by without it is all but unthinkable these days.  The fact—and of course it is a fact—that the vast majority of human societies, complex civilizations among them, have gotten by just fine without money of any kind barely registers in our collective imagination.
One source of this curious blindness, I’ve come to think, is the way that the logic of money is presented to students in school. Those of my readers who sat through an Economics 101 class will no doubt recall the sort of narrative that inevitably pops up in textbooks when this point is raised. You have, let’s say, a pig farmer who has bad teeth, but the only dentist in the village is Jewish, so the pig farmer can’t simply swap pork chops and bacon for dental work. Barter might be an option, but according to the usual textbook narrative, that would end up requiring some sort of complicated multiparty deal whereby the pig farmer gives pork to the carpenter, who builds a garage for the auto repairman, who fixes the hairdresser’s car, and eventually things get back around to the dentist. Once money enters the picture, by contrast, the pig farmer sells bacon and pork chops to all and sundry, uses the proceeds to pay the dentist, and everyone’s happy. Right?
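Before picking that story apart, it’s worth noticing how mechanical its logic really is: the “complicated multiparty deal” the textbooks invoke is simply a closed loop in a web of wants. Here’s a minimal sketch of that logic in Python—the cast of villagers comes from the story above, but the particular goods and wants assigned to each are invented purely for illustration:

```python
# A sketch of the textbook village: each participant produces one good
# and wants one good made by somebody else. (The cast comes from the
# story above; the particular wants are invented for illustration.)
produces = {
    "pig farmer":     "pork",
    "carpenter":      "carpentry",
    "auto repairman": "car repair",
    "hairdresser":    "haircuts",
    "dentist":        "dental work",
}
wants = {
    "pig farmer":     "dental work",
    "carpenter":      "pork",
    "auto repairman": "carpentry",
    "hairdresser":    "car repair",
    "dentist":        "haircuts",
}

def barter_chain(start):
    """Follow each participant's want to its producer until the loop closes."""
    producer_of = {good: person for person, good in produces.items()}
    chain, person = [start], start
    while True:
        person = producer_of[wants[person]]   # whoever makes what they want
        if person == start:                   # the loop closed: a deal exists
            return chain
        if person in chain:                   # a dead end: no loop through start
            return None
        chain.append(person)

print(barter_chain("pig farmer"))
# ['pig farmer', 'dentist', 'hairdresser', 'auto repairman', 'carpenter']
```

Five parties have to be found, matched, and kept willing before the pig farmer gets his teeth fixed; with money, the loop collapses into a single two-party exchange. So the textbook’s logic runs, anyway.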
Well, maybe. Let’s stop right there for a moment, and take a look at the presuppositions hardwired into this little story. First of all, the narrative assumes that participants have a single rigidly defined economic role: the pig farmer can only raise pigs, the dentist can only fix teeth, and so on. Furthermore, it assumes that participants can’t anticipate needs and adapt to them: even though he knows the only dentist in town is Jewish, the pig farmer can’t do the logical thing and start raising lambs for Passover on the side, or what have you. Finally, the narrative assumes that participants can only interact economically through market exchanges: there are no other options for meeting needs for goods and services, no other way to arrange exchanges between people other than market transactions driven by the law of supply and demand.
Even in modern industrial societies, these three presuppositions are rarely true. I happen to know several pig farmers, for example, and none of them are so hyperspecialized that their contributions to economic exchanges are limited to pork products; garden truck, fresh eggs, venison, moonshine, and a good many other things could come into the equation as well. For that matter, outside the bizarre feedlot landscape of industrial agriculture, mixed farms raising a variety of crops and livestock are far more resilient than single-crop farms, and thus considerably more common in societies that haven’t shoved every economic activity into the procrustean bed of the money economy.
As for the second point raised above, the law of supply and demand works just as effectively in a barter economy as in a money economy, and successful participants are always on the lookout for a good or service that’s in short supply relative to potential demand, and so can be bartered with advantage. It’s no accident that traditional village economies tend to be exquisitely adapted to produce exactly that mix of goods and services the inhabitants of the village need and want.
Finally, of course, there are many ways of handling the production and distribution of goods and services without engaging in market exchanges. The household economy, in which members of each household produce goods and services that they themselves consume, is the foundation of economic activity in most human societies, and still accounted for the majority of economic value produced in the United States until not much more than a century ago. The gift economy, in which members of a community give their excess production to other members of the same community in the expectation that the gift will be reciprocated, is immensely common; so is the feudal economy delineated in last week’s post, with its systematic exclusion of market forces from the economic sphere. There are others, plenty of them, and none of them require money at all.
Thus the logic behind money pretty clearly isn’t what the textbook story claims it is. That doesn’t mean that there’s no logic to it at all; what it means is that nobody wants to talk about what it is that money is actually meant to do. Fortunately, we’ve discussed the relevant issues in last week’s post, so I can sum up the matter here in a single sentence: the point of money is that it makes intermediation easy.
Intermediation, for those of my readers who weren’t paying attention last week, is the process by which other people insert themselves between the producer and the consumer of any good or service, and take a cut of the proceeds of the transaction. That’s very easy to do in a money economy, because—as we all know from personal experience—the intermediaries can simply charge fees for whatever service they claim to provide, and then cash in those fees for whatever goods and services they happen to want.
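It’s worth pausing over how quietly those cuts add up. A back-of-the-envelope sketch, with the intermediaries and every rate invented purely for illustration:

```python
# Invented rates, purely for illustration: what happens to each dollar a
# customer spends as successive intermediaries take their cut.
cuts = [
    ("payment processor", 0.03),
    ("distributor",       0.15),
    ("retail markup",     0.40),
]

remainder = 1.00                     # one customer dollar
for name, rate in cuts:
    fee = remainder * rate           # each cut comes out of what's left
    remainder -= fee
    print(f"{name} takes ${fee:.2f}, leaving ${remainder:.2f}")
print(f"producer's share: ${remainder:.2f}")   # roughly $0.49 of each dollar
```

Each cut looks modest on its own; compounded along the chain, half the dollar is gone before it reaches anyone who actually produced something—and in a money economy, nobody ever has to ask whether the intermediaries would accept payment in lamb chops.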
Imagine, by way of contrast, the predicament of an intermediary who wanted to insert himself into, and take a cut out of, a money-free transaction between the pig farmer and the dentist. We’ll suppose that the arrangement the two of them have worked out is that the pig farmer raises enough lambs each year that all the Jewish families in town can have a proper Passover seder, the dentist takes care of the dental needs of the pig farmer and his family, and the other families in the Jewish community work things out with the dentist in exchange for their lambs—a type of arrangement, half barter and half gift economy, that’s tolerably common in close-knit communities.
Intermediation works by taking a cut from each transaction. The cut may be described as a tax, a fee, an interest payment, a service charge, or what have you, but it amounts to the same thing: whenever money changes hands, part of it gets siphoned off for the benefit of the intermediaries involved in the transaction. The same thing can be done in some money-free transactions, but not all. Our intermediary might be able to demand a certain amount of meat from each Passover lamb, or require the pig farmer to raise one lamb for the intermediary per six lambs raised for the local Jewish families, though this assumes that he either likes lamb chops or can swap the lamb to someone else for something he wants.
What on earth, though, is he going to do to take a cut from the dentist’s side of the transaction?  There wouldn’t be much point in demanding one tooth out of every six the dentist extracts, for example, and requiring the dentist to fill one of the intermediary’s teeth for every twenty other teeth he fills would be awkward at best—what if the intermediary doesn’t happen to need any teeth filled this year? What’s more, once intermediation is reduced to such crassly physical terms, it’s hard to pretend that it’s anything but a parasitic relationship that benefits the intermediary at everyone else’s expense.
What makes intermediation seem to make sense in a money economy is that money is the primary intermediation. Money is a system of arbitrary tokens used to facilitate exchange, but it’s also a good deal more than that. It’s the framework of laws, institutions, and power relationships that creates the tokens, defines their official value, and mandates that they be used for certain classes of economic exchange. Once the use of money is required for any purpose, the people who control the framework—whether those people are government officials, bankers, or what have you—get to decide the terms on which everyone else gets access to money, which amounts to effective control over everyone else. That is to say, they become the primary intermediaries, and every other intermediation depends on them and the money system they control.
This is why, to cite only one example, British colonial administrators in Africa imposed a house tax on the native population, even though the cost of administering and collecting the tax was more than the revenue the tax brought in. By requiring the tax to be paid in money rather than in kind, the colonial government forced the natives to participate in the money economy, on terms that were of course set by the colonial administration and British business interests. The money economy is the basis on which nearly all other forms of intermediation rest, and forcing the native peoples to work for money instead of allowing them to meet their economic needs in some less easily exploited fashion was an essential part of the mechanism that pumped wealth out of the colonies for Britain’s benefit.
Watch the way that the money economy has insinuated itself into every dimension of modern life in an industrial society and you’ve got a ringside seat from which to observe the metastasis of intermediation in recent decades. Where money goes, intermediation follows: that’s one of the unmentionable realities of political economy, the science that Adam Smith actually founded, but which was gutted, stuffed, and mounted on the wall—turned, that is, into the contemporary pseudoscience of economics—once it became painfully clear just what kind of trouble got stirred up when people got to talking about the implications of the links between political power and economic wealth.
There’s another side to the metastasis just mentioned, though, and it has to do with the habits of thought that the money economy both requires and reinforces. At the heart of the entire system of money is the concept of abstract value, the idea that goods and services share a common, objective attribute called “value” that can be gauged according to the one-dimensional measurement of price.
It’s an astonishingly complex concept, and so needs unpacking here. Philosophers generally recognize a crucial distinction between facts and values; there are various ways of distinguishing them, but the one that matters for our present purposes is that facts are collective and values are individual. Consider the statement “it rained here last night.” Given agreed-upon definitions of “here” and “last night,” that’s a factual statement; all those who stood outside last night in the town where I live and looked up at the sky got raindrops on their faces. In the strict sense of the word, facts are objective—that is, they deal with the properties of objects of perception, such as raindrops and nights.
Values, by contrast, are subjective—that is, they deal with the properties of perceiving subjects, such as people who look up at the sky and notice wetness on their faces. One person is annoyed by the rain, another is pleased, another is completely indifferent to it, and these value judgments are irreducibly personal; it’s not that the rain is annoying, pleasant, or indifferent, it’s the individuals who are affected in these ways. Nor are these personal valuations easy to sort out along a linear scale without drastic distortion. The human experience of value is a richly multidimensional thing; even in a language as poorly furnished with descriptive terms for emotion as English is, there are countless shades of meaning available for talking about positive valuations, and at least as many more for negative ones.
From that vast universe of human experience, the concept of abstract value extracts a single variable—“how much will you give for it?”—and reduces the answer to a numerical scale denominated in dollars and cents or the local equivalent. Like any other act of reductive abstraction, it has its uses, but the benefits of any such act always have to be measured against the blind spots generated by reductive modes of thinking, and the consequences of that induced blindness must either be guarded against or paid in full. The latter is far and away the more common of the two, and it’s certainly the option that modern industrial society has enthusiastically chosen.
Those of my readers who want to see the blindness just mentioned in full spate need only turn to any of the popular cornucopian economic theorists of our time. The fond and fatuous insistence that resource depletion can’t possibly be a problem, because investing additional capital will inevitably turn up new supplies—precisely the same logic, by the way, that appears in the legendary utterance “I can’t be overdrawn, I still have checks left!”—unfolds directly from the flattening out of qualitative value into quantitative price just discussed. The habit of reducing every kind of value to bare price is profitable in a money economy, since it facilitates ignoring every variable that might get in the way of making money off transactions; unfortunately it misses a minor but crucial fact, which is that the laws of physics and ecology trump the laws of economics, and can neither be bribed nor bought.
The contemporary fixation on abstract value isn’t limited to economists and those who believe them, nor is its potential for catastrophic consequences. I’m thinking here specifically of those people who have grasped the fact that industrial civilization is picking up speed on the downslope of its decline, but whose main response to it consists of trying to find some way to stash away as much abstract value as possible now, so that it will be available to them in some prospective postcollapse society. Far more often than not, gold plays a central role in that strategy, though there are a variety of less popular vehicles that play starring roles in the same sort of plan.
Now of course it was probably inevitable in a consumer society like ours that even the downfall of industrial civilization would be turned promptly into yet another reason to go shopping. Still, there’s another difficulty here, and that’s that the same strategy has been tried before, many times, in the last years of other civilizations. There’s an ample body of historical evidence that can be used to see just how well it works. The short form? Don’t go there.
It so happens, for example, that among the sagas and songs of early medieval Europe are a handful that deal with historical events in the years right after the fall of Rome: the Nibelungenlied, Beowulf, the oldest strata of Norse saga, and some others. Now of course all these started out as oral traditions, and finally found their way into written form centuries after the events they chronicle, when their compilers had no way to check their facts; they also include plenty of folktale and myth, as oral traditions generally do. Still, they describe events and social customs that have been confirmed by surviving records and archeological evidence, and offer one of the best glimpses we’ve got into the lived experience of descent into a dark age.
Precious metals played an important part in the political economy of that age—no surprises there, as the Roman world had a precious-metal currency, and since banks had not been invented yet, portable objects of gold and silver were the most common way that the Roman world’s well-off classes stashed their personal wealth. As the western empire foundered in the fifth century CE and its market economy came apart, hoarding precious metals became standard practice, and rural villas, the doomsteads of the day, popped up all over. When archeologists excavate those villas, they routinely find evidence that they were looted and burnt when the empire fell, and tolerably often the archeologists or a hobbyist with a metal detector has located the buried stash of precious metals somewhere nearby, an expressive reminder of just how much benefit that store of abstract wealth actually provided to its owner.
That’s the same story you get from all the old legends: when treasure turns up, a lot of people are about to die. The Volsunga saga and the Nibelungenlied, for example, are versions of the same story, based on dim memories of events in the Rhine valley in the century or so after Rome’s fall. The primary plot engine of those events is a hoard of the usual late Roman kind,  which passes from hand to hand by way of murder, torture, treachery, vengeance, and the extermination of entire dynasties. For that matter, when Beowulf dies after slaying his dragon, and his people discover that the dragon was guarding a treasure, do they rejoice? Not at all; they take it for granted that the kings and warriors of every neighboring kingdom are going to come and slaughter them to get it—and in fact that’s what happens. That’s business as usual in a dark age society.
The problem with stockpiling gold on the brink of a dark age is thus simply another dimension, if a more extreme one, of the broader problem with intermediation. It bears remembering that gold is not wealth; it’s simply a durable form of money, and thus, like every other form of money, an arbitrary token embodying a claim to real wealth—that is, goods and services—that other people produce. If the goods and services aren’t available, a basement safe full of gold coins won’t change that fact, and if the people who have the goods and services need them more than they want gold, the same is true. Even if the goods and services are to be had, if everyone with gold is bidding for the same diminished supply, that gold isn’t going to buy anything close to what it does today. What’s more, tokens of abstract value have another disadvantage in a society where the rule of law has broken down: they attract violence the way a dead rat draws flies.
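Put the bidding problem in crudely quantitative terms and the difficulty is hard to miss. A deliberately simple sketch, with every number invented for illustration:

```python
# All numbers invented for illustration: the gold price of goods is,
# crudely, the gold being bid divided by the goods actually for sale.
gold_bid, goods_for_sale = 1_000, 10_000    # today: oz bid, units on offer
price_today = gold_bid / goods_for_sale     # 0.1 oz buys one unit

goods_for_sale = 1_000                      # after collapse: 10x fewer goods,
price_after = gold_bid / goods_for_sale     # same hoards bidding: 1.0 oz/unit

print(f"gold cost of goods rises {price_after / price_today:.0f}x")  # 10x
```

The same hoards chasing a tenth the goods means gold buys a tenth of what it used to—and that’s before anyone with a sword shows up to simplify the transaction.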
The fetish for stockpiling gold has always struck me, in fact, as the best possible proof that most of the people who think they are preparing for total social collapse haven’t actually thought the matter through, and considered the conditions that will obtain after the rubble stops bouncing. Let’s say industrial civilization comes apart, quickly or slowly, and you have gold.  In that case, either you spend it to purchase goods and services after the collapse, or you don’t. If you do, everyone in your vicinity will soon know that you have gold, the rule of law no longer discourages people from killing you and taking it in the best Nibelungenlied fashion, and sooner or later you’ll run out of ammo. If you don’t, what good will the gold do you?
The era when Nibelungenlied conditions apply—when, for example, armed gangs move from one doomstead to another, annihilating the people holed up there, living for a while on what they find, and then moving on to the next, or when local governments round up the families of those believed to have gold and torture them to death, starting with the children, until someone breaks—is a common stage of dark ages. It’s a self-terminating one, since sooner or later the available supply of precious metals or other carriers of abstract wealth is spread thin across the available supply of warlords. This can go on for anything up to a century or two before things reach the stage commemorated in the Anglo-Saxon poem “The Seafarer”: Nearon nú cyningas ne cáseras, ne goldgiefan swylce iú wáeron (No more are there kings or caesars or gold-givers as once there were).
That’s when things begin settling down and the sort of feudal arrangement sketched out in last week’s post begins to emerge, when money and the market play little role in most people’s lives and labor and land become the foundation of a new, impoverished, but relatively stable society where the rule of law again becomes a reality. None of us living today will see that period arrive, but it’s good to know where the process is headed. We’ll discuss the practical implications of that knowledge in a future post.