AODA Blog

A Muddle of Mind and Matter

Wed, 2017-02-22 10:02
The philosophy of Arthur Schopenhauer, which we’ve been discussing for the last two weeks, has a feature that reliably irritates most people when they encounter it for the first time: it doesn’t divide up the world the way people in modern western societies habitually do. To say, as Schopenhauer does, that the world we experience is a world of subjective representations, and that we encounter the reality behind those representations in will, is to map out the world in a way so unfamiliar that it grates on the nerves. Thus it came as no surprise that last week’s post fielded a flurry of responses trying to push the discussion back onto the more familiar ground of mind and matter.
That was inevitable. Every society has what I suppose could be called its folk metaphysics, a set of beliefs about the basic nature of existence that are taken for granted by most people in that society, and the habit of dividing the world of our experience into mind and matter is among the core elements of the folk metaphysics of the modern western world. Most of us think of it, on those occasions when we think of it at all, as simply the way the world is. It rarely occurs to most of us that there’s any other way to think of things—and when one shows up, a great many of us back away from it as fast as possible.
Yet dividing the world into mind and matter is really rather problematic, all things considered. The most obvious difficulty is the relation between the two sides of the division. This is usually called the mind-body problem, after the place where each of us encounters that difficulty most directly. Grant for the sake of argument that each of us really does consist of a mind contained in a material body, how do these two connect? It’s far from easy to come up with an answer that works.
Several approaches have been tried in the attempt to solve the mind-body problem. There’s dualism, which is the claim that there are two entirely different and independent kinds of things in the world—minds and bodies—and requires proponents to come up with various ways to justify the connection between them. First place for philosophical brashness in this connection goes to Rene Descartes, who argued that the link was directly and miraculously caused by the will of God. Plenty of less blatant methods of handwaving have been used to accomplish the same trick, but all of them require question-begging maneuvers of various kinds, and none has yet managed to present any kind of convincing evidence for itself.
Then there are the reductionistic monisms, which attempt to account for the relationship of mind and matter by reducing one of them to the other. The most popular reductionistic monism these days is reductionistic materialism, which claims that what we call “mind” is simply the electrochemical activity of those lumps of matter we call human brains. Though it’s a good deal less popular these days, there’s also reductionistic idealism, which claims that what we call “matter” is brought into being by the activity of minds, or of Mind.
Further out still, you get the eliminative monisms, which deal with the relationship between mind and matter by insisting that one of them doesn’t exist. There are eliminative materialists, for example, who insist that mental experiences don’t exist, and our conviction that we think, feel, experience pain and pleasure, etc. is an “introspective illusion.” (I’ve often thought that one good response to such a claim would be to ask, “Do you really think so?” The consistent eliminative materialist would have to answer “No.”) There are also eliminative idealists, who insist that matter doesn’t exist and that all is mind.
There’s probably been as much effort expended on the mind-body problem as on any other single philosophical issue in modern times, and yet it remains the focus of endless debate even today. That sort of intellectual merry-go-round is usually a pretty good sign that the basic assumptions at the root of the question have some kind of lethal flaw. That’s particularly true when this sort of ongoing donnybrook isn’t the only persistent difficulty surrounding the same set of ideas—and that’s very much the case here.
After all, there’s a far more personal sense in which the phrase “mind-body problem” can be taken. To speak in the terms usual for our culture, this thing we’re calling “mind” includes only a certain portion of what we think of as our inner lives. What, after all, counts as “mind”? In the folk metaphysics of our culture, and in most of the more formal systems of thought based on it, “mind” is consciousness plus the thinking and reasoning functions, perhaps with intuition (however defined) tied on like a squirrel’s tail to the antenna of an old-fashioned jalopy. The emotions aren’t part of mind, and neither are such very active parts of our lives as sexual desire and the other passions; it sounds absurd, in fact, to talk about “the emotion-body problem” or the “passion-body problem.” Why does it sound absurd? Because, consciously or unconsciously, we assign the emotions and the passions to the category of “body,” along with the senses.
This is where we get the second form of the mind-body problem, which is that we’re taught implicitly and explicitly that the mind governs the body, and yet the functions we label “body” show a distinct lack of interest in obeying the functions we call “mind.” Sexual desire is of course the most obvious example. What people actually desire and what they think they ought to desire are quite often two very different things, and when the “mind” tries to bully the “body” into desiring what the “mind” thinks it ought to desire, the results are predictably bad. Add enough moral panic to the mix, in fact, and you end up with sexual hysteria of the classic Victorian type, in which the body ends up being experienced as a sinister Other responding solely to its own evil propensities, the seductive wiles of other persons, or the machinations of Satan himself despite all the efforts of the mind to rein it in.
Notice the implicit hierarchy woven into the folk metaphysics just sketched out, too. Mind is supposed to rule matter, not the other way around; mind is active, while matter is passive or, at most, subject to purely mechanical pressures that make it lurch around in predictable ways. When things don’t behave that way, you tend to see people melt down in one way or another—and the universe being what it is, things don’t actually behave that way very often, so the meltdowns come at regular intervals.
They also arrive in an impressive range of contexts, because the way of thinking about things that divides them into mind and matter is remarkably pervasive in western societies, and pops up in the most extraordinary places. Think of the way that our mainstream religions portray God as the divine Mind ruling omnipotently over a universe of passive matter; that’s the ideal toward which our notions of mind and body strive, and which they predictably never reach. Think of the way that our entertainment media can always evoke a shudder of horror by imagining that something we assign to the category of lifeless matter—a corpse in the case of zombie flicks, a machine in such tales as Stephen King’s Christine, or what have you—suddenly starts acting as though it possesses a mind.
For that matter, listen to the more frantic end of the rhetoric on the American left following the recent presidential election and you’ll hear the same theme echoing off the hills. The left likes to think of itself as the smart people, the educated people, the sensitive and thoughtful and reasonable people—in effect, the people of Mind. The hate speech that many of them direct toward their political opponents leans just as heavily on the notion that these latter are stupid, uneducated, insensitive, irrational, and so on—that is to say, the people of Matter. Part of the hysteria that followed Trump’s election, in turn, might best be described as the political equivalent of the instinctive reaction to a zombie flick: the walking dead have suddenly lurched out of their graves and stalked toward the ballot box, the body politic has rebelled against its self-proclaimed mind!
Let’s go deeper, though. The habit of dividing the universe of human experience into mind and matter isn’t hardwired into the world, or for that matter into human consciousness; there have been, and are still, societies in which people simply don’t experience themselves and the world that way. The mind-body problem and the habits of thought that give rise to it have a history, and it’s by understanding that history that it becomes possible to see past the problem toward a solution.
That history takes its rise from an interesting disparity among the world’s great philosophical traditions. The three that arose independently—the Chinese, the Indian, and the Greek—focused on different aspects of humanity’s existence in the world. Chinese philosophy from earliest times directed its efforts to understanding the relationship between the individual and society; that’s why the Confucian mainstream of Chinese philosophy is resolutely political and social in its focus, exploring ways that the individual can find a viable place within society, and the alternative Taoist tradition in its oldest forms (before it absorbed mysticism from Indian sources) focused on ways that the individual can find a viable place outside society. Indian philosophy, by contrast, directed its efforts to understanding the nature of individual existence itself; that’s why the great Indian philosophical schools all got deeply into epistemology and ended up with a strong mystical bent.
The Greek philosophical tradition, in turn, went to work on a different set of problems. Greek philosophy, once it got past its initial fumblings, fixed its attention on the world of thought. That’s what led Greek thinkers to transform mathematics from an unsorted heap of practical techniques to the kind of ordered system of axioms and theorems best exemplified by Euclid’s Elements of Geometry, and it’s also what led Greek thinkers in the same generation as Euclid to create logic, one of the half dozen or so greatest creations of the human mind. Yet it also led to something considerably more problematic: the breathtaking leap of faith by which some of the greatest intellects of the ancient world convinced themselves that the structure of their thoughts was the true structure of the universe, and that thoughts about things were therefore more real than the things themselves.
The roots of that conviction go back all the way to the beginnings of Greek philosophy, but it really came into its own with Parmenides, an important philosopher of the generation immediately before Plato. Parmenides argued that there were two ways of understanding the world, the way of truth and the way of opinion; the way of opinion consisted of understanding the world as it appears to the senses, which according to Parmenides means it’s false, while the way of truth consisted of understanding the world the way that reason proved it had to be, even when this contradicted the testimony of the senses. To be sure, there are times and places where the testimony of the senses does indeed need to be corrected by logic, but it’s at least questionable whether this should be taken anything like as far as Parmenides took it—he argued, for example, that motion was logically impossible, and so nothing ever actually moves, even though it seems that way to our deceiving senses.
The idea that thoughts about things are more real than things settled into what would be its classic form in the writings of Plato, who took Parmenides’ distinction and set to work to explain the relationship between the worlds of truth and opinion. To Plato, the world of truth became a world of forms or ideas, on which everything in the world of sensory experience is modeled. The chair we see, in other words, is a projection or reflection downwards into the world of matter of the timeless, pure, and perfect form or idea of chair-ness. The senses show us the projections or reflections; the reasoning mind shows us the eternal form from which they descend.
That was the promise of classic Platonism—that the mind could know the truth about the universe directly, without the intervention of the senses, the same way it could know the truth of a mathematical demonstration. The difficulty with this enticing claim, though, was that when people tried to find the truth about the universe by examining their thinking processes, no two of them discovered exactly the same truth, and the wider the cultural and intellectual differences between them, the more different the truths turned out to be. It was for this reason among others that Aristotle, whose life’s work was basically that of cleaning up the mess that Plato and his predecessors left behind, made such a point of claiming that nothing enters the mind except through the medium of the senses. It’s also why the Academy, the school founded by Plato, took a hard skeptical turn in the generations immediately after his time, and focused relentlessly on the limits of human knowledge and reasoning.
Later on, Greek philosophy and its Roman foster-child headed off in other directions—on the one hand, into ethics, and the question of how to live the good life in a world where certainty isn’t available; on the other, into mysticism, and the question of whether the human mind can experience the truth of things directly through religious experience. A great deal of Plato’s thinking, however, got absorbed by the Christian religion after the latter clawed its way to respectability in the fourth century CE.
Augustine of Hippo, the theologian who basically set the tone of Christianity in the west for the next fifteen centuries, had been a Neoplatonist before he returned to his Christian roots, and he was far from the only Christian of that time to drink deeply from Plato's well. In his wake, Platonism became the standard philosophy of the western church until it was displaced by a modified version of Aristotle’s philosophy in the high Middle Ages. Thinkers divided the human organism into two portions, body and soul, and began the process by which such things as sexuality and the less angelic emotions got exiled from the soul into the body.
Even after Thomas Aquinas made Aristotle popular again, the basic Parmenidean-Platonic notion of truth had been so thoroughly bolted into Christian theology that it rode right over any remaining worries about the limitations of human reason. The soul trained in the use of reason could see straight to the core of things, and recognize by its own operations such basic religious doctrines as the existence of God:  that was the faith with which generations of scholars pursued the scholastic philosophy of medieval times, and those who disagreed with them rarely quarreled over their basic conception—rather, the point at issue was whether the Fall had left the human mind so vulnerable to the machinations of Satan that it couldn’t count on its own conclusions, and the extent to which divine grace would override Satan’s malicious tinkerings anywhere this side of heaven.
If you happen to be a devout Christian, such questions make sense, and they matter. It’s harder to see how they still made sense and mattered as the western world began moving into its post-Christian era in the eighteenth century, and yet the Parmenidean-Platonic faith in the omnipotence of reason gained ground as Christianity ebbed among the educated classes. People stopped talking about soul and body and started talking about mind and body instead.
Since mind, mens in Latin, was already in common use as a term for the faculty of the soul that handled its thinking and could be trained to follow the rules of reason, that shift was of vast importance. It marked the point at which the passions and the emotions were shoved out of the basic self-concept of the individual in western culture, and exiled to the body, that unruly and rebellious lump of matter in which the mind is somehow caged.
That’s one of the core things that Schopenhauer rejected. As he saw it, the mind isn’t the be-all and end-all of the self, stuck somehow into the prison house of the body. Rather, the mind is a frail and unstable set of functions that surface now and then on top of other functions that are much older, stronger, and more enduring. What expresses itself through all these functions, in turn, is will: on the primary and most basic level, as the will to exist; on a secondary level, as the will to live, with all the instincts and drives that unfold from that will; on a tertiary level, as the will to experience, with all the sensory and cognitive apparatus that unfolds from that will; and on a quaternary level, as the will to understand, with all the abstract concepts and relationships that unfold from that will.
Notice that from this point of view, the structure of thought isn't the structure of the cosmos, just a set of convenient models, and thoughts about things are emphatically not more real than the things themselves.  The things themselves are wills, expressing themselves through their several modes.  The things as we know them are representations, and our thoughts about the things are abstract patterns we create out of memories of representations, and thus at two removes from reality.
Notice also that from this point of view, the self is simply a representation—the ur-representation, the first representation each of us makes in infancy as it gradually sinks in that there’s a part of the kaleidoscope of our experience that we can move at will, and a lot more that we can’t, but still just a representation, not a reality. Of course that’s what we see when we first try to pay attention to ourselves, just as we see the coffee cup discussed in the first post in this series. It takes exacting logical analysis, scientific experimentation, or prolonged introspection to get past the representation of the self (or the coffee cup), realize that it’s a subjective construct rather than an objective reality, and grasp the way that it’s assembled out of disparate stimuli according to preexisting frameworks that are partly hardwired into our species and partly assembled over the course of our lives.
Notice, finally, that those functions we like to call “mind”—in the folk metaphysics of our culture, again, these are consciousness and the capacity to think, with a few tag-ends of other functions dangling here and there—aren’t the essence of who we are, the ghost in the machine, the Mini-Me perched inside the skull that pushes and pulls levers to control the passive mass of the body and gets distracted by the jabs and lurches of the emotions and passions. The functions we call “mind,” rather, are a set of delicate, tentative, and fragile functions of will, less robust and stable than most of the others, and with no inherent right to rule the other functions. The Schopenhauerian self is an ecosystem rather than a hierarchy, and if what we call “mind” sits at the top of the food chain like a fox in a meadow, that simply means that the fox has to spend much of its time figuring out where mice like to go, and even more of its time sleeping in its den, while the mice scamper busily about and the grass goes quietly about turning sunlight, water and carbon dioxide into the nutrients that support the whole system.
Accepting this view of the self requires sweeping revisions of the ways we like to think about ourselves and the world, which is an important reason why so many people react with acute discomfort when it’s suggested. Nonetheless those revisions are of crucial importance, and as this discussion continues, we’ll see how they offer crucial insights into the problems we face in this age of the world—and into their potential solutions.

The World as Will

Wed, 2017-02-15 14:35
It's impressively easy to misunderstand the point made in last week’s post here on The Archdruid Report. To say that the world we experience is made up of representations of reality, constructed in our minds by taking the trickle of data we get from the senses and fitting those into patterns that are there already, doesn’t mean that nothing exists outside of our minds. Quite the contrary, in fact; there are two very good reasons to think that there really is something “out there,” a reality outside our minds that produces the trickle of data we’ve discussed.
The first of those reasons seems almost absurdly simple at first glance: the world doesn’t always make sense to us. Consider, as one example out of godzillions, the way that light seems to behave like a particle on some occasions and like a wave on others. That’s been described, inaccurately, as a paradox, but it’s actually a reflection of the limitations of the human mind.
What, after all, does it mean to call something a particle? Poke around the concept for a while and you’ll find that at root, this concept “particle” is an abstract metaphor, extracted from the common human experience of dealing with little round objects such as pebbles and marbles. What, in turn, is a wave? Another abstract metaphor, extracted from the common human experience of watching water in motion. When a physicist says that light sometimes acts like a particle and sometimes like a wave, what she’s saying is that neither of these two metaphors fits more than a part of the way that light behaves, and we don’t have any better metaphor available.
If the world were nothing but a hallucination projected by our minds, then it would contain nothing that wasn’t already present in our minds—for what other source could there be? That implies in turn that there would be a perfect match between the contents of the world and the contents of our minds, and we wouldn’t get the kind of mismatch between mind and world that leaves physicists flailing. More generally, the fact that the world so often baffles us offers good evidence that behind the world we experience, the world as representation, there’s some “thing in itself” that’s the source of the sense data we assemble into representations.
The other reason to think that there’s a reality distinct from our representations is that, in a certain sense, we experience such a reality at every moment.
Raise one of your hands to a position where you can see it, and wiggle the fingers. You see the fingers wiggling—or, more precisely, you see a representation of the wiggling fingers, and that representation is constructed in your mind out of bits of visual data, a great deal of memory, and certain patterns that seem to be hardwired into your mind. You also feel the fingers wiggling—or, here again, you feel a representation of the wiggling fingers, which is constructed in your mind out of bits of tactile and kinesthetic data, plus the usual inputs from memory and hardwired patterns. Pay close attention and you might be able to sense the way your mind assembles the visual representation and the tactile one into a single pattern; that happens close enough to the surface of consciousness that a good many people can catch themselves doing it.
So you’ve got a representation of wiggling fingers, part of the world as representation we experience. Now ask yourself this: the action of the will that makes the fingers wiggle—is that a representation?
This is where things get interesting, because the only reasonable answer is no, it’s not. You don’t experience the action of the will as a representation; you don’t experience it at all. You simply wiggle your fingers. Sure, you experience the results of the will’s action in the form of representations—the visual and tactile experiences we’ve just been considering—but not the will itself. If it were true that you could expect to see or hear or feel or smell or taste the impulse of the will rolling down your arm to the fingers, say, it would be reasonable to treat the will as just one more representation. Since that isn’t the case, it’s worth exploring the possibility that in the will, we encounter something that isn’t just a representation of reality—it’s a reality we encounter directly.
That’s the insight at the foundation of Arthur Schopenhauer’s philosophy. Schopenhauer’s one of the two principal guides who are going to show us around the giddy funhouse that philosophy has turned into of late, and guide us to the well-marked exits, so you’ll want to know a little about him. He lived in the ramshackle assortment of little countries that later became the nation of Germany; he was born in 1788 and died in 1860; he got his doctorate in philosophy in 1813; he wrote his most important work, The World as Will and Representation, before he turned thirty; and he spent all but the last ten years of his life in complete obscurity, ignored by the universities and almost everyone else. A small inheritance, carefully managed, kept him from having to work for a living, and so he spent his time reading, writing, playing the flute for an hour a day before dinner, and grumbling under his breath as philosophy went its merry way into metaphysical fantasy. He grumbled a lot, and not always under his breath. Fans of Sesame Street can think of him as philosophy’s answer to Oscar the Grouch.
Schopenhauer came of age intellectually in the wake of Immanuel Kant, whose work we discussed briefly last week, and so the question he faced was how philosophy could respond to the immense challenge Kant threw at the discipline’s feet. Before you go back to chattering about what’s true and what’s real, Kant said in effect, show me that these labels mean something and relate to something, and that you’re not just chasing phantoms manufactured by your own minds.
Most of the philosophers who followed in Kant’s footsteps responded to his challenge by ignoring it, or using various modes of handwaving to pretend that it didn’t matter. One common gambit at the time was to claim that the human mind has a special superpower of intellectual intuition that enables it to leap tall representations in a single bound, and get to a direct experience of reality that way. What that meant in practice, of course, is that a philosopher could simply treat whatever abstractions he fancied as truths that didn’t have to be proved, and then build a great tottering system on top of them; after all, he’d intellectually intuited them—prove that he hadn’t!
There were other such gimmicks. What set Schopenhauer apart was that he took Kant’s challenge seriously enough to go looking for something that wasn’t simply a representation. What he found—why, that brings us back to the wiggling fingers.
As discussed in last week’s post, every one of the world’s great philosophical traditions has ended up having to face the same challenge Kant flung in the face of the philosophers of his time. Schopenhauer knew this, since a fair amount of philosophy from India had been translated into European languages by his time, and he read extensively on the subject. This was helpful because Indian philosophy hit its own epistemological crisis around the tenth century BCE, a good twenty-nine centuries before Western philosophy got there, and so had a pretty impressive head start. There’s a rich diversity of responses to that crisis in the classical Indian philosophical schools, but most of them came to see consciousness as a (or the) thing-in-itself, as reality rather than representation.
It’s a plausible claim. Look at your hand again, with or without wiggling fingers. Now be aware of yourself looking at the hand—many people find this difficult, so be willing to work at it, and remember to feel as well as see. There’s your hand; there’s the space between your hand and your eyes; there’s whatever of your face you can see, with or without eyeglasses attached; pay close attention and you can also feel your face and your eyes from within; and then there’s—
There’s the thing we call consciousness, the whatever-it-is that watches through your eyes. Like the act of will that wiggled your fingers, it’s not a representation; you don’t experience it. In fact, it’s very like the act of will that wiggled your fingers, and that’s where Schopenhauer went his own way.
What, after all, does it mean to be conscious of something? Some simple examples will help clarify this. Move your hand until it bumps into something; it’s when something stops the movement that you feel it. Look at anything; you can see it if and only if you can’t see through it. You are conscious of something when, and only when, it resists your will.
That suggested to Schopenhauer that consciousness derives from will, not the other way around. There are other lines of reasoning that point in the same direction, and all of them derive from common human experiences. For example, each of us stops being conscious for some hours out of every day, whenever we go to sleep. During part of the time we’re sleeping, we experience nothing at all; during another part, we experience the weirdly disconnected representations we call “dreams.” Even in dreamless sleep, though, it’s common for a sleeper to shift a limb away from an unpleasant stimulus. Thus the will is active even when consciousness is absent.
Schopenhauer proposed that there are different forms or, as he put it, grades of the will. Consciousness, which we can define for present purposes as the ability to experience representations, is one grade of the will—one way that the will can adapt to existence in a world that often resists it. Life is another, more basic grade. Consider the way that plants orient themselves toward sunlight, bending and twisting like snakes in slow motion, and seek out concentrations of nutrients with probing, hungry roots. As far as anyone knows, plants aren’t conscious—that is, they don’t experience a world of representations the way that animals do—but they display the kind of goal-seeking behavior that shows the action of will.
Animals also show goal-seeking behavior, and they do it in a much more complex and flexible way than plants do. There’s good reason to think that many animals are conscious, and experience a world of representations in something of the same way we do; certainly students of animal behavior have found that animals let incidents from the past shape their actions in the present, mistake one person for another, and otherwise behave in ways that suggest that their actions are guided, as ours are, by representations rather than direct reaction to stimuli. In animals, the will has developed the ability to represent its environment to itself.
Animals, at least the more complex ones, also have that distinctive mode of consciousness we call emotion. They can be happy, sad, lonely, furious, and so on; they feel affection for some beings and aversion toward others. Pay attention to your own emotions and you’ll soon notice how closely they relate to the will. Some emotions—love and hate are among them—are motives for action, and thus expressions of will; others—happiness and sadness are among them—are responses to the success or failure of the will to achieve its goals. While emotions are tangled up with representations in our minds, and presumably in those of animals as well, they stand apart; they’re best understood as conditions of the will, expressions of its state as it copes with the world through its own representations.
And humans? We’ve got another grade of the will, which we can call intellect:  the ability to add up representations into abstract concepts, which we do, ahem, at will. Here’s one representation, which is brown and furry and barks; here’s another like it; here’s a whole kennel of them—and we lump them all together in a single abstract category, to which we assign a sound such as “dog.” We can then add these categories together, creating broader categories such as “quadruped” and “pet;” we can subdivide the categories to create narrower ones such as “puppy” and “Corgi;” we can extract qualities from the whole and treat them as separate concepts, such as “furry” and “loud;” we can take certain very general qualities and conjure up the entire realm of abstract number, by noticing how many paws most dogs have and using that, and a great many other things, to come up with the concept of “four.”
So life, consciousness, and intellect are three grades of the will. One interesting thing about them is that the more basic ones are more enduring and stable than the more complex ones. Humans, again, are good examples. Humans remain alive all the way from birth to death; they’re conscious only when awake; they’re intelligent only when actively engaged in thinking—which is a lot less often than we generally like to admit. A certain degree of tiredness, a strong emotion, or a good stiff drink is usually enough to shut off the intellect and leave us dealing with the world on the same mental basis as an ordinarily bright dog; it takes quite a bit more to reduce us to the vegetative level, and serious physical trauma to go one more level down.
Let’s take a look at that final level, though. The conventional wisdom of our age holds that everything that exists is made up of something called “matter,” which is configured in various ways; further, that matter is what really exists, and everything else is somehow a function of matter if it exists at all. For most of us, this is the default setting, the philosophical opinion we start from and come back to, and anyone who tries to question it can count on massive pushback.
The difficulty here is that philosophers and scientists have both proved, in their own ways, that the usual conception of matter is quite simply nonsense. Any physical scientist worth his or her sodium chloride, to begin with, will tell you that what we habitually call “solid matter” is nearly as empty as the vacuum of deep space—a bit of four-dimensional curved spacetime that happens to have certain tiny probability waves spinning dizzily in it, and it’s the interaction between those probability waves and those composing that other patch of curved spacetime we each call “my body” that creates the illusions of solidity, color, and the other properties we attribute to matter.
The philosophers got to the same destination a couple of centuries earlier, and by a different route. The epistemologists I mentioned in last week’s post—Locke, Berkeley, and Hume—took the common conception of matter apart layer by layer and showed, to use the formulation we’ve already discussed, that all the things we attribute to matter are simply representations in the mind. Is there something out there that causes those representations? As already mentioned, yes, there’s very good reason to think so—but that doesn’t mean that the “something out there” has to consist of matter in any sense of the word that means anything.
That’s where Schopenhauer got to work, and once again, he proceeded by calling attention to certain very basic and common human experiences. Each of us has direct access, in a certain sense, to one portion of the “something out there,” the portion each of us calls “my body.” When we experience our bodies, we experience them as representations, just like anything else—but we also act with them, and as the experiment with the wiggling fingers demonstrated, the will that acts isn’t a representation.
Thus there’s a boundary between the part of the universe we encounter as will and representation, and the part we encounter only as representation. The exact location of that boundary is more complex than it seems at first sight. It’s a commonplace in the martial arts, for example, that a capable martial artist can learn to feel with a weapon as though it were a part of the body. Many kinds of swordsmanship rely on what fencers call sentiment du fer, the “sense of the steel;” the competent fencer can feel the lightest touch of the other blade against his own, just as though it brushed his hand.
There are also certain circumstances—lovemaking, dancing, ecstatic religious experience, and mob violence are among them—in which under certain hard-to-replicate conditions, two or more people seem to become, at least briefly, a single entity that moves and acts with a will of its own. All of those involve a shift from the intellect to a more basic grade of the will, and they lead in directions that will deserve a good deal more examination later on; for now, the point at issue is that the boundary line between self and other can be a little more fluid than we normally tend to assume.
For our present purposes, though, we can set that aside and focus on the body as the part of the world each of us encounters in a twofold way: as a representation among representations, and as a means of expression for the will. Everything we perceive about our bodies is a representation, but by noticing these representations, we observe the action of something that isn’t a representation, something we call the will, manifesting in its various grades. That’s all there is. Go looking as long as you want, says Schopenhauer, and you won’t find anything but will and representations. What if that’s all there is—if the thing we call “matter” is simply the most basic grade of the will, and everything in the world thus amounts to will on the one hand, and representations experienced by that mode of will we call consciousness on the other, and the things the representations represent are various expressions of this one energy that, by way of its distinctive manifestations in our own experience, we call the will?
That’s Schopenhauer’s vision. The remarkable thing is how close it is to the vision that comes out of modern science. A century before quantum mechanics, he’d already grasped that behind the facade of sensory representations that you and I call matter lies an incomprehensible and insubstantial reality, a realm of complex forces dancing in the void. Follow his arguments out to their logical conclusion and you get a close enough equivalent of the universe of modern physics that it’s not at all implausible that they’re one and the same. Of course plausibility isn’t proof—but given the fragile, dependent, and derivative nature of the human intellect, it may be as close as we can get.
And of course that latter point is a core reason why Arthur Schopenhauer spent most of his life in complete obscurity and why, after a brief period of mostly posthumous superstardom in the late nineteenth century, his work dropped out of sight and has rarely been noticed since. (To be precise, it’s one of two core reasons; we’ll get to the other one later.) If he’s right, then the universe is not rational. Reason—the disciplined use of the grade of will I’ve called the intellect—isn’t a key to the truth of things.  It’s simply the systematic exploitation of a set of habits of mind that turned out to be convenient for our ancestors as they struggled with the hard but intellectually undemanding tasks of staying fed, attracting mates, chasing off predators, and the like, and later on got pulled out of context and put to work coming up with complicated stories about what causes the representations we experience.
To suggest that, much less to back it up with a great deal of argument and evidence, is to collide head on with one of the most pervasive presuppositions of our culture. We’ll survey the wreckage left behind by that collision in next week’s post.

The World as Representation

Wed, 2017-02-08 14:23
It can be hard to remember these days that not much more than half a century ago, philosophy was something you read about in general-interest magazines and the better grade of newspapers. Existentialist philosopher Jean-Paul Sartre was an international celebrity; the posthumous publication of Pierre Teilhard de Chardin’s Le Phénomène humain (the English translation, predictably, was titled The Phenomenon of Man) got significant flurries of media coverage; Random House’s Vintage Books label brought out cheap mass-market paperback editions of major philosophical writings from Plato straight through to Nietzsche and beyond, and made money off them.
Though philosophy was never really part of the cultural mainstream, it had the same kind of following as avant-garde jazz, say, or science fiction.  At any reasonably large cocktail party you had a pretty fair chance of meeting someone who was into it, and if you knew where to look in any big city—or any college town with pretensions to intellectual culture, for that matter—you could find at least one bar or bookstore or all-night coffee joint where the philosophy geeks hung out, and talked earnestly into the small hours about Kant or Kierkegaard. What’s more, that level of interest in the subject had been pretty standard in the Western world for a very long time.
We’ve come a long way since then, and not in a particularly useful direction. These days, if you hear somebody talk about philosophy in the media, it’s probably a scientific materialist like Neil deGrasse Tyson ranting about how all philosophy is nonsense. The occasional work of philosophical exegesis still gets a page or two in the New York Review of Books now and then, but popular interest in the subject has vanished, and more than vanished: the sort of truculent ignorance about philosophy displayed by Tyson and his many equivalents has become just as common among the chattering classes as a feigned interest in the subject was a half century in the past.
Like most human events, the decline of philosophy in modern times was overdetermined; like the victim in the murder-mystery paperback who was shot, strangled, stabbed, poisoned, whacked over the head with a lead pipe, and then shoved off a bridge to drown, there were more causes of death than the situation actually required. Part of the problem, certainly, was the explosive expansion of the academic industry in the US and elsewhere in the second half of the twentieth century. In an era when every state teachers’ college aspired to become a university and every state university dreamed of rivaling the Ivy League, a philosophy department was an essential status symbol. The resulting expansion of the field was not necessarily matched by an equivalent increase in genuine philosophers, but it was certainly followed by the transformation of university-employed philosophy professors into a professional caste which, as such castes generally do, defended its status by adopting an impenetrable jargon and ignoring or rebuffing attempts at participation from outside its increasingly airtight circle.
Another factor was the rise of the sort of belligerent scientific materialism exemplified, as noted earlier, by Neil deGrasse Tyson. Scientific inquiry itself is philosophically neutral—it’s possible to practice science from just about any philosophical standpoint you care to name—but the claim at the heart of scientific materialism, the dogmatic insistence that those things that can be investigated using scientific methods and explained by current scientific theory are the only things that can possibly exist, depends on arbitrary metaphysical postulates that were comprehensively disproved by philosophers more than two centuries ago. (We’ll get to those postulates and their problems later on.) Thus the ascendancy of scientific materialism in educated culture pretty much mandated the dismissal of philosophy.
There were plenty of other factors as well, most of them having no more to do with philosophy as such than the ones just cited. Philosophy itself, though, bears some of the responsibility for its own decline. Starting in the seventeenth century and reaching a crisis point in the nineteenth, western philosophy came to a parting of the ways—one that the philosophical traditions of other cultures reached long before it, with similar consequences—and by and large, philosophers and their audiences alike chose a route that led to its present eclipse. That choice isn’t irreparable, and there’s much to be gained by reversing it, but it’s going to take a fair amount of hard intellectual effort and a willingness to abandon some highly popular shibboleths to work back to the mistake that was made, and undo it.
To help make sense of what follows, a concrete metaphor might be useful. If you’re in a place where there are windows nearby, especially if the windows aren’t particularly clean, go look out through a window at the view beyond it. Then, after you’ve done this for a minute or so, change your focus and look at the window rather than through it, so that you see the slight color of the glass and whatever dust or dirt is clinging to it. Repeat the process a few times, until you’re clear on the shift I mean: looking through the window, you see the world; looking at the window, you see the medium through which you see the world—and you might just discover that some of what you thought at first glance was out there in the world was actually on the window glass the whole time.
That, in effect, was the great change that shook western philosophy to its foundations beginning in the seventeenth century. Up to that point, most philosophers in the western world started from a set of unexamined presuppositions about what was true, and used the tools of reasoning and evidence to proceed from those presuppositions to a more or less complete account of the world. They were into what philosophers call metaphysics: reasoned inquiry into the basic principles of existence. That’s the focus of every philosophical tradition in its early years, before the confusing results of metaphysical inquiry refocus attention from “What exists?” to “How do we know what exists?” Metaphysics then gives way to epistemology: reasoned inquiry into what human beings are capable of knowing.
That refocusing happened in Greek philosophy around the fourth century BCE, in Indian philosophy around the tenth century BCE, and in Chinese philosophy a little earlier than in Greece. In each case, philosophers who had been busy constructing elegant explanations of the world on the basis of some set of unexamined cultural assumptions found themselves face to face with hard questions about the validity of those assumptions. In terms of the metaphor suggested above, they were making all kinds of statements about what they saw through the window, and then suddenly realized that the colors they’d attributed to the world were being contributed in part by the window glass and the dust on it, the vast dark shape that seemed to be moving purposefully across the sky was actually a beetle walking on the outside of the window, and so on.
The same refocusing began in the modern world with Rene Descartes, who famously attempted to start his philosophical explorations by doubting everything. That’s a good deal easier said than done, as it happens, and to a modern eye, Descartes’ writings are riddled with unexamined assumptions, but the first attempt had been made and others followed. A trio of epistemologists from the British Isles—John Locke, George Berkeley, and David Hume—rushed in where Descartes feared to tread, demonstrating that the view from the window had much more to do with the window glass than it did with the world outside. The final step in the process was taken by the German philosopher Immanuel Kant, who subjected human sensory and rational knowledge to relentless scrutiny and showed that most of what we think of as “out there,” including such apparently hard realities as space and time, are actually artifacts of the processes by which we perceive things.
Look at an object nearby: a coffee cup, let’s say. You experience the cup as something solid and real, outside yourself: seeing it, you know you can reach for it and pick it up; and to the extent that you notice the processes by which you perceive it, you experience these as wholly passive, a transparent window on an objective external reality. That’s normal, and there are good practical reasons why we usually experience the world that way, but it’s not actually what’s going on.
What’s going on is that a thin stream of visual information is flowing into your mind in the form of brief fragmentary glimpses of color and shape. Your mind then assembles these together into the mental image of the coffee cup, using your memories of that and other coffee cups, and a range of other things as well, as a template onto which the glimpses can be arranged. Arthur Schopenhauer, about whom we’ll be talking a great deal as we proceed, gave the process we’re discussing the useful label of “representation;” when you look at the coffee cup, you’re not passively seeing the cup as it exists, you’re actively representing—literally re-presenting—an image of the cup in your mind.
There are certain special situations in which you can watch representation at work. If you’ve ever woken up in an unfamiliar room at night, and had a few seconds pass before the dark unknown shapes around you finally turned into ordinary furniture, you’ve had one of those experiences. Another is provided by the kind of optical illusion that can be seen as two different things. With a little practice, you can flip from one way of seeing the illusion to another, and watch the process of representation as it happens.
What makes the realization just described so challenging is that it’s fairly easy to prove that the cup as we represent it has very little in common with the cup as it exists “out there.” You can prove this by means of science: the cup “out there,” according to the evidence collected painstakingly by physicists, consists of an intricate matrix of quantum probability fields and ripples in space-time, which our senses systematically misperceive as a solid object with a certain color, surface texture, and so on. You can also prove this, as it happens, by sheer sustained introspection—that’s how Indian philosophers got there in the age of the Upanishads—and you can prove it just as well by a sufficiently rigorous logical analysis of the basis of human knowledge, which is what Kant did.
The difficulty here, of course, is that once you’ve figured this out, you’ve basically scuttled any chance at pursuing the kind of metaphysics that’s traditional in the formative period of your philosophical tradition. Kant got this, which is why he titled the most relentless of his analyses Prolegomena to Any Future Metaphysics; what he meant by this was that anybody who wanted to try to talk about what actually exists had better be prepared to answer some extremely difficult questions first.  When philosophical traditions hit their epistemological crises, accordingly, some philosophers accept the hard limits on human knowledge, ditch the metaphysics, and look for something more useful to do—a quest that typically leads to ethics, mysticism, or both. Other philosophers double down on the metaphysics and either try to find some way around the epistemological barrier, or simply ignore it, and this latter option is the one that most Western philosophers after Kant ended up choosing.  Where that leads—well, we’ll get to that later on.
For the moment, I want to focus a little more closely on the epistemological crisis itself, because there are certain very common ways to misunderstand it. One of them I remember with a certain amount of discomfort, because I made it myself in my first published book, Paths of Wisdom. This is the sort of argument that sees the sensory organs and the nervous system as the reason for the gap between the reality out there—the “thing in itself” (Ding an sich), as Kant called it—and the representation as we experience it. It’s superficially very convincing: the eye receives light in certain patterns and turns those into a cascade of electrochemical bursts running up the optic nerve, and the visual centers in the brain then fold, spindle, and mutilate the results into the image we see.
The difficulty? When we look at light, an eye, an optic nerve, a brain, we’re not seeing things in themselves, we’re seeing another set of representations, constructed just as arbitrarily in our minds as any other representation. Nietzsche had fun with this one: “What? and others even go so far as to say that the external world is the work of our organs? But then our body, as a piece of this external world, would be the work of our organs! But then our organs themselves would be—the work of our organs!” That is to say, the body is also a representation—or, more precisely, the body as we perceive it is a representation. It has another aspect, but we’ll get to that in a future post.
Another common misunderstanding of the epistemological crisis is to think that it’s saying that your conscious mind assembles the world, and can do so in whatever way it wishes. Not so. Look at the coffee cup again. Can you, by any act of consciousness, make that coffee cup suddenly sprout wings and fly chirping around your computer desk? Of course not. (Those who disagree should be prepared to show their work.) The crucial point here is that representation is neither a conscious activity nor an arbitrary one. Much of it seems to be hardwired, and most of the rest is learned very early in life—each of us spent our first few years learning how to do it, and scientists such as Jean Piaget have chronicled in detail the processes by which children gradually learn how to assemble the world into the specific meaningful shape their culture expects them to get. 
By the time you’re an adult, you do that instantly, with no more conscious effort than you’re using right now to extract meaning from the little squiggles on your computer screen we call “letters.” Much of the learning process, in turn, involves finding meaningful correlations between the bits of sensory data and weaving those into your representations—thus you’ve learned that when you get the bits of visual data that normally assemble into a coffee cup, you can reach for it and get the bits of tactile data that normally assemble into the feeling of picking up the cup, followed by certain sensations of movement, followed by certain sensations of taste, temperature, etc. corresponding to drinking the coffee.
That’s why Kant included the “thing in itself” in his account: there really does seem to be something out there that gives rise to the data we assemble into our representations. It’s just that the window we’re looking through might as well be a funhouse mirror:  it imposes so much of itself on the data that trickles through it that it’s almost impossible to draw firm conclusions about what’s “out there” from our representations.  The most we can do, most of the time, is to see what representations do the best job of allowing us to predict what the next series of fragmentary sensory images will include. That’s what science does, when its practitioners are honest with themselves about its limitations—and it’s possible to do perfectly good science on that basis, by the way.
It’s possible to do quite a lot intellectually on that basis, in fact. From the golden age of ancient Greece straight through to the end of the Renaissance, a field of scholarship that’s almost completely forgotten today—topics—was an important part of a general education, the kind of thing you studied as a matter of course once you got past grammar school. Topics is the study of those things that can’t be proved logically, but are broadly accepted as more or less true, and so can be used as “places” (in Greek, topoi) on which you can ground a line of argument. The most important of these are the commonplaces (literally, the common places or topoi) that we all use all the time as a basis for our thinking and speaking; in modern terms, we can think of them as “things on which a general consensus exists.” They aren’t truths; they’re useful approximations of truths, things that have been found to work most of the time, things to be set aside only if you have good reason to do so.
Science could have been seen as a way to expand the range of useful topoi. That’s what a scientific experiment does, after all: it answers the question, “If I do this, what happens?” As the results of experiments add up, you end up with a consensus—usually an approximate consensus, because it’s all but unheard of for repetitions of any experiment to get exactly the same result every time, but a consensus nonetheless—that’s accepted by the scientific community as a useful approximation of the truth, and can be set aside only if you have good reason to do so. To a significant extent, that’s the way science is actually practiced—well, when it hasn’t been hopelessly corrupted for economic or political gain—but that’s not the social role that science has come to fill in modern industrial society.
I’ve written here several times already about the trap into which institutional science has backed itself in recent decades, with the enthusiastic assistance of the belligerent scientific materialists mentioned earlier in this post. Public figures in the scientific community routinely like to insist that the current consensus among scientists on any topic must be accepted by the lay public without question, even when scientific opinion has swung around like a weathercock in living memory, and even when unpleasantly detailed evidence of the deliberate falsification of scientific data is tolerably easy to find, especially but not only in the medical and pharmaceutical fields. That insistence isn’t wearing well; nor does it help when scientific materialists insist—as they very often do—that something can’t exist or something else can’t happen, simply because current theory doesn’t happen to provide a mechanism for it.
Too obsessive a fixation on that claim to authority, and the political and financial baggage that comes with it, could very possibly result in the widespread rejection of science across the industrial world in the decades ahead. That’s not yet set in stone, and it’s still possible that scientists who aren’t too deeply enmeshed in the existing order of things could provide a balancing voice, and help see to it that a less doctrinaire understanding of science gets a voice and a public presence.
Doing that, though, would require an attitude we might as well call epistemic modesty: the recognition that the human capacity to know has hard limits, and that the unqualified absolute truth about most things is out of our reach. Socrates was called the wisest of the Greeks because he accepted the need for epistemic modesty, and recognized that he didn’t actually know much of anything for certain. That recognition didn’t keep him from being able to get up in the morning and go to work at his day job as a stonecutter, and it needn’t keep the rest of us from doing what we have to do as industrial civilization lurches down the trajectory toward a difficult future.
Taken seriously, though, epistemic modesty requires some serious second thoughts about certain very deeply ingrained presuppositions of the cultures of the West. Some of those second thoughts are fairly easy to reach, but one of the most challenging starts with a seemingly simple question: is there anything we experience that isn’t a representation? In the weeks ahead we’ll track that question all the way to its deeply troubling destination.

Perched on the Wheel of Time

Wed, 2017-02-01 19:08
There's a curious predictability in the comments I field in response to posts here that talk about the likely shape of the future. The conventional wisdom of our era insists that modern industrial society can’t possibly undergo the same life cycle of rise and fall as every other civilization in history; no, no, there’s got to be some unique future awaiting us—uniquely splendid or uniquely horrible, it doesn’t even seem to matter that much, so long as it’s unique. Since I reject that conventional wisdom, my dissent routinely fields pushback from those of my readers who embrace it.
That’s not surprising in the least, of course. What’s surprising is that the pushback doesn’t surface when the conventional wisdom seems to be producing accurate predictions, as it does now and then. Rather, it shows up like clockwork whenever the conventional wisdom fails.
The present situation is as good an example as any. The basis of my dissident views is the theory of cyclical history—the theory, first proposed in the early 18th century by the Italian historian Giambattista Vico and later refined and developed by such scholars as Oswald Spengler and Arnold Toynbee, that civilizations rise and fall in a predictable life cycle, regardless of scale or technological level. That theory’s not just a vague generalization, either; each of the major writers on the subject set out specific stages that appear in order, showed that these have occurred in all past civilizations, and made detailed, falsifiable predictions about how those stages can be expected to occur in our civilization. Have those panned out? So far, a good deal more often than not.
In the final chapters of his second volume, for example, Spengler noted that civilizations in the stage ours was about to reach always end up racked by conflicts that pit established hierarchies against upstart demagogues who rally the disaffected and transform them into a power base. Looking at the trends visible in his own time, he sketched out the most likely form those conflicts would take in the Winter phase of our civilization. Modern representative democracy, he pointed out, has no effective defenses against corruption by wealth, and so could be expected to evolve into corporate-bureaucratic plutocracies that benefit the affluent at the expense of everyone else. Those left out in the cold by these transformations, in turn, end up backing what Spengler called Caesarism—the rise of charismatic demagogues who challenge and eventually overturn the corporate-bureaucratic order.
These demagogues needn’t come from within the excluded classes, by the way. Julius Caesar, the obvious example, came from an old upper-class Roman family and parlayed his family connections into a successful political career. Watchers of the current political scene may be interested to know that Caesar during his lifetime wasn’t the imposing figure he became in retrospect; he had a high shrill voice, his morals were remarkably flexible even by Roman standards—the scurrilous gossip of his time called him “every man’s wife and every woman’s husband”—and he spent much of his career piling up huge debts and then wriggling out from under them. Yet he became the political standard-bearer for the plebeian classes, and his assassination by a conspiracy of rich Senators launched the era of civil wars that ended the rule of the old elite once and for all.
Thus those people watching the political scene last year who knew their way around Spengler, and noticed that a rich guy had suddenly broken with the corporate-bureaucratic consensus and called for changes that would benefit the excluded classes at the expense of the affluent, wouldn’t have had to wonder what was happening, or what the likely outcome would be. It was those who insisted on linear models of history—for example, the claim that the recent ascendancy of modern liberalism counted as the onward march of progress, and therefore was by definition irreversible—who found themselves flailing wildly as history took a turn they considered unthinkable.
The rise of Caesarism, by the way, has other features I haven’t mentioned. As Spengler sketches out the process, it also represents the exhaustion of ideology and its replacement by personality. Those of my readers who watched the political scene over the last few years may have noticed the way that the issues have been sidelined by sweeping claims about the supposed personal qualities of candidates. The practically content-free campaign that swept Barack Obama into the presidency in 2008—“Hope,” “Change,” and “Yes We Can” aren’t statements about issues, you know—was typical of this stage, as was the emergence of competing personality cults around the candidates in the 2016 election.  In the ordinary way of things, we can expect even more of this in elections to come, with messianic hopes clustering around competing politicians until the point of absurdity is well past. These will then implode, and the political process collapse into a raw scramble for power at any cost.
There’s plenty more in Spengler’s characterization of the politics of the Winter phase, and all of it’s well represented in today’s headlines, but the rest can be left to those of my readers interested enough to turn the pages of The Decline of the West for themselves. What I’d like to discuss here is the nature of the pushback I tend to field when I point out that yet again, predictions offered by Spengler and other students of cyclic history turned out to be correct and those who dismissed them turned out to be smoking their shorts. The responses I field are as predictable as—well, the arrival of charismatic demagogues at a certain point in the Winter phase, for example—and they reveal some useful glimpses into the value, or lack of it, of our society’s thinking about the future in this turn of the wheel.
Probably the most common response I get can best be characterized as simple incantation: that is to say, the repetition of some brief summary of the conventional wisdom, usually without a shred of evidence or argument backing it up, as though the mere utterance is enough to disprove all other ideas.   It’s a rare week when I don’t get at least one comment along these lines, and they divide up roughly evenly between those that insist that progress will inevitably triumph over all its obstacles, on the one hand, and those that insist that modern industrial civilization will inevitably crash to ruin in a sudden cataclysmic downfall on the other. I tend to think of this as a sort of futurological fundamentalism along the lines of “pop culture said it, I believe it, that settles it,” and it’s no more useful, or for that matter interesting, than fundamentalism of any other sort.
A little less common and a little more interesting is a second class of arguments, which insist that I can’t dismiss the possibility that something might pop up out of the blue to make things different this time around. As I pointed out very early on in the history of this blog, these are examples of the classic logical fallacy of argumentum ad ignorantiam, the argument from ignorance. They bring in some factor whose existence and relevance are unknown, and use that claim to insist that since the conventional wisdom can’t be disproved, it must be true.
Arguments from ignorance are astonishingly common these days. My readers may have noticed, for example, that every few years some new version of nuclear power gets trotted out as the answer to our species’ energy needs. From thorium fission plants to Bussard fusion reactors to helium-3 from the Moon, they all have one thing in common: nobody’s actually built a working example, and so it’s possible for their proponents to insist that their pet technology will lack the galaxy of technical and economic problems that have made every existing form of nuclear power uneconomical without gargantuan government subsidies. That’s an argument from ignorance: since we haven’t built one yet, it’s impossible to be absolutely certain that they’ll have the usual cascading cost overruns and the rest of it, and therefore their proponents can insist that those won’t happen this time. Prove them wrong!
More generally, it’s impressive how many people can look at the landscape of dysfunctional technology and failed promises that surrounds us today and still insist that the future won’t be like that. Most of us have learned already that upgrades on average have fewer benefits and more bugs than the programs they replace, and that products labeled “new and improved” may be new but they’re rarely improved; it’s starting to sink in that most new technologies are simply more complicated and less satisfactory ways of doing things that older technologies did at least as well at a lower cost.  Try suggesting this as a general principle, though, and I promise you that plenty of people will twist themselves mentally into pretzel shapes trying to avoid the implication that progress has passed its pull date.
Even so, there’s a very simple answer to all such arguments, though in the nature of such things it’s an answer that only speaks to those who aren’t too obsessively wedded to the conventional wisdom. None of the arguments from ignorance I’ve mentioned are new; all of them have been tested repeatedly by events, and they’ve failed. I’ve lost track of the number of times I’ve been told, for example, that the economic crisis du jour could lead to the sudden collapse of the global economy, or that the fashionable energy technology du jour could lead to a new era of abundant energy. No doubt they could, at least in theory, but the fact remains that they don’t. 
It so happens that there are good reasons why they don’t, varying from case to case, but that’s actually beside the point I want to make here. This particular version of the argument from ignorance is also an example of the fallacy the old logicians called petitio principii, better known as “begging the question.” Imagine, by way of counterexample, that someone were to post a comment saying, “Nobody knows what the future will be like, so the future you’ve predicted is as likely as any other.” That would be open to debate, since there’s some reason to think we can in fact predict some things about the future, but at least it would follow logically from the premise.  Still, I don’t think I’ve ever seen anyone make that claim. Nor have I ever seen anybody claim that since nobody knows what the future will be like, say, we can’t assume that progress is going to continue.
In practice, rather, the argument from ignorance is applied to discussions of the future in a distinctly one-sided manner. Predictions based on any point of view other than the conventional wisdom of modern popular culture are dismissed with claims that it might possibly be different this time, while predictions based on the conventional wisdom of modern popular culture are spared that treatment. That’s begging the question: covertly assuming that one side of an argument must be true unless it’s disproved, and that the other side can’t be true unless it’s proved.
Now a case can be made that we can in fact know quite a bit about the shape of the future, at least in its broad outlines. The heart of that case, as already noted, is that certain theories about the future do make accurate predictions, while others don’t. This in itself shows that history isn’t random—that there’s some structure to the flow of historical events that can be figured out by learning from the past, and that similar causes at work in similar situations will have similar outcomes. Apply that reasoning to any other set of phenomena, and you’ve got the ordinary, uncontroversial basis for the sciences. It’s only when it’s applied to the future that people balk, because it doesn’t promise them the kind of future they want.
The argument by incantation and the argument from ignorance make up most of the pushback I get. I’m pleased to say, though, that every so often I get an argument that’s considerably more original than these. One of those came in last week—tip of the archdruidical hat to DoubtingThomas—and it’s interesting enough that it deserves a detailed discussion.
DoubtingThomas began with the standard argument from ignorance, claiming that it’s always possible that something might possibly happen to disrupt the cyclic patterns of history in any given case, and therefore the cyclic theory should be dismissed no matter how many accurate predictions it scored. As we’ve already seen, this is handwaving, but let’s move on.  He went on from there to argue that much of the shape of history is defined by the actions of unique individuals such as Isaac Newton, whose work sends the world careening along entirely new and unpredicted paths. Such individuals have appeared over and over again in history, he pointed out, and was kind enough to suggest that my activities here on The Archdruid Report were, in a small way, another example of the influence of an individual on history. Given that reality, he insisted, a theory of history that didn’t take the actions of unique individuals into account was invalid.
Fair enough; let’s consider that argument. Does the cyclic theory of history fail to take the actions of unique individuals into account?
Here again, Oswald Spengler’s The Decline of the West is the go-to source, because he’s dealt with the sciences and arts to a much greater extent than other researchers into historical cycles. What he shows, with a wealth of examples drawn from the rise and fall of many different civilizations, is that the phenomenon DoubtingThomas describes is a predictable part of the cycles of history. In every generation, in effect, a certain number of geniuses will be born, but their upbringing, the problems that confront them, and the resources they will have available to solve those problems, are not theirs to choose. All these things are produced by the labors of other creative minds of the past and present, and are profoundly influenced by the cycles of history.
Let’s take Isaac Newton as an example. He happened to be born just as the scientific revolution was beginning to hit its stride, but before it had found its paradigm, the set of accomplishments on which all future scientific efforts would be directly or indirectly modeled. His impressive mathematical and scientific gifts thus fastened onto the biggest unsolved problem of the time—the relationship between the physics of moving bodies sketched out by Galileo and the laws of planetary motion discovered by Kepler—and resulted in the Principia Mathematica, which became the paradigm for the next three hundred years or so of scientific endeavor.
Had he been born a hundred years earlier, none of those preparations would have been in place, and the Principia Mathematica wouldn’t have been possible. Given the different cultural attitudes of the century before Newton’s time, in fact, he would almost certainly have become a theologian rather than a mathematician and physicist—as it was, he spent much of his career engaged in theology, a detail usually left out by the more hagiographical of his biographers—and he would be remembered today only by students of theological history. Had he been born a century later, equally, some other great scientific achievement would have provided the paradigm for emerging science—my guess is that it would have been Edmond Halley’s successful prediction of the return of the comet that bears his name—and Newton would have had the same sort of reputation that Carl Friedrich Gauss has today: famous in his field, sure, but a household name? Not a chance.
What makes the point even more precise is that every other civilization from which adequate records survive had its own paradigmatic thinker, the figure whose achievements provided a model for the dawning age of reason and for whatever form of rational thought became that age’s principal cultural expression. In the classical world, for example, it was Pythagoras, who invented the word “philosophy” and whose mathematical discoveries gave classical rationalism its central theme, the idea of an ideal mathematical order to which the hurly-burly of the world of appearances must somehow be reduced. (Like Newton, by the way, Pythagoras was more than half a theologian; it’s a common feature of figures who fill that role.)
To take the same argument to a far more modest level, what about DoubtingThomas’ claim that The Archdruid Report represents the act of a unique individual influencing the course of history? Here again, a glance at history shows otherwise. I’m a figure of an easily recognizable type, which shows up reliably as each civilization’s Age of Reason wanes and it begins moving toward what Spengler called the Second Religiosity, the resurgence of religion that inevitably happens in the wake of rationalism’s failure to deliver on its promises. At such times you get intellectuals who can communicate fluently on both sides of the chasm between rationalism and religion, and who put together syntheses of various kinds that reframe the legacies of the Age of Reason so that they can be taken up by emergent religious movements and preserved for the future.
In the classical world, for example, you got Iamblichus of Chalcis, who stepped into the gap between Greek philosophical rationalism and the burgeoning Second Religiosity of late classical times, and figured out how to make philosophy, logic, and mathematics appealing to the increasingly religious temper of his time. He was one of many such figures, and it was largely because of their efforts that the religious traditions that ended up taking over the classical world—Christianity to the north of the Mediterranean, and Islam to the south—got over their early anti-intellectual streak so readily and ended up preserving so much of the intellectual heritage of the past.
That sort of thing is a worthwhile task, and if I can contribute to it I’ll consider this life well spent. That said, there’s nothing unique about it. What’s more, it’s only possible and meaningful because I happen to be perched on this particular arc of the wheel of time, when our civilization’s Age of Reason is visibly crumbling and the Second Religiosity is only beginning to build up a head of steam. A century earlier or a century later, I’d have faced some different tasks.
All of this presupposes a relationship between the individual and human society that fits very poorly with the unthinking prejudices of our time. That’s something that Spengler grappled with in his book, too;  it’s going to take a long sojourn in some very unfamiliar realms of thought to make sense of what he had to say, but that can’t be helped.
We really are going to have to talk about philosophy, aren’t we? We’ll begin that stunningly unfashionable discussion next week.

How Great the Fall Can Be

Wed, 2017-01-25 13:43
While I type these words, an old Supertramp CD is playing in the next room. Those of my readers who belong to the same slice of an American generation I do will likely remember the words Roger Hodgson is singing just now, the opening line from “Fool’s Overture”:
“History recalls how great the fall can be...”
It’s an apposite quote for a troubled time.
Over the last year or so, in and among the other issues I’ve tried to discuss in this blog, the US presidential campaign has gotten a certain amount of air time. Some of the conversations that resulted generated a good deal more heat than light, but then that’s been true across the board since Donald Trump overturned the established certainties of American political life and launched himself and the nation on an improbable trajectory toward our current situation. Though the diatribes I fielded from various sides were more than occasionally tiresome, I don’t regret making the election a theme for discussion here, as it offered a close-up view of issues I’ve been covering for years now.
A while back on this blog, for example, I spent more than a year sketching out the process by which civilizations fall and dark ages begin, with an eye toward the next five centuries of North American history—a conversation that turned into my book Dark Age America. Among the historical constants I discussed in the posts and the book was the way that governing elites and their affluent supporters stop adapting their policies to changing political and economic conditions, and demand instead that political and economic conditions should conform to their preferred policies. That’s all over today’s headlines, as the governing elites of the industrial world cower before the furious backlash sparked by their rigid commitment to the failed neoliberal nostrums of global trade and open borders.
Another theme I discussed in the same posts and book was the way that science and culture in a civilization in decline become so closely identified with the interests of the governing elite that the backlash against the failed policies of the elite inevitably becomes a backlash against science and culture as well. We’ve got plenty of that in the headlines as well. According to recent news stories, for example, the Trump administration plans to scrap the National Endowment for the Arts, the National Endowment for the Humanities, and the Corporation for Public Broadcasting, and get rid of all the federal offices that study anthropogenic climate change.
Their termination with extreme prejudice isn’t simply a matter of pruning the federal bureaucracy, though that’s a factor. All these organizations display various forms of the identification of science and culture with elite values just discussed, and their dismantling will be greeted by cheers from a great many people outside the circles of the affluent, who have had more than their fill of patronizing lectures from their self-proclaimed betters in recent years. Will many worthwhile programs be lost, along with a great deal that’s less than worthwhile?  Of course.  That’s a normal feature of the twilight years of a civilization.
A couple of years before the sequence of posts on dark age America, for that matter, I did another series on the end of US global hegemony and the rough road down from empire. That sequence also turned into a book, Decline and Fall. In the posts and the book, I pointed out that one of the constants of the history of democratic societies—actual democracies, warts and all, as distinct from the imaginary “real democracy” that exists solely in rhetoric—is a regular cycle of concentration and diffusion of power. The ancient Greek historian Polybius, who worked it out in detail, called it anacyclosis.
A lot can be said about anacyclosis, but the detail that’s relevant just now is the crisis phase, when power has become so gridlocked among competing power centers that it becomes impossible for the system to break out of even the most hopelessly counterproductive policies. That ends, according to Polybius, when a charismatic demagogue gets into power, overturns the existing political order, and sets in motion a general free-for-all in which old alliances shatter and improbable new ones take shape. Does that sound familiar? In a week when union leaders emerged beaming from a meeting with the new president, while Democrats are still stoutly defending the integrity of the CIA, it should.
For that matter, one of the central themes of the sequence of posts and the book was the necessity of stepping back from global commitments that the United States can no longer afford to maintain. That’s happening, too, though it’s being covered up just now by a great deal of Trumped-up bluster about a massive naval expansion. (If we do get a 350-ship navy in the next decade, I’d be willing to bet that a lot of those ships will turn out to be inexpensive corvettes, like the ones the Russians have been using so efficiently as cruise missile platforms on the Caspian Sea.)  European politicians are squawking at top volume about the importance of NATO, which means in practice the continuation of a scheme that allows most European countries to push most of the costs of their own defense onto the United States, but the new administration doesn’t seem to be buying it.
Mind you, I’m far from enthusiastic about the remilitarization of Europe. Outside the brief interval of enforced peace following the Second World War, Europe has been a boiling cauldron of warfare since its modern cultures began to emerge out of the chaos of the post-Roman dark ages. Most of the world’s most devastating wars have been European in origin, and of course it escapes no one’s attention in the rest of the world that it was from Europe that hordes of invaders and colonizers swept over the entire planet from the sixteenth through the nineteenth centuries, as often as not leaving total devastation in their wake. In histories written a thousand years from now, Europeans will have the same sort of reputation that Huns and Mongols have today—and it’s only in the fond fantasies of those who think history has a direction that those days are definitely over.
It can’t be helped, though, for the fact of the matter is that the United States can no longer afford to foot the bill for the defense of other countries. Behind a facade of hallucinatory paper wealth, our nation is effectively bankrupt. The only thing that enables us to pay our debts now is the status of the dollar as the world’s reserve currency—this allows the Treasury to issue debt at a breakneck pace and never have to worry about the cost—and that status is trickling away as one country after another signs bilateral deals to facilitate trading in other currencies. Sooner or later, probably in the next two decades, the United States will be forced to default on its national debt, the way Russia did in 1998.  Before that happens, a great many currently overvalued corporations that support themselves by way of frantic borrowing will have done the same thing by way of the bankruptcy courts, and of course the vast majority of America’s immense consumer debt will have to be discharged the same way.
That means, among other things, that the extravagant lifestyles available to affluent Americans in recent decades will be going away forever in the not too distant future. That’s another point I made in Decline and Fall and the series of posts that became raw material for it. During the era of US global hegemony, the five per cent of our species who lived in the United States disposed of a third of the world’s raw materials and manufactured products and a quarter of its total energy production. That disproportionate share came to us via unbalanced patterns of exchange hardwired into the global economy, and enforced at gunpoint by the military garrisons we keep in more than a hundred countries worldwide. The ballooning US government, corporate, and consumer debt load of recent years was an attempt to keep those imbalances in place even as their basis in geopolitics trickled away. Now the dance is ending and the piper has to be paid.
There’s a certain bleak amusement to be had from the fact that one of the central themes of this blog not that many years back—“Collapse Now and Avoid the Rush”—has already passed its pull date. The rush, in case you haven’t noticed, is already under way. The fraction of US adults of working age who are permanently outside the work force is at an all-time high; so is the fraction of young adults who are living with their parents because they can’t afford to start households of their own. There’s good reason to think that the new administration’s trade and immigration policies may succeed in driving both those figures down, at least for a while, but of course there’ll be a price to be paid for that—and those industries and social classes that have profited most from the policies of the last thirty years, and threw their political and financial weight behind the Clinton campaign, will be first in line to pay it. Vae victis!*
More generally, the broader landscape of ideas this blog has tried to explore since its early days remains what it is. The Earth’s economically accessible reserves of fossil carbon dwindle day by day; with each year that passes, on average, the amount of coal, oil, and natural gas burnt exceeds the amount that’s discovered by a wider margin; the current temporary glut in the oil markets is waning so fast that analysts are predicting the next price spike as soon as 2018. Talk of transitioning away from fossil fuels to renewable energy, on the one hand, or nuclear power on the other, remains talk—I encourage anyone who doubts this to look up the amount of fossil fuels burnt each year over the last two decades and see if they can find a noticeable decrease in global fossil fuel consumption to match the much-ballyhooed buildout of solar and wind power.
The industrial world remains shackled to fossil fuels for most of its energy and all of its transportation fuel, for the simple reason that no other energy source in this end of the known universe provides the abundant, concentrated, and fungible energy supply that’s needed to keep our current lifestyles going. There was always an alternative—deliberately downshifting out of the embarrassing extravagance that counts for normal lifestyles in the industrial world these days, accepting more restricted ways of living in order to leave a better world for our descendants—but not enough people were willing to accept that alternative to make a difference while there was still a chance.
Meanwhile the other jaw of the vise that’s tightening around the future is becoming increasingly visible just now. In the Arctic, freak weather systems have sucked warm air up from lower latitudes and brought the normal process of winter ice formation to a standstill. In the Antarctic, the Larsen C ice shelf, until a few years ago considered immovable by most glaciologists, is in the process of loosing an ice sheet the size of Delaware into the Antarctic Ocean. I look out my window and see warm rain falling; here in the north central Appalachians, in January, it’s been most of a month since the thermometer last dipped below freezing. The new administration has committed itself to do nothing about anthropogenic climate change, but then, despite plenty of talk, the Obama administration didn’t do anything about it either.
There’s good reason for that, too. The only way to stop anthropogenic climate change in its tracks is to stop putting greenhouse gases into the atmosphere, and doing that would require the world to ground its airlines, turn its highways over to bicycles and oxcarts, and shut down every other technology that won’t be economically viable if it has to depend on the diffuse intermittent energy available from renewable sources. Does the political will to embrace such changes exist? Since I know of precisely three climate change scientists, out of thousands, who take their own data seriously enough to cut their carbon footprint by giving up air travel, it’s safe to say that the answer is “no.”
So, basically, we’re in for it.
The thing that fascinates me is that this is something I’ve been saying for the whole time this blog has been appearing. The window of opportunity for making a smooth transition to a renewable future slammed shut in the early 1980s, when majorities across the industrial world turned their backs on the previous decade’s promising initiatives toward sustainability, and bought into the triumphalist rhetoric of the Reagan-Thatcher counterrevolution instead. Since then, year after weary year, most of the green movement—with noble exceptions—has been long on talk and short on action.  Excuses for doing nothing and justifications for clinging to lifestyles the planet cannot support have proliferated like rabbits on Viagra, and most of the people who talked about sustainability at all took it for granted that the time to change course was still somewhere conveniently off in the future. That guaranteed that the chance to change course would slide steadily further back into the past.
There was another detail of the post-Seventies sustainability scene that deserves discussion, though, because it’s been displayed with an almost pornographic degree of nakedness in the weeks just past. From the early days of the peak oil movement in the late 1990s on, a remarkably large number of the people who talked eagerly about the looming crisis of our age seemed to think that its consequences would leave them and the people and things they cared about more or less intact. That wasn’t universal by any means; there were always some people who grappled with the hard realities that the end of the fossil fuel age was going to impose on their own lives; but all things considered, there weren’t that many, in comparison to all those who chattered amiably about how comfortable they’d be in their rural doomsteads, lifeboat communities, Transition Towns, et al.
Now, as discussed earlier in this post, we’ve gotten a very modest helping of decline and fall, and people who were enthusiastically discussing the end of the industrial age not that long ago are freaking out six ways from Sunday. If a relatively tame event like the election of an unpopular president can send people into this kind of tailspin, what are they going to do the day their paychecks suddenly turn out to be worth only half as much in terms of goods and services as before—a kind of event that’s already become tolerably common elsewhere, and could quite easily happen in this country as the dollar loses its reserve currency status?
What kinds of meltdowns are we going to get when internet service or modern health care get priced out of reach, or become unavailable at any price?  How are they going to cope if the accelerating crisis of legitimacy in this country causes the federal government to implode, the way the government of the Soviet Union did, and suddenly they’re living under cobbled-together regional governments that don’t have the money to pay for basic services? What sort of reaction are we going to see if the US blunders into a sustained domestic insurgency—suicide bombs going off in public places, firefights between insurgent forces and government troops, death squads from both sides rounding up potential opponents and leaving them in unmarked mass graves—or, heaven help us, all-out civil war?
This is what the decline and fall of a civilization looks like. It’s not about sitting in a cozy earth-sheltered home under a roof loaded with solar panels, living some close approximation of a modern industrial lifestyle, while the rest of the world slides meekly down the chute toward history’s compost bin, leaving you and yours untouched. It’s about political chaos—meaning that you won’t get the leaders you want, and you may not be able to count on the rule of law or even the most basic civil liberties. It’s about economic implosion—meaning that your salary will probably go away, your savings almost certainly won’t keep their value, and if you have gold bars hidden in your home, you’d better hope to Hannah that nobody ever finds out, or it’ll be a race between the local government and the local bandits to see which one gets to tie your family up and torture them to death, starting with the children, until somebody breaks and tells them where your stash is located.
It’s about environmental chaos—meaning that you and the people you care about may have many hungry days ahead as crazy weather messes with the harvests, and it’s by no means certain you won’t die early from some tropical microbe that’s been jarred loose from its native habitat to find a new and tasty home in you. It’s about rapid demographic contraction—meaning that you get to have the experience a lot of people in the Rust Belt have already, of walking past one abandoned house after another and remembering the people who used to live there, until they didn’t any more.
More than anything else, it’s about loss. Things that you value—things you think of as important, meaningful, even necessary—are going to go away forever in the years immediately ahead of us, and there will be nothing you can do about it.  It really is as simple as that. People who live in an age of decline and fall can’t afford to cultivate a sense of entitlement. Unfortunately, for reasons discussed at some length in one of last month’s posts, the notion that the universe is somehow obliged to give people what they think they deserve is very deeply engrained in American popular culture these days. That’s a very unwise notion to believe right now, and as we slide further down the slope, it could very readily become fatal—and no, by the way, I don’t mean that last adjective in a metaphorical sense.
History recalls how great the fall can be, Roger Hodgson sang. In our case, it’s shaping up to be one for the record books—and those of my readers who have worked themselves up to the screaming point about the comparatively mild events we’ve seen so far may want to save some of their breath for the times ahead when it’s going to get much, much worse.

*In colloquial English: “It sucks to lose.”

The Hate that Dare Not Speak its Name

Wed, 2017-01-18 15:12
As the United States stumbles toward the last act of its electoral process two days from now, and the new administration prepares to take over the reins of power from its feckless predecessor, the obligatory caterwauling of the losing side has taken on an unfamiliar shrillness. Granted, the behavior of both sides in the last few decades of American elections can be neatly summed up in the words “sore loser”; the Republicans in 1992 and 2008 behaved not one whit better than the Democrats in 1980 and 2000.  I think it’s fair, though, to say that the current example has plunged well past the low-water mark set by those dismal occasions. The question I’d like to discuss here is why that should be.
I think we can all admit that there are plenty of reasons why Americans might reasonably object to the policies and appointments of the incoming president, but the same thing has been true of every other president we’ve had since George Washington’s day. Equally, both of our major parties have long been enthusiastic practitioners of the fine art of shrieking in horror at the other side’s behavior, while blithely excusing the identical behavior on their side.  Had the election last November gone the other way, for example, we can be quite certain that all the people who are ranting about Donald Trump’s appointment of Goldman Sachs employees to various federal offices would be busy explaining how reasonable it was for Hillary Clinton to do exactly the same thing—as of course she would have.
That said, I don’t think reasonable differences of opinion on the one hand, and the ordinary hypocrisy of partisan politics on the other, explain the extraordinary stridency, the venom, and the hatred being flung at the incoming administration by its enemies. There may be many factors involved, to be sure, but I’d like to suggest that one factor in particular plays a massive role here.
To be precise, I think a lot of what we’re seeing is the product of class bigotry.
Some definitions are probably necessary here. We can define bigotry as the act of believing hateful things about all the members of a given category of people, just because they belong to that category. Thus racial bigots believe hateful things about everyone who belongs to races they don’t like, religious bigots do the same thing to every member of the religions they don’t like, and so on through the dismal chronicle of humanity’s collective nastiness.
Defining social class is a little more difficult to do in the abstract, as different societies draw up and enforce their class barriers in different ways. In the United States, though, the matter is made a good deal easier by the lack of a fully elaborated feudal system in our nation’s past, on the one hand, and on the other, the tolerably precise dependency of how much privilege you have in modern American society on how much money you make. Thus we can describe class bigotry in the United States, without too much inaccuracy, as bigotry directed against people who make either significantly more money than the bigot does, or significantly less. (Of course that’s not all there is to social class, not by a long shot, but for our present purposes, as an ostensive definition, it will do.)
Are the poor bigoted against the well-to-do? You bet. Bigotry directed up the social ladder, though, is far more than matched, in volume and nastiness, by bigotry directed down. It’s a source of repeated amusement to me that rich people in this country so often inveigh against the horrors of class warfare. Class warfare is their bread and butter. The ongoing warfare of the rich against the poor, and of the affluent middle and upper middle classes against the working class, create and maintain the vast disparities of wealth and privilege in contemporary American society. What upsets the rich and the merely affluent about class warfare, of course, is the thought that they might someday be treated the way they treat everyone else.
Until last year, if you wanted to experience the class bigotry that’s so common among the affluent classes in today’s America, you pretty much had to be a member of those affluent classes, or at least good enough at passing to be present at the social events where their bigotry saw free play. Since Donald Trump broke out of the Republican pack early last year, though, that hindrance has gone by the boards. Those who want to observe American class bigotry at its choicest need only listen to what a great many of the public voices of the well-to-do are saying about the people whose votes and enthusiasm have sent Trump to the White House.
You see, that’s a massive part of the reason a Trump presidency is so unacceptable to so many affluent Americans:  his candidacy, unlike those of all his rivals, was primarily backed by “those people.”
It’s probably necessary to clarify just who “those people” are. During the election, and even more so afterwards, the mainstream media here in the United States have seemingly been unable to utter the words “working class” without sticking the labels “white” in front and “men” behind. The resulting rhetoric seems to be claiming that the relatively small fraction of the American voting public that’s white, male, and working class somehow managed to hand the election to Donald Trump all by themselves, despite the united efforts of everyone else.
Of course that’s not what happened. A huge majority of white working class women also voted for Trump, for example.  So, according to exit polls, did about a third of Hispanic men and about a quarter of Hispanic women; so did varying fractions of other American minority voting blocs, with African-American voters (the least likely to vote for Trump) still putting something like fourteen per cent in his column. Add it all up, and you’ll find that the majority of people who voted for Trump weren’t white working class men at all—and we don’t even need to talk about the huge number of registered voters of all races and genders who usually turn out for Democratic candidates, but stayed home in disgust this year, and thus deprived Clinton of the turnout that could have given her the victory.
Somehow, though, pundits and activists who fly to their keyboards at a moment’s notice to denounce the erasure of women and people of color in any other context are eagerly cooperating in the erasure of women and people of color in this one case. What’s more, that same erasure went on continuously all through the campaign. Those of my readers who followed the media coverage of the race last year will recall confident proclamations that women wouldn’t vote for Trump because his words and actions had given offense to feminists, that Hispanics (or people of color in general) wouldn’t vote for Trump because social-justice activists denounced his attitudes toward illegal immigrants from Mexico as racist, and so on. The media took these proclamations as simple statements of fact—and of course that was one of the reasons media pundits were blindsided by Trump’s victory.
The facts of the matter are that a great many American women don’t happen to agree with feminists, nor do all people of color agree with the social-justice activists who claim to speak in their name. For that matter, may I point out to my fellow inhabitants of Gringostan that the terms “Hispanic” and “Mexican-American” are not synonyms? Americans of Hispanic descent trace their ancestry to many different nations of origin, each of which has its own distinctive culture and history, and they don’t form a single monolithic electoral bloc. (The Cuban-American community in Florida, to cite only one of the more obvious examples, very often votes Republican and played a significant role in giving that electoral vote-rich state to Trump.)
Behind the media-manufactured facade of white working class men as the cackling villains who gave the country to Donald Trump, in other words, lies a reality far more in keeping with the complexities of American electoral politics: a ramshackle coalition of many different voting blocs and interest groups, each with its own assortment of reasons for voting for a candidate feared and despised by the US political establishment and the mainstream media.  That coalition included a very large majority of the US working class in general, and while white working class voters of both genders were disproportionately more likely to have voted for Trump than their nonwhite equivalents, it wasn’t simply a matter of whiteness, or for that matter maleness.
It was, however, to a very great extent a matter of social class. This isn’t just because so large a fraction of working class voters generally backed Trump; it’s also because Trump saw this from the beginning, and aimed his campaign squarely at the working class vote. His signature red ball cap was part of that—can you imagine Hillary Clinton wearing so proletarian a garment without absurdity?—but, as I pointed out a year ago, so was his deliberate strategy of saying (and tweeting) things that would get the liberal punditocracy to denounce him. The tones of sneering contempt and condescension they directed at him were all too familiar to his working class audiences, who have been treated to the same tones unceasingly by their soi-disant betters for decades now.
Much of the pushback against Trump’s impending presidency, in turn, is heavily larded with that same sneering contempt and condescension—the unending claims, for example, that the only reason people could possibly have chosen to vote for Trump was because they were racist misogynistic morons, and the like. (These days, terms such as “racist” and “misogynistic,” in the mouths of the affluent, are as often as not class-based insults rather than objective descriptions of attitudes.) The question I’d like to raise at this point, though, is why the affluent don’t seem to be able to bring themselves to come right out and denounce Trump as the candidate of the filthy rabble. Why must they borrow the rhetoric of identity politics and twist it (and themselves) into pretzel shapes instead?
There, dear reader, hangs a tale.
In the aftermath of the social convulsions of the 1960s, the wealthy elite occupying the core positions of power in the United States offered a tacit bargain to a variety of movements for social change.  Those individuals and groups who were willing to give up the struggle to change the system, and settled instead for a slightly improved place within it, suddenly started to receive corporate and government funding, and carefully vetted leaders from within the movements in question were brought into elite circles as junior partners. Those individuals and groups who refused these blandishments were marginalized, generally with the help of their more compliant peers.
If you ever wondered, for example, why environmental groups such as the Sierra Club and Friends of the Earth changed so quickly from scruffy fire-breathing activists to slickly groomed and well-funded corporate enablers, well, now you know. Equally, that’s why mainstream feminist organizations by and large stopped worrying about the concerns of the majority of women and fixated instead on “breaking the glass ceiling”—that is to say, giving women who already belong to the privileged classes access to more privilege than they have already. The core demand placed on former radicals who wanted to cash in on the offer, though, was that they drop their demands for economic justice—and American society being what it is, that meant that they had to stop talking about class issues.
The interesting thing is that a good many American radicals were already willing to meet them halfway on that. The New Left of the 1960s, like the old Left of the between-the-wars era, was mostly Marxist in its theoretical underpinnings, and so was hamstrung by the mismatch between Marxist theory and one of the enduring realities of American politics. According to Marxist theory, socialist revolution is led by the radicalized intelligentsia, but it gets the muscle it needs to overthrow the capitalist system from the working classes. This is the rock on which wave after wave of Marxist activism has broken and gone streaming back out to sea, because the American working classes are serenely uninterested in taking up the world-historical role that Marxist theory assigns to them. All they want is plenty of full time jobs at a living wage.  Give them that, and revolutionary activists can bellow themselves hoarse without getting the least flicker of interest out of them.
Every so often, the affluent classes lose track of this, and try to force the working classes to put up with extensive joblessness and low pay, so that affluent Americans can pocket the proceeds. This never ends well.  After an interval, the working classes pick up whatever implement is handy—Andrew Jackson, the Grange, the Populist movement, the New Deal, Donald Trump—and beat the affluent classes about the head and shoulders with it until the latter finally get a clue. This might seem  promising for Marxist revolutionaries, but it isn’t, because the Marxist revolutionaries inevitably rush in saying, in effect, “No, no, you shouldn’t settle for plenty of full time jobs at a living wage, you should die by the tens of thousands in an orgy of revolutionary violence so that we can seize power in your name.” My readers are welcome to imagine the response of the American working class to this sort of rhetoric.
The New Left, like the other American Marxist movements before its time, thus had a bruising face-first collision with cognitive dissonance: its supposedly infallible theory said one thing, but the facts refused to play along and said something very different. For much of the Sixties and Seventies, New Left theoreticians tried to cope with this by coming up with increasingly Byzantine redefinitions of “working class” that excluded the actual working class, so that they could continue to believe in the inevitability and imminence of the proletarian revolution Marx promised them. Around the time that this effort finally petered out into absurdity, it was replaced by the core concept of the identity politics currently central to the American left: the conviction that the only divisions in American society that matter are those that have some basis in biology.
Skin color, gender, ethnicity, sexual orientation, disability—these are the divisions that the American left likes to talk about these days, to the exclusion of all other social divisions, and especially to the exclusion of social class.  Since the left has dominated public discourse in the United States for many decades now, those have become the divisions that the American right talks about, too. (Please note, by the way, the last four words in the paragraph above: “some basis in biology.” I’m not saying that these categories are purely biological in nature; every one of them is defined in practice by a galaxy of cultural constructs and presuppositions, and the link to biology is an ostensive category marker rather than a definition. I insert this caveat because I’ve noticed that a great many people go out of their way to misunderstand the point I’m trying to make here.)
Are the divisions listed above important when it comes to discriminatory treatment in America today? Of course they are—but social class is also important. It’s by way of the erasure of social class as a major factor in American injustice that we wind up in the absurd situation in which a woman of color who makes a quarter million dollars a year plus benefits as a New York stockbroker can claim to be oppressed by a white guy in Indiana who’s working three part time jobs at minimum wage with no benefits in a desperate effort to keep his kids fed, when the political candidates that she supports and the economic policies from which she profits are largely responsible for his plight.
In politics as in physics, every action produces an equal and opposite reaction, and so absurdities of the sort just described have kindled the inevitable blowback. The Alt-Right scene that’s attracted so much belated attention from politicians and pundits over the last year is in large part a straightforward reaction to the identity politics of the left. Without too much inaccuracy, the Alt-Right can be seen as a network of young white men who’ve noticed that every other identity group in the country is being encouraged to band together to further its own interests at their expense, and responded by saying, “Okay, we can play that game too.” So far, you’ve got to admit, they’ve played it with verve.
That said, on the off chance that any devout worshippers of the great god Kek happen to be within earshot, I have a bit of advice that I hope will prove helpful. The next time you want to goad affluent American liberals into an all-out, fist-pounding, saliva-spraying Donald Duck meltdown, you don’t need the Jew-baiting, the misogyny, the racial slurs, and the rest of it.  All you have to do is call them on their class privilege. You’ll want to have the popcorn popped, buttered, and salted first, though, because if my experience is anything to go by, you’ll be enjoying a world-class hissy fit in seconds.
I’d also like to offer the rest of my readers another bit of advice that, again, I hope will prove helpful. As Donald Trump becomes the forty-fifth president of the United States and begins to push the agenda that got him into the White House, it may be useful to have a convenient way to sort through the mix of signals and noise from the opposition. When you hear people raising reasoned objections to Trump’s policies and appointments, odds are that you’re listening to the sort of thoughtful dissent that’s essential to any semblance of democracy, and it may be worth taking seriously. When you hear people criticizing Trump and his appointees for doing the same thing his rivals would have done, or his predecessors did, odds are that you’re getting the normal hypocrisy of partisan politics, and you can roll your eyes and stroll on.
But when you hear people shrieking that Donald Trump is the illegitimate result of a one-night stand between Ming the Merciless and Cruella de Vil, that he cackles in Russian while barbecuing babies on a bonfire, that everyone who voted for him must be a card-carrying Nazi who hates the human race, or whatever other bit of over-the-top hate speech happens to be fashionable among the chattering classes at the moment—why, then, dear reader, you’re hearing a phenomenon as omnipresent and unmentionable in today’s America as sex was in Victorian England. You’re hearing the voice of class bigotry: the hate that dare not speak its name.

The Embarrassments of Chronocentrism

Wed, 2017-01-11 11:09
It's a curious thing, this attempt of mine to make sense of the future by understanding what’s happened in the past. One of the most curious things about it, at least to me, is the passion with which so many people insist that this isn’t an option at all. In any other context, “Well, what happened the last time someone tried that?” is one of the first and most obviously necessary questions to ask and answer—but heaven help you if you try to raise so straightforward a question about the political, economic, and social phenomena of the present day.
In previous posts here we’ve talked about thoughtstoppers of the “But it’s different this time!” variety, and some of the other means people these days use to protect themselves against the risk of learning anything useful from the hard-earned lessons of the past. This week I want to explore another, subtler method of doing the same thing. As far as I’ve been able to tell, it’s mostly an issue here in the United States, but here it’s played a remarkably pervasive role in convincing people that the only way to open a door marked PULL is to push on it long and hard enough.
It’s going to take a bit of a roundabout journey to make sense of the phenomenon I have in mind, so I’ll have to ask my readers’ forbearance for what will seem at first like several sudden changes of subject.
One of the questions I field tolerably often, when I discuss the societies that will rise after modern industrial civilization finishes its trajectory into history’s compost heap, is whether I think that consciousness evolves. I admit that until fairly recently, I was pretty much at a loss to know how to respond. It rarely took long to find out that the questioner wasn’t thinking about the intriguing theory Julian Jaynes raised in The Origin of Consciousness in the Breakdown of the Bicameral Mind, the Jungian conception Erich Neumann proposed in The Origins and History of Consciousness, or anything of the same kind. Nor, it turned out, was the question usually based on the really rather weird reinterpretations of evolution common in today’s pop-spirituality scene. Rather, it was political.
It took me a certain amount of research, and some puzzled emails to friends more familiar with current left-wing political jargon than I am, to figure out what was behind these questions. Among a good-sized fraction of American leftist circles these days, it turns out it’s become a standard credo that what drives the kind of social changes supported by the left—the abolition of slavery and segregation, the extension of equal (or more than equal) rights to an assortment of disadvantaged groups, and so on—is an ongoing evolution of consciousness, in which people wake up to the fact that things they’ve considered normal and harmless are actually intolerable injustices, and so decide to stop.
Those of my readers who followed the late US presidential election may remember Hillary Clinton’s furious response to a heckler at one of her few speaking gigs:  “We aren’t going back. We’re going forward.” Underlying that outburst is the belief system I’ve just sketched out: the claim that history has a direction, that it moves in a linear fashion from worse to better, and that any given political choice—for example, which of the two most detested people in American public life is going to become the nominal head of a nation in freefall ten days from now—not only can but must be flattened out into a rigidly binary decision between “forward” and “back.”
There’s no shortage of hard questions that could be brought to bear on that way of thinking about history, and we’ll get to a few of them a little later on, but let’s start with the simplest one: does history actually show any such linear movement in terms of social change?
It so happens that I’ve recently finished a round of research bearing on exactly that question, though I wasn’t thinking of politics or the evolution of consciousness when I launched into it. Over the last few years I’ve been working on a sprawling fiction project, a seven-volume epic fantasy titled The Weird of Hali, which takes the horror fantasy of H.P. Lovecraft and stands it on its head, embracing the point of view of the tentacled horrors and multiracial cultists Lovecraft liked to use as images of dread. (The first volume, Innsmouth, is in print in a fine edition and will be out in trade paper this spring, and the second, Kingsport, is available for preorder and will be published later this year.)
One of Lovecraft’s few memorable human characters, the intrepid dream-explorer Randolph Carter, has an important role in the fourth book of my series. According to Lovecraft, Carter was a Boston writer and esthete of the 1920s from a well-to-do family, who had no interest in women but a whole series of intimate (and sometimes live-in) friendships with other men, and decidedly outré tastes in interior decoration—well, I could go on. The short version is that he’s very nearly the perfect archetype of an upper-class gay man of his generation. (Whether Lovecraft intended this is a very interesting question that his biographers don’t really answer.) With an eye toward getting a good working sense of Carter’s background, I talked to a couple of gay friends, who pointed me to some friends of theirs, and that was how I ended up reading George Chauncey’s magisterial Gay New York: Gender, Urban Culture, and the Making of the Gay Male World, 1890-1940.
What Chauncey documents, in great detail and with a wealth of citations from contemporary sources, is that gay men in America had substantially more freedom during the first three decades of the twentieth century than they did for a very long time thereafter. While homosexuality was illegal, the laws against it had more or less the same impact on people’s behavior that the laws against smoking marijuana had in the last few decades of the twentieth century—lots of people did it, that is, and now and then a few of them got busted. Between the beginning of the century and the coming of the Great Depression, in fact, most large American cities had a substantial gay community with its own bars, restaurants, periodicals, entertainment venues, and social events, right out there in public.
Nor did the gay male culture of early twentieth century America conform to current ideas about sexual identity, or the relationship between gay culture and social class, or—well, pretty much anything else, really. A very large number of men who had sex with other men didn’t see that as central to their identity—there were indeed men who embraced what we’d now call a gay identity, but that wasn’t the only game in town by a long shot. What’s more, sex between men was by and large more widely accepted in the working classes than it was further up the social ladder. In turn-of-the-century New York, it was the working class gay men who flaunted the camp mannerisms and the gaudy clothing; upper- and middle-class gay men such as Randolph Carter had to be much more discreet.
So what happened? Did some kind of vast right-wing conspiracy shove the ebullient gay male culture of the early twentieth century into the closet? No, and that’s one of the more elegant ironies of this entire chapter of American cultural history. The crusade against the “lavender menace” (I’m not making that phrase up, by the way) was one of the pet causes of the same Progressive movement responsible for winning women the right to vote and breaking up the fabulously corrupt machine politics of late nineteenth century America. Unpalatable as that fact is in today’s political terms, gay men and lesbians weren’t forced into the closet in the 1930s by the right.  They were driven there by the left.
This is the same Progressive movement, remember, that made Prohibition a central goal of its political agenda, and responded to the total failure of the Prohibition project by refusing to learn the lessons of failure and redirecting its attentions toward banning less popular drugs such as marijuana. That movement was also, by the way, heavily intertwined with what we now call Christian fundamentalism. Some of my readers may have heard of William Jennings Bryan, the supreme orator of the radical left in late nineteenth century America, the man whose “Cross of Gold” speech became the great rallying cry of opposition to the Republican corporate elite in the decades before the First World War.  He also served on the prosecution team in the equally famous Scopes Monkey Trial, arguing the case against a schoolteacher who had dared to affirm in public Darwin’s theory of evolution.
The usual response of people on today’s left to such historical details—well, other than denying or erasing them, which is of course quite common—is to insist that this proves that Bryan et al. were really right-wingers. Not so; again, we’re talking about people who put their political careers on the line to give women the vote and weaken (however temporarily) the grip of corporate money on the US political system. The politics of the Progressive era didn’t assign the same issues to the categories “left” and “right” that today’s politics do, and so all sides in the sprawling political free-for-all of that time embraced some issues that currently belong to the left, others that belong to the right, and still others that have dropped entirely out of the political conversation since then.
I could go on, but let’s veer off in another direction instead. Here’s a question for those of my readers who think they’re well acquainted with American history. The Fifteenth Amendment, which granted the right to vote to all adult men in the United States irrespective of race, was ratified in 1870. Before then, did black men have the right to vote anywhere in the US?
Most people assume as a matter of course that the answer must be no—and they’re wrong. Until the passage of the Fifteenth Amendment, the question of who did and didn’t have voting rights was a matter for each state to decide for itself. Fourteen states either allowed free African-American men to vote in Colonial times or granted them that right when first organized. Later on, ten of them—Delaware in 1792, Kentucky in 1799, Maryland in 1801, New Jersey in 1807, Connecticut in 1814, New York in 1821, Rhode Island in 1822, Tennessee in 1834, North Carolina in 1835, and Pennsylvania in 1838—either denied free black men the vote or raised legal barriers that effectively kept them from voting. Four other states—Massachusetts, Vermont, New Hampshire, and Maine—gave free black men the right to vote in Colonial times and maintained that right until the Fifteenth Amendment made the whole issue moot. Those readers interested in the details can find them in The African American Electorate: A Statistical History by Hanes Walton Jr. et al., which devotes chapter 7 to the subject.
So what happened? Was there a vast right-wing conspiracy to deprive black men of the right to vote? No, and once again we’re deep in irony. The political movements that stripped free American men of African descent of their right to vote were the two great pushes for popular democracy in the early United States, the Democratic-Republican party under Thomas Jefferson and the Democratic party under Andrew Jackson. Read any detailed history of the nineteenth century United States and you’ll learn that before these two movements went to work, each state set a certain minimum level of personal wealth that citizens had to have in order to vote. Both movements forced through reforms in the voting laws, one state at a time, to remove these property requirements and give the right to vote to every adult white man in the state. What you won’t learn, unless you do some serious research, is that in many states these same reforms also stripped adult black men of their right to vote.
Try to explain this to most people on the leftward end of today’s American political spectrum, and you’ll likely end up with a world-class meltdown, because the Jeffersonian Democratic-Republicans and the Jacksonian Democrats, like the Progressive movement, embraced some causes that today’s leftists consider progressive, and others that they consider regressive. The notion that social change is driven by some sort of linear evolution of consciousness, in which people necessarily become “more conscious” (that is to say, conform more closely to the ideology of the contemporary American left) over time, has no room for gay-bashing Progressives and Jacksonian Democrats whose concept of democracy included a strict color bar. The difficulty, of course, is that history is full of Progressives, Jacksonian Democrats, and countless other political movements that can’t be shoehorned into the Procrustean bed of today’s political ideologies.
I could add other examples—how many people remember, for example, that environmental protection was a cause of the far right until the 1960s?—but I think the point has been made. People in the past didn’t divide up the political causes of their time into the same categories left-wing activists like to use today. It’s practically de rigueur for left-wing activists these days to insist that people in the past ought to have seen things in today’s terms rather than the terms of their own time, but that insistence just displays a bad case of chronocentrism.
Chronocentrism? Why, yes.  Most people nowadays are familiar with ethnocentrism, the insistence by members of one ethnic group that the social customs, esthetic notions, moral standards, and so on of that ethnic group are universally applicable, and that anybody who departs from those things is just plain wrong. Chronocentrism is the parallel insistence, on the part of people living in one historical period, that the social customs, esthetic notions, moral standards, and so on of that period are universally applicable, and that people in any other historical period who had different social customs, esthetic notions, moral standards, and so on should have known better.
Chronocentrism is pandemic in our time. Historians have a concept called “Whig history;” it got that moniker from a long line of English historians who belonged to the Whig, i.e., Liberal Party, and who wrote as though all of human history was to be judged according to how well it measured up to the current Liberal Party platform. Such exercises aren’t limited to politics, though; my first exposure to the concept of Whig history came via university courses in the history of science. When I took those courses—this was twenty-five years ago, mind you—historians of science were sharply divided between a majority that judged every scientific activity in every past society on the basis of how well it conformed to our ideas of science, and a minority that tried to point out just how difficult this habit made the already challenging task of understanding the ideas of past thinkers.
To my mind, the minority view in those debates was correct, but at least some of its defenders missed a crucial point. Whig history doesn’t exist to foster understanding of the past.  It exists to justify and support an ideological stance of the present. If the entire history of science is rewritten so that it’s all about how the currently accepted set of scientific theories about the universe rose to their present privileged status, that act of revision makes currently accepted theories look like the inevitable outcome of centuries of progress, rather than jerry-rigged temporary compromises kluged together to cover a mass of recalcitrant data—which, science being what it is, is normally a more accurate description.
In exactly the same sense, the claim that a certain set of social changes in the United States and other industrial countries in recent years result from the “evolution of consciousness,” unfolding on a one-way street from the ignorance of the past to a supposedly enlightened future, doesn’t help make sense of the complicated history of social change. It was never supposed to do that. Rather, it’s an attempt to backstop the legitimacy of a specific set of political agendas here and now by making them look like the inevitable outcome of the march of history. The erasure of the bits of inconvenient history I cited earlier in this essay is part and parcel of that attempt; like all linear schemes of historical change, it falsifies the past and glorifies the future in order to prop up an agenda in the present.
It needs to be remembered in this context that the word “evolution” does not mean “progress.” Evolution is adaptation to changing circumstances, and that’s all it is. When people throw around the phrases “more evolved” and “less evolved,” they’re talking nonsense, or at best engaging in a pseudoscientific way of saying “I like this” and “I don’t like that.” In biology, every organism—you, me, koalas, humpback whales, giant sequoias, pond scum, and all the rest—is equally the product of a few billion years of adaptation to the wildly changing conditions of an unstable planet, with genetic variation shoveling in diversity from one side and natural selection picking and choosing on the other. The habit of using the word “evolution” to mean “progress” is pervasive, and it’s pushed hard by the faith in progress that serves as an ersatz religion in our time, but it’s still wrong.
It’s entirely possible, in fact, to talk about the evolution of political opinion (which is of course what “consciousness” amounts to here) in strictly Darwinian terms. In every society, at every point in its history, groups of people are striving to improve the conditions of their lives by some combination of competition and cooperation with other groups. The causes, issues, and rallying cries that each group uses will vary from time to time as conditions change, and so will the relationships between groups—thus it was to the advantage of affluent liberals of the Progressive era to destroy the thriving gay culture of urban America, just as it was to the advantage of affluent liberals of the late twentieth century to turn around and support the struggle of gay people for civil rights. That’s the way evolution works in the real world, after all.
This sort of thinking doesn’t offer the kind of ideological support that activists of various kinds are used to extracting from various linear schemes of history. On the other hand, that difficulty is more than balanced by a significant benefit, which is that linear predictions inevitably fail, and so by and large do movements based on them. The people who agreed enthusiastically with Hillary Clinton’s insistence that “we aren’t going back, we’re going forward” are still trying to cope with the hard fact that their political agenda will be wandering in the wilderness for at least the next four years. Those who convince themselves that their cause is the wave of the future are constantly being surprised by the embarrassing discovery that waves inevitably break and roll back out to sea. It’s those who remember that history plays no favorites who have a chance of accomplishing their goals.

How Not To Write Like An Archdruid

Wed, 2017-01-04 11:59
Among the occasional amusements I get from writing these weekly essays are earnest comments from people who want to correct my writing style. I field one of them every month or so, and the latest example came in over the electronic transom in response to last week’s post. Like most of its predecessors, it insisted that there’s only one correct way to write for the internet, trotted out a set of canned rules that supposedly encapsulate this one correct way, and assumed as a matter of course that the only reason I didn’t follow those rules is that I’d somehow managed not to hear about them yet.
The latter point is the one I find most amusing, and also most curious. Maybe I’m naive, but it’s always seemed to me that if I ran across someone who was writing in a style I found unusual, the first thing I’d want to do would be to ask the author why he or she had chosen that stylistic option—because, you know, any writer who knows the first thing about his or her craft chooses the style he or she finds appropriate for any given writing project. I field such questions once in a blue moon, and I’m happy to answer them, because I do indeed have reasons for writing these essays in the style I’ve chosen for them. Yet it’s much more common to get the sort of style policing I’ve referenced above—and when that happens, you can bet your bottom dollar that what’s being pushed is the kind of stilted, choppy, dumbed-down journalistic prose that I’ve deliberately chosen not to write.
I’m going to devote a post to all this, partly because I write what I want to write about, the way I want to write about it, for the benefit of those who enjoy reading it; those who don’t are encouraged to remember that there are thousands of other blogs out there that they’re welcome to read instead. Partly, though, the occasional thudding of what Giordano Bruno called “the battering rams of infants, the catapults of error, the bombards of the inept, and the lightning flashes, thunder, and great tempests of the ignorant”—now there was a man who could write!—raises issues that are central to the occasional series of essays on education I’ve been posting here.
Accepting other people’s advice on writing is a risky business—and yes, that applies to this blog post as well as any other source of such advice. It’s by no means always true that “those who can, do; those who can’t, teach,” but when we’re talking about unsolicited writing advice on the internet, that’s the way to bet.  Thus it’s not enough for some wannabe instructor to tell you “I’ve taught lots of people” (taught them what?) or “I’ve helped lots of people” (to do what?)—the question you need to ask is what the instructor himself or herself has written and where it’s been published.
The second of those matters as much as the first. It so happens, for example, that a great many of the professors who offer writing courses at American universities publish almost exclusively in the sort of little literary quarterlies that have a circulation in three figures and pay contributors in spare copies. (It’s not coincidental that these days, most of the little literary quarterlies in question are published by university English departments.) There’s nothing at all wrong with that, if you dream of writing the sort of stories, essays, and poetry that populate little literary quarterlies.
If you want to write something else, though, it’s worth knowing that these little quarterlies have their own idiosyncratic literary culture. There was a time when the little magazines were one of the standard stepping stones to a successful writing career, but that time went whistling down the wind decades ago. Nowadays, the little magazines have gone one way, the rest of the publishing world has gone another, and many of the habits the little magazines encourage (or even require) in their writers will guarantee prompt and emphatic rejection slips from most other writing venues.
Different kinds of writing, in other words, have their own literary cultures and stylistic customs. In some cases, those can be roughly systematized in the form of rules. That being the case, is there actually some set of rules that are followed by everything good on the internet?
Er, that would be no. I’m by no means a fan of the internet, all things considered—I publish my essays here because most of the older venues I’d prefer no longer exist—but it does have its virtues, and one of them is the remarkable diversity of style to be found there. If you like stilted, choppy, dumbed-down journalistic prose of the sort my commenter wanted to push on me, why, yes, you can find plenty of it online. You can also find lengthy, well-argued essays written in complex and ornate prose, stream-of-consciousness pieces that out-beat the Beat generation, experimental writing of any number of kinds, and more. Sturgeon’s Law (“90% of everything is crap”) applies here as it does to every other human creation, but there are gems to be found online that range across the spectrum of literary forms and styles. No one set of rules applies.
Thus we can dismiss the antics of the style police out of hand. Let’s go deeper, though. If there’s no one set of rules that internet writing ought to follow, are there different rules for each kind of writing? Or are rules themselves the problem? This is where things get interesting.
One of the consistent mental hiccups of American popular culture is the notion that every spectrum consists solely of its two extremes, with no middle ground permitted, and that bit of paralogic gets applied to writing at least as often as to anything else. Thus you have, on the one hand, the claim that the only way to write well is to figure out what the rules are and follow them with maniacal rigidity; on the other, the claim that the only way to write well is to throw all rules into the trash can and let your inner genius, should you happen to have one of those on hand, spew forth the contents of your consciousness all anyhow onto the page. Partisans of those two viewpoints snipe at one another from behind rhetorical sandbags, and neither one of them ever manages more than a partial victory, because neither approach is particularly useful when it comes to the actual practice of writing.
By and large, when people write according to a rigidly applied set of rules—any rigidly applied set of rules—the result is predictable, formulaic, and trite, and therefore boring. By and large, when people write without paying any attention to rules at all, the result is vague, shapeless, and maundering, and therefore boring. Is there a third option? You bet, and it starts by taking the abandoned middle ground: in this case, learning an appropriate set of rules, and using them as a starting point, but departing from them wherever doing so will improve the piece you’re writing.
The set of rules I recommend, by the way, isn’t meant to produce the sort of flat PowerPoint verbiage my commenter insists on. It’s meant to produce good readable English prose, and the source of guidance I recommend to those who are interested in such things is Strunk and White’s deservedly famous The Elements of Style. Those of my readers who haven’t worked with it, who want to improve their writing, and who’ve glanced over what I’ve published and decided that they might be able to learn something useful from me, could do worse than to read it and apply it to their prose.
A note of some importance belongs here, though. There’s a thing called writer’s block, and it happens when you try to edit while you’re writing. I’ve read, though I’ve misplaced the reference, that neurologists have found that the part of the brain that edits and the part of the brain that creates are not only different, they conflict with one another.  If you try to use both of them at once, your brain freezes up in a fairly close neurological equivalent of the Blue Screen of Death, and you stop being able to write at all. That’s writer’s block. To avoid it, NEVER EDIT WHILE YOU’RE WRITING.
I mean that quite literally. Don’t even look at the screen if you can’t resist the temptation to second-guess the writing process. If you have to, turn the screen off, so you can’t even see what you’re writing. Eventually, with practice, you’ll learn to move smoothly back and forth between creative mode and editing mode, but if you don’t have a lot of experience writing, leave that for later. For now, just blurt it all out without a second thought, with all its misspellings and garbled grammar intact.
Then, after at least a few hours—or better yet, after a day or so—go back over the mess, cutting, pasting, adding, and deleting as needed, until you’ve turned it into nice clean text that says what you want it to say. Yes, we used to do that back before computers; the process is called “cut and paste” because it was done back then with a pair of scissors and a pot of paste, the kind with a little spatula mounted on the inside of the lid to help you spread the stuff; you’d cut out the good slices of raw prose and stick them onto a convenient sheet of paper, interspersed with handwritten or freshly typed additions. Then you sat down and typed your clean copy from the pasted-up mess thus produced. Now you know how to do it when the internet finally dries up and blows away. (You’re welcome.)
In the same way, you don’t try to write while looking up rules in Strunk & White. Write your piece, set it aside for a while, and then go over it with your well-worn copy of Strunk & White in hand, noting every place you broke one of the rules of style the book suggests you should follow. The first few times, as a learning exercise, you might consider rewriting the whole thing in accordance with those rules—but only the first few times. After that, make your own judgment call: is this a place where you should follow the rules, or is this a place where they need to be bent, broken, or trampled into the dust? Only you, dear reader-turned-writer, can decide.
A second important note deserves to be inserted at this point, though. The contemporary US public school system can be described without too much inaccuracy as a vast mechanism for convincing children that they can’t write. Rigid rules imposed for the convenience of educators rather than the good of the students, part of the industrial mass-production ethos that pervades public schools in this country, leave a great many graduates so bullied, beaten, and bewildered by bad pedagogy that the thought of writing something for anybody else to read makes them turn gray with fear. It’s almost as bad as the terror of public speaking the public schools also go out of their way to inflict, and it plays a comparable role in crippling people’s capacity to communicate outside their narrow circles of friends.
If you suffer from that sort of educational hangover, dear reader, draw a deep breath and relax. The bad grades and nasty little comments in red ink you got from Mrs. Melba McNitpick, your high school English teacher, are no reflection of your actual capacities as a writer. If you can talk, you can write—it’s the same language, after all. For that matter, even if you can’t talk, you may be able to write—there’s a fair number of people out there who are nonverbal for one reason or another, and can still make a keyboard dance.
The reason I mention this here is that the thought of making an independent judgment about when to follow the rules and when to break them fills a great many survivors of American public schools with dread. In far too many cases, students are either expected to follow the rules with mindless obedience and given bad grades if they fail to do so, or given no rules at all and then expected to conform to unstated expectations they have no way to figure out, and either of these forms of bad pedagogy leaves scars. Again, readers who are in this situation should draw a deep breath and relax; having left Mrs. McNitpick’s class, you’re not subject to her opinions any longer, and should ignore them utterly.
So how do you decide where to follow the rules and where to fold, spindle, and mutilate them? That’s where we walk through the walls and into the fire, because what guides you in your decisions regarding the rules of English prose is the factor of literary taste.
Rules can be taught, but taste can only be learned. Does that sound like a paradox? Au contraire, it simply makes the point that only you can learn, refine, and ripen your literary taste—nobody else can do it for you, or even help you to any significant extent—and your sense of taste is therefore going to be irreducibly personal. When it comes to taste, you aren’t answerable to Mrs. McNitpick, to me, to random prose trolls on the internet, or to anyone else. What’s more, you develop your taste for prose the same way you develop your taste for food: by trying lots of different things, figuring out what you like, and paying close attention to what you like, why you like it, and what differentiates it from the things you don’t like as much.
This is applicable, by the way, to every kind of writing, including those kinds at which the snobs among us turn up their well-sharpened noses. I don’t happen to be a fan of the kind of satirical gay pornography that Chuck Tingle has made famous, for example, but friends of mine who are tell me that in that genre, as in all others, there are books that are well written, books that are tolerable, and books that trip over certain overelongated portions of their anatomy and land face first in—well, let’s not go there, shall we? In the same way, if your idea of a good read is nineteenth-century French comedies of manners, you can find a similar spectrum extending from brilliance to bathos.
Every inveterate reader takes in a certain amount of what I call popcorn reading—the sort of thing that’s read once, mildly enjoyed, and then returned to the library, the paperback exchange, or whatever electronic Elysium e-books enter when you hit the delete button. That’s as inevitable as it is harmless. The texts that matter in developing your personal taste, though, are the ones you read more than once, and especially the ones you read over and over again. As you read these for the third or the thirty-third time, step back now and then from the flow of the story or the development of the argument, and notice how the writer uses language. Learn to notice the really well-turned phrases, the figures of speech that are so apt and unexpected that they seize your attention, the moments of humor, the plays on words, the  passages that match tone and pacing to the subject perfectly.
If you’ve got a particular genre in mind—no, let’s stop for a moment and talk about genre, shall we? Those of my readers who endured a normal public school education here in the US probably don’t know that this is pronounced ZHON-ruh (it’s a French word) and it simply means a category of writing. Satirical gay pornography is a genre. The comedy of manners is a genre. The serious contemporary literary novel is a genre.  So are mysteries, romance, science fiction, fantasy, and the list goes on. There are also nonfiction genres—for example, future-oriented social criticism, the genre in which nine of my books from The Long Descent to Dark Age America have their place. Each genre is an answer to the question, “I just read this and I liked it—where can I find something else more or less like it?”
Every genre has its own habits and taboos, and if you want to write for publication, you need to know what those are. That doesn’t mean you have to follow those habits and taboos with the kind of rigid obedience critiqued above—quite the contrary—but you need to know about them, so that when you break the rules you do it deliberately and skillfully, to get the results you want, rather than clumsily, because you didn’t know any better. It also helps to read the classics of the genre—the books that established those habits and taboos—and then go back and read books in the genre written before the classics, to get a sense of what possibilities got misplaced when the classics established the frame through which all later works in that genre would be read.
If you want to write epic fantasy, for example, don’t you dare stop with Tolkien—it’s because so many people stopped with Tolkien that we’ve got so many dreary rehashes of something that was brilliantly innovative in 1949, complete with carbon-copy Dark Lords cackling in chorus and the inevitable and unendearing quest to do something with the Magic McGuffin that alone can save blah blah blah. Read the stuff that influenced Tolkien—William Morris, E.R. Eddison, the Norse sagas, the Kalevala, Beowulf.  Then read something in the way of heroic epic that he probably didn’t get around to reading—the Ramayana, the Heike Monogatari, the Popol Vuh, or what have you—and think through what those have to say about the broader genre of heroic wonder tale in which epic fantasy has its place.
The point of this, by the way, isn’t to copy any of these things. It’s to develop your own sense of taste so that you can shape your own prose accordingly. Your goal, if you’re at all serious about writing, isn’t to write like Mrs. McNitpick, like your favorite author of satirical gay pornography or nineteenth-century French comedies of manners, or like me, but to write like yourself.
And that, to extend the same point more broadly, is the goal of any education worth the name. The word “education” itself comes from the Latin word educatio, from ex-ducere, “to lead out or bring out;” it’s about leading or bringing out the undeveloped potentials that exist inside the student, not shoving some indigestible bolus of canned information or technique down the student’s throat. In writing as in all other things that can be learned, that process of bringing out those undeveloped potentials requires the support of rules and examples, but those are means to an end, not ends in themselves—and it’s in the space between the rules and their inevitable exceptions, between the extremes of rigid formalism and shapeless vagueness, that the work of creation takes place.
That’s also true of politics, by the way—and the conventional wisdom of our time fills the same role there that the rules for bad internet prose do for writing. Before we can explore that, though, it’s going to be necessary to take on one of the more pervasive bad habits of contemporary thinking about the relationship between the present and the past. We’ll tackle that next week.
********************
In not wholly unrelated news, I’m pleased to announce that Merigan Tales, the anthology of short stories written by Archdruid Report readers set in the world of my novel Star’s Reach, is now in print and available for purchase from Founders House. Those of my readers who enjoyed Star’s Reach and the After Oil anthologies won’t want to miss it.

A Leap in the Dark

Wed, 2016-12-28 17:13
A few days from now, 2016 will have passed into the history books. I know a fair number of people who won’t mourn its departure, but it’s pretty much a given that the New Year celebrations here in the United States, at least, will demonstrate a marked shortage of enthusiasm for the arrival of 2017.
There’s good reason for that, and not just for the bedraggled supporters of Hillary Clinton’s failed and feckless presidential ambitions. None of the pressures that made 2016 a cratered landscape of failed hopes and realized nightmares have gone away. Indeed, many of them are accelerating, as the attempt to maintain a failed model of business as usual in the teeth of political, economic, and environmental realities piles blowback upon blowback onto the loading dock of the new year.
Before we get into that, though, I want to continue the annual Archdruid Report tradition and review the New Year’s predictions that I made at the beginning of 2016. Those of my readers who want to review the original post will find it here. Here’s the gist.
“Thus my core prediction for 2016 is that all the things that got worse in 2015 will keep on getting worse over the year to come. The ongoing depletion of fossil fuels and other nonrenewable resources will keep squeezing the global economy, as the real (i.e., nonfinancial) costs of resource extraction eat up more and more of the world’s total economic output, and this will drive drastic swings in the price of energy and commodities—currently those are still headed down, but they’ll soar again in a few years as demand destruction completes its work. The empty words in Paris a few weeks ago will do nothing to slow the rate at which greenhouse gases are dumped into the atmosphere, raising the economic and human cost of climate-related disasters above 2015’s ghastly totals—and once again, the hard fact that leaving carbon in the ground means giving up the lifestyles that depend on digging it up and burning it is not something that more than a few people will be willing to face.
“Meanwhile, the US economy will continue to sputter and stumble as politicians and financiers try to make up for ongoing declines in real (i.e., nonfinancial) wealth by manufacturing paper wealth at an even more preposterous pace than before, and frantic jerryrigging will keep the stock market from reflecting the actual, increasingly dismal state of the economy.  We’re already in a steep economic downturn, and it’s going to get worse over the year to come, but you won’t find out about that from the mainstream media, which will be full of the usual fact-free cheerleading; you’ll have to watch the rates at which the people you know are being laid off and businesses are shutting their doors instead.” 
It’s almost superfluous to point out that I called it. It’s been noted with much irritation by other bloggers in what’s left of the peak oil blogosphere that it takes no great talent to notice what’s going wrong, and point out that it’s just going to keep on heading the same direction. This I cheerfully admit—but it’s also relevant to note that this method produces accurate predictions. Meanwhile, the world-saving energy breakthroughs, global changes in consciousness, sudden total economic collapses, and other events that get predicted elsewhere year after weary year have been notable by their absence.
I quite understand why it’s still popular to predict these things: after all, they allow people to pretend that they can expect some future other than the one they’re making day after day by their own actions. Nonetheless, the old saying remains true—“if you always do what you’ve always done, you’ll always get what you’ve always gotten”—and I wonder how many of the people who spend each year daydreaming about the energy breakthroughs, changes in consciousness, economic collapses, et al, rather than coming to grips with the rising spiral of crises facing industrial civilization, really want to deal with the future that they’re storing up for themselves by indulging in this habit.
Let’s go on, though.  At the beginning of 2016, I also made four specific predictions, which I admitted at the time were long shots. One of those, specific prediction #3, was that the most likely outcome of the 2016 presidential election would be the inauguration of Donald Trump as President in January 2017. I don’t think I need to say much about that, as it’s already been discussed here at length.  The only thing I’d like to point out here is that much of the Democratic party seems to be fixated on finding someone or something to blame for the debacle, other than the stark incompetence of the Clinton campaign and the failure of Democrats generally to pay attention to anything outside the self-referential echo chambers of affluent liberal opinion. If they keep it up, it’s pretty much a given that Trump will win reelection in 2020.
The other three specific long-shot predictions didn’t pan out, at least not in the way that I anticipated, and it’s only fair—and may be helpful, as we head further into the unknown territory we call 2017—to talk about what didn’t happen, and why.
Specific prediction #1 was that the next tech bust would be under way by the end of 2016.  That’s happening, but not in the way I expected. Back in January I was looking at the maniacally overinflated stock prices of tech companies that have never made a cent in profit and have no meaningful plans to do so, and I expected a repeat of the “tech wreck” of 2000. The difficulty was simply that I didn’t take into account the most important economic shift between 2000 and 2016—the de facto policy of negative interest rates being pursued by the Federal Reserve and certain other central banks.
That policy’s going to get a post of its own one of these days, because it marks the arrival of a basic transformation in economic realities that’s as incomprehensible to neoliberal economists as it will be challenging to most of the rest of us. The point I want to discuss here, though, is a much simpler one. Whenever real interest rates are below zero, those elite borrowers who can get access to money on those terms are being paid to borrow.  Among many other things, this makes it a lot easier to stretch out the downward arc of a failing industry. Cheaper-than-free money is one of the main things that kept the fracking industry from crashing and burning from its own unprofitability once the price of oil plunged in 2014; there’s been a steady string of bankruptcies in the fracking industry and the production of oil from fracked wells has dropped steadily, but it wasn’t the crash many of us expected.
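For readers who want to see the arithmetic behind “being paid to borrow,” here is a minimal worked example; the numbers are purely illustrative and refer to no actual loan. The real interest rate is approximately the nominal rate minus the rate of inflation:

$$ r_{\text{real}} \approx r_{\text{nominal}} - \pi_{\text{inflation}} $$

If an elite borrower can raise money at a nominal 0.5% while inflation runs at 2%, the real rate is roughly -1.5%. Borrow a million dollars on those terms and you repay $1,005,000 a year later, but that repayment is worth only about $985,000 in today’s purchasing power; the borrower comes out ahead simply by borrowing, which is what makes it so easy to stretch out the decline of an unprofitable industry.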
The same thing is happening, in equally slow motion, with the current tech bubble. Real estate prices in San Francisco and other tech hotspots are sliding, overpaid tech employees are being systematically replaced by underpaid foreign workers, the numbers are looking uglier by the week, but the sudden flight of investment money that made the “tech wreck” so colorful sixteen years ago isn’t happening, because tech firms can draw on oceans of relatively cheap funding to turn the sudden popping of the tech bubble into the slow hiss of escaping air. That doesn’t mean that the boom-and-bust cycle has been cancelled—far from it—but it does mean that shoveling good money after bad has just become a lot easier. Exactly how that will impact the economy is a very interesting question that nobody just now knows how to answer.
Let’s move on.  Specific prediction #2 was that the marketing of what would inevitably be called “the PV revolution” would get going in a big way in 2016. Those of my readers who’ve been watching the peak oil scene for more than a few years know that ever since the concept of peak oil clawed its way back out of its long exile in the wilderness of the modern imagination, one energy source after another has been trotted out as the reason du jour why the absurdly extravagant lifestyles of today’s privileged classes can roll unhindered into the future.  I figured, based on the way that people in the mainstream environmentalist movement were closing ranks around renewables, that photovoltaic solar energy would be the next beneficiary of that process, and would take off in a big way as the year proceeded.
That this didn’t happen is not the fault of the solar PV industry or its cheerleaders in the green media. Naomi Oreskes’ strident insistence a while back that raising questions about the economic viability of renewable energy is just another form of climate denialism seems to have become the party line throughout the privileged end of the green left, and the industrialists are following suit. Elon Musk, whose entire industrial empire has been built on lavish federal subsidies, is back at the feed trough again, announcing a grandiose new plan to manufacture photovoltaic roof shingles; he’s far and away the most colorful of the would-be renewable-energy magnates, but others are elbowing their way toward the trough as well, seeking their own share of the spoils.
The difficulty here is twofold. First, the self-referential cluelessness of the Democratic party since the 2008 election has had the inevitable blowback—something like 1000 state and federal elective offices held by Democrats after that election are held by Republicans today—and the GOP’s traditional hostility toward renewable energy has put a lid on the increased subsidies that would have been needed to kick a solar PV feeding frenzy into the same kind of overdrive we’ve already seen with ethanol and wind. Solar photovoltaic power, like ethanol from corn, has a disastrously low energy return on energy invested—as Pedro Prieto and Charles Hall showed in their 2015 study of real-world data from Spain’s solar PV program, the EROEI on large-scale grid photovoltaic power works out in practice to less than 2.5—and so, like nuclear power, it’s only economically viable if it’s propped up by massive and continuing subsidies. Lacking those, the “PV revolution” is dead in the water.
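Since that EROEI figure carries a good deal of the weight in this argument, it may help to spell out what it measures. Energy return on energy invested is simply the ratio of the energy a system delivers over its working life to the energy it takes to build, operate, and maintain it:

$$ \text{EROEI} = \frac{E_{\text{delivered}}}{E_{\text{invested}}} $$

Taking the figure cited above at face value, an EROEI of 2.5 means that every unit of energy sunk into panels, mounts, inverters, and the rest returns about 2.5 units over the system’s lifetime, leaving only 1.5 units of net surplus to power everything else a society wants to do with that energy.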
The second point, though, is the more damaging.  The “recovery” after the 2008-2009 real estate crash was little more than an artifact of statistical manipulation, and even negative interest rates haven’t been able to get a heartbeat going in the economy’s prostrate body. As most economic measurements not subject to fiddling by the enthusiastic accountants of the federal government slide steadily downhill, the economic surplus needed to support any kind of renewables buildout at all is rapidly trickling away. Demand destruction is in the driver’s seat, and the one way of decreasing fossil fuel consumption that affluent environmentalists don’t want to talk about—conservation—is the only viable option just now.
Specific prediction #4 was that the Saudi regime in Arabia would collapse by the end of 2016. As I noted at the time, the replacement of the Saudi monarchy with some other form of government is for all practical purposes a done deal. Of the factors I cited then—the impending bankruptcy of a regime that survives only by buying off dissent with oil money, the military quagmires in Yemen, Syria, and Iraq that have the Saudi military and its foreign mercenaries bogged down inextricably, and the rest of it—none have gone away. Nor has the underlying cause, the ongoing depletion of the once-immense oil reserves that have propped up the Saudi state so far.
That said, as I noted back in January, it’s anyone’s guess what cascade of events will send the Saudi royal family fleeing to refuges overseas while mobs rampage through their abandoned palaces in Riyadh, and some combination of mid-level military officers and Muslim clerics piece together a provisional government in their absence. I thought that it was entirely possible that this would happen in 2016, and of course it didn’t. It’s possible at this point that the price of oil could rise fast enough to give the Saudi regime another lease on life, however brief. That said, the winds are changing across the Middle East; the Russian-Iranian alliance is in the ascendant, and the Saudis have very few options left. It will be interesting, in the sense of the apocryphal Chinese curse, to see how long they survive.
So that’s where we stand, as 2016 stumbles down the ramp into time’s slaughterhouse and 2017 prepares to take its place in the ragged pastures of history. What can we expect in the year ahead?
To some extent, I’ve already answered that question—but only to some extent. Most of the factors that drove events in 2016 are still in place, still pressing in the same direction, and “more of the same” is a fair description of the consequences. Day after day, the remaining fossil fuel reserves of a finite planet are being drawn down to maintain the extravagant and unsustainable lifestyles of the industrial world’s more privileged inmates. Those remaining reserves are increasingly dirty, increasingly costly to extract and process, increasingly laden with a witch’s brew of social, economic, and environmental costs that nobody anywhere is willing to make the fossil fuel industry cover, and those costs don’t go away just because they’re being ignored—they pile up in society, the economy, and the biosphere, producing the rising tide of systemic dysfunction that plays so large and unmentioned a role in daily life today.
Thus we can expect still more social turmoil, more economic instability, and more environmental blowback in 2017. The ferocious populist backlash against the economic status quo that stunned the affluent in Britain and America with the Brexit vote and Trump’s presidential victory respectively, isn’t going away until and unless the valid grievances of the working classes get heard and addressed by political establishments around the industrial world; to judge by examples so far, that’s unlikely to happen any time soon. At the same time, the mismatch between the lifestyles we can afford and the lifestyles that too many of us want to preserve remains immense, and until that changes, the global economy is going to keep on lurching from one crisis to another. Meanwhile the biosphere is responding to the many perturbations imposed on it by human stupidity in the way that systems theory predicts—with ponderous but implacable shifts toward new conditions, many of which don’t augur well for the survival of industrial society.
There are wild cards in the deck, though, and one of them is being played right now over the North Pole. As I write this, air temperatures over the Arctic ice cap are 50°F warmer than usual for this time of year. A destabilized jet stream is sucking masses of warm air north into the Arctic skies, while pushing masses of Arctic air down into the temperate zone. As a result, winter ice formation on the surface of the Arctic Ocean has dropped to levels that were apparently last seen before our species got around to evolving—and a real possibility exists, though it’s by no means a certainty yet, that next summer could see most of the Arctic Ocean free of ice.
Nobody knows what that will do to the global climate. The climatologists who’ve been trying to model the diabolically complex series of cascading feedback loops we call “global climate” have no clue—they have theories and computer models, but so far their ability to predict the rate and consequences of anthropogenic climate change has not exactly been impressive. (For what it’s worth, by the way, most of their computer models have turned out to be far too conservative in their predictions.) Nobody knows yet whether the soaring temperatures over the North Pole this winter are a fluke, a transitory phenomenon driven by the unruly transition between one climate regime and another, or the beginning of a recurring pattern that will restore the north coast of Canada to the conditions it had during the Miocene, when crocodiles sunned themselves on the warm beaches of northern Greenland. We simply don’t know.
In the same way, the populist backlash mentioned above is a wild card whose effects nobody can predict just now. The neoliberal economics that have been welded into place in the industrial world for the last thirty years have failed comprehensively, that’s clear enough.  The abolition of barriers to the flow of goods, capital, and population did not bring the global prosperity that neoliberal economists promised, and now the bill is coming due. The question is what the unraveling of the neoliberal system means for national economies in the years ahead.
There are people—granted, these are mostly neoliberal economists and those who’ve drunk rather too freely of the neoliberal koolaid—who insist that the abandonment of the neoliberal project will inevitably mean economic stagnation and contraction. There are those who insist that the abandonment of the neoliberal project will inevitably mean a return to relative prosperity here in the US, as offshored jobs are forced back stateside by tax policies that penalize imports, and the US balance of trade reverts to something a little closer to parity. The fact of the matter is that nobody knows what the results will be. Here as in Britain, voters faced with a choice between the perpetuation of an intolerable status quo and a leap in the dark chose the latter, and the consequences of that leap can’t be known in advance.
Other examples abound. The US president-elect has claimed repeatedly that the US under his lead will get out of the regime-change business and pursue a less monomaniacally militaristic foreign policy than the one it’s pursued under Bush and Obama, and would have pursued under Clinton. The end of the US neoconservative consensus is a huge change that will send shockwaves through the global political system. Another change, at least as huge, is the rise of Russia as a major player in the Middle East. Another? The remilitarization of Japan and its increasingly forceful pursuit of political and military alliances in East and South Asia. There are others. The familiar order of global politics is changing fast. What will the outcome be? Nobody knows.
As 2017 dawns, in a great many ways, modern industrial civilization has flung itself forward into a darkness where no stars offer guidance and no echoes tell what lies ahead. I suspect that when we look back at the end of this year, the predictable unfolding of ongoing trends will have to be weighed against sudden discontinuities that nobody anywhere saw coming.  We’re not discussing the end of the world, of course; we’re talking events like those that can be found repeated many times in the histories of other failing civilizations.  That said, my guess is that some of those discontinuities are going to be harsh ones.  Those who brace themselves for serious trouble and reduce their vulnerabilities to a brittle and dysfunctional system will be more likely to come through in one piece.
Those who are about to celebrate the end of 2016, in other words, might want to moderate their cheering when it’s over. It’s entirely possible that 2017 will turn out to be rather worse—despite which I hope that the readers of this blog, and the people they care about, will manage to have a happy New Year anyway.

A Season of Consequences

Wed, 2016-12-21 11:33
One of the many advantages of being a Druid is that you get to open your holiday presents four days early. The winter solstice—Alban Arthuan, to use one term for it in the old-fashioned Druid Revival traditions I practice—is one of the four main holy days of the Druid year. Though the actual moment of solstice wobbles across a narrow wedge of the calendar, the celebration traditionally takes place on December 21.  Yes, Druids give each other presents, hang up decorations, and enjoy as sumptuous a meal as resources permit, to celebrate the rekindling of light and hope in the season of darkness.
Come to think of it, I’m far from sure why people who don’t practice the Christian faith still celebrate Christmas rather than the solstice. It’s by no means necessary to believe in the Druid gods and goddesses to find the solstice relevant; a simple faith in orbital inclination is sufficient reason for the season, after all—and since a good many Christians in America these days are less than happy about what’s been done to their holy day, it seems to me that it would be polite to leave Christmas to them, have our celebrations four days earlier, and cover their shifts at work on December 25th in exchange for their covering ours on the 21st. (Back before my writing career got going, when I worked in nursing homes to pay the bills, my Christian coworkers and I did this as a matter of course; we also swapped shifts around Easter and the spring equinox. Religious pluralism has its benefits.)
Those of my readers who don’t happen to be Druids, but who are tempted by the prospect just sketched out, will want to be aware of a couple of details. For one thing, you won’t catch Druids killing a tree in order to stick it in their living room for a few weeks as a portable ornament stand and fire hazard. Druids think there should be more trees in the world, not fewer! A live tree or, if you must, an artificial one, would be a workable option, but a lot of Druids simply skip the tree altogether and hang ornaments on the mantel, or what have you.
Oh, and most of us don’t do Santa Claus. I’m not sure why Santa Claus is popular among Christians, for that matter, or among anyone else who isn’t a devout believer in the ersatz religion of Consumerism—which admittedly has no shortage of devotees just now. There was a time when Santa hadn’t yet been turned into a poorly paid marketing consultant to the toy industry; go back several centuries, and he was the Christian figure of St. Nicholas; and before then he may have been something considerably stranger. To those who know their way around the traditions of Siberian shamanism, certainly, the conjunction of flying reindeer and an outfit colored like the famous and perilous hallucinogenic mushroom Amanita muscaria is at least suggestive.
Still, whether he takes the form of salesman, saint, or magic mushroom, Druids tend to give the guy in the red outfit a pass. Solstice symbolism varies from one tradition of Druidry to another—like almost everything else among Druids—but in the tradition I practice, each of the Alban Gates (the solstices and equinoxes) has its own sacred animal, and the animal that corresponds to Alban Arthuan is the bear. If by some bizarre concatenation of circumstances Druidry ever became a large enough faith in America to attract the attention of the crazed marketing minions of consumerdom, you’d doubtless see Hallmark solstice cards for sale with sappy looking cartoon bears on them, bear-themed decorations in windows, bear ornaments to hang from the mantel, and the like.
While I could do without the sappy looking cartoons, I definitely see the point of bears as an emblem of the winter solstice, because there’s something about them that too often gets left out of the symbolism of Christmas and the like—though it used to be there, and relatively important, too. Bears are cute, no question; they’re warm and furry and cuddlesome, too; but they’re also, ahem, carnivores, and every so often, when people get sufficiently stupid in the vicinity of bears, the bears kill and eat them.
That is to say, bears remind us that actions have consequences.
I’m old enough that I still remember the days when the folk mythology surrounding Santa Claus had not quite shed the last traces of a similar reminder. According to the accounts of Santa I learned as a child, naughty little children ran a serious risk of waking up Christmas morning to find no presents at all, and a sorry little lump of coal in their stockings in place of the goodies they expected. I don’t recall any of my playmates having that happen to them, and it never happened to me—though I arguably deserved it rather more than once—but every child I knew took it seriously, and tried to moderate their misbehavior at least a little during the period after Thanksgiving. That detail of the legend may still survive here and there, for all I know, but you wouldn’t know it from the way the big guy in red is retailed by the media these days.
For that matter, the version I learned was a pale shadow of a far more unnerving original. In many parts of Europe, when St. Nicholas does the rounds, he’s accompanied by a frightening figure with various names and forms. In parts of Germany, Switzerland, and Austria, it’s Krampus—a hairy devil with goat’s horns and a long lolling tongue, who prances around with a birch switch in his hand and a wicker basket on his back. While the saint hands out presents to good children, Krampus is there for the benefit of the others; small-time junior malefactors can expect a thrashing with the birch switch, while the legend has it that the shrieking, spoiled little horrors at the far end of the naughty-child spectrum get popped into the wicker basket and taken away, and nobody ever hears from them again.
Yes, I know, that sort of thing’s unthinkable in today’s America, and I have no idea whether anyone still takes it with any degree of seriousness over in Europe. Those of my readers who find the entire concept intolerable, though, may want to stop for a moment and think about the context in which that bit of folk tradition emerged. Before fossil fuels gave the world’s industrial nations the temporary spate of abundance that they now enjoy, the coming of winter in the northern temperate zone was a serious matter. The other three seasons had to be full of hard work and careful husbandry, if you were going to have any particular likelihood of seeing spring before you starved or froze to death.
By the time the solstice came around, you had a tolerably good idea just how tight things were going to be by the time spring arrived and the first wild edibles showed up to pad out the larder a bit. The first pale gleam of dawn after the long solstice night was a welcome reminder that spring was indeed on its way, and so you took whatever stored food you could spare, if you could spare any at all, and turned it into a high-calorie, high-nutrient feast, to provide warm memories and a little additional nourishment for the bleak months immediately ahead.
In those days, remember, children who refused to carry their share of the household economy might indeed expect to be taken away and never be heard from again, though the taking away would normally be done by some combination of hunger, cold, and sickness, rather than a horned and hairy devil with a lolling tongue. Of course a great many children died anyway.  A failed harvest, a longer than usual winter, an epidemic, or the ordinary hazards of life in a nonindustrial society quite regularly put a burst of small graves in the nearest churchyard. It was nonetheless true that good children, meaning here those who paid attention, learned fast, worked hard, and did their best to help keep the household running smoothly, really did have a better shot at survival.
One of the most destructive consequences of the age of temporary abundance that fossil fuels gave to the world’s industrial nations, in turn, is the widespread conviction that consequences don’t matter—that it’s unreasonable, even unfair, to expect anyone to have to deal with the blowback from their own choices. That’s a pervasive notion these days, and its effects show up in an astonishing array of contexts throughout contemporary culture, but yes, it’s particularly apparent when it comes to the way children get raised in the United States these days.
The interesting thing here is that the children aren’t necessarily happy about that. If you’ve ever watched a child systematically misbehave in an attempt to get a parent to react, you already know that kids by and large want to know where the limits are. It’s the adults who want to give tests and then demand that nobody be allowed to fail them, who insist that everybody has to get an equal share of the goodies no matter how much or little they’ve done to earn them, and so on through the whole litany of attempts to erase the reality that actions have consequences.
That erasure goes very deep. Have you noticed, for example, that year after year, at least here in the United States, the Halloween monsters on public display get less and less frightening? These days, far more often than not, the ghosts and witches, vampires and Frankenstein’s monsters splashed over Hallmark cards and window displays in the late October monster ghetto have big goofy grins and big soft eyes. The wholesome primal terrors that made each of these things iconic in the first place—the presence of the unquiet dead, the threat of wicked magic, the ghastly vision of walking corpses, whether risen from the grave to drink your blood or reassembled and reanimated by science run amok—are denied to children, and saccharine simulacra are propped up in their places.
Here again, children aren’t necessarily happy about that. The bizarre modern recrudescence of the Victorian notion that children are innocent little angels tells me, if nothing else, that most adults must go very far out of their way to forget their own childhoods. Children aren’t innocent little angels; they’re fierce little animals, which is of course exactly what they should be, and they need roughly the same blend of gentleness and discipline that wolves use on their pups to teach them to moderate their fierceness and live in relative amity with the other members of the pack.  Being fierce, they like to be scared a little from time to time; that’s why they like to tell each other ghost stories, the more ghoulish the better, and why they run with lolling tongues toward anything that promises them a little vicarious blood and gore. The early twentieth century humorist Ogden Nash nailed it when he titled one of his poems “Don’t Cry, Darling, It’s Blood All Right.”
Traditional fairy tales delighted countless generations of children for three good and sufficient reasons. First of all, they’re packed full of wonderful events. Second, they’re positively dripping with gore, which as already noted is an instant attraction to any self-respecting child. Third, they’ve got a moral—which means, again, that they are about consequences. The selfish, cruel, and stupid characters don’t get patted on the head, given the same prize as everyone else, and shielded from the results of their selfishness, cruelty, and stupidity; instead, they get gobbled up by monsters, turned to stone by witches’ curses, or subjected to some other suitably grisly doom. It’s the characters who are honest, brave, and kind who go on to become King or Queen of Everywhere.
Such things are utterly unacceptable, according to the approved child-rearing notions of our day. Ask why this should be the case and you can count on being told that expecting a child to have to deal with the consequences of its actions decreases its self-esteem. No doubt that’s true, but this is another of those many cases where people in our society manage not to notice that the opposite of one bad thing is usually another bad thing. Is there such a thing as too little self-esteem? Of course—but there is also such a thing as too much self-esteem. In fact, we have a common and convenient English word for somebody who has too much self-esteem. That word is “jerk.”
The cult of self-esteem in contemporary pop psychology has thus produced a bumper crop of jerks in today’s America. I’m thinking here, among many other examples, of the woman who made the news a little while back by strolling right past the boarding desk at an airport, going down the ramp, and taking her seat on the airplane ahead of all the other passengers, just because she felt she was entitled to do so. When the cabin crew asked her to leave and wait her turn like everyone else, she ignored them; security was called, and she ignored them, too. They finally had to drag her down the aisle and up the ramp like a sack of potatoes, and hand her over to the police. I’m pleased to say she’s up on charges now.
That woman had tremendous self-esteem. She esteemed herself so highly that she was convinced that the rules that applied to everyone else surely couldn’t apply to her—and that’s normally the kind of attitude you can count on from someone whose self-esteem has gone up into the toxic-overdose range. Yet the touchstone of excessive self-esteem, the gold standard of jerkdom, is the complete unwillingness to acknowledge the possibility that actions have consequences and you might have to deal with those, whether you want to or not.
That sort of thing is stunningly common in today’s society. It was that kind of overinflated self-esteem that convinced affluent liberals in the United States and Europe that they could spend thirty years backing policies that pandered to their interests while slamming working people face first into the gravel, without ever having to deal with the kind of blowback that arrived so dramatically in the year just past. Now Britain is on its way out of the European Union, Donald Trump is mailing invitations to his inaugural ball, and the blowback’s not finished yet. Try to point this out to the people whose choices made that blowback inevitable, though, and if my experience is anything to go by, you’ll be ignored if you’re not shouted down.
On an even greater scale, of course, there’s the conviction on the part of an astonishing number of people that we can keep on treating this planet as a combination cookie jar to raid and garbage bin to dump wastes in, and never have to deal with the consequences of that appallingly shortsighted set of policies. That’s as true in large swathes of the allegedly green end of things, by the way, as it is among the loudest proponents of smokestacks and strip mines. I’ve long since lost track of the number of people I’ve met who insist loudly on how much they love the Earth and how urgent it is that “we” protect the environment, but who aren’t willing to make a single meaningful change in their own personal consumption of resources and production of pollutants to help that happen.
Consequences don’t go away just because we don’t want to deal with them. That lesson is being taught right now on low-lying seacoasts around the world, where streets that used to be well above the high tide line reliably flood with seawater when a high tide meets an onshore wind; it’s being taught on the ice sheets of Greenland and West Antarctica, which are moving with a decidedly un-glacial rapidity through a trajectory of collapse that hasn’t been seen since the end of the last ice age; it’s being taught in a hundred half-noticed corners of an increasingly dysfunctional global economy, as the externalized costs of technological progress pile up unnoticed and drag economic activity to a halt; and of course it’s being taught, as already noted, in the capitals of the industrial world, where the neoliberal orthodoxy of the last thirty years is reeling under the blows of a furious populist backlash.
It didn’t have to be learned that way. We could have learned it from Krampus or the old Santa Claus, the one who was entirely willing to leave a badly behaved child’s stocking empty on Christmas morning except for that single eloquent lump of coal; we could have learned it from the fairy tales that taught generations of children that consequences matter; we could have learned it from any number of other sources, given a little less single-minded a fixation on maximizing self-esteem right past the red line on the meter—but enough of us didn’t learn it that way, and so here we are.
I’d therefore like to encourage those of my readers who have young children in their lives to consider going out and picking up a good old-fashioned collection of fairy tales, by Charles Perrault or the Brothers Grimm, and use those in place of the latest mass-marketed consequence-free pap when it comes to storytelling time. The children will thank you for it, and so will everyone who has to deal with them in their adult lives. Come to think of it, those of my readers who don’t happen to have young children in their lives might consider doing the same thing for their own benefit, restocking their imaginations with cannibal giants and the other distinctly unmodern conveniences thereof, and benefiting accordingly.
And if, dear reader, you are ever tempted to climb into the lap of the universe and demand that it fork over a long list of goodies, and you glance up expecting to see the jolly and long-suffering face of Santa Claus beaming down at you, don’t be too surprised if you end up staring in horror at the leering yellow eyes and lolling tongue of Krampus instead, as he ponders whether you’ve earned a thrashing with the birch switch or a ride in the wicker basket—or perhaps the great furry face of the Solstice bear, the beast of Alban Arthuan, as she blinks myopically at you for a moment before she either shoves you from her lap with one powerful paw, or tears your arm off and gnaws on it meditatively while you bleed to death on the cold, cold ground.
Because the universe doesn’t care what you think you deserve. It really doesn’t—and, by the way, the willingness of your fellow human beings to take your wants and needs into account will by and large be precisely measured by your willingness to do the same for them.
And on that utterly seasonal note, I wish all my fellow Druids a wonderful solstice; all my Christian friends and readers, a very merry Christmas; and all my readers, whatever their faith or lack thereof, a rekindling of light, hope, and sanity in a dark and troubled time.

Why the Peak Oil Movement Failed

Wed, 2016-12-14 16:12
As I glance back across the trajectory of this blog over the last ten and a half years, one change stands out. When I began blogging in May of 2006, peak oil—the imminent peaking of global production of conventional petroleum, to unpack that gnomic phrase a little—was the central theme of a large, vocal, and tolerably well organized movement. It had its own visible advocacy organizations, it had national and international conferences, it had a small but noticeable presence in the political sphere, and it showed every sign of making its presence felt in the broader conversation of our time.
Today none of that is true. Of the three major peak oil organizations in the US, ASPO-USA—that’s the US branch of the Association for the Study of Peak Oil and Gas, for those who don’t happen to be fluent in acronym—is apparently moribund; Post Carbon Institute, while it still plays a helpful role from time to time as a platform for veteran peak oil researcher Richard Heinberg, has otherwise largely abandoned its former peak oil focus in favor of generic liberal environmentalism; and the US branch of the Transition organization, formerly the Transition Town movement, is spinning its wheels in a rut laid down years back. The conferences ASPO-USA once hosted in Washington DC, with congresscritters in attendance, stopped years ago, and an attempt to host a national conference in southern Pennsylvania fizzled after three years and will apparently not be restarted.
Ten years ago, for that matter, opinion blogs and news aggregators with a peak oil theme were all over the internet. Today that’s no longer the case, either. The fate of the two most influential peak oil sites, The Oil Drum and Energy Bulletin, is indicative. The Oil Drum simply folded, leaving its existing pages up as a legacy of a departed era. Energy Bulletin, for its part, was taken over by Post Carbon Institute and given a new name and theme as Resilience.org. It then followed PCI in its drift toward the already overcrowded environmental mainstream, replacing the detailed assessment of energy futures that was the staple fare of Energy Bulletin with the sort of uncritical enthusiasm for an assortment of vaguely green causes more typical of the pages of Yes! Magazine.
There are still some peak oil sites soldiering away—notably Peak Oil Barrel, under the direction of former Oil Drum regular Ron Patterson.  There are also a handful of public figures still trying to keep the concept in circulation, with the aforementioned Richard Heinberg arguably first among them. Aside from those few, though, what was once a significant movement is for all practical purposes dead. The question that deserves asking is simple enough: what happened?
One obvious answer is that the peak oil movement was the victim of its own failed predictions. It’s true, to be sure, that failed predictions were a commonplace of the peak oil scene. It wasn’t just the overenthusiastic promoters of alternative energy technologies, who year after year insisted that the next twelve months would see their pet technology leap out of its current obscurity to make petroleum a fading memory; it wasn’t just their exact equivalents, the overenthusiastic promoters of apocalyptic predictions, who year after year insisted that the next twelve months would see the collapse of the global economy, the outbreak of World War III, the imposition of a genocidal police state, or whatever other sudden cataclysm happened to have seized their fancy.
No, the problem with failed predictions ran straight through the movement, even—or especially—in its more serious manifestations. The standard model of the future accepted through most of the peak oil scene started from a set of inescapable facts and an unexamined assumption, and the combination of those things produced consistently false predictions. The inescapable facts were that the Earth is finite, that it contains a finite supply of petroleum, and that various lines of evidence showed conclusively that global production of conventional petroleum was approaching its peak for hard geological reasons, and could no longer keep increasing thereafter.
The unexamined assumption was that geological realities rather than economic forces would govern how fast the remaining reserves of conventional petroleum would be extracted. On that basis, most people in the peak oil movement assumed that as production peaked and began to decline, the price of petroleum would rise rapidly, placing an increasingly obvious burden on the global economy. The optimists in the movement argued that this, in turn, would force nations around the world to recognize what was going on and make the transition to other energy sources, and to the massive conservation programs that would be needed to deal with the gap between the cheap abundant energy that petroleum used to provide and the more expensive and less abundant energy available from other sources. The pessimists, for their part, argued that it was already too late for such a transition, and that industrial civilization would come apart at the seams.
As it turned out, though, the unexamined assumption was wrong. Geological realities imposed, and continue to impose, upper limits on global petroleum production, but economic forces have determined how much less than those upper limits would actually be produced. What happened, as a result, is that when oil prices spiked in 2007 and 2008, and then again in 2014 and 2015, consumers cut back on their use of petroleum products, while producers hurried to bring marginal petroleum sources such as tar sands and oil shales into production to take advantage of the high prices. Both those steps drove prices back down. Low prices, in turn, encouraged consumers to use more petroleum products, and forced producers to shut down marginal sources that couldn’t turn a profit when oil was less than $80 a barrel; both these steps, in turn, sent prices back up.
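For readers who like to see a feedback loop laid out explicitly, here is a minimal sketch of that cycle in Python. Every number in it is invented for illustration; it is not a forecast and not anyone’s actual model, just a toy that shows how a price threshold, elastic demand, and slow-moving marginal supply can chase each other in circles.

```python
# A deliberately crude toy model of the boom-and-bust cycle described above.
# Every number here is invented for illustration; this is not a forecast and
# not anyone's published model, just the feedback loop made explicit.

def simulate_cycle(years=20, price=60.0):
    """Watch price, demand, and marginal supply chase one another."""
    demand, marginal_supply = 100.0, 10.0
    for year in range(years):
        # High prices suppress demand and lure marginal producers in;
        # low prices do the reverse. The $80 threshold echoes the rough
        # break-even figure mentioned in the text.
        if price > 80:
            demand *= 0.97            # consumers cut back
            marginal_supply *= 1.15   # tar sands, oil shales, etc. ramp up
        else:
            demand *= 1.02            # cheap fuel encourages more use
            marginal_supply *= 0.85   # unprofitable sources shut down
        # Price responds, sluggishly, to the balance of demand and supply.
        total_supply = 95.0 + marginal_supply
        price *= (demand / total_supply) ** 0.5
        print(f"year {year:2d}: price ${price:6.2f}  "
              f"demand {demand:6.1f}  marginal supply {marginal_supply:5.1f}")

simulate_cycle()
```

Run it and the price sags, climbs past the threshold, overshoots, and slumps back down; the same seesaw the paragraph above describes, minus every real-world complication.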
That doesn’t mean that peak oil has gone away. As oilmen like to say, depletion never sleeps; each time the world passes through the cycle just described, the global economy takes another body blow, and the marginal petroleum sources cost much more to extract and process than the light sweet crude on which the oil industry used to rely. The result, though, is that instead of a sudden upward zoom in prices that couldn’t be ignored, we’ve gotten wild swings in commodity prices, political and social turmoil, and a global economy stuck in creeping dysfunction that stubbornly refuses to behave the way it did when petroleum was still cheap and abundant. The peak oil movement wasn’t prepared for that future.
Granting all this, failed predictions aren’t enough by themselves to stop a movement in its tracks. Here in the United States, especially, we’ve got an astonishing tolerance for predictive idiocy. The economists who insisted that neoliberal policies would surely bring prosperity, for example, haven’t been laughed into obscurity by the mere fact that they were dead wrong; au contraire, they’re still drawing their paychecks and being taken seriously by politicians and the media. The pundits who insisted at the top of their lungs that Britain wouldn’t vote for Brexit and Donald Trump couldn’t possibly win the US presidency are still being taken seriously, too. Nor, to move closer to the activist fringes, has the climate change movement been badly hurt by the embarrassingly linear models of imminent doom it used to deploy with such abandon; the climate change movement is in deep trouble, granted, but its failure has other causes.
It was the indirect impacts of those failed predictions, rather, that helped run the peak oil movement into the ground. The most important of these, to my mind, was the way that those predictions encouraged people in the movement to put their faith in the notion that sometime very soon, governments and businesses would have to take peak oil seriously. That’s what inspired ASPO-USA, for example, to set up a lobbying office in Washington DC with a paid executive director, when the long-term funding for such a project hadn’t yet been secured. On another plane, that’s what undergirded the entire strategy of the Transition Town movement in its original incarnation: get plans drawn up and officially accepted by as many town governments as possible, so that once the arrival of peak oil becomes impossible to ignore, the plan for what to do about it would already be in place.
Of course the difficulty in both cases was that the glorious day of public recognition never arrived. The movement assumed that events would prove its case in the eyes of the general public and the political system alike, and so made no realistic plans about what to do if that didn’t happen. When it didn’t happen, in turn, the movement was left twisting in the wind.
The conviction that politicians, pundits, and the public would be forced by events to acknowledge the truth about peak oil had other consequences that helped hamstring the movement. Outreach to the vast majority that wasn’t yet on board the peak oil bandwagon, for example, got far too little attention or funding. Early on in the movement, several books meant for general audiences—James Howard Kunstler’s The Long Emergency and Richard Heinberg’s The Party’s Over are arguably the best examples—helped lay the foundations for a more effective outreach program, but the organized followup that might have built on those foundations never really happened. Waiting on events took the place of shaping events, and that’s almost always a guarantee of failure.
One particular form of waiting on events that took a particularly steep toll on the movement was its attempts to get funding from wealthy donors. I’ve been told that Post Carbon Institute got itself funded in this way, while as far as I know, ASPO-USA never did. Win or lose, though, begging for scraps at the tables of the rich is a sucker’s game.  In social change as in every other aspect of life, who pays the piper calls the tune, and the rich—who benefit more than anyone else from business as usual—can be counted on to defend their interest by funding only those activities that don’t seriously threaten the continuation of business as usual. Successful movements for social change start by taking effective action with the resources they can muster by themselves, and build their own funding base by attracting people who believe in their mission strongly enough to help pay for it.
There were other reasons why the peak oil movement failed, of course. To its credit, it managed to avoid two of the factors that ran the climate change movement into the ground, as detailed in the essay linked above—it never became a partisan issue, mostly because no political party in the US was willing to touch it with a ten foot pole, and the purity politics that insists that supporters of one cause are only acceptable in its ranks if they also subscribe to a laundry list of other causes never really got a foothold outside of certain limited circles. Piggybacking—the flipside of purity politics, which demands that no movement be allowed to solve one problem without solving every other problem as well—was more of a problem, and so, in a big way, was pandering to the privileged—I long ago lost track of the number of times I heard people in the peak oil scene insist that this or that high-end technology, which was only affordable by the well-to-do, was a meaningful response to the coming of peak oil.
There are doubtless other reasons as well; it’s a feature of all things human that failure is usually overdetermined. At this point, though, I’d like to set that aside for a moment and consider two other points. The first is that the movement didn’t have to fail the way it did. The second is that it could still be revived and gotten back on a more productive track.
To begin with, not everyone in the peak oil scene bought into the unexamined assumption I’ve critiqued above. Well before the movement started running itself into the ground, some of us pointed out that economic factors were going to have a massive impact on the rates of petroleum production and consumption—my first essay on that theme appeared here in April of 2007, and I was far from the first person to notice it. The movement by that time was so invested in its own predictions, with their apparent promise of public recognition and funding, that those concerns didn’t have an impact at the time. Even when the stratospheric oil price spike of 2008 was followed by a bust, though, peak oil organizations by and large don’t seem to have reconsidered their strategies. A mid-course correction at that point, wrenching though it might have been, could have kept the movement alive.
There were also plenty of good examples of effective movements for social change from which useful lessons could have been drawn. One difficulty is that you won’t find such examples in today’s liberal environmental mainstream, which for all practical purposes hasn’t won a battle since Richard Nixon signed the Clean Air Act. The struggle for the right to same-sex marriage, as I’ve noted before, is quite another matter—a grassroots movement that, despite sparse funding and strenuous opposition, played a long game extremely well and achieved its goal. There are other such examples, on both sides of today’s partisan divide, from which useful lessons can be drawn. Pay attention to how movements for change succeed and how they fail, and it’s not hard to figure out how to play the game effectively. That could have been done at any point in the history of the peak oil movement. It could still be done now.
Like same-sex marriage, after all, peak oil isn’t inherently a partisan issue. Like same-sex marriage, it offers plenty of room for compromise and coalition-building. Like same-sex marriage, it’s a single issue, not a fossilized total worldview like those that play so large and dysfunctional a role in today’s political nonconversations. A peak oil movement that placed itself squarely in the abandoned center of contemporary politics, played both sides against each other, and kept its eyes squarely on the prize—educating politicians and the public about the reality of finite fossil fuel reserves, and pushing for projects that will mitigate the cascading environmental and economic impacts of peak oil—could do a great deal to  reshape our collective narrative about energy and, in the process, accomplish quite a bit to make the long road down from peak oil less brutal than it will otherwise be.
I’m sorry to say that the phrase “peak oil,” familiar and convenient as it is, probably has to go.  The failures of the movement that coalesced around that phrase were serious and visible enough that some new moniker will be needed for the time being, to avoid being tarred with a well-used brush. The crucial concept of net energy—the energy a given resource provides once you subtract the energy needed to extract, process, and use it—would have to be central to the first rounds of education and publicity; since it’s precisely equivalent to profit, a concept most people grasp quickly enough, that’s not necessarily a hard thing to accomplish, but it has to be done, because it’s when the concept of net energy is solidly understood that such absurdities as commercial fusion power appear in their true light.
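Since the concept of net energy would have to carry so much of the educational weight, it may be worth spelling out the arithmetic in the simplest possible terms. The sketch below uses made-up numbers rather than data about any real resource; the point is only to show how the same gross output can amount to a bonanza or a boondoggle depending on how much energy has to be spent along the way.

```python
# Net energy treated as profit: a toy comparison using invented figures.
# Units are arbitrary "energy units"; none of these numbers describe any
# actual resource. They only show why the subtraction matters.

def net_energy(gross_output, extraction_cost, processing_cost, delivery_cost):
    """Return the energy left for society, plus the return on energy invested."""
    invested = extraction_cost + processing_cost + delivery_cost
    net = gross_output - invested
    eroei = gross_output / invested   # energy returned on energy invested
    return net, eroei

# A hypothetical easy resource: cheap to extract, little processing needed.
print(net_energy(100, 3, 2, 5))    # -> (90, 10.0)

# A hypothetical marginal resource: same gross output, far higher energy costs.
print(net_energy(100, 30, 25, 5))  # -> (40, roughly 1.67)
```

The second source still delivers the same hundred units at the wellhead, but most of that energy goes straight back into getting and processing it, which is why a resource with an energy return barely above one contributes far less to the society extracting it than the gross figures suggest.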
It probably has to be said up front that no such project will keep the end of the industrial age from being an ugly mess. That’s already baked into the cake at this point; what were once problems to be solved have become predicaments that we can, at best, only mitigate. Nor could a project of the sort I’ve very roughly sketched out here expect any kind of overnight success. It would have to play a long game in an era when time is running decidedly short. Challenging? You bet—but I think it’s a possibility worth serious consideration.
***********************
In other news, I’m delighted to announce the appearance of two books that will be of interest to readers of this blog. The first is Dmitry Orlov’s latest, Shrinking the Technosphere: Getting a Grip on the Technologies that Limit Our Autonomy, Self-Sufficiency, and Freedom. It’s a trenchant and thoughtful analysis of the gap between the fantasies of human betterment through technological progress and the antihuman mess that’s resulted from the pursuit of those fantasies, and belongs on the same shelf as Theodore Roszak’s Where the Wasteland Ends: Politics and Transcendence in Postindustrial Society and my After Progress: Religion and Reason in the Twilight of the Industrial Age. Copies hot off the press can be ordered from New Society here.
Meanwhile, Space Bats fans will want to know that the anthology of short stories and novellas set in the world of my novel Star’s Reach is now available for preorder from Founders House here. Merigan Tales is a stellar collection, as good as any of the After Oil anthologies, and fans of Star’s Reach won’t want to miss it.

The Fifth Side of the Triangle

Wed, 2016-12-07 11:27
One of the things I’ve had occasion to notice, over the course of the decade or so I’ve put into writing these online essays, is the extent to which repeating patterns in contemporary life go unnoticed by the people who are experiencing them. I’m not talking here about the great cycles of history, which take long enough to roll over that a certain amount of forgetfulness can be expected; the repeating patterns I have in mind come every few years, and yet very few people seem to notice the repetition.
An example that should be familiar to my readers is the way that, until recently, one energy source after another got trotted out on the media and the blogosphere as the excuse du jour for doing nothing about the ongoing depletion of global fossil fuel reserves. When this blog first got under way in 2006, ethanol from corn was the excuse; then it was algal biodiesel; then it was nuclear power from thorium; then it was windfarms and solar PV installations; then it was oil and gas from fracking. In each case, the same rhetorical handwaving about abundance was deployed for the same purpose, the same issues of net energy and concentration were evaded, and the resource in question never managed to live up to the overblown promises made in its name—and yet any attempt to point out the similarities got blank looks and the inevitable refrain, “but this is different.”
The drumbeat of excuses du jour has slackened a bit just now, and that’s also part of a repeating pattern that doesn’t get anything like the scrutiny it deserves. Starting when conventional petroleum production worldwide reached its all-time plateau, in the first years of this century, the price of oil has jolted up and down in a multiyear cycle. The forces driving the cycle are no mystery: high prices encourage producers to bring marginal sources online, but they also decrease demand; the excess inventories of petroleum that result drive down prices; low prices encourage consumers to use more, but they also cause marginal sources to be shut down; the shortfalls of petroleum that result drive prices up, and round and round the mulberry bush we go.
We’re just beginning to come out of the trough following the 2015 price peak, and demand is even lower than it would otherwise be, due to cascading troubles in the global economy. Thus, for the moment, there’s enough petroleum available to supply everyone who can afford to buy it. If the last two cycles are anything to go by, though, oil prices will rise unsteadily from here, reaching a new peak in 2021 or so before slumping down into a new trough. How many people are paying attention to this, and using the current interval of relatively cheap energy to get ready for another period of expensive energy a few years from now? To judge from what I’ve seen, not many.
Just at the moment, though, the example of repetition that comes first to my mind has little to do with energy, except in a metaphorical sense. It’s the way that people committed to a cause—any cause—are so often so flustered when initial successes are followed by something other than repeated triumph forever. Now of course part of the reason that’s on my mind is the contortions still ongoing on the leftward end of the US political landscape, as various people try to understand (or in some cases, do their level best to misunderstand) the implications of last month’s election. Still, that’s not the only reason this particular pattern keeps coming to mind.
I’m also thinking of it as the Eurozone sinks deeper and deeper into political crisis. The project of European unity had its initial successes, and a great many European politicians and pundits seem to have convinced themselves that of course those would be repeated step by step, until a United States of Europe stepped out on the international stage as the world’s next superpower. It’s pretty clear at this point that nothing of the sort is going to happen, because those initial successes were followed by a cascade of missteps and a populist backlash that’s by no means reached its peak yet.
More broadly, the entire project of liberal internationalism that’s guided the affairs of the industrial world since the Berlin Wall came down is in deep trouble. It’s been enormously profitable for the most affluent 20% or so of the industrial world’s population, which is doubtless a core reason why that same 20% insists so strenuously that no other options are possible, but it’s been an ongoing disaster for the other 80% or so, and they are beginning to make their voices heard.
At the heart of the liberal project was the insistence that economics should trump politics—that the free market should determine policy in most matters, leaving governments only an administrative function. Of course that warm and cozy abstraction “the free market” meant in practice the kleptocratic corporate socialism of too-big-to-fail banks and subsidy-guzzling multinationals, which proceeded to pursue their own short-term benefit so recklessly that they’ve driven entire countries into the ground. That’s brought about the inevitable backlash, and the proponents of liberal internationalism are discovering to their bafflement that if enough of the electorate is driven to the wall, the political sphere may just end up holding the Trump card after all.
And of course the same bafflement is on display in the wake of last month’s presidential election, as a great many people who embraced our domestic version of the liberal internationalist idea were left dumbfounded by its defeat at the hands of the electorate—not just by those who voted for Donald Trump, but also by the millions who stayed home and drove Democratic turnout in the 2016 election down to levels disastrously low for Hillary Clinton’s hopes. A great many of the contortions mentioned above have been driven by the conviction on the part of Clinton’s supporters that their candidate’s defeat was caused by a rejection of the ideals of contemporary American liberalism. That some other factor might have been involved is not, at the moment, something many of them are willing to hear.
That’s where the repeating pattern comes in, because movements for social change—whether they come from the grassroots or the summits of power—are subject to certain predictable changes, and if those changes aren’t recognized and countered in advance, they lead to the kind of results I’ve just been discussing. There are several ways to talk about those changes, but the one I’d like to use here unfolds, in a deliberately quirky way, from the Hegelian philosophy of history.
That probably needs an explanation, and indeed an apology, because Georg Wilhelm Friedrich Hegel has been responsible for more sheer political stupidity than any other thinker of modern times. Across the bloodsoaked mess that was the twentieth century, from revolutionary Marxism in its opening years to Francis Fukuyama’s risible fantasy of the End of History in its closing, where you found Hegelian political philosophy, you could be sure that someone was about to make a mistaken prediction.
It may not be entirely fair to blame Hegel personally for this. His writings and lectures are vast heaps of cloudy abstraction in which his students basically had to chase down inkblot patterns of their own making. Hegel’s great rival Arthur Schopenhauer used to insist that Hegel was a deliberate fraud, stringing together meaningless sequences of words in the hope that his readers would mistake obscurity for profundity, and more than once—especially when slogging through the murky prolixities of Hegel’s The Phenomenology of Spirit—I’ve suspected that the old grouch of Frankfurt was right. Still, we can let that pass, because a busy industry of Hegelian philosophers spent the last century and a half churning out theories of their own based, to one extent or another, on Hegel’s vaporings, and it’s this body of work that most people mean when they talk about Hegelian philosophy.
At the core of most Hegelian philosophies of history is a series of words that used to be famous, and still has a certain cachet in some circles: thesis, antithesis, synthesis. (Hegel himself apparently never used those terms in their later sense, but no matter.) That’s the three-step dance to the music of time that, in the Hegelian imagination, shapes human history. You’ve got one condition of being, or state of human consciousness, or economic system, or political system, or what have you; it infallibly generates its opposite; the two collide, and then there’s a synthesis which resolves the initial contradiction. Then the synthesis becomes a thesis, generates its own antithesis, a new synthesis is born, and so on.
One of the oddities about Hegelian philosophies of history is that, having set up this repeating process, their proponents almost always insist that it’s about to stop forever. In the full development of the Marxist theory of history, for example, the alternation of thesis-antithesis-synthesis starts with the primordial state of primitive communism and then chugs merrily, or rather far from merrily, through a whole series of economic systems, until finally true communism appears—and then that’s it; it’s the synthesis that never becomes a thesis and never conjures up an antithesis. In exactly the same way, Fukuyama’s theory of the end of history argued that all history until 1991 or so was a competition between different systems of political economy, of which liberal democratic capitalism and totalitarian Marxism were the last two contenders; capitalism won, Marxism lost, game over.
Now of course that’s part of the reason that Hegelianism so reliably generates false predictions, because in the real world it’s never game over; there’s always another round to play. There’s another dimension of Hegelian mistakenness, though, because the rhythm of the dialectic implies that the gains of one synthesis are never lost. Each synthesis becomes the basis for the next struggle between thesis and antithesis out of which a new synthesis emerges—and the new synthesis is always supposed to embody the best parts of the old.
This is where we move from orthodox Hegelianism to the quirky alternative I have in mind. It didn’t emerge out of the profound ponderings of serious philosophers of history in some famous European university. It first saw the light in a bowling alley in suburban Los Angeles, and the circumstances of its arrival—which, according to the traditional account, involved the miraculous appearance of a dignified elderly chimpanzee and the theophany of a minor figure from Greek mythology—suggest that prodigious amounts of drugs were probably involved.
Yes, we’re talking about Discordianism.
I’m far from sure how many of my readers are familiar with that phenomenon, which exists somewhere on the ill-defined continuum between deadpan put-on and serious philosophical critique. The short form is that it was cooked up by a couple of young men on the fringes of the California Beat scene right as that was beginning its mutation into the first faint adumbrations of the hippie phenomenon. Its original expression was the Principia Discordia, the scripture (more or less) of a religion (more or less) that worships (more or less) Eris, the Greek goddess of chaos, and its central theme is the absurdity of belief systems that treat orderly schemes cooked up in the human mind as though these exist out there in the bubbling, boiling confusion of actual existence.
That may not seem like fertile ground for a philosophy of history, but the Discordians came up with one anyway, probably in mockery of the ultraserious treatment of Hegelian philosophy that was common just then in the Marxist-existentialist end of the Beat scene. Robert Shea and Robert Anton Wilson proceeded to pick up the Discordian theory of history and weave it into their tremendous satire of American conspiracy culture, the Illuminatus! trilogy. That’s where I encountered it originally in the late 1970s; I laughed, and then paused and ran my fingers through my first and very scruffy adolescent beard, realizing that it actually made more sense than any other theory of history I’d encountered.
Here’s how it works. From the Discordian point of view, Hegel went wrong for two reasons. The first was that he didn’t know about the Law of Fives, the basic Discordian principle that all things come in fives, except when they don’t. Thus he left off the final two steps of the dialectical process: after thesis, antithesis, and synthesis, you get parenthesis, and then paralysis.
The second thing Hegel missed is that the synthesis is never actually perfect.  It never succeeds wholly in resolving the conflict between thesis and antithesis; there are always awkward compromises, difficulties that are papered over, downsides that nobody figures out at the time, and so on. Thus it doesn’t take long for the synthesis to start showing signs of strain, and the inevitable response is to try to patch things up without actually changing anything that matters. The synthesis thus never has time to become a thesis and generate its own antithesis; it is its own antithesis, and ever more elaborate arrangements have to be put to work to keep it going despite its increasingly evident flaws; that’s the stage of parenthesis.
The struggle to maintain these arrangements, in turn, gradually usurps so much effort and attention that the original point of the synthesis is lost, and maintaining the arrangements themselves becomes too burdensome to sustain. That’s when you enter the stage of paralysis, when the whole shebang grinds slowly to a halt and then falls apart. Only after paralysis is total do you get a new thesis, which sweeps away the rubble and kickstarts the whole process into motion again.
There are traditional Discordian titles for these stages. The first, thesis, is the state of Chaos, when a group of human beings look out at the bubbling, boiling confusion of actual existence and decide to impose some kind of order on the mess. The second, antithesis, is the state of Discord, when the struggle to impose that order on the mess in question produces an abundance of equal and opposite reactions. The third, synthesis, is the state of Confusion, in which victory is declared over the chaos of mere existence, even though everything’s still bubbling and boiling merrily away as usual. The fourth, parenthesis, is the state of Consternation,* in which the fact that everything’s still bubbling and boiling merrily away as usual becomes increasingly hard to ignore. The fifth and final, paralysis, is the state of Moral Warptitude—don’t blame me, that’s what the Principia Discordia says—in which everything grinds to a halt and falls to the ground, and everyone stands around in the smoldering wreckage rubbing their eyes and wondering what happened.
*(Yes, I know, Robert Anton Wilson called the last two stages Bureaucracy and Aftermath. He was a heretic. So is every other Discordian, for that matter.)
Let’s apply this to the liberal international order that emerged in the wake of the Soviet Union’s fall, and see how it fits. Thesis, the state of Chaos, was the patchwork of quarrelsome nations into which our species has divided itself, which many people of good will saw as barbarous relics of a violent past that should be restrained by a global economic order. Antithesis, the state of Discord, was the struggle to impose that order by way of trade agreements and the like, in the teeth of often violent resistance—the phrase “WTO Seattle” may come to mind here. Synthesis, the state of Confusion, was the self-satisfied cosmopolitan culture that sprang up among the affluent 20% or so of the industrial world’s population, who became convinced that the temporary ascendancy of policies that favored their interests was not only permanent but self-evidently right and just.
Parenthesis, the state of Consternation, was the decades-long struggle to prop up those policies despite the disastrous economic consequences those policies inflicted on everyone but the affluent. Finally, paralysis, the state of Moral Warptitude, sets in when populist movements, incensed by the unwillingness of the 20% to consider anyone else’s needs but their own, surge into the political sphere and bring the entire project to a halt. It’s worth noting here that the title “moral warptitude” may be bad English, but it’s a good description for the attitude of believers in the synthesis toward the unraveling of their preferred state of affairs. It’s standard, as just noted, for those who benefit from the synthesis to become convinced that it’s not merely advantageous but also morally good, and to see the forces that overthrow it as evil incarnate; this is simply another dimension of their Confusion.
Am I seriously suggesting that the drug-soaked ravings of a bunch of goofy California potheads provide a better guide to history than the serious reflections of Hegelian philosophers? Well, yes, actually, I am. Given the track record of Hegelian thought when it comes to history, a flipped coin is a better guide—use a coin, and you have a 50% better chance of being right. Outside of mainstream macroeconomic theory, it’s hard to think of a branch of modern thought that so consistently turns out false answers once it’s applied to the real world.
No doubt there are more respectable models that also provide a clear grasp of what happens to most movements for social change—the way they lose track of the difference between achieving their goals and pursuing their preferred strategies, and generally end up opting for the latter; the way that their institutional forms become ends in themselves, and gradually absorb the effort and resources that would otherwise have brought about change; the way that they run to extremes, chase off potential and actual supporters, and then busy themselves coming up with increasingly self-referential explanations for the fact that the only tactics they’re willing to consider are those that increase their own marginalization in the wider society, and so on. It’s a familiar litany, and will doubtless become even more familiar in the years ahead.
For what it’s worth, though, it’s not necessary for the two additional steps of the post-Hegelian dialectic, the fourth and fifth sides of his imaginary triangle, to result in the complete collapse of everything that was gained in the first three steps. It’s possible to surf the waves of Consternation and Moral Warptitude—but it’s not easy. Next week, we’ll explore this further, by circling back to the place where this blog began, and having a serious talk about how the peak oil movement failed.
*************
In other news, I’m delighted to report that Retrotopia, which originally appeared here as a series of posts, is now in print in book form and available for sale. I’ve revised and somewhat expanded Peter Carr’s journey to the Lakeland Republic, and I hope it meets with the approval of my readers.
Also from Founders House, the first issue of the new science fiction and fantasy quarterly MYTHIC has just been released. Along with plenty of other lively stories, it’s got an essay of mine on the decline and revival of science fiction, and a short story, "The Phantom of the Dust," set in the same fictive universe as my novel The Weird of Hali: Innsmouth, and pitting Owen Merrill and sorceress Jenny Chaudronnier against a sinister mystery from colonial days. Subscriptions and single copies can be ordered here.

The End of the American Century

Wed, 2016-11-30 10:39
I have a bone to pick with the Washington Post. A few days back, as some of my readers may be aware, it published a list of some two hundred blogs that it claimed were circulating Russian propaganda, and I was disappointed to find that The Archdruid Report didn’t make the cut.
Oh, granted, I don’t wait each week for secret orders from Boris Badenov, the mock-iconic Russian spy from the Rocky and Bullwinkle Show of my youth, but that shouldn’t disqualify me.  I’ve seen no evidence that any of the blogs on the list take orders from Moscow, either; certainly the Post offered none worth mentioning. Rather, what seems to have brought down the wrath of “Pravda on the Potomac,” as the Post is unfondly called by many DC locals, is that none of these blogs have been willing to buy into the failed neoconservative consensus that’s guided American foreign policy for the last sixteen years. Of that latter offense, in turn, The Archdruid Report is certainly guilty.
There are at least two significant factors behind the Post’s adoption of the tactics of the late Senator Joe McCarthy, dubious lists and all.  The first is that the failure of Hillary Clinton’s presidential ambitions has thrown into stark relief an existential crisis that has the American news media by the throat. The media sell their services to their sponsors on the assumption that they can then sell products and ideas manufactured by those sponsors to the American people. The Clinton campaign accordingly outspent Trump’s people by a factor of two to one, sinking impressive amounts of the cash she raised from millionaire donors into television advertising and other media buys.
Clinton got the coverage she paid for, too. Nearly every newspaper in the United States endorsed her; pundits from one end of the media to the other solemnly insisted that everyone ought to vote for her; equivocal polls were systematically spun in her favor by a galaxy of talking heads. Pretty much everyone who thought they mattered was on board the bandwagon. The only difficulty, really, was that the people who actually mattered—in particular, voters in half a dozen crucial swing states—responded to all this by telling their soi-disant betters, “Thanks, but one turkey this November is enough.”
It turned out that Clinton was playing by a rulebook that was long past its sell-by date, while Trump had gauged the shift in popular opinion and directed his resources accordingly. While she sank her money into television ads on prime time, he concentrated on social media and barnstorming speaking tours through regions that rarely see a presidential candidate. He also figured out early on that the mainstream media was a limitless source of free publicity, and the best way to make use of it was to outrage the tender sensibilities of the media itself and get denounced by media talking heads.
That worked because a very large number of people here in the United States no longer trust the news media to tell them anything remotely resembling the truth. That’s why so many of them have turned to blogs for the services that newspapers and broadcast media used to provide: accurate reporting and thoughtful analysis of the events that affect their lives. Nor is this an unreasonable choice. The issue’s not just that the mainstream news media is biased; it’s not just that it never gets around to mentioning many issues that affect people’s lives in today’s America; it’s not even that it only airs a suffocatingly narrow range of viewpoints, running the gamut of opinion from A to A minus—though of course all these are true. It’s also that so much of it is so smug, so shallow, and so dull.
The predicament the mainstream media now face is as simple as it is inescapable. After taking billions of dollars from their sponsors, they’ve failed to deliver the goods.  Every source of advertising revenue in the United States has got to be looking at the outcome of the election, thinking, “Fat lot of good all those TV buys did her,” and then pondering their own advertising budgets and wondering how much of that money might as well be poured down a rathole.
Presumably the mainstream news media could earn the trust of the public again by breaking out of the echo chamber that defines the narrow range of acceptable opinions about the equally narrow range of issues open to discussion, but this would offend their sponsors. Worse, it would offend the social strata that play so large a role in defining and enforcing that echo chamber; most mainstream news media employees who have a role in deciding what does and does not appear in print or on the air belong to these same social strata, and are thus powerfully influenced by peer pressure. Talking about supposed Russian plots to try to convince people not to get their news from blogs, though it’s unlikely to work, doesn’t risk trouble from either of those sources.
Why, though, blame it on the Russians? That’s where we move from the first to the second of the factors I want to discuss this week.
A bit of history may be useful here. During the 1990s, the attitude of the American political class toward the rest of the world rarely strayed far from the notions expressed by Francis Fukuyama in his famous and fatuous essay proclaiming the end of history.  The fall of the Soviet Union, according to this line of thought, proved that democracy and capitalism were the best political and economic systems humanity would ever come up with, and the rest of the world would therefore inevitably embrace them in due time. All that was left for the United States and its allies to do was to enforce certain standards of global order on the not-yet-democratic and not-yet-capitalist nations of the world, until they grew up and got with the program.
That same decade, though, saw the emergence of the neoconservative movement. The neoconservatives were as convinced of the impending triumph of capitalism and democracy as their rivals, but they opposed the serene absurdities of Fukuyama’s thesis with a set of more muscular absurdities of their own. Intoxicated with the collapse of the Soviet Union and its allies, they convinced themselves that identical scenes could be enacted in Baghdad, Tehran, Beijing, and the rest of the world, if only the United States would seize the moment and exploit its global dominance.
During Clinton’s presidency, the neoconservatives formed a pressure group on the fringes of official Washington, setting up lobbying groups such as the Project for a New American Century and bombarding the media with position papers. The presidency of George W. Bush gave them their chance, and they ran with it. Where the first Iraq war ended with Saddam Hussein beaten but still in power—the appropriate response according to the older ideology—the second ended with the US occupying Iraq and a manufactured “democratic” regime installed under its aegis. In the afterglow of victory, neoconservatives talked eagerly about the conquest of Iran and the remaking of the Middle East along the same lines as post-Soviet eastern Europe. Unfortunately for these fond daydreams, what happened instead was a vortex of sectarian warfare and anti-American insurgency.
You might think, dear reader, that the cascading failures of US policy in Iraq might have caused second thoughts in the US political and military elites whose uncritical embrace of neoconservative rhetoric let that happen. You might be forgiven, for that matter, for thinking that the results of US intervention in Afghanistan, where the same assumptions had met with the same disappointment, might have given those second thoughts even more urgency. If so, you’d be quite mistaken. According to the conventional wisdom in today’s America, the only conceivable response to failure is doubling down. 
“If at first you don’t succeed, fail, fail again” thus seems to be the motto of the US political class these days, and rarely has that been so evident as in the conduct of US foreign policy.  The Obama administration embraced the same policies as its feckless predecessor, and the State Department, the CIA, and the Pentagon went their merry way, overthrowing governments right and left, and tossing gasoline onto the flames of ethnic and sectarian strife in various corners of the world, under the serene conviction that the blowback from these actions could never inconvenience the United States.
That would be bad enough. Far worse was the effect of neoconservative policies on certain other nations: Russia, China, and Iran. In the wake of the Soviet Union’s collapse, Russia was a basket case, Iran was a pariah nation isolated from the rest of the world, and China had apparently made its peace with an era of American global dominance, and was concentrating on building up its economy instead of its military. It would have been child’s play for the United States to maintain that state of affairs indefinitely. Russia could have been helped to recover and then integrated economically into Europe; China could have been allowed the same sort of regional primacy the US allows as a matter of course to its former enemies Germany and Japan; and without US intervention in the Middle East to hand it a bumper crop of opening wedges, Iran could have been left to stew in its own juices until it imploded. 
That’s not what happened, though. Instead, two US administrations went out of their way to convince Russia and China they had nothing to gain and everything to lose by accepting their assigned places in a US-centric international order. Russia and China have few interests in common and many reasons for conflict; they’ve spent much of their modern history glaring at each other across a long and contentious mutual border; they had no reason to ally with each other, until the United States gave them one. Nor did either nation have any reason to reach out to the Muslim theocracy in Iran—quite the contrary—until they began looking for additional allies to strengthen their hand against the United States.
One of the basic goals of effective foreign policy is to divide your potential enemies against each other, so that they’re so busy worrying about one another that they don’t have the time or resources to bother you. It’s one thing, though, to violate that rule when the enemies you’re driving together lack the power to threaten your interests, and quite another when the resource base, population, and industrial capacity of the nations you’re driving together exceed your own. The US government’s harebrained pursuit of neoconservative policies has succeeded, against the odds, in creating a sprawling Eurasian alliance with an economic and military potential significantly greater than that of the US. There have probably been worse foreign policy blunders in the history of the world, but I can’t think of one offhand.
You won’t read about that in the mainstream news media in the United States. At most, you’ll get canned tirades about how Russian president Vladimir Putin is a “brutal tyrant” who is blowing up children in Aleppo or what have you. “Brutal tyrant,” by the way, is a code phrase of the sort you normally get in managed media.  In the US news, it simply means “a head of state who’s insufficiently submissive to the United States.” Putin certainly qualifies as the latter; first in the Caucasus, then in the Ukraine, and now in Syria, he’s deployed military force to advance his country’s interests against those of the United States and its allies. I quite understand that the US political class isn’t pleased by this, but it might be helpful for them to reflect on their own role in making it happen.
The Russian initiative isn’t limited to Syria, though. Those of my readers who only pay attention to US news media probably don’t know yet that Egypt has now joined Russia’s side. Egyptian and Russian troops are carrying out joint military drills, and reports in Middle Eastern news media have it that Egyptian troops will soon join the war in Syria on the side of the Syrian government. If so, that’s a game-changing move, and probably means game over for the murky dealings the United States and its allies have been pursuing in that end of the Middle East.
China and Russia have very different cultural styles when it comes to exerting power. Russian culture celebrates the bold stroke; Chinese culture finds subtle pressure more admirable. Thus the Chinese have been advancing their country’s interests against those of the United States and its allies in a less dramatic but equally effective way. While distracting Washington’s attention with a precisely measured game of “chicken” in the South China Sea, the Chinese have established a line of naval bases along the northern shores of the Indian Ocean from Myanmar to Djibouti, and contracted alliances in East Africa and South Asia. Those of my readers who’ve read Alfred Thayer Mahan and thus know their way around classic maritime strategy will recognize exactly what’s going on here.
Most recently, China has scored two dramatic shifts in the balance of power in the western Pacific. My American readers may have heard of President Rodrigo Duterte of the Philippines; he’s the one who got his fifteen minutes of fame in the mainstream media here when he called Barack Obama a son of a whore. The broader context, of course, got left out. Duterte, like the heads of state of many nominal US allies, resents US interference in his country’s affairs, and at this point he has other options. His outburst was followed in short order by a trip to Beijing, where he and China’s President Xi signed multibillion-dollar aid agreements and talked openly about the end of a US-dominated world order.
A great many Americans seem to think of the Philippines as a forgettable little country off somewhere unimportant in the Third World. That’s a massive if typical misjudgment. It’s a nation of 100 million people on a sprawling archipelago of more than 7,000 islands, commanding the entire southern end of the South China Sea and a vast swath of the western Pacific, including crucial maritime trade routes. As a US ally, it was a core component of the ring of encirclement holding Chinese maritime forces inside the island ring that walls China’s coastal waters from the rest of the Pacific basin. As a Chinese ally, it holds open that southern gate to China’s rapidly expanding navy and air force.
Duterte wasn’t the only Asian head of state to head for Beijing in recent months. Malaysia’s prime minister was there a few weeks later, to sign up for another multibillion-dollar aid package, buy Chinese vessels for the Malaysian navy, and make acid comments about the way that, ahem, former colonial powers keep trying to interfere in Malaysian affairs. Malaysia’s a smaller nation than the Philippines, but even more strategically placed. Its territory runs alongside the northern shore of the Malacca Strait: the most important sea lane in the world, the gateway connecting the Indian Ocean with the Pacific, through which much of the world’s seaborne crude oil transport passes.
All these are opening moves. Those who are familiar with the rise and fall of global powers know what the next moves are; those who don’t might want to consider reading my book Decline and Fall, or my novel Twilight’s Last Gleaming, which makes the same points in narrative form. Had Hillary Clinton won this month’s election, we might have moved into the endgame much sooner. Her enthusiasm for overthrowing governments during her stint as Secretary of State, and her insistence that the US should impose a no-fly zone over Syria in the teeth of Russian fighters and state-of-the-art antiaircraft defenses, suggests that she could have filled the role of my fictional president Jameson Weed, and sent US military forces into a shooting war they were not realistically prepared to win.
We seem to have dodged that bullet. Even so, the United States remains drastically overextended, with military bases in more than a hundred countries around the world and a military budget nearly equal to all other countries’ put together. Meanwhile, back here at home, our country is falling apart. Leave the bicoastal bubble where the political class and their hangers-on spend their time, and the United States resembles nothing so much as the Soviet Union in its last days: a bleak and dilapidated landscape of economic and social dysfunction, where the enforced cheerfulness of the mainstream media contrasts intolerably with the accelerating disintegration visible all around.
That could have been prevented. If the United States had responded to the end of the Cold War by redirecting the so-called “peace dividend” toward the rebuilding of our national infrastructure and our domestic economy, we wouldn’t be facing the hard choices before us right now—and in all probability, by the way, Donald Trump wouldn’t just have been elected president. Instead, the US political class let itself be caught up in neoconservative fantasies of global dominion, and threw away that opportunity. The one bright spot in that dismal picture is that we have another chance.
History shows that there are two ways that empires end. Their most common fate involves clinging like grim death to their imperial status until it drags them down. Spain’s great age of overseas empire ended that way, with Spain plunging into a long era of economic disarray and civil war. At least it maintained its national unity; the Ottoman and Austro-Hungarian empires both finished their imperial trajectories by being partitioned, as of course did the Soviet Union. There are worse examples; I’m thinking here of the Assyrian Empire of the ancient Middle East, which ceased to exist completely—its nationhood, ethnicity, and language dissolving into those of its neighbors—once it fell.
Then there’s the other option, the one chosen by the Chinese in the fifteenth century and Great Britain in the twentieth. Both nations had extensive overseas empires, and both walked away from them, carrying out a staged withdrawal from imperial overreach. Both nations not only survived the process but came through with their political and cultural institutions remarkably intact. This latter option, with all its benefits, is still available to the United States.
A staged withdrawal of the sort just described would of course be done step by step, giving our allies ample time to step up to the plate and carry the costs of their own defense. Those regions that have little relevance to US national interests, such as the Indian Ocean basin, would see the first round of withdrawals, while more important regions such as Europe and the northwest Pacific would be later on the list. The withdrawal wouldn’t go all the way back to our borders by any means; a strong presence in the Atlantic and eastern Pacific basins and a pivot to our own “near abroad” would be needed, but those would also be more than adequate to maintain our national security.
Meanwhile, the billions upon billions of dollars a year that would be saved could be put to work rebuilding our national infrastructure and economy, with enough left over for a Marshall Plan for Mexico—the most effective way to reduce illegal immigration to the United States, after all, is to help make sure that citizens of the countries near us have plenty of jobs at good wages where they already live. Finally, since the only glue holding the Russo-Chinese alliance together is their mutual opposition to US hegemony, winding up our term as global policeman will let Russia, China and Iran get back to contending with each other rather than with us.
Such proposals, on the rare occasions they’re made, get shouted down by today’s US political class as “isolationism.” There’s a huge middle ground between isolationism and empire, though, and that middle ground is where most of the world’s nations stand as they face their neighbors. One way or another, the so-called “American century” is ending; it can end the hard way, the way so many other eras of global hegemony have ended—or it can end with the United States recognizing that it’s a nation among nations, not an overlord among vassals, and acting accordingly.
The mainstream news media here in the United States, if they actually provided the public service they claim, might reasonably be expected to discuss the pros and cons of such a proposal, and of the many other options that face this nation at the end of its era of global hegemony. I can’t say I expect that to happen, though. It’s got to be far more comfortable for them to blame the consequences of their own failure on the supposed Boris Badenovs of the blogosphere, and cling to the rags of their fading role as purveyors of a failed conventional wisdom, until the last of their audience wanders away for good.

The Free Trade Fallacy

Wed, 2016-11-23 11:17
As longtime readers of this blog know, it’s not uncommon for the essays I post here to go veering off on an assortment of tangents, and this week’s post is going to be an addition to that already well-stocked list. Late last week, as the aftermath of the recent election was still spewing all over the media, I was mulling over one likely consequence of the way things turned out—the end of at least some of the free trade agreements that have played so large and dubious a role in recent economic history.
One of the major currents underlying 2016’s political turmoil in Europe and the United States, in fact, has been a sharp disagreement about the value of free trade. The political establishment throughout the modern industrial world insists that free trade policies, backed up by an ever-increasing network of trade agreements, are both inevitable and inevitably good. The movements that have risen up against the status quo—the Brexit campaign in Britain, the populist surge that just made Donald Trump the next US president, and an assortment of similar movements elsewhere—reject both these claims, and argue that free trade is an unwise policy that has a cascade of negative consequences.
It’s important to be clear about what’s under discussion here, since conversations about free trade very often get wrapped up in warm but vague generalities about open borders and the like. Under a system of free trade, goods and capital can pass freely across national borders; there are no tariffs to pay, no quotas to satisfy, no capital restrictions to keep money in one country or out of another. The so-called global economy, in which the consumer goods sold in a nation might be manufactured anywhere on the planet, with funds flowing freely to build a factory here and funnel profits back there, depends on free trade, and the promoters of free trade theory like to insist that this is always a good thing: abolishing trade barriers of all kinds, and allowing the free movement of goods and capital across national boundaries, is supposed to create prosperity for everyone.
That’s the theory, at least. In practice?  Well, not so much. It’s not always remembered that there have been two great eras of free trade in modern history—the first from the 1860s to the beginning of the Great Depression, in which the United States never fully participated; the second from the 1980s to the present, with the United States at dead center—and neither one of them has ushered in a world of universal prosperity. Quite the contrary, both of them have yielded identical results: staggering profits for the rich, impoverishment and immiseration for the working classes, and cascading economic crises. The first such era ended in the Great Depression; the second, just at the moment, looks as though it could end the same way.
Economists—more precisely, the minority of economists who compare their theories to the evidence provided by the real world—like to insist that these unwelcome outcomes aren’t the fault of free trade. As I hope to show, they’re quite mistaken. An important factor has been left out of their analysis, and once that factor has been included, it becomes clear that free trade is bad policy that inevitably produces poverty and economic instability, not prosperity.
To see how this works, let’s imagine a continent with many independent nations, all of which trade with one another. Some of the nations are richer than others; some have valuable natural resources, while others don’t; standards of living and prevailing wages differ from country to country. Under normal conditions, trade barriers of various kinds limit the flow of goods and capital from one nation to another.  Each nation adjusts its trade policy to further its own economic interests.  One nation that’s trying to build up a domestic steel industry, say, may use tariffs, quotas, and the like to shelter that industry from foreign competition.  Another nation with an agricultural surplus may find it necessary to lower tariffs on other products to get neighboring countries to buy its grain.
Outside the two eras of free trade mentioned above, this has been the normal state of affairs, and it has had two reliable results. The first is that the movement of goods and capital between the nations tends toward a rough balance, because every nation uses its trade barriers to police hostile trade policy on the part of its neighbors. Imagine, for example, a nation that tries to monopolize steel production by “dumping”—that is, selling steel on the international market at rock-bottom prices to try to force all other nations’ steel mills into bankruptcy. The other nations respond by slapping tariffs, quotas, or outright bans on imported steel from the dumping country, bringing the project to a screeching halt. Thus trade barriers tend to produce a relative equilibrium between national economies.
Notice that this is an equilibrium, not an equality. When trade barriers exist, it’s usual for some nations to be rich and others to be poor, for a galaxy of reasons having nothing to do with international trade. At the same time, the difficulties this imposes on poor nations are balanced by a relative equilibrium, within nations, between wages and prices.
When the movement of goods and capital across national borders is restricted, the prices of consumer products in each nation will be linked via the law of supply and demand to the purchasing power of consumers in that nation, and thus to the wages paid by employers in that nation. Of course the usual cautions apply; wages and prices fluctuate for a galaxy of reasons, many of which have nothing to do with international trade. Even so, since the wages paid out by employers form the principal income stream that allows consumers to buy the employers’ products, and consumers can have recourse to the political sphere if employers’ attempts to drive down wages get out of hand, there’s a significant pressure toward balance.
Given trade barriers, as a result, people who live in countries that pay low wages generally pay low prices for goods and services, while people who live in countries with high wages face correspondingly high prices when they go shopping. The low prices make life considerably easier for working people in poor countries, just as the tendency of wages to match prices makes life easier for working people in rich countries. Does this always work? Of course not—again, wages and prices fluctuate for countless reasons, and national economies are inherently unstable things—but the factors just enumerated push the economy in the direction of a rough balance between the needs and wants of consumers, on the one hand, and their ability to pay, on the other.
Now let’s imagine that all of the nations we’ve imagined are convinced by a gaggle of neoliberal economists to enact a free trade zone, in which there are no barriers at all to the free movement of goods and capital. What happens?
When there are no trade barriers, the nation that can produce a given good or service at the lowest price will end up with the lion’s share of the market for that good or service. Since labor costs make up so large a portion of the cost of producing goods, those nations with low wages will outbid those with high wages, resulting in high unemployment and decreasing wages in the formerly high-wage countries. The result is a race to the bottom in which wages everywhere decline toward those of the worst-paid labor force in the free trade zone.
When this happens in a single country, as already noted, the labor force can often respond to the economic downdraft by turning to the political sphere. In a free trade zone, though, employers faced with a political challenge to falling wages in one country can simply move elsewhere. It’s the mismatch between economic union and political division that makes free trade unbalanced, and leads to problems we’ll discuss shortly.
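For readers who like to see a mechanism spelled out step by step, here is a minimal toy simulation of the race to the bottom just described. Everything in it, from the three imaginary countries and their starting wages to the adjustment rule, is an assumption chosen purely for illustration, not a claim about any real economy; the only point is the direction wages move once producers can relocate freely.

```python
# Toy model only: wage dynamics in a hypothetical three-country zone,
# with and without trade barriers. All numbers and rules are illustrative.

def simulate(wages, barriers, rounds=20, rate=0.15):
    """Return wages after `rounds` of adjustment.

    With barriers, each country's wages stay tied to its own domestic
    economy (held constant in this toy model). Without barriers,
    producers chase the cheapest labor, so every wage drifts a step
    toward the lowest wage in the zone each round.
    """
    wages = list(wages)
    for _ in range(rounds):
        floor = min(wages)
        if not barriers:
            wages = [w - rate * (w - floor) for w in wages]
    return [round(w, 2) for w in wages]

initial = [30.0, 18.0, 5.0]   # hourly wages in three imaginary countries
print("with trade barriers:", simulate(initial, barriers=True))
print("free trade zone:    ", simulate(initial, barriers=False))
```

Run it and the free-trade case converges on the lowest wage in the zone, while the barrier case holds steady, which is all the sketch is meant to show.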
Now of course free trade advocates like to insist that jobs lost by wealthier nations to poorer ones will inevitably be replaced by new jobs. History doesn’t support that claim—quite the contrary—and there are good reasons why the jobs that disappear will never be replaced. In a free trade system, it’s more economical for startups in any labor-intensive industry to go straight to one of the countries with low wages; only those industries that are capital-intensive and thus employ comparatively few people have any reason to get under way in the high-wage countries. The computer industry is a classic example—and you’ll notice, I trust, that just as soon as that industry started to become labor-intensive, it moved offshore. Still, there’s another factor at work.
Since wages are a very large fraction of the cost of producing goods, the overall decrease in wages brings about an increase in profits. Thus one result of free trade is a transfer of wealth from the laboring majority, whose income comes from wages, to the affluent minority, whose income comes directly or indirectly from profits. That’s the factor that’s been left out of the picture by the proponents of free trade—its effect on income distribution. Free trade makes the rich richer and the poor poorer, by increasing profits while driving wages down. This no doubt explains why free trade is so popular among the affluent these days, just as it was in the Victorian era. 
There’s a worm in the bud, though, because a skewed income distribution imposes costs of its own, and those costs mount up over time in painfully familiar ways. The difficulty with making the rich richer and the poor poorer, as Henry Ford pointed out a long time ago, is that the wages you pay your employees are also the income stream they use to buy your products. As wages decline, purchasing power declines, and begins to exert downward pressure on returns on investment in every industry that relies on consumer purchases for its income.
Doesn’t the increasing wealth of investors counterbalance the declining wealth of the wage-earning masses? No, because the rich spend a smaller proportion of their incomes on consumer goods than the poor, and divert the rest to investments. Divide a million dollars between a thousand working class families, and the money’s going to be spent to improve the families’ standard of living: better food, a bigger apartment, an extra toy or two around the Christmas tree, and so on. Give the same million to one rich family and it’s a safe bet that much of it’s going to be invested.
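The arithmetic behind that point is easy to spell out. In the sketch below, the spending fractions (what economists call marginal propensities to consume) are assumptions I’ve picked to make the mechanism visible, not measured figures; change them as you like and the direction of the result stays the same.

```python
# Back-of-the-envelope illustration; the 90% and 15% spending shares
# are assumed values, not empirical data.

windfall = 1_000_000

# Split among a thousand working-class families, who spend most of any
# extra dollar on goods and services.
consumption_if_spread = windfall * 0.90

# The same sum handed to one wealthy family, which spends a small
# fraction and invests the rest.
consumption_if_concentrated = windfall * 0.15

print(f"Spent when spread across 1,000 families: ${consumption_if_spread:,.0f}")
print(f"Spent when given to one rich family:     ${consumption_if_concentrated:,.0f}")
print(f"Consumer spending lost to concentration: "
      f"${consumption_if_spread - consumption_if_concentrated:,.0f}")
```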
This, incidentally, is why the trickle-down economics beloved of Republican politicians of an earlier era simply doesn’t work, and why the Obama administration’s massive handouts of government money to banks in the wake of the 2008-9 financial panic did so little to improve the financial condition of most of the country. When it comes to consumption, the rich simply aren’t as efficient as the poor. If you want to kickstart an economy with consumer expenditures, as a result, you need to make sure that poor and working class people have plenty of money to spend.
There’s a broader principle here as well. Consumer expenditures and capital for investment are to an economy what sunlight and water are to a plant: you can’t substitute one for the other. You need both. Since free trade policies funnel money away from expenditure toward investment by skewing the income distribution, they cause a shortage of the one and a surplus of the other. As the imbalance builds, it becomes harder for businesses to make a profit because consumers don’t have the cash to buy their products; meanwhile the amount of money available for investment increases steadily. The result is a steady erosion in return on investment, as more and more money chases fewer and fewer worthwhile investment vehicles.
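A rough numerical sketch may make that imbalance easier to see. In the toy model below, the income shares and spending rates are again assumptions chosen for illustration; the point is only the trend: as the wage share of income shrinks, consumer spending falls while the pool of money hunting for a return swells, so each investable dollar has less spending to chase.

```python
# Toy model: how shifting income from wages to profits squeezes the
# ratio of consumer spending to investable capital. All values assumed.

income = 100.0        # total income, arbitrary units
mpc_wages = 0.90      # share of wage income spent on consumption
mpc_profits = 0.20    # share of profit income spent on consumption

for wage_share in (0.70, 0.60, 0.50, 0.40):
    wages = income * wage_share
    profits = income - wages
    consumption = wages * mpc_wages + profits * mpc_profits
    investable = income - consumption   # money left looking for a return
    print(f"wage share {wage_share:.0%}: consumption {consumption:5.1f}, "
          f"investable funds {investable:5.1f}, "
          f"spending per investable dollar {consumption / investable:.2f}")
```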
The history of free-trade eras is thus marked by frantic attempts to prop up returns on investment by any means necessary. The offshoring fad that stripped the United States of its manufacturing economy in the 1970s had its exact equivalent in the offshoring of fabric mills from Britain to India in the late Victorian era; in both cases, the move capitalized on remaining disparities in wages and prices between rich and poor areas in a free trade zone. In both cases, offshoring worsened the problem it was meant to fix, by increasing the downward pressure on wages in the richer countries and further decreasing returns on investment across the entire spectrum of consumer industries—then as now, the largest single share of the economy.
A gambit that as far as I know wasn’t tried in the first era of free trade was the attempt to turn capital into ersatz income by convincing consumers to make purchases with borrowed money. That’s been the keystone of economic policy in the United States for most of two decades now.  The housing bubble was only the most exorbitant manifestation of a frantic attempt to get people to spend money they don’t have, and then find some way to pay it all back with interest. It hasn’t worked well, not least because all those interest payments put an additional downward pressure on consumer expenditures.
A variety of other, mostly self-defeating gimmicks have been put in play in both of the modern free trade eras to try to keep consumer expenditures high while wages decline. None of them work, because they don’t address the actual problem—the fact that under free trade, the downward pressure on wages means that consumers can’t afford to spend enough to keep the economy running at a level that will absorb the available investment capital—and so the final solution to the problem of declining returns on investment arrives on schedule: the diversion of capital from productive investment into speculation.
Any of my readers who don’t know how this story ends should get up right now, and go find a copy of John Kenneth Galbraith’s classic The Great Crash 1929. Speculative bubbles, while they last, produce abundant returns; when free trade has driven down wages, forced the consumer economy into stagnation or contraction, and decreased the returns on investment in productive industries to the point of “why bother,” a speculative bubble is very often the only profitable game in town. What’s more, since there are so few investments with decent returns in the late stages of a free trade scheme, there’s a vast amount of money ready to flow into any investment vehicle that can show a decent return, and that’s exactly the environment in which speculative bubbles breed most readily.
So the great free trade era that began tentatively with the repeal of the Corn Laws in 1846, and came into full flower with Gladstone’s abolition of tariffs in 1869, ended in the stock market debacle of 1929 and the Great Depression. The road there was littered with plenty of other crises, too. The economic history of the late nineteenth and early twentieth centuries is a cratered moonscape of speculative busts and stock market crashes, culminating in the Big One in 1929. It resembles, in fact, nothing so much as the economic history of the late twentieth and early twenty-first centuries, which have had their own sequence of busts and crashes: the stock market crash of 1987, the emerging markets crash of 1994, the tech-stock debacle of 2000, the housing bust of 2008, and the beat goes on.
Thus free trade causes the impoverishment and immiseration of the labor force, and a cascading series of economic busts driven by the mismatch between insufficient consumption and excess investment. Those problems aren’t accidental—they’re hardwired into any free trade system—and the only way to stop them in their tracks is to abandon free trade as bad policy, and replace it with sensible trade barriers that ensure that most of the products consumed in each nation are made there.
It’s probably necessary to stop here and point out a couple of things. First of all, the fact that free trade is bad policy doesn’t mean that every kind of trade barrier is good policy.  The habit of insisting that the only possible points along a spectrum are its two ends, common as it is, is an effective way to make really bad decisions; as in most things, there’s a middle ground that yields better results than either of the two extremes. Finding that middle ground isn’t necessarily easy, but the same thing’s true of most economic and political issues.
Second, free trade isn’t the only cause of economic dysfunction, nor is it the only thing that can cause skewed income distribution and the attendant problems that this brings with it. Plenty of factors can cause a national or global economy to run off the rails. What history shows with painful clarity is that free trade inevitably makes this happen. Getting rid of free trade and returning to a normal state of affairs, in which nations provide most of their own needs from within their own borders and trade with other nations to exchange surpluses or get products that aren’t available at home readily, or at all, gets rid of one reliable cause of serious economic dysfunction. That’s all, but arguably it’s enough to make a movement away from free trade a good idea.
Finally, the points I’ve just made suggest that there may be unexpected benefits, even today, to a nation that extracts itself from free trade agreements and puts a well-planned set of trade restrictions in place. There are plenty of factors putting downward pressure on prosperity just now, but the reasoning I’ve just sketched out suggests that the destitution and immiseration so common in the world right now may have been made considerably worse than they would otherwise be by the mania for free trade that’s been so pervasive in recent decades. A country that withdraws from free trade agreements and reorients its economy for the production of goods for domestic consumption might thus expect to see some improvement, not only in the prosperity of its working people, but in rates of return on investment.
That’s the theory I propose. Given the stated policies of the incoming US administration, it’s about to be put to the test—and the results should be apparent over the next few years.
****************On a different and less theoretical note, I’m delighted to report that the third issue of Into The Ruins, the quarterly magazine of deindustrial science fiction, is on its way to subscribers and available for sale to everyone else. The Fall 2016 issue includes stories by regular authors and newcomers alike, including a Matthew Griffiths tale set in the universe of my novel Star’s Reach, along with book reviews, essays, and a letter to the editors column that is turning into one of the liveliest forums in print. If you’re not subscribing yet, you’re missing a treat.
On a less cheery note, it’s been a while now since I proposed a contest, asking readers to write stories about futures that went outside the conventional binary of progress or decline. I think it was a worthwhile project, and some of the stories I received in response were absolutely first-rate—but, I’m sorry to say, there weren’t enough of them to make an anthology. I want to thank everyone who wrote a story in response to my challenge, and since a good many of the stories in question deserve publication, I’m forwarding them to Joel Caris, the editor of Into The Ruins, for his consideration.

When The Shouting Stops

Wed, 2016-11-16 13:08
I've been trying for some time now to understand the reaction of Hillary Clinton’s supporters to her defeat in last week’s election. At first, I simply dismissed it as another round of the amateur theatrics both parties indulge in whenever they lose the White House. Back in 2008, as most of my readers will doubtless recall, Barack Obama’s victory was followed by months of shrieking from Republicans, who insisted—just as a good many Democrats are insisting today—that the election of the other guy meant that democracy had failed, the United States and the world were doomed, and the supporters of the losing party would be rounded up and sent to concentration camps any day now.
That sort of histrionic nonsense has been going on for decades. In 2000, Democrats chewed the scenery in the grand style when George W. Bush was elected president. In 1992, it was the GOP’s turn—I still have somewhere a pamphlet that was circulated by Republicans after the election containing helpful phrases in Russian, so that American citizens would have at least a little preparation when Bill Clinton ran the country into the ground and handed the remains over to the Soviet Union. American politics and popular culture being what it is, this kind of collective hissy fit is probably unavoidable.
Fans of irony have much to savor. You’ve got people who were talking eagerly about how to game the electoral college two weeks ago, who now are denouncing the electoral college root and branch; you’ve got people who insisted that Trump, once he lost, should concede and shut up, who are demonstrating a distinct unwillingness to follow their own advice. You’ve got people in the bluest of blue left coast cities marching in protest as though that’s going to change a single blessed thing—as I’ve pointed out in previous posts here, protest marches that aren’t backed up with effective grassroots political organization are simply a somewhat noisy form of aerobic exercise.
Still, there’s more going on here than that. I know some fairly thoughtful people whose reaction to the election’s outcome wasn’t histrionic at all—it consisted of various degrees of shock, disorientation, and fear. They felt, if the ones I read are typical, that the people who voted for Trump were deliberately rejecting and threatening them personally. That’s something we ought to talk about.
To some extent, to be sure, this was a reflection of the political culture of personal demonization I discussed in last week’s post. Many of Clinton’s supporters convinced themselves, with the help of a great deal of propaganda from the Democratic Party and its bedfellows in the mainstream media, that Donald Trump is a monster of depravity thirsting for their destruction, and anyone who supports him must hate everything good. Now they’re cringing before the bogeyman they imagined, certain that it’s going to act out the role they assigned it and gobble them up.
Another factor at work here is the very strong tendency of people on the leftward end of American politics to believe in what I’ve elsewhere called the religion of progress—the faith that history has an inherent tilt toward improvement, and more to the point, toward the particular kinds of improvement they prefer. Hillary Clinton, in an impromptu response to a heckler at one of her campaign appearances, phrased the central tenet of that religion concisely: “We’re not going to go back. We’re going to go forward.” Like Clinton herself, a great many of her followers saw their cause as another step forward in the direction of progress, and to find themselves “going back” is profoundly disorienting—even though those labels “forward” and “back” are entirely arbitrary when they aren’t the most crassly manipulative sort of propaganda.
That said, there’s another factor driving the reaction of Clinton’s supporters, and the best way I can find to approach it is to consider one of the more thoughtful responses from that side of the political landscape, an incisive essay posted to Livejournal last week by someone who goes by the nom de Web “Ferrett Steinmetz.” The essay’s titled The Cold, Cold Math We’ll Need to Survive the Next Twenty Years, and it comes so close to understanding what happened last Tuesday that the remaining gap offers an unsparing glimpse straight to the heart of the failure of the Left to make its case to the rest of the American people.
At the heart of the essay are two indisputable points. The first is that the core constituencies of the Democratic Party are not large enough by themselves to decide who gets to be president. That’s just as true of the Republican party, by the way, and with few exceptions it’s true in every democratic society.  Each party large enough to matter has a set of core constituencies who can be counted on to vote for it under most circumstances, and then has to figure out how to appeal to enough people outside its own base to win elections. That’s something that both parties in the US tend to forget from time to time, and when they do so, they lose.
The second indisputable point is that if Democrats want to win an election in today’s America, they have to find ways to reach out to people who don’t share the values and interests of the Left. It’s the way that Ferrett Steinmetz frames that second point, though, that shows why the Democratic Party failed to accomplish that necessary task this time. “We have to reach out to people who hate us,” Steinmetz says, and admits that he has no idea at all how to do that.
Let’s take those two assertions one at a time. First, do the people who voted for Donald Trump in this election actually hate Ferrett Steinmetz and his readers—or for that matter, women, people of color, sexual minorities, and so on? Second, how can Steinmetz and his readers reach out to these supposedly hateful people and get them to vote for Democratic candidates?
I have no idea whether Ferrett Steinmetz knows anybody who voted for Donald Trump.  I suspect he doesn’t—or at least, given the number of people I’ve heard from who’ve privately admitted that they voted for Trump but would never let their friends know this, I suspect he doesn’t know anyone who he knows voted for Trump. Here I have a certain advantage. Living in a down-at-the-heels mill town in the north central Appalachians, I know quite a few people who supported Trump; I’ve also heard from a very large number of Trump supporters by way of this blog, and through a variety of other sources.
Are there people among the pro-Trump crowd who are in fact racists, sexists, homophobes, and so on? Of course. I know a couple of thoroughly bigoted racists who cast their votes for him, for example, including at least one bona fide member of the Ku Klux Klan. The point I think the Left tends to miss is that not everyone in flyover country is like that. A few years back, in fact, a bunch of Klansmen came to the town where I live to hold a recruitment rally, and the churches in town—white as well as black—held a counter-rally, stood on the other side of the street, and drowned the Klansmen out, singing hymns at the top of their lungs until the guys in the white robes got back in their cars and drove away.  Surprising? Not at all; in a great deal of middle America, that’s par for the course these days.
To understand why a town that ran off the Klan was a forest of Trump signs in the recent election, it’s necessary to get past the stereotypes and ask a simple question: why did people vote for Trump? I don’t claim to have done a scientific survey, but these are the things I heard Trump voters talking about in the months and weeks leading up to the election:
1. The Risk of War. This was the most common point at issue, especially among women—nearly all the women I know who voted for Trump, in fact, cited it as either the decisive reason for their vote or one of the top two or three. They listened to Hillary Clinton talk about imposing a no-fly zone over Syria in the face of a heavily armed and determined Russian military presence, and looked at the reckless enthusiasm for overthrowing governments she’d displayed during her time as Secretary of State. They compared this to Donald Trump’s advocacy of a less confrontational relationship with Russia, and they decided that Trump was less likely to get the United States into a shooting war.
War isn’t an abstraction here in flyover country. Joining the military is very nearly the only option young people here have if they want a decent income, job training, and the prospect of a college education, and so most families have at least one relative or close friend on active duty.  People here respect the military, but the last two decades of wars of choice in the Middle East have done a remarkably good job of curing middle America of any fondness for military adventurism it might have had.  While affluent feminists swooned over the prospect of a woman taking on another traditionally masculine role, and didn’t seem to care in the least that the role in question was “warmonger,” a great many people in flyover country weighed the other issues against the prospect of having a family member come home in a body bag. Since the Clinton campaign did precisely nothing to reassure them on this point, they voted for Trump.
2. The Obamacare Disaster. This was nearly as influential as Clinton’s reckless militarism. Most of the people I know who voted for Trump make too much money to qualify for a significant federal subsidy, and too little to be able to cover the endlessly rising cost of insurance under the absurdly misnamed “Affordable Care Act.” They recalled, rather too clearly for the electoral prospects of the Democrats, how Obama assured them that the price of health insurance would go down, that they would be able to keep their existing plans and doctors, and so on through all the other broken promises that surrounded Obamacare before it took effect.
It was bad enough that so few of those promises were kept. The real deal-breaker, though, was the last round of double- or triple-digit annual increases in premiums announced this November, on top of increases nearly as drastic a year previously. Even among those who could still afford the new premiums, the writing was on the wall: sooner or later, unless something changed, a lot of people were going to have to choose between losing their health care and being driven into destitution—and then there were the pundits who insisted that everything would be fine, if only the penalties for not getting insurance were raised to equal the cost of insurance! Faced with that, it’s not surprising that a great many people went out and voted for the one candidate who said he’d get rid of Obamacare.
3. Bringing Back Jobs. This is the most difficult one for a lot of people on the Left to grasp, but that’s a measure of the gap between the bicoastal enclaves where the Left’s policies are formed and the hard realities of flyover country. Globalization and open borders sound great when you don’t have to grapple with the economic consequences of shipping tens of millions of manufacturing jobs overseas, on the one hand, and federal policies that flood the labor market with illegal immigrants to drive down wages, on the other. Those two policies, backed by both parties and surrounded by a smokescreen of empty rhetoric about new jobs that somehow never managed to show up, brought about the economic collapse of rural and small town America, driving a vast number of Americans into destitution and misery.
Clinton’s campaign did a really inspired job of rehashing every detail of the empty rhetoric just mentioned, and so gave people out here in flyover country no reason to expect anything but more of the same downward pressure on their incomes, their access to jobs, and the survival of their communities. Trump, by contrast, promised to scrap or renegotiate the trade agreements that played so large a role in encouraging offshoring of jobs, and also promised to put an end to the tacit Federal encouragement of mass illegal immigration that’s driven down wages. That was enough to get a good many voters whose economic survival was on the line to cast their votes for Trump.
4. Punishing the Democratic Party. This one is a bit of an outlier, because the people I know who cast votes for Trump for this reason mostly represented a different demographic from the norm out here: young, politically liberal, and incensed by the way that the Democratic National Committee rigged the nomination process to favor Clinton and shut out Bernie Sanders. They believed that if the campaign for the Democratic nomination had been conducted fairly, Sanders would have been the nominee, and they also believe that Sanders would have stomped Trump in the general election.  For what it’s worth, I think they’re right on both counts.
These voters pointed out to me, often with some heat, that the policies Hillary Clinton supported in her time as senator and secretary of state were all but indistinguishable from those of George W. Bush—you know, the policies Democrats denounced so forcefully a little more than eight years ago.  They argued that voting for Clinton in the general election when she’d been rammed down the throats of the Democratic rank and file by the party’s oligarchy would have signaled the final collapse of the party’s progressive wing into irrelevance. They were willing to accept four years of a Republican in the White House to make it brutally clear to the party hierarchy that the shenanigans that handed the nomination to Clinton were more than they were willing to tolerate.
Those were the reasons I heard people mention when they talked in my hearing about why they were voting for Donald Trump. They didn’t talk about the issues that the media considered important—the email server business, the on-again-off-again FBI investigation, and so on. Again, this isn’t a scientific survey, but I found it interesting that not one Trump voter I knew mentioned those.
What’s more, hatred toward women, people of color, sexual minorities, and the like weren’t among the reasons that people cited for voting for Trump, either. Do a fair number of the people I’m discussing hold attitudes that the Left considers racist, sexist, homophobic, or what have you? No doubt—but the mere fact that such attitudes exist does not prove that those attitudes, rather than the issues just listed, guided their votes.
When I’ve pointed this out to people on the leftward side of the political spectrum, the usual response has been to insist that, well, yes, maybe Trump did address the issues that matter to people in flyover country, but even so, it was utterly wrong of them to vote for a racist, sexist homophobe! We’ll set aside for the moment the question of how far these labels actually apply to Trump, and how much they’re the product of demonizing rhetoric on the part of his political enemies on both sides of the partisan divide. Even accepting the truth of these accusations, what the line of argument just cited claims is that people in the flyover states should have ignored the issues that affect their own lives, and should have voted instead for the issues that liberals think are important.
In some idyllic Utopian world, maybe.  In the real world, that’s not going to happen. People are not going to embrace the current agenda of the American Left if doing so means that they can expect their medical insurance to double in price every couple of years, their wages to continue lurching downward, their communities to sink further in a death spiral of economic collapse, and their kids to come home in body bags from yet another pointless war in the Middle East.
Thus there’s a straightforward answer to both of Ferrett Steinmetz’ baffled questions. Do the people who voted for Trump hate Steinmetz, his readers, or the various groups—women, people of color, sexual minorities—whose concerns are central to the politics of today’s American Left? In many cases, not at all, and in most others, not to any degree that matters politically. They simply don’t care that much about the concerns that the Left considers central—especially when those are weighed against the issues that directly affect their own lives.
As for what Ferrett Steinmetz’s side of the political landscape can offer the people who voted for Trump, that’s at least as simple to answer: listen to those voters, and they’ll tell you. To judge by what I’ve heard them say, they want a less monomaniacally interventionist foreign policy and an end to the endless spiral of wars of choice in the Middle East; they want health insurance that provides reasonable benefits at a price they can afford; they want an end to trade agreements that ship American jobs overseas, and changes to immigration policy that stop the systematic importation of illegal immigrants by big corporate interests to drive down wages and benefits; and they want a means of choosing candidates that actually reflects the will of the people.
The fascinating thing is, of course, that these are things the Democratic Party used to offer. It wasn’t that long ago, in fact, that the Democratic Party made exactly these issues—opposition to reckless military adventurism, government programs that improved the standard of living of working class Americans, and a politics of transparency and integrity—central not only to its platform but to the legislation its congresspeople fought to get passed and its presidents signed into law. Back when that was the case, by the way, the Democratic Party was the majority party in this country, not only in Congress but also in terms of state governorships and legislatures. As the party backed away from offering those things, it lost its majority position. While correlation doesn’t prove causation, I think a definite case can be made here.
More generally, if the Left wants to get the people who voted for Trump to vote for them instead, they’re going to have to address the issues that convinced those voters to cast their ballots the way they did. Oh, and by the way, listening to what the voters in question have to say, rather than loudly insisting that they can only be motivated by hatred, would also help quite a bit. That may be a lot to ask, but once the shouting stops, I hope it’s a possibility.

Reflections on a Democracy in Crisis

Wed, 2016-11-09 16:24
Well, it’s finally over, and I think it’s fair to say I called it. As I predicted back in January of this year, working class Americans—fed up with being treated by the Democratic Party as the one American minority that it’s okay to hate—delivered a stinging rebuke to the politics of business as usual. To the shock and chagrin of the entire US political establishment, and to the acute embarrassment of the pundits, pollsters, and pet intellectuals of the mainstream media, Donald Trump will be the forty-fifth president of the United States of America.
Like millions of other Americans, I took part in the pleasant civic ritual of the election. My local polling place is in an elementary school on the edge of the poor part of town—the rundown multiracial neighborhood I’ve mentioned here before, where Trump signs blossomed early and often—and I went to vote, as I usually do, in early afternoon, when the lunch rush was over and the torrent of people voting on the way home from work hadn’t yet gotten under way. Thus there was no line; I came in just as two elderly voters on the way out were comparing notes on local restaurants that give discounts to patrons who’ve got the “I Voted” sticker the polls here hand out when you’ve done your civic duty, and left maybe five minutes later as a bottle-blonde housewife was coming in to cast her vote.
Maryland had electronic voting for a while, but did the smart thing and went back to paper ballots this year, so I’m pretty sure my votes got counted the way I cast them. Afterwards I walked home—it was cloudy but warm, as nice a November day as you could ask for—and got back to work on my current writing project. It all made an interesting counterpoint to the nonstop shrieking that’s been emanating for months now from the media and, let’s be fair, from politicians, pundits, and a great many ordinary people all over the world as well.
I don’t see a lot of point just now in talking about what’s going to happen once the dust and the tumult settle, the privileged finish throwing their predictable tantrums, and the Trump administration settles into power in Washington DC. There will be plenty of time for that later. What I’d like to do here and now is talk about a couple of things that were highlighted by this election, and cast a useful light on the current state of US politics and the challenges that have to be faced as a troubled, beleaguered, and bitterly divided nation staggers on toward its next round of crises.
One of those things showed up with rare clarity in the way that many readers responded to my posts on the election. All along, from my first post on the improbable rise of Donald Trump right up to last week’s pre-election wrapup, I tried to keep the discussion focused on issues: what policies each candidate could be expected to support once the next administration took office.
To my mind, at least, that’s the thing that matters most about an election. Four or eight years from now, after all, the personality of the outgoing president is going to matter less than an average fart in a Category 5 hurricane. The consequences of policy decisions made by the presidency over the next four years, on the other hand, will have implications that extend for years into the future. Should the United States pursue a policy of confrontation with Russia in the Middle East, or should it work out a modus vivendi with the Russians to pursue the common goal of suppressing jihadi terrorism? Should federal policy continue to encourage the offshoring of jobs and the importation of workers to drive down wages, or should it be changed to discourage these things? These are important issues that will affect millions of lives in the United States and elsewhere, and there are other issues of similar importance on which the two candidates had significantly different positions.
Quite a few of the people who responded to those posts, though, displayed no interest in such mundane if important matters. They only wanted to talk about their opinions about the personalities of the candidates: to insist that Clinton was a corrupt stooge, say, or that Trump was a hatemongering fascist. (It says something about American politics these days that rather more often than not, the people who did this were too busy slandering the character of the candidate they hated to say much about the one they planned to vote for.) Outside the relatively sheltered waters of The Archdruid Report, in turn, that tendency went into overdrive; for much of the campaign, the only way you could tell the difference between the newspapers of record and the National Enquirer was by noting which candidates they supported, and allegedly serious websites were by and large even worse.
This wasn’t the fault of the candidates, as it happens. Whatever else might be said for or against Hillary Clinton, she tried to avoid a campaign based on content-free sound bites like the one Barack Obama waged against her so cynically and successfully in 2008; the pages of her campaign website displayed a laundry list of things she said she wanted to do if she won the election. While many voters will have had their disagreements with her proposals, she actually tried to talk about the issues, and that’s refreshingly responsible. Trump, for that matter, devoted speech after speech to a range of highly specific policy proposals.
Yet nearly all the talk about both candidates, in and out of the media, focused not on their policy proposals but on their personalities—or rather on nastily distorted parodies of their personalities that defined them, more or less explicitly, as evil incarnate. The Church of Satan, I’m told, has stated categorically that the Devil was not running in this year’s US presidential election, but you’d have a hard time telling that from the rhetoric on both sides. The media certainly worked overtime to foster the fixation on personalities, but I suspect this is one of those cases where the media was simply reflecting something that was already present in the collective consciousness of our society.
All through the campaign I noticed, rather to my surprise, that it wasn’t just those who have nothing in their heads that a television or a website didn’t put there who ignored the issues and fixated on personalities. I long ago lost track of the number of usually thoughtful people I know who, over the course of the last year, ended up buying into every negative claim about whichever candidate they hated, without even going through the motions of checking the facts. I also lost track months ago of the number of usually thoughtful people I know whose automatic response to an attempt to talk about the issues at stake in this election was to give me a blank look and go right back to ranting about the evilly evil evilness of whichever candidate they hated.
It seems to me that something has been forgotten here.  We didn’t have an election to choose a plaster saint, a new character on My Little Pony, or Miss (or Mister) Goody Two-Shoes 2016. We had an election to choose the official who will head the executive branch of our federal government for the next four years. I’ve read essays by people who know Hillary Clinton and Donald Trump personally, and claim that both of them are actually very pleasant people. You know what? I literally couldn’t care less. I would be just as likely to vote for a surly misanthrope who loathes children, kicks puppies, and has deviant sexual cravings involving household appliances and mayonnaise, if that person supports the policies I want on the issues that matter to me. It really is that simple.
I’d like to suggest, furthermore, that the fixation on personalities—or, again, malicious parodies of personalities—has played a huge role in making politics in the United States so savage, so divisive, and so intractably deadlocked on so many of the things that matter just now. The issues I mentioned a few paragraphs back—US foreign policy toward a resurgent Russia, on the one hand, and US economic policy regarding the offshoring of jobs and the importation of foreign workers, on the other—are not only important, they’re issues about which reasonable disagreement is possible. What’s more, they’re issues on which negotiation, compromise, and the working out of a mutually satisfactory modus vivendi between competing interests are also possible, at least in theory.
In practice? Not while each side is insisting at the top of its lungs that the other side is led by a monster of depravity and supported only by people who hate everything good in the world. I’d like to suggest that it’s exactly this replacement of reasoned politics with a pretty close equivalent of the Two Minutes Hate from Orwell’s 1984 that’s among the most important forces keeping this country from solving any of its problems or doing anything to brace itself for the looming crises ahead.
Thus I’d like to encourage all the citizens of my country to turn off the television and the internet for a few moments, take a few deep breaths, and think about the tone of the recent election, and to what extent they might have participated in the bipartisan culture of hatred that filled so much of it. It might be worth pointing out that you’re not likely to convince other people to vote the way you think they ought to vote if you’re simultaneously berating them for being evilly evil with a double helping of evil sauce on the side, or sneering at them for being too ignorant to recognize that voting for your candidate really is in their best interests, or any of the other counterproductive habits that have taken the place of reasonable political discourse in today’s America.
The second point I noticed in the course of the election campaign connects to the one just discussed. That’s the hard fact that the United States at this point in its history may still be a single republic, but it’s not a single nation—and it could be argued on reasonably solid grounds that it never has been. Facile distinctions between “red” and “blue” states barely touch the complexity, much less the depth, of the divisions that separate the great urban centers from the rest of the country, and the different regions from one another.
I think it was Pauline Kael who, in the wake of Richard Nixon’s landslide victory in 1972, commented that she didn’t understand how Nixon could have won—after all, nobody she knew voted for him! The same sentiment is currently being expressed, in tones ranging from bewilderment to baffled rage, from all corners of the affluent left and their hangers-on among the mainstream media’s well-paid punditry. The 20% or so of Americans who have benefited from the jobless recovery of the last eight years, and the broader neoliberal economic agenda of the last four decades, very rarely leave the echo-chamber environments where they spend their days to find out what the rest of the country is thinking. If they’d done so a bit more often in the last year, they would have watched Trump signs sprouting all over the stark landscapes of poverty that have spread so widely in the America they never see.
But of course the divisions run deeper than this, and are considerably more ramified. Compare the political, economic, and social policies that have the approval of people in Massachusetts, say, and those that have the approval of people in Oklahoma, and you’ll find next to no overlap. This isn’t because the people of one state or the other are (insert your insult of choice here); it’s because they belong to different cultures, with incommensurable values, attitudes, and interests. Attempts, well-meaning or otherwise, to impose the mores of either state on the other are guaranteed to result only in hostility and incomprehension—and such attempts have been all too common of late.
Ours is a very diverse country. That may sound like a truism, but it has implications that aren’t usually taken into account. A country with a great deal of cultural uniformity, with a broad consensus of shared values and attitudes, can afford to legislate that consensus on a national basis. A country that doesn’t have that kind of uniformity, that lacks any consensus concerning values and attitudes, very quickly gets into serious trouble if it tries that sort of legislation. If the divergence is serious enough, the only way that reliably allows different nations to function under a single government is a federal system—that is, a system that assigns the national government only those powers and duties that have to be handled on a nationwide basis, while leaving most other questions for local governments and individuals to settle for themselves.
My more historically literate readers will be aware that the United States used to have a federal system—that is, after all, why we still speak of “the federal government.” Under the Constitution as originally written and interpreted, the people of each state had the right to run their own affairs pretty much as they saw fit, within certain very broad limits.  The federal government was assigned certain narrowly defined powers, and all other powers were, in the language of the Tenth Amendment, reserved to the states and the people.
Over the first century and a half of our national history, certain other powers were assigned to the federal government by constitutional amendment, sometimes with good results—the Fourteenth Amendment’s guarantee of equal protection of the laws to all citizens, for example, and the Fifteenth and Nineteenth Amendments’ extension of voting rights to black people and women respectively—and sometimes not—the Eighteenth Amendment’s prohibition of alcohol comes to mind here. The basic federal structure remained intact. Not until the aftermath of the Great Depression and the Second World War did the metastatic growth of the federal government begin in earnest, and so in due time did the various attempts to impose this or that set of moral values on the entire country by force of law.
Those attempts have not worked, and they’re not going to work. I’m not sure how many people have noticed, though, that the election of Donald Trump was not merely a rebuke to the liberal left; it was also a defeat for the religious right. It’s worth recalling that the evangelical wing of the Republican Party had its own favorites in the race for the GOP nomination, and Trump was emphatically not one of them. It has not been a propitious autumn for the movements of left and right whose stock in trade is trying to force their own notion of virtue down the throats of the American people—and maybe, just maybe, that points to the way ahead.
It’s time to consider, I suggest, a renewal of the traditions of American federalism: a systematic devolution of power from the overinflated federal government to the states, and from the states to the people. It’s time for people in Massachusetts to accept that they’re never going to be able to force people in Oklahoma to conform to their notions of moral goodness, and for the people of Oklahoma to accept the same thing about the people of Massachusetts; furthermore, it’s time for government at all levels to give up trying to impose cultural uniformity on the lively diversity of our republic’s many nations, and settle for their proper role of ensuring equal protection under the laws, and those other benefits that governments, by their nature, are best suited to provide for their citizens.
We need a new social compact under which all Americans agree to back away from the politics of personal vilification that dominated all sides in the election just over, let go of the supposed right to force everyone in the country to submit to any one set of social and moral views, and approach the issues that divide us with an eye toward compromise, negotiation, and mutual respect. Most of the problems that face this country could be solved, or at least significantly ameliorated, if our efforts were guided by such a compact—and if that can be done, I suspect that a great many more of us will have the opportunity to experience one of the greatest benefits a political system can bestow: actual, honest-to-goodness liberty. We’ll talk more about that in future posts.

************************
In unrelated and rather less serious news, I’m pleased to announce that the second volume of my Lovecraftian epic fantasy series The Weird of Hali is now available for preorder. Once again, H.P. Lovecraft gets stood on his head, and the tentacled horrors and sinister cultists get the protagonists’ roles; this time the setting is the crumbling seaside town of Kingsport, where Miskatonic University student Jenny Parrish is summoned to attend a certain very ancient festival...
The Weird of Hali: Kingsport, like the first book in the series, The Weird of Hali: Innsmouth, is being released first in two signed and numbered editions, one merely gorgeous, the other leatherbound, traycased, and utterly over the top for connoisseurs of fine printing and binding. There will be a trade paperback edition in due time, but it’ll be a while. Those of my readers who find eldritch nightmares from the crepuscular beginnings of time itself better company than the current crop of American politicians may find it worth a read.