AODA Blog

How Should We Then Live?

Wed, 2017-03-08 13:28
The philosophy of Arthur Schopenhauer, which we’ve been discussing for several weeks now, isn’t usually approached from the angle by which I’ve been approaching it—that is, as a way to talk about the gap between what we think we know about the world and what we actually know about it. The aspect of his work that usually gets all the publicity is the ethical dimension.
That’s understandable but it’s also unfortunate, because the ethical dimension of Schopenhauer’s philosophy is far and away the weakest part of it. It’s not going too far to say that once he started talking about ethics, Schopenhauer slipped on a banana peel dropped in his path by his own presuppositions, and fell flat on his nose. The banana peel in question is all the more embarrassing in that he spent much of the first half of The World as Will and Representation showing that you can’t make a certain kind of statement without spouting nonsense, and then turned around and based much of the second half on exactly that kind of statement.
Let’s review the basic elements of Schopenhauer’s thinking. First, the only things we can experience are our own representations. There’s probably a real world out there—certainly that hypothesis explains the consistency of our representations with one another, and with those reported by (representations of) other people, with less handwaving than any other theory—but all the data we get from the world out there amounts to a thin trickle of sensory data, which we then assemble into representations of things using a set of prefab templates provided partly by our species’ evolutionary history and partly by habits we picked up in early childhood. How much those representations have to do with what’s actually out there is a really good question that’s probably insoluble in principle.
Second, if we pay attention to our experience, we encounter one thing that isn’t a representation—the will. You don’t experience the will, you encounter its effects, but everything you experience is given its framing and context by the will. Is it “your” will?  The thing you call “yourself” is a representation like any other; explore it using any of at least three toolkits—sustained introspection, logical analysis, and scientific experimentation—and you’ll find that what’s underneath the representation of a single self that chooses and wills is a bundle of blind forces, divergent and usually poorly coordinated, that get in each other’s way, interfere with each other’s actions, and produce the jumbled and self-defeating mess that by and large passes for ordinary human behavior.
Third, the point just made is difficult for us to accept because our culture prefers to think of the universe as consisting of mind and matter—more precisely, active, superior, personal mind and passive, inferior, impersonal matter. Schopenhauer pokes at both of these concepts and finds them wanting. What we call mind, from his perspective, is simply one of the more complex and less robust grades of will—it’s what happens when the will gets sufficiently tangled and bashed about that it picks up the habit of representing a world to itself, so that it can use that as a map to avoid the more obvious sources of pain. Matter is a phantom—an arbitrarily defined “stuff” we use to pretend that our representations really do exist out there in reality.
Fourth, since the only things we encounter when we examine the world are representations, on the one hand, and will in its various modes on the other, we really don’t have any justification for claiming that anything else actually exists. Maybe there are all kinds of other things out there in the cosmos, but if all we actually encounter are will and representations, and a description of the cosmos as representation and will makes sense of everything we meet with in the course of life, why pile up unnecessary hypotheses just because our cultural habits of thought beg for them?
Thus the world Schopenhauer presents to us is the world we encounter—provided that we do in fact pay attention to what we encounter, rather than insisting that our representations are realities and our culturally engrained habits of thought are more real than the things they’re supposed to explain. The difficulty, of course, is that imagining a universe of mind and matter allows us to pretend that our representations are objective realities and that thoughts about things are more real than the things themselves—and both of these dodges are essential to the claim, hammered into the cultural bedrock of contemporary industrial society, that we and we alone know the pure unvarnished truth about things.
From Schopenhauer’s perspective, that’s exactly what none of us can know. We can at best figure out that when this representation appears, that representation will usually follow, and work out formal models—we call these scientific theories—that allow us to predict, more or less, the sequence of representations that appear in certain contexts. We can’t even do that much reliably when things get complex enough; at that point we have to ditch the formal models and just go with narrative patterns, the way I’ve tried to do in discussing the ways that civilizations decline and fall.
Notice that this implies that the more general a statement is, the further removed it is from that thin trickle of sensory data on which the whole world of representations is based, and the more strictly subjective it is. That means, in turn, that any value judgment applied to existence as a whole must be utterly subjective, an expression of the point of view of the person making that judgment, rather than any kind of objective statement about existence itself.
There’s the banana peel on which Schopenhauer slipped, because having set up the vision of existence I’ve just described, he turned around and insisted that existence is objectively awful and the only valid response to it for anyone, anywhere, is to learn to nullify the will to live and, in due time, cease to be.
Is that one possible subjective response to the world in which we find ourselves? Of course, and some people seem to find it satisfying. Mind you, the number of them that actually go out of their way to cease existing is rather noticeably smaller than the number who find such notions pleasing in the abstract. Schopenhauer himself is a helpful example. Having insisted in print that all pleasure is simply a prelude to misery and an ascetic lifestyle ending in extinction is the only meaningful way to live, he proceeded to live to a ripe old age, indulging his taste for fine dining, music, theater, and the more than occasional harlot. I’m not sure how you’d translate “do what I say, not what I do” into classical Greek, but it would have made an appropriate epigraph for The World as Will and Representation.
Now of course a failure to walk one’s talk is far from rare among intellectuals, especially those of ascetic leanings, and the contrast between Schopenhauer’s ideals and his actions doesn’t disprove the value of the more strictly epistemological part of his work. It does, however, point up an obvious contradiction in his thinking. Accept the basic assumptions of his philosophy, after all, and it follows that the value judgments we apply to the representations we encounter are just as much a product of our own minds as the representations themselves; they’re not objective qualities of the things we judge, even though we’re used to treating them that way.
We treat them that way, in turn, because for the last two millennia or so it’s been standard for prophetic religious traditions to treat them that way. By “prophetic religious traditions” I mean those that were founded by individual persons—Gautama the Buddha, Jesus of Nazareth, Muhammad, and so on—or were reshaped in the image of such faiths, the way Judaism was reshaped in the image of the Zoroastrian religion after the Babylonian captivity. (As Raphael Patai pointed out in quite some detail a while back in his book The Hebrew Goddess, Judaism wasn’t monotheistic until the Jews picked up that habit from their Zoroastrian Persian liberators; quite a few other traits of post-Exilic Judaism, such as extensive dietary taboos, also have straightforward Zoroastrian origins.)
A range of contrasts separate the prophetic religions from the older polytheist folk religions that they supplanted over most of the world, but one of the crucial points of difference is in value judgments concerning human behavior—or, as we tend to call them these days, moral judgments. The gods and goddesses of folk religions are by and large no more moral, or interested in morality, than the forces of nature they command and represent; some expect human beings to maintain certain specific customs—Zeus, for example, was held by the ancient Greeks to punish those who violated traditional rules of hospitality—but that was about it. The deities central to most prophetic religions, by contrast, are all about moral judgment.
The scale of the shift can be measured easily enough from the words “morals” and “ethics” themselves. It’s become popular of late to try to make each of these mean something different, but the only actual difference between them is that “morals” comes from Latin and “ethics” comes from Greek. Back in classical times, though, they had a shared meaning that isn’t the one given to them today. The Latin word moralia derives from mores, the Greek word ethike derives from ethoi, and mores and ethoi both mean “customs” or “habits,” without the language of judgment associated with the modern words.
To grasp something of the difference, it’s enough to pick up a copy of Aristotle’s Nicomachean Ethics, by common consent the most important work of what we’d now call moral philosophy that came out of the ancient world. It’s not ethics or morals in any modern sense of the word; it’s a manual on how to achieve personal greatness, and it manages to discuss most of the territory now covered by ethics without ever stooping to the kind of moral denunciation that pervades ethical thought in our time.
Exactly why religion and morality got so thoroughly conflated in the prophetic religions is an interesting historical question, and one that deserves more space than a fraction of one blog post can provide. The point I want to address here is the very difficult fit between the sharp limits on human knowledge and the sweeping presuppositions of moral knowledge that modern societies have inherited from the age of prophetic religions. If we don’t actually know anything but our representations, and can draw only tentative conclusions from them, do we really know enough to make sweeping generalizations about good and evil?
The prophetic religions themselves actually have a workable response to that challenge. Most of them freely admit that human beings don’t have the capacity to judge rightly between good and evil without help, and go on to argue that this is why everyone needs to follow the rules set down in scripture as interpreted by the religious specialists of their creed. Grant the claim that their scriptures were actually handed down from a superhumanly wise source, and it logically follows that obeying the moral rules included in the scriptures is a reasonable action. It’s the basic claim, of course, that’s generally the sticking point; since every prophetic religion has roughly the same evidence backing its claim to divine inspiration as every other, and their scriptures all contradict one another over important moral issues, it’s not exactly easy to draw straightforward conclusions from them.
Their predicament is a good deal less complex, though, than that of people who’ve abandoned the prophetic religions of their immediate ancestors and still want to make sweeping pronouncements about moral goodness and evil. It’s here that the sly, wry, edgy voice of Friedrich Nietzsche becomes an unavoidable presence, because the heart of his philosophy was an exploration of what morality means once a society can no longer believe that its tribal taboos were handed down intact, and will be enforced via thunderbolt or eternal damnation, by the creator of the universe.
Nietzsche’s philosophical writings are easy to misunderstand, and he very likely meant that to be the case. Where Schopenhauer proceeded step by step through a single idea in all its ramifications, showing that the insight at the core of his vision makes sense of the entire world of our experience, Nietzsche wrote in brief essays and aphorisms, detached from one another, dancing from theme to theme. He was less interested in convincing people than in making them think; each of the short passages that make up his major philosophical works is meant to be read, pondered, and digested on its own. All in all, his books make excellent bathroom reading—and I suspect that Nietzsche himself would have been amused by that approach to his writings.
The gravitational center around which Nietzsche’s various thought experiments orbited, though, was a challenge to the conventional habits of moral discourse in his time and ours. For those who believe in a single, omniscient divine lawgiver, it makes perfect sense to talk about morals in the way that most people in his time and ours do in fact talk about them—that is to say, as though there’s some set of moral rules that are clearly set out and incontrovertibly correct, and the task of the moral philosopher is to badger and bully his readers into doing what they know they ought to do anyway.
From any other perspective, on the other hand, that approach to talking about morals is frankly bizarre. It’s not just that every set of moral rules that claims to have been handed down by the creator of the universe contradicts every other such set, though of course this is true. It’s that every such set of rules has proven unsatisfactory when applied to human beings. The vast amount of unnecessary misery that’s resulted from historical Christianity’s stark terror of human sexuality is a case in point, though it’s far from the only example, and far from the worst.
Yet, of course, most of us do talk about moral judgments as though we know what we’re talking about, and that’s where Nietzsche comes in. Here’s his inimitable voice, from the preface to Beyond Good and Evil, launching a discussion of the point at issue:
“Supposing truth to be a woman—what? Is the suspicion not well founded that all philosophers, when they have been dogmatists, have had little understanding of women? That the gruesome earnestness, the clumsy importunity with which they have hitherto been in the habit of approaching truth have been inept and improper means for winning a wench? Certainly she has not let herself be won—and today every kind of dogmatism stands sad and discouraged.”
Nietzsche elsewhere characterized moral philosophy as the use of bad logic to prop up inherited prejudices. The gibe’s a good one, and generally far more accurate than not, but again it’s easy to misunderstand. Nietzsche was not saying that morality is a waste of time and we all ought to run out and do whatever happens to come into our heads, from whatever source. He was saying that we don’t yet know the first thing about morality, because we’ve allowed bad logic and inherited prejudices to get in the way of asking the necessary questions—because we haven’t realized that we don’t yet have any clear idea of how to live.
To a very great extent, if I may insert a personal reflection here, this realization has been at the heart of this blog’s project since its beginning. The peak oil crisis that called The Archdruid Report into being came about because human beings have as yet no clear idea how to get along with the biosphere that supports all our lives; the broader theme that became the core of my essays here over the years, the decline and fall of industrial civilization, shows with painful clarity that human beings have as yet no clear idea how to deal with the normal and healthy cycles of historical change; the impending fall of the United States’ global empire demonstrates the same point on a more immediate and, to my American readers, more personal scale. Chase down any of the varied ramblings this blog has engaged in over the years, and you’ll find that most if not all of them have the same recognition at their heart: we don’t yet know how to live, and maybe we should get to work figuring that out.
***I’d like to wind up this week’s post with three announcements. First of all, I’m delighted to report that the latest issue of the deindustrial-SF quarterly Into the Ruins is now available. Those of you who’ve read previous issues know that you’re in for a treat; those who haven’t—well, what are you waiting for? Those of my readers who bought a year’s subscription when Into the Ruins first launched last year should also keep in mind that it’s time to re-up, and help support one of the few venues for science fiction about the kind of futures we’re actually likely to get once the fantasy of perpetual progress drops out from under us and we have to start coping with the appalling mess that we’ve made of things.
***Second, I’m equally delighted to announce that a book of mine that’s been out of print for some years is available again. The Academy of the Sword is the most elaborate manual of sword combat ever written; it was penned in the early seventeenth century by Gerard Thibault, one of the greatest European masters of the way of the sword, and published in 1630, and it bases its wickedly effective fencing techniques on Renaissance Pythagorean sacred geometry. I spent almost a decade translating it out of early modern French and finally got it into print in 2006, but the original publisher promptly sank under a flurry of problems that were partly financial and partly ethical. Now the publisher of my books Not the Future We Ordered and Twilight’s Last Gleaming has brought it back into print in an elegant new hardback edition. New editions of my first two published books, Paths of Wisdom and Circles of Power, are under preparation with the same publisher as I write this, so it’s shaping up to be a pleasant spring for me.
***Finally, this will be the last post of The Archdruid Report for a while. I have a very full schedule in the weeks immediately ahead, and several significant changes afoot in my life, and won’t be able to keep up the weekly pace of blog posts while those are happening. I’m also busily sorting through alternative platforms for future blogging and social media—while I’m grateful to Blogger for providing a free platform for my blogging efforts over the past eleven years, each recent upgrade has made it more awkward to use, and it’s probably time to head elsewhere. When I resume blogging, it will thus likely be on a different platform, and quite possibly with a different name and theme. I’ll post something here and on the other blog once things get settled. In the meantime, have a great spring, and keep asking the hard questions even when the talking heads insist they have all the answers.

The Magic Lantern Show

Wed, 2017-03-01 13:12
The philosophy of Arthur Schopenhauer, which we’ve been discussing for the last three weeks, was enormously influential in European intellectual circles from the last quarter of the nineteenth century straight through to the Second World War.  That doesn’t mean that it influenced philosophers; by and large, in fact, the philosophers ignored Schopenhauer completely. His impact landed elsewhere: among composers and dramatists, authors and historians, poets, pop-spirituality teachers—and psychologists.
We could pursue any one of those and end up in the place I want to reach.  The psychologists offer the straightest route there, however, with useful vistas to either side, so that’s the route we’re going to take this week. To the psychologists, two closely linked things mattered about Schopenhauer. The first was that his analysis showed that the thing each of us calls “myself” is a representation rather than a reality, a convenient way of thinking about the loose tangle of competing drives and reactions we’re taught to misinterpret as a single “me” that makes things happen. The second was that his analysis also showed that what lies at the heart of that tangle is not reason, or thinking, or even consciousness, but blind will.
The reason that this was important to them, in turn, was that a rising tide of psychological research in the second half of the nineteenth century made it impossible to take seriously what I’ve called the folk metaphysics of western civilization:  the notion that each of us is a thinking mind perched inside the skull, manipulating the body as though it were a machine, and now and then being jabbed and jolted by the machinery. From Descartes on, as we’ve seen, that way of thinking about the self had come to pervade the western world. The only problem was that it never really worked.
It wasn’t just that it did a very poor job of explaining the way human beings actually relate to themselves, each other, and the surrounding world, though this was certainly true. It also fostered attitudes and behaviors that, when combined with certain attitudes about sexuality and the body, yielded a bumper crop of mental and physical illnesses. Among these was a class of illnesses that seemed to have no physical cause, but caused immense human suffering: the hysterical neuroses.  You don’t see these particular illnesses much any more, and there’s a very good reason for that.
Back in the second half of the nineteenth century, though, a huge number of people, especially but not only in the English-speaking world, were afflicted with apparently neurological illnesses such as paralysis, when their nerves demonstrably had nothing wrong with them. One very common example was “glove anesthesia”: one hand, normally the right hand, would become numb and immobile. From a physical perspective, that makes no sense at all; the nerves that bring feeling and movement to the hand run down the whole arm in narrow strips, so that if there were actually nerve damage, you’d get paralysis in one such strip all the way along the arm. There was no physical cause that could produce glove anesthesia, and yet it was relatively common in Europe and America in those days.
That’s where Sigmund Freud entered the picture.
It’s become popular in recent years to castigate Freud for his many failings, and since some of those failings were pretty significant, this hasn’t been difficult to do. More broadly, his fate is that of all thinkers whose ideas become too widespread: most people forget that somebody had to come up with the ideas in the first place. Before Freud’s time, a phrase like “the conscious self” sounded redundant—it had occurred to very, very few people that there might be any other kind—and the idea that desires that were rejected and denied by the conscious self might seep through the crawlspaces of the psyche and exert an unseen gravitational force on thought and behavior would have been dismissed as disgusting and impossible, if anybody had even thought of it in the first place.
From the pre-Freud perspective, the mind was active and the body was passive; the mind was conscious and the body was incapable of consciousness; the mind was rational and the body was incapable of reasoning; the mind was masculine and the body was feminine; the mind was luminous and pure and the body was dark and filthy.  These two were the only parts of the self; nothing else need apply, and physicians, psychologists, and philosophers alike went out of their way to raise high barriers between the two. This vision of the self, in turn, was what Freud destroyed.
We don’t need to get into the details of his model of the self or his theory of neurosis; most of those have long since been challenged by later research. What mattered, ironically enough, wasn’t Freud’s theories or his clinical skills, but his immense impact on popular culture. It wasn’t all that important, for example, what evidence he presented that glove anesthesia is what happens when someone feels overwhelming guilt about masturbating, and unconsciously resolves that guilt by losing the ability to move or feel the hand habitually used for that pastime.
What mattered was that once a certain amount of knowledge of Freud’s theories spread through popular culture, anybody who had glove anesthesia could be quite sure that every educated person who found out about it would invariably think, “Guess who’s been masturbating!” Since one central point of glove anesthesia was to make a symbolic display of obedience to social convention—“See, I didn’t masturbate, I can’t even use that hand!”—the public discussion of the sexual nature of that particular neurosis made the neurosis itself too much of an embarrassment to put on display.
The frequency of glove anesthesia, and a great many other distinctive neuroses of sexual origin, thus dropped like a rock once Freud’s ideas became a matter of general knowledge. Freud therefore deserves the honor of having extirpated an entire class of diseases from the face of the earth. That the theories that accomplished this feat were flawed and one-sided simply adds to his achievement.
Like so many pioneers in the history of ideas, you see, Freud made the mistake of overgeneralizing from success, and ended up convincing himself and a great many of his students that sex was the only unstated motive that mattered. There, of course, he was quite wrong, and those of his students who were willing to challenge the rapid fossilization of Freudian orthodoxy quickly demonstrated this. Alfred Adler, for example, showed that unacknowledged cravings for power, ranging along the whole spectrum from the lust for domination to the longing for freedom and autonomy, can exert just as forceful a gravitational attraction on thought and behavior as sexuality.
Carl Jung then upped the ante considerably by showing that there is also an unrecognized motive, apparently hardwired in place, that pushes the tangled mess of disparate drives toward states of increased integration. In a few moments we’ll be discussing Jung in rather more detail, as some of his ideas mesh very well indeed with the Schopenhauerian vision we’re pursuing in this sequence of posts. What’s relevant at this point in the discussion is that all the depth psychologists—Freud and the Freudians, Adler and the Adlerians, Jung and the Jungians, not to mention their less famous equivalents—unearthed a great deal of evidence showing that the conscious thinking self, the supposed lord and master of the body, was froth on the surface of a boiling cauldron, much of whose contents was unmentionable in polite company.
Phenomena such as glove anesthesia played a significant role in that unearthing. When someone wracked by guilt about masturbating suddenly loses all feeling and motor control in one hand, when a psychosomatic illness crops up on cue to stop you from doing something you’ve decided you ought to do but really, really, don’t want to do, or when a Freudian slip reveals to all present that you secretly despise the person whom, for practical reasons, you’re trying to flatter—just who is making that decision? Who’s in charge? It’s certainly not the conscious thinking self, who as often as not is completely in the dark about the whole thing and is embarrassed or appalled by the consequences.
The quest for that “who,” in turn, led depth psychologists down a great many twisting byways, but the most useful of them for our present purposes was the one taken by Carl Jung.
Like Freud, Jung gets castigated a lot these days for his failings, and in particular it’s very common for critics to denounce him as an occultist. As it happens, this latter charge is very nearly accurate.  It was little more than an accident of biography that landed him in the medical profession and sent him chasing after the secrets of the psyche using scientific methods; he could have as easily become a professional occultist, part of the thriving early twentieth century central European occult scene with which he had so many close connections throughout his life. The fact remains that he did his level best to pursue his researches in a scientific manner; his first major contribution to psychology was a timed word-association test that offered replicable, quantitative proof of Freud’s theory of repression, and his later theories—however wild they appeared—had a solid base in biology in general, and in particular in ethology, the study of animal behavior.
Ethologists had discovered well before Jung’s time that instincts in the more complex animals seem to work by way of hardwired images in the nervous system. When goslings hatch, for example, they immediately look for the nearest large moving object, which becomes Mom. Ethologist Konrad Lorenz became famous for deliberately triggering that reaction, and being instantly adopted by a small flock of goslings, who followed him dutifully around until they were grown. (He returned the favor by feeding them and teaching them to swim.) What Jung proposed, on the basis of many years of research, is that human beings also have such hardwired images, and a great deal of human behavior can be understood best by watching those images get triggered by outside stimuli.
Consider what happens when a human being falls in love. Those who have had that experience know that there’s nothing rational about it. Something above or below or outside the thinking mind gets triggered and fastens onto another person, who suddenly sprouts an alluring halo visible only to the person in love; the thinking mind gets swept away, shoved aside, or dragged along sputtering and complaining the whole way; the whole world gets repainted in rosy tints—and then, as often as not, the nonrational factor shuts off, and the former lover is left wondering what on Earth he or she was thinking—which is of course exactly the wrong question, since thinking had nothing to do with it.
This, Jung proposed, is the exact equivalent of the goslings following Konrad Lorenz down to the lake to learn how to swim. Most human beings have a similar set of reactions hardwired into their nervous systems, put there over countless generations of evolutionary time, which has evolved for the purpose of establishing the sexual pair bonds that play so important a role in human life. Exactly what triggers those reactions varies significantly from person to person, for reasons that (like most aspects of human psychology) are partly genetic, partly epigenetic, partly a matter of environment and early experience, and partly unknown. Jung called the hardwired image at the center of that reaction an archetype, and showed that it surfaces in predictable ways in dreams, fantasies, and other contexts where the deeper, nonrational levels come within reach of consciousness.
The pair bonding instinct isn’t the only one that has its distinctive archetype. There are several others. For example, there’s a mother-image and a father-image, which are usually (but not always) triggered by the people who raise an infant, and may be triggered again at various points in later life by other people. Another very powerful archetype is the image of the enemy, which Jung called the Shadow. The Shadow is everything you hate, which means in effect that it’s everything you hate about yourself—but inevitably, until a great deal of self-knowledge has been earned the hard way, that’s not apparent at all. Just as the Anima or Animus, the archetypal image of the lover, is inevitably projected onto other human beings, so is the Shadow, very often with disastrous results.
In evolutionary terms, the Shadow fills a necessary role. Confronted with a hostile enemy, human or animal, the human or not-quite-human individual who can access the ferocious irrational energies of rage and hatred is rather more likely to come through alive and victorious than the one who can only draw on the very limited strengths of the conscious thinking self. Outside such contexts, though, the Shadow is a massive and recurring problem in human affairs, because it constantly encourages us to attribute all of our own most humiliating and unwanted characteristics to the people we like least, and to blame them for the things we project onto them.
Bigotries of every kind, including the venomous class bigotries I discussed in an earlier post, show the presence of the Shadow.  We project hateful qualities onto every member of a group of people because that makes it easier for us to ignore those same qualities in ourselves. Notice that the Shadow doesn’t define its own content; it’s a dumpster that can be filled with anything that cultural pressures or personal experiences lead us to despise.
Another archetype, though, deserves our attention here, and it’s the one that the Shadow helpfully clears of unwanted content. That’s the ego, the archetype that each of us normally projects upon ourselves. In place of the loose tangle of drives and reactions each of us actually are, a complex interplay of blind pressures striving with one another and with a universe of pressures from without, the archetype of the ego portrays us to ourselves as single, unified, active, enduring, conscious beings. Like the Shadow, the ego-archetype doesn’t define its own content, which is why different societies around the world and throughout time have defined the individual in different ways.
In the industrial cultures of the modern western world, though, the ego-archetype typically gets filled with a familiar set of contents, the ones we discussed in last week’s post: the mind, the conscious thinking self, as distinct from the body, comprising every other aspect of human experience and action. That’s the disguise the loose tangle of complex and conflicting will takes on in us, and it meets us at first glance whenever we turn our attention to ourselves, just as inevitably as the rose-tinted glory of giddy infatuation meets the infatuated lover who glances at his or her beloved, or the snarling, hateful, inhuman grimace of the Shadow meets those who encounter one of the people onto whom they have projected their own unacceptable qualities.
All this, finally, circles back to points I made in the first post in this sequence. The process of projection we’ve just been observing is the same, in essence, as the one that creates all the other representations that form the world we experience. You look at a coffee cup, again, and you think you see a solid, three-dimensional material object, because you no longer notice the complicated process by which you assemble fragmentary glimpses of unrelated sensory input into the representation we call a coffee cup. In exactly the same way, but to an even greater extent, you don’t notice the processes by which the loose tangle of conflicting wills each of us calls “myself” gets overlaid with the image of the conscious thinking self, which our cultures provide as raw material for the ego-archetype to feed on.
Nor, of course, do you notice the acts of awareness that project warm and alluring emotions onto the person you love, or hateful qualities onto the person you hate. It’s an essential part of the working of the mind that, under normal circumstances, these wholly subjective qualities should be experienced as objective realities. If the lover doesn’t project that roseate halo onto the beloved, if the bigot doesn’t project all those hateful qualities onto whatever class of people has been selected for their object, the archetype isn’t doing its job properly, and it will fail to have its effects—which, again, exist because they’ve proven to be more successful than not over the course of evolutionary time.
Back when Freud was still in medical school, one common entertainment among the well-to-do classes of Victorian Europe was the magic lantern show. A magic lantern is basically an early slide projector; they were used in some of the same ways that PowerPoint presentations are used today, though in the absence of moving visual media, they also filled many of the same niches as movies and television do today. (I’m old enough to remember when slide shows of photos from distant countries were still a tolerably common entertainment, for that matter.) The most lurid and popular of magic lantern shows, though, used the technology to produce spooky images in a darkened room—look, there’s a ghost! There’s a demon! There’s Helen of Troy come back from the dead!  Like the performances of stage magicians, the magic lantern show produced a simulacrum of wonders in an age that had convinced itself that miracles didn’t exist but still longed for them.
The entire metaphor of “projection” used by Jung and other depth psychologists came from these same performances, and it’s a very useful way of making sense of the process in question. An image inside the magic lantern appears to be out there in the world, when it’s just projected onto the nearest convenient surface; in the same way, an image within the loose tangle of conflicting wills we call “ourselves” appears to be out there in the world, when it’s just projected onto the nearest convenient person—or it appears to be the whole truth about the self, when it’s just projected onto the nearest convenient tangle of conflicting wills.
Is there a way out of the magic lantern show? Schopenhauer and Jung both argued that yes, there is—not, to be sure, a way to turn off the magic lantern, but a way to stop mistaking the projections for realities.  There’s a way to stop spending our time professing undying love on bended knee to one set of images projected on blank walls, and flinging ourselves into mortal combat against another set of images so projected; there’s a way, to step back out of the metaphor, to stop confusing the people around us with the images we like to project on them, and interact with them rather than with the images we’ve projected. 
The ways forward that Jung and Schopenhauer offered were different in some ways, though the philosopher’s vision influenced the psychologist’s to a great extent. We’ll get to their road maps as this conversation proceeds; first, though, we’re going to have to talk about some extremely awkward issues, including the festering swamp of metastatic abstractions and lightly camouflaged bullying that goes these days by the name of ethics.
I’ll offer one hint here, though. Just as we don’t actually learn how to love until we find a way to integrate infatuation with less giddy approaches to relating to another person, just as we don’t learn to fight competently until we can see the other guy’s strengths and weaknesses for what they are rather than what our projections would like them to be, we can’t come to terms with ourselves until we stop mistaking the ego-image for the whole loose tangled mess of self, and let something else make its presence felt. As for what that something else might be—why, we’ll get to that in due time.

A Muddle of Mind and Matter

Wed, 2017-02-22 10:02
The philosophy of Arthur Schopenhauer, which we’ve been discussing for the last two weeks, has a feature that reliably irritates most people when they encounter it for the first time: it doesn’t divide up the world the way people in modern western societies habitually do. To say, as Schopenhauer does, that the world we experience is a world of subjective representations, and that we encounter the reality behind those representations in will, is to map out the world in a way so unfamiliar that it grates on the nerves. Thus it came as no surprise that last week’s post fielded a flurry of responses trying to push the discussion back onto the more familiar ground of mind and matter.
That was inevitable. Every society has what I suppose could be called its folk metaphysics, a set of beliefs about the basic nature of existence that are taken for granted by most people in that society, and the habit of dividing the world of our experience into mind and matter is among the core elements of the folk metaphysics of the modern western world. Most of us think of it, on those occasions when we think of it at all, as simply the way the world is. It rarely occurs to most of us that there’s any other way to think of things—and when one shows up, a great many of us back away from it as fast as possible.
Yet dividing the world into mind and matter is really rather problematic, all things considered. The most obvious difficulty is the relation between the two sides of the division. This is usually called the mind-body problem, after the place where each of us encounters that difficulty most directly. Grant for the sake of argument that each of us really does consist of a mind contained in a material body: how, then, do these two connect? It’s far from easy to come up with an answer that works.
Several approaches have been tried in the attempt to solve the mind-body problem. There’s dualism, which is the claim that there are two entirely different and independent kinds of things in the world—minds and bodies—and which requires proponents to come up with various ways to justify the connection between them. First place for philosophical brashness in this connection goes to Rene Descartes, who argued that the link was directly and miraculously caused by the will of God. Plenty of less blatant methods of handwaving have been used to accomplish the same trick, but all of them require question-begging maneuvers of various kinds, and none has yet managed to present any kind of convincing evidence for itself.
Then there are the reductionistic monisms, which attempt to account for the relationship of mind and matter by reducing one of them to the other. The most popular reductionistic monism these days is reductionistic materialism, which claims that what we call “mind” is simply the electrochemical activity of those lumps of matter we call human brains. Though it’s a good deal less popular, there’s also reductionistic idealism, which claims that what we call “matter” is brought into being by the activity of minds, or of Mind.
Further out still, you get the eliminative monisms, which deal with the relationship between mind and matter by insisting that one of them doesn’t exist. There are eliminative materialists, for example, who insist that mental experiences don’t exist, and our conviction that we think, feel, experience pain and pleasure, etc. is an “introspective illusion.” (I’ve often thought that one good response to such a claim would be to ask, “Do you really think so?” The consistent eliminative materialist would have to answer “No.”) There are also eliminative idealists, who insist that matter doesn’t exist and that all is mind.
There’s probably been as much effort expended on attempts to solve the mind-body problem as on any other single philosophical issue in modern times, and yet it remains the focus of endless debates even today. That sort of intellectual merry-go-round is usually a pretty good sign that the basic assumptions at the root of the question have some kind of lethal flaw. That’s particularly true when this sort of ongoing donnybrook isn’t the only persistent difficulty surrounding the same set of ideas—and that’s very much the case here.
After all, there’s a far more personal sense in which the phrase “mind-body problem” can be taken. To speak in the terms usual for our culture, this thing we’re calling “mind” includes only a certain portion of what we think of as our inner lives. What, after all, counts as “mind”? In the folk metaphysics of our culture, and in most of the more formal systems of thought based on it, “mind” is consciousness plus the thinking and reasoning functions, perhaps with intuition (however defined) tied on like a squirrel’s  tail to the antenna of an old-fashioned jalopy. The emotions aren’t part of mind, and neither are such very active parts of our lives as sexual desire and the other passions; it sounds absurd, in fact, to talk about “the emotion-body problem” or the “passion-body problem.” Why does it sound absurd? Because, consciously or unconsciously, we assign the emotions and the passions to the category of “body,” along with the senses.
This is where we get the second form of the mind-body problem, which is that we’re taught implicitly and explicitly that the mind governs the body, and yet the functions we label “body” show a distinct lack of interest in obeying the functions we call “mind.” Sexual desire is of course the most obvious example. What people actually desire and what they think they ought to desire are quite often two very different things, and when the “mind” tries to bully the “body” into desiring what the “mind” thinks it ought to desire, the results are predictably bad. Add enough moral panic to the mix, in fact, and you end up with sexual hysteria of the classic Victorian type, in which the body ends up being experienced as a sinister Other responding solely to its own evil propensities, the seductive wiles of other persons, or the machinations of Satan himself despite all the efforts of the mind to rein it in.
Notice the implicit hierarchy woven into the folk metaphysics just sketched out, too. Mind is supposed to rule matter, not the other way around; mind is active, while matter is passive or, at most, subject to purely mechanical pressures that make it lurch around in predictable ways. When things don’t behave that way, you tend to see people melt down in one way or another—and the universe being what it is, things don’t actually behave that way very often, so the meltdowns come at regular intervals.
They also arrive in an impressive range of contexts, because the way of thinking about things that divides them into mind and matter is remarkably pervasive in western societies, and pops up in the most extraordinary places.  Think of the way that our mainstream religions portray God as the divine Mind ruling omnipotently over a universe of passive matter; that’s the ideal toward which our notions of mind and body strive, and predictably never reach. Think of the way that our entertainment media can always evoke a shudder of horror by imagining that something we assign to the category of lifeless matter—a corpse in the case of zombie flicks, a machine in such tales as Stephen King’s Christine, or what have you—suddenly starts acting as though it possesses a mind.
For that matter, listen to the more frantic end of the rhetoric on the American left following the recent presidential election and you’ll hear the same theme echoing off the hills. The left likes to think of itself as the smart people, the educated people, the sensitive and thoughtful and reasonable people—in effect, the people of Mind. The hate speech that many of them direct toward their political opponents leans just as heavily on the notion that these latter are stupid, uneducated, insensitive, irrational, and so on—that is to say, the people of Matter. Part of the hysteria that followed Trump’s election, in turn, might best be described as the political equivalent of the instinctive reaction to a zombie flick: the walking dead have suddenly lurched out of their graves and stalked toward the ballot box, the body politic has rebelled against its self-proclaimed mind!
Let’s go deeper, though. The habit of dividing the universe of human experience into mind and matter isn’t hardwired into the world, or for that matter into human consciousness; there have been, and are still, societies in which people simply don’t experience themselves and the world that way. The mind-body problem and the habits of thought that give rise to it have a history, and it’s by understanding that history that it becomes possible to see past the problem toward a solution.
That history takes its rise from an interesting disparity among the world’s great philosophical traditions. The three that arose independently—the Chinese, the Indian, and the Greek—focused on different aspects of humanity’s existence in the world. Chinese philosophy from earliest times directed its efforts to understanding the relationship between the individual and society; that’s why the Confucian mainstream of Chinese philosophy is resolutely political and social in its focus, exploring ways that the individual can find a viable place within society, and the alternative Taoist tradition in its oldest forms (before it absorbed mysticism from Indian sources) focused on ways that the individual can find a viable place outside society. Indian philosophy, by contrast, directed its efforts to understanding the nature of individual existence itself; that’s why the great Indian philosophical schools all got deeply into epistemology and ended up with a strong mystical bent.
The Greek philosophical tradition, in turn, went to work on a different set of problems. Greek philosophy, once it got past its initial fumblings, fixed its attention on the world of thought. That’s what led Greek thinkers to transform mathematics from an unsorted heap of practical techniques to the kind of ordered system of axioms and theorems best exemplified by Euclid’s Elements of Geometry, and it’s also what led Greek thinkers in the same generation as Euclid to create logic, one of the half dozen or so greatest creations of the human mind. Yet it also led to something considerably more problematic: the breathtaking leap of faith by which some of the greatest intellects of the ancient world convinced themselves that the structure of their thoughts was the true structure of the universe, and that thoughts about things were therefore more real than the things themselves.
The roots of that conviction go back all the way to the beginnings of Greek philosophy, but it really came into its own with Parmenides, an important philosopher of the generation immediately before Plato. Parmenides argued that there were two ways of understanding the world, the way of truth and the way of opinion; the way of opinion consisted of understanding the world as it appears to the senses, which according to Parmenides means it’s false, while the way of truth consisted of understanding the world the way that reason proved it had to be, even when this contradicted the testimony of the senses. To be sure, there are times and places where the testimony of the senses does indeed need to be corrected by logic, but it’s at least questionable whether this should be taken anything like as far as Parmenides took it—he argued, for example, that motion was logically impossible, and so nothing ever actually moves, even though it seems that way to our deceiving senses.
The idea that thoughts about things are more real than things settled into what would be its classic form in the writings of Plato, who took Parmenides’ distinction and set to work to explain the relationship between the worlds of truth and opinion. To Plato, the world of truth became a world of forms or ideas, on which everything in the world of sensory experience is modeled. The chair we see, in other words, is a projection or reflection downwards into the world of matter of the timeless, pure, and perfect form or idea of chair-ness. The senses show us the projections or reflections; the reasoning mind shows us the eternal form from which they descend.
That was the promise of classic Platonism—that the mind could know the truth about the universe directly, without the intervention of the senses, the same way it could know the truth of a mathematical demonstration. The difficulty with this enticing claim, though, was that when people tried to find the truth about the universe by examining their thinking processes, no two of them discovered exactly the same truth, and the wider the cultural and intellectual differences between them, the more different the truths turned out to be. It was for this reason among others that Aristotle, whose life’s work was basically that of cleaning up the mess that Plato and his predecessors left behind, made such a point of claiming that nothing enters the mind except through the medium of the senses. It’s also why the Academy, the school founded by Plato, in the generations immediately after his time took a hard skeptical turn, and focused relentlessly on the limits of human knowledge and reasoning.
Later on, Greek philosophy and its Roman foster-child headed off in other directions—on the one hand, into ethics, and the question of how to live the good life in a world where certainty isn’t available; on the other, into mysticism, and the question of whether the human mind can experience the truth of things directly through religious experience. A great deal of Plato’s thinking, however, got absorbed by the Christian religion after the latter clawed its way to respectability in the fourth century CE.
Augustine of Hippo, the theologian who basically set the tone of Christianity in the west for the next fifteen centuries, had been a Neoplatonist before he returned to his Christian roots, and he was far from the only Christian of that time to drink deeply from Plato's well. In his wake, Platonism became the standard philosophy of the western church until it was displaced by a modified version of Aristotle’s philosophy in the high Middle Ages. Thinkers divided the human organism into two portions, body and soul, and began the process by which such things as sexuality and the less angelic emotions got exiled from the soul into the body.
Even after Thomas Aquinas made Aristotle popular again, the basic Parmenidean-Platonic notion of truth had been so thoroughly bolted into Christian theology that it rode right over any remaining worries about the limitations of human reason. The soul trained in the use of reason could see straight to the core of things, and recognize by its own operations such basic religious doctrines as the existence of God:  that was the faith with which generations of scholars pursued the scholastic philosophy of medieval times, and those who disagreed with them rarely quarreled over their basic conception—rather, the point at issue was whether the Fall had left the human mind so vulnerable to the machinations of Satan that it couldn’t count on its own conclusions, and the extent to which divine grace would override Satan’s malicious tinkerings anywhere this side of heaven.
If you happen to be a devout Christian, such questions make sense, and they matter. It’s harder to see how they still made sense and mattered as the western world began moving into its post-Christian era in the eighteenth century, and yet the Parmenidean-Platonic faith in the omnipotence of reason gained ground as Christianity ebbed among the educated classes. People stopped talking about soul and body and started talking about mind and body instead.
Since mind, mens in Latin, was already in common use as a term for the faculty of the soul that handled its thinking and could be trained to follow the rules of reason, that shift was of vast importance. It marked the point at which the passions and the emotions were shoved out of the basic self-concept of the individual in western culture, and exiled to the body, that unruly and rebellious lump of matter in which the mind is somehow caged.
That’s one of the core things that Schopenhauer rejected. As he saw it, the mind isn’t the be-all and end-all of the self, stuck somehow into the prison house of the body. Rather, the mind is a frail and unstable set of functions that surface now and then on top of other functions that are much older, stronger, and more enduring. What expresses itself through all these functions, in turn, is will:  at the most basic primary level, as the will to exist; on a secondary level, as the will to live, with all the instincts and drives that unfold from that will; on a tertiary level, as the will to experience, with all the sensory and cognitive apparatus that unfolds from that will; and on a quaternary level, as the will to understand, with all the abstract concepts and relationships that unfold from that will.
Notice that from this point of view, the structure of thought isn't the structure of the cosmos, just a set of convenient models, and thoughts about things are emphatically not more real than the things themselves.  The things themselves are wills, expressing themselves through their several modes.  The things as we know them are representations, and our thoughts about the things are abstract patterns we create out of memories of representations, and thus at two removes from reality.
Notice also that from this point of view, the self is simply a representation—the ur-representation, the first representation each of us makes in infancy as it gradually sinks in that there’s a part of the kaleidoscope of our experience that we can move at will, and a lot more that we can’t, but still just a representation, not a reality. Of course that’s what we see when we first try to pay attention to ourselves, just as we see the coffee cup discussed in the first post in this series. It takes exacting logical analysis, scientific experimentation, or prolonged introspection to get past the representation of the self (or the coffee cup), realize that it’s a subjective construct rather than an objective reality, and grasp the way that it’s assembled out of disparate stimuli according to preexisting frameworks that are partly hardwired into our species and partly assembled over the course of our lives.
Notice, finally, that those functions we like to call “mind”—in the folk metaphysics of our culture, again, these are consciousness and the capacity to think, with a few other tag-ends of other functions dangling here and there—aren’t the essence of who we are, the ghost in the machine, the Mini-Me perched inside the skull that pushes and pulls levers to control the passive mass of the body and gets distracted by the jabs and lurches of the emotions and passions. The functions we call “mind,” rather, are a set of delicate, tentative, and fragile functions of will, less robust and stable than most of the others, and with no inherent right to rule the other functions. The Schopenhauerian self is an ecosystem rather than a hierarchy, and if what we call “mind” sits at the top of the food chain like a fox in a meadow, that simply means that the fox has to spend much of its time figuring out where mice like to go, and even more of its time sleeping in its den, while the mice scamper busily about and the grass goes quietly about turning sunlight, water and carbon dioxide into the nutrients that support the whole system.
Accepting this view of the self requires sweeping revisions of the ways we like to think about ourselves and the world, which is an important reason why so many people react with acute discomfort when it’s suggested. Nonetheless those revisions are of crucial importance, and as this discussion continues, we’ll see how they offer crucial insights into the problems we face in this age of the world—and into their potential solutions.

The World as Will

Wed, 2017-02-15 14:35
It's impressively easy to misunderstand the point made in last week’s post here on The Archdruid Report. To say that the world we experience is made up of representations of reality, constructed in our minds by taking the trickle of data we get from the senses and fitting those into patterns that are there already, doesn’t mean that nothing exists outside of our minds. Quite the contrary, in fact; there are two very good reasons to think that there really is something “out there,” a reality outside our minds that produces the trickle of data we’ve discussed.
The first of those reasons seems almost absurdly simple at first glance: the world doesn’t always make sense to us. Consider, as one example out of godzillions, the way that light seems to behave like a particle on some occasions and like a wave on others. That’s been described, inaccurately, as a paradox, but it’s actually a reflection of the limitations of the human mind.
What, after all, does it mean to call something a particle? Poke around the concept for a while and you’ll find that at root, this concept “particle” is an abstract metaphor, extracted from the common human experience of dealing with little round objects such as pebbles and marbles. What, in turn, is a wave? Another abstract metaphor, extracted from the common human experience of watching water in motion. When a physicist says that light sometimes acts like a particle and sometimes like a wave, what she’s saying is that neither of these two metaphors fits more than a part of the way that light behaves, and we don’t have any better metaphor available.
If the world was nothing but a hallucination projected by our minds, then it would contain nothing that wasn’t already present in our minds—for what other source could there be?  That implies in turn that there would be a perfect match between the contents of the world and the contents of our minds, and we wouldn’t get the kind of mismatch between mind and world that leaves physicists flailing. More generally, the fact that the world so often baffles us offers good evidence that behind the world we experience, the world as representation, there’s some “thing in itself” that’s the source of the sense data we assemble into representations.
The other reason to think that there’s a reality distinct from our representations is that, in a certain sense, we experience such a reality at every moment.
Raise one of your hands to a position where you can see it, and wiggle the fingers. You see the fingers wiggling—or, more precisely, you see a representation of the wiggling fingers, and that representation is constructed in your mind out of bits of visual data, a great deal of memory, and certain patterns that seem to be hardwired into your mind. You also feel the fingers wiggling—or, here again, you feel a representation of the wiggling fingers, which is constructed in your mind out of bits of tactile and kinesthetic data, plus the usual inputs from memory and hardwired patterns. Pay close attention and you might be able to sense the way your mind assembles the visual representation and the tactile one into a single pattern; that happens close enough to the surface of consciousness that a good many people can catch themselves doing it.
So you’ve got a representation of wiggling fingers, part of the world as representation we experience. Now ask yourself this: the action of the will that makes the fingers wiggle—is that a representation?
This is where things get interesting, because the only reasonable answer is no, it’s not. You don’t experience the action of the will as a representation; you don’t experience it at all. You simply wiggle your fingers. Sure, you experience the results of the will’s action in the form of representations—the visual and tactile experiences we’ve just been considering—but not the will itself. If you could see or hear or feel or smell or taste the impulse of the will rolling down your arm to the fingers, say, it would be reasonable to treat the will as just one more representation. Since that isn’t the case, it’s worth exploring the possibility that in the will, we encounter something that isn’t just a representation of reality—it’s a reality we encounter directly.
That’s the insight at the foundation of Arthur Schopenhauer’s philosophy. Schopenhauer’s one of the two principal guides who are going to show us around the giddy funhouse that philosophy has turned into of late, and guide us to the well-marked exits, so you’ll want to know a little about him. He lived in the ramshackle assortment of little countries that later became the nation of Germany; he was born in 1788 and died in 1860; he got his doctorate in philosophy in 1813; he wrote his most important work, The World as Will and Representation, before he turned thirty; and he spent all but the last ten years of his life in complete obscurity, ignored by the universities and almost everyone else. A small inheritance, carefully managed, kept him from having to work for a living, and so he spent his time reading, writing, playing the flute for an hour a day before dinner, and grumbling under his breath as philosophy went its merry way into metaphysical fantasy. He grumbled a lot, and not always under his breath. Fans of Sesame Street can think of him as philosophy’s answer to Oscar the Grouch.
Schopenhauer came of age intellectually in the wake of Immanuel Kant, whose work we discussed briefly last week, and so the question he faced was how philosophy could respond to the immense challenge Kant threw at the discipline’s feet. Before you go back to chattering about what’s true and what’s real, Kant said in effect, show me that these labels mean something and relate to something, and that you’re not just chasing phantoms manufactured by your own minds.
Most of the philosophers who followed in Kant’s footsteps responded to his challenge by ignoring it, or using various modes of handwaving to pretend that it didn’t matter. One common gambit at the time was to claim that the human mind has a special superpower of intellectual intuition that enables it to leap tall representations in a single bound, and get to a direct experience of reality that way. What that meant in practice, of course, is that a philosopher could claim to have intellectually intuited this, that, and the other thing, treat whatever abstractions he fancied as truths that didn’t have to be proved, and then build a great tottering system on top of them; after all, he’d intellectually intuited them—prove that he hadn’t!
There were other such gimmicks. What set Schopenhauer apart was that he took Kant’s challenge seriously enough to go looking for something that wasn’t simply a representation. What he found—why, that brings us back to the wiggling fingers.
As discussed in last week’s post, every one of the world’s great philosophical traditions has ended up having to face the same challenge Kant flung in the face of the philosophers of his time. Schopenhauer knew this, since a fair amount of philosophy from India had been translated into European languages by his time, and he read extensively on the subject. This was helpful because Indian philosophy hit its own epistemological crisis around the tenth century BCE, a good twenty-nine centuries before Western philosophy got there, and so had a pretty impressive head start. There’s a rich diversity of responses to that crisis in the classical Indian philosophical schools, but most of them came to see consciousness as a (or the) thing-in-itself, as reality rather than representation.
It’s a plausible claim. Look at your hand again, with or without wiggling fingers. Now be aware of yourself looking at the hand—many people find this difficult, so be willing to work at it, and remember to feel as well as see. There’s your hand; there’s the space between your hand and your eyes; there’s whatever of your face you can see, with or without eyeglasses attached; pay close attention and you can also feel your face and your eyes from within; and then there’s—
There’s the thing we call consciousness, the whatever-it-is that watches through your eyes. Like the act of will that wiggled your fingers, it’s not a representation; you don’t experience it. In fact, it’s very like the act of will that wiggled your fingers, and that’s where Schopenhauer went his own way.
What, after all, does it mean to be conscious of something? Some simple examples will help clarify this. Move your hand until it bumps into something; it’s when something stops the movement that you feel it. Look at anything; you can see it if and only if you can’t see through it. You are conscious of something when, and only when, it resists your will.
That suggested to Schopenhauer that consciousness derives from will, not the other way around. There are other lines of reasoning that point in the same direction, and all of them derive from common human experiences. For example, each of us stops being conscious for some hours out of every day, whenever we go to sleep. During part of the time we’re sleeping, we experience nothing at all; during another part, we experience the weirdly disconnected representations we call “dreams.”  Even in dreamless sleep, though, it’s common for a sleeper to shift a limb away from an unpleasant stimulus. Thus the will is active even when consciousness is absent.
Schopenhauer proposed that there are different forms or, as he put it, grades of the will. Consciousness, which we can define for present purposes as the ability to experience representations, is one grade of the will—one way that the will can adapt to existence in a world that often resists it. Life is another, more basic grade. Consider the way that plants orient themselves toward sunlight, bending and twisting like snakes in slow motion, and seek out concentrations of nutrients with probing, hungry roots. As far as anyone knows, plants aren’t conscious—that is, they don’t experience a world of representations the way that animals do—but they display the kind of goal-seeking behavior that shows the action of will.
Animals also show goal-seeking behavior, and they do it in a much more complex and flexible way than plants do. There’s good reason to think that many animals are conscious, and experience a world of representations in something of the same way we do; certainly students of animal behavior have found that animals let incidents from the past shape their actions in the present, mistake one person for another, and otherwise behave in ways that suggest that their actions are guided, as ours are, by representations rather than direct reaction to stimuli. In animals, the will has developed the ability to represent its environment to itself.
Animals, at least the more complex ones, also have that distinctive mode of consciousness we call emotion. They can be happy, sad, lonely, furious, and so on; they feel affection for some beings and aversion toward others. Pay attention to your own emotions and you’ll soon notice how closely they relate to the will. Some emotions—love and hate are among them—are motives for action, and thus expressions of will; others—happiness and sadness are among them—are responses to the success or failure of the will to achieve its goals. While emotions are tangled up with representations in our minds, and presumably in those of animals as well, they stand apart; they’re best understood as conditions of the will, expressions of its state as it copes with the world through its own representations.
And humans? We’ve got another grade of the will, which we can call intellect:  the ability to add up representations into abstract concepts, which we do, ahem, at will. Here’s one representation, which is brown and furry and barks; here’s another like it; here’s a whole kennel of them—and we lump them all together in a single abstract category, to which we assign a sound such as “dog.” We can then add these categories together, creating broader categories such as “quadruped” and “pet;” we can subdivide the categories to create narrower ones such as “puppy” and “Corgi;” we can extract qualities from the whole and treat them as separate concepts, such as “furry” and “loud;” we can take certain very general qualities and conjure up the entire realm of abstract number, by noticing how many paws most dogs have and using that, and a great many other things, to come up with the concept of “four.”
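For readers who find this sort of lumping and splitting easier to follow in code, here is a minimal sketch in Python of the same move: representations carrying qualities, and concepts formed by grouping whatever shares a given quality. The particular names and qualities are invented purely for illustration; nothing in the sketch comes from Schopenhauer himself.

```python
# A toy model of the intellect's lumping and splitting. Every name and
# quality below is invented purely for illustration.

representations = [
    {"name": "Rex",      "qualities": {"furry", "loud", "barks", "four-legged"}},
    {"name": "Bella",    "qualities": {"furry", "barks", "four-legged"}},
    {"name": "Whiskers", "qualities": {"furry", "meows", "four-legged"}},
]

def concept(required_qualities, world):
    """Lump together every representation that shares the given qualities."""
    return [r["name"] for r in world if required_qualities <= r["qualities"]]

dogs         = concept({"barks", "four-legged"}, representations)  # "dog"
quadrupeds   = concept({"four-legged"}, representations)           # a broader category
furry_things = concept({"furry"}, representations)                 # a quality treated as a concept

# Abstract number falls out of the same move: ignore everything about the
# members of a category except how many of them there are.
print(dogs, quadrupeds, furry_things, len(quadrupeds))
```

The point of the sketch is simply that the categories live in the act of grouping, not in the dogs.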
So life, consciousness, and intellect are three grades of the will. One interesting thing about them is that the more basic ones are more enduring and stable than the more complex ones. Humans, again, are good examples. Humans remain alive all the way from birth to death; they’re conscious only when awake; they’re intelligent only when actively engaged in thinking—which is a lot less often than we generally like to admit. A certain degree of tiredness, a strong emotion, or a good stiff drink is usually enough to shut off the intellect and leave us dealing with the world on the same mental basis as an ordinarily bright dog; it takes quite a bit more to reduce us to the vegetative level, and serious physical trauma to go one more level down.
Let’s take a look at that final level, though. The conventional wisdom of our age holds that everything that exists is made up of something called “matter,” which is configured in various ways; further, that matter is what really exists, and everything else is somehow a function of matter if it exists at all. For most of us, this is the default setting, the philosophical opinion we start from and come back to, and anyone who tries to question it can count on massive pushback.
The difficulty here is that philosophers and scientists have both proved, in their own ways, that the usual conception of matter is quite simply nonsense. Any physical scientist worth his or her sodium chloride, to begin with, will tell you that what we habitually call “solid matter” is nearly as empty as the vacuum of deep space—a bit of four-dimensional curved spacetime that happens to have certain tiny probability waves spinning dizzily in it, and it’s the interaction between those probability waves and those composing that other patch of curved spacetime we each call “my body” that creates the illusions of solidity, color, and the other properties we attribute to matter.
The philosophers got to the same destination a couple of centuries earlier, and by a different route. The epistemologists I mentioned in last week’s post—Locke, Berkeley, and Hume—took the common conception of matter apart layer by layer and showed, to use the formulation we’ve already discussed, that all the things we attribute to matter are simply representations in the mind. Is there something out there that causes those representations? As already mentioned, yes, there’s very good reason to think so—but that doesn’t mean that the “something out there” has to consist of matter in any sense of the word that means anything.
That’s where Schopenhauer got to work, and once again, he proceeded by calling attention to certain very basic and common human experiences. Each of us has direct access, in a certain sense, to one portion of the “something out there,” the portion each of us calls “my body.” When we experience our bodies, we experience them as representations, just like anything else—but we also act with them, and as the experiment with the wiggling fingers demonstrated, the will that acts isn’t a representation.
Thus there’s a boundary between the part of the universe we encounter as will and representation, and the part we encounter only as representation. The exact location of that boundary is more complex than it seems at first sight. It’s a commonplace in the martial arts, for example, that a capable martial artist can learn to feel with a weapon as though it were a part of the body. Many kinds of swordsmanship rely on what fencers call sentiment du fer, the “sense of the steel;” the competent fencer can feel the lightest touch of the other blade against his own, just as though it brushed his hand.
There are also certain circumstances—lovemaking, dancing, ecstatic religious experience, and mob violence are among them—in which under certain hard-to-replicate conditions, two or more people seem to become, at least briefly, a single entity that moves and acts with a will of its own. All of those involve a shift from the intellect to a more basic grade of the will, and they lead in directions that will deserve a good deal more examination later on; for now, the point at issue is that the boundary line between self and other can be a little more fluid than we normally tend to assume.
For our present purposes, though, we can set that aside and focus on the body as the part of the world each of us encounters in a twofold way: as a representation among representations, and as a means of expression for the will.  Everything we perceive about our bodies is a representation, but by noticing these representations, we observe the action of something that isn’t a representation, something we call the will, manifesting in its various grades. That’s all there is. Go looking as long as you want, says Schopenhauer, and you won’t find anything but will and representations. What if that’s all there is—if the thing we call “matter” is simply the most basic grade of the will, and everything in the world thus amounts to will on the one hand, and representations experienced by that mode of will we call consciousness on the other, and the things that representations are representing are various expressions of this one energy that, by way of its distinctive manifestations in our own experience, we call the will?
That’s Schopenhauer’s vision. The remarkable thing is how close it is to the vision that comes out of modern science. A century before quantum mechanics, he’d already grasped that behind the facade of sensory representations that you and I call matter lies an incomprehensible and insubstantial reality, a realm of complex forces dancing in the void. Follow his arguments out to their logical conclusion and you get a close enough equivalent of the universe of modern physics that it’s not at all implausible that they’re one and the same. Of course plausibility isn’t proof—but given the fragile, dependent, and derivative nature of the human intellect, it may be as close as we can get.
And of course that latter point is a core reason why Arthur Schopenhauer spent most of his life in complete obscurity and why, after a brief period of mostly posthumous superstardom in the late nineteenth century, his work dropped out of sight and has rarely been noticed since. (To be precise, it’s one of two core reasons; we’ll get to the other one later.) If he’s right, then the universe is not rational. Reason—the disciplined use of the grade of will I’ve called the intellect—isn’t a key to the truth of things.  It’s simply the systematic exploitation of a set of habits of mind that turned out to be convenient for our ancestors as they struggled with the hard but intellectually undemanding tasks of staying fed, attracting mates, chasing off predators, and the like, and later on got pulled out of context and put to work coming up with complicated stories about what causes the representations we experience.
To suggest that, much less to back it up with a great deal of argument and evidence, is to collide head on with one of the most pervasive presuppositions of our culture. We’ll survey the wreckage left behind by that collision in next week’s post.

The World as Representation

Wed, 2017-02-08 14:23
It can be hard to remember these days that not much more than half a century ago, philosophy was something you read about in general-interest magazines and the better grade of newspapers. Existentialist philosopher Jean-Paul Sartre was an international celebrity; the posthumous publication of Pierre Teilhard de Chardin’s Le Phénomène Humain (the English translation, predictably, was titled The Phenomenon of Man) got significant flurries of media coverage; Random House’s Vintage Books label brought out cheap mass-market paperback editions of major philosophical writings from Plato straight through to Nietzsche and beyond, and made money off them.
Though philosophy was never really part of the cultural mainstream, it had the same kind of following as avant-garde jazz, say, or science fiction.  At any reasonably large cocktail party you had a pretty fair chance of meeting someone who was into it, and if you knew where to look in any big city—or any college town with pretensions to intellectual culture, for that matter—you could find at least one bar or bookstore or all-night coffee joint where the philosophy geeks hung out, and talked earnestly into the small hours about Kant or Kierkegaard. What’s more, that level of interest in the subject had been pretty standard in the Western world for a very long time.
We’ve come a long way since then, and not in a particularly useful direction. These days, if you hear somebody talk about philosophy in the media, it’s probably a scientific materialist like Neil deGrasse Tyson ranting about how all philosophy is nonsense. The occasional work of philosophical exegesis still gets a page or two in the New York Review of Books now and then, but popular interest in the subject has vanished, and more than vanished: the sort of truculent ignorance about philosophy displayed by Tyson and his many equivalents has become just as common among the chattering classes as a feigned interest in the subject was half a century before.
Like most human events, the decline of philosophy in modern times was overdetermined; like the victim in the murder-mystery paperback who was shot, strangled, stabbed, poisoned, whacked over the head with a lead pipe, and then shoved off a bridge to drown, there were more causes of death than the situation actually required. Part of the problem, certainly, was the explosive expansion of the academic industry in the US and elsewhere in the second half of the twentieth century.  In an era when every state teacher’s college aspired to become a university and every state university dreamed of rivaling the Ivy League, a philosophy department was an essential status symbol. The resulting expansion of the field was not necessarily matched by an equivalent increase in genuine philosophers, but it was certainly followed by the transformation of university-employed philosophy professors into a professional caste which, as such castes generally do, defended its status by adopting an impenetrable jargon and ignoring or rebuffing attempts at participation from outside its increasingly airtight circle.
Another factor was the rise of the sort of belligerent scientific materialism exemplified, as noted earlier, by Neil deGrasse Tyson. Scientific inquiry itself is philosophically neutral—it’s possible to practice science from just about any philosophical standpoint you care to name—but the claim at the heart of scientific materialism, the dogmatic insistence that those things that can be investigated using scientific methods and explained by current scientific theory are the only things that can possibly exist, depends on arbitrary metaphysical postulates that were comprehensively disproved by philosophers more than two centuries ago. (We’ll get to those postulates and their problems later on.) Thus the ascendancy of scientific materialism in educated culture pretty much mandated the dismissal of philosophy.
There were plenty of other factors as well, most of them having no more to do with philosophy as such than the ones just cited. Philosophy itself, though, bears some of the responsibility for its own decline. Starting in the seventeenth century and reaching a crisis point in the nineteenth, western philosophy came to a parting of the ways—one that the philosophical traditions of other cultures reached long before it, with similar consequences—and by and large, philosophers and their audiences alike chose a route that led to its present eclipse. That choice isn’t irreparable, and there’s much to be gained by reversing it, but it’s going to take a fair amount of hard intellectual effort and a willingness to abandon some highly popular shibboleths to work back to the mistake that was made, and undo it.
To help make sense of what follows, a concrete metaphor might be useful. If you’re in a place where there are windows nearby, especially if the windows aren’t particularly clean, go look out through a window at the view beyond it. Then, after you’ve done this for a minute or so, change your focus and look at the window rather than through it, so that you see the slight color of the glass and whatever dust or dirt is clinging to it. Repeat the process a few times, until you’re clear on the shift I mean: looking through the window, you see the world; looking at the window, you see the medium through which you see the world—and you might just discover that some of what you thought at first glance was out there in the world was actually on the window glass the whole time.
That, in effect, was the great change that shook western philosophy to its foundations beginning in the seventeenth century. Up to that point, most philosophers in the western world started from a set of unexamined presuppositions about what was true, and used the tools of reasoning and evidence to proceed from those presuppositions to a more or less complete account of the world. They were into what philosophers call metaphysics: reasoned inquiry into the basic principles of existence. That’s the focus of every philosophical tradition in its early years, before the confusing results of metaphysical inquiry refocus attention from “What exists?” to “How do we know what exists?” Metaphysics then gives way to epistemology: reasoned inquiry into what human beings are capable of knowing.
That refocusing happened in Greek philosophy around the fourth century BCE, in Indian philosophy around the tenth century BCE, and in Chinese philosophy a little earlier than in Greece. In each case, philosophers who had been busy constructing elegant explanations of the world on the basis of some set of unexamined cultural assumptions found themselves face to face with hard questions about the validity of those assumptions. In terms of the metaphor suggested above, they were making all kinds of statements about what they saw through the window, and then suddenly realized that the colors they’d attributed to the world were being contributed in part by the window glass and the dust on it, the vast dark shape that seemed to be moving purposefully across the sky was actually a beetle walking on the outside of the window, and so on.
The same refocusing began in the modern world with René Descartes, who famously attempted to start his philosophical explorations by doubting everything. That’s a good deal easier said than done, as it happens, and to a modern eye, Descartes’ writings are riddled with unexamined assumptions, but the first attempt had been made and others followed. A trio of epistemologists from the British Isles—John Locke, George Berkeley, and David Hume—rushed in where Descartes feared to tread, demonstrating that the view from the window had much more to do with the window glass than it did with the world outside. The final step in the process was taken by the German philosopher Immanuel Kant, who subjected human sensory and rational knowledge to relentless scrutiny and showed that most of what we think of as “out there,” including such apparently hard realities as space and time, are actually artifacts of the processes by which we perceive things.
Look at an object nearby: a coffee cup, let’s say. You experience the cup as something solid and real, outside yourself: seeing it, you know you can reach for it and pick it up; and to the extent that you notice the processes by which you perceive it, you experience these as wholly passive, a transparent window on an objective external reality. That’s normal, and there are good practical reasons why we usually experience the world that way, but it’s not actually what’s going on.
What’s going on is that a thin stream of visual information is flowing into your mind in the form of brief fragmentary glimpses of color and shape. Your mind then assembles these together into the mental image of the coffee cup, using your memories of that and other coffee cups, and a range of other things as well, as a template onto which the glimpses can be arranged. Arthur Schopenhauer, about whom we’ll be talking a great deal as we proceed, gave the process we’re discussing the useful label of “representation;” when you look at the coffee cup, you’re not passively seeing the cup as it exists, you’re actively representing—literally re-presenting—an image of the cup in your mind.
There are certain special situations in which you can watch representation at work. If you’ve ever woken up in an unfamiliar room at night, and had a few seconds pass before the dark unknown shapes around you finally turned into ordinary furniture, you’ve had one of those experiences. Another is provided by the kind of optical illusion that can be seen as two different things. With a little practice, you can flip from one way of seeing the illusion to another, and watch the process of representation as it happens.
What makes the realization just described so challenging is that it’s fairly easy to prove that the cup as we represent it has very little in common with the cup as it exists “out there.” You can prove this by means of science: the cup “out there,” according to the evidence collected painstakingly by physicists, consists of an intricate matrix of quantum probability fields and ripples in space-time, which our senses systematically misperceive as a solid object with a certain color, surface texture, and so on. You can also prove this, as it happens, by sheer sustained introspection—that’s how Indian philosophers got there in the age of the Upanishads—and you can prove it just as well by a sufficiently rigorous logical analysis of the basis of human knowledge, which is what Kant did.
The difficulty here, of course, is that once you’ve figured this out, you’ve basically scuttled any chance at pursuing the kind of metaphysics that’s traditional in the formative period of your philosophical tradition. Kant got this, which is why he titled the most relentless of his analyses Prolegomena to Any Future Metaphysics; what he meant by this was that anybody who wanted to try to talk about what actually exists had better be prepared to answer some extremely difficult questions first.  When philosophical traditions hit their epistemological crises, accordingly, some philosophers accept the hard limits on human knowledge, ditch the metaphysics, and look for something more useful to do—a quest that typically leads to ethics, mysticism, or both. Other philosophers double down on the metaphysics and either try to find some way around the epistemological barrier, or simply ignore it, and this latter option is the one that most Western philosophers after Kant ended up choosing.  Where that leads—well, we’ll get to that later on.
For the moment, I want to focus a little more closely on the epistemological crisis itself, because there are certain very common ways to misunderstand it. One of them I remember with a certain amount of discomfort, because I made it myself in my first published book, Paths of Wisdom. This is the sort of argument that sees the sensory organs and the nervous system as the reason for the gap between the reality out there—the “thing in itself” (Ding an sich), as Kant called it—and the representation as we experience it. It’s superficially very convincing: the eye receives light in certain patterns and turns those into a cascade of electrochemical bursts running up the optic nerve, and the visual centers in the brain then fold, spindle, and mutilate the results into the image we see.
The difficulty? When we look at light, an eye, an optic nerve, a brain, we’re not seeing things in themselves, we’re seeing another set of representations, constructed just as arbitrarily in our minds as any other representation. Nietzsche had fun with this one: “What? and others even go so far as to say that the external world is the work of our organs? But then our body, as a piece of this external world, would be the work of our organs! But then our organs themselves would be—the work of our organs!” That is to say, the body is also a representation—or, more precisely, the body as we perceive it is a representation. It has another aspect, but we’ll get to that in a future post.
Another common misunderstanding of the epistemological crisis is to think that it’s saying that your conscious mind assembles the world, and can do so in whatever way it wishes. Not so. Look at the coffee cup again. Can you, by any act of consciousness, make that coffee cup suddenly sprout wings and fly chirping around your computer desk? Of course not. (Those who disagree should be prepared to show their work.) The crucial point here is that representation is neither a conscious activity nor an arbitrary one. Much of it seems to be hardwired, and most of the rest is learned very early in life—each of us spent our first few years learning how to do it, and scientists such as Jean Piaget have chronicled in detail the processes by which children gradually learn how to assemble the world into the specific meaningful shape their culture expects them to get. 
By the time you’re an adult, you do that instantly, with no more conscious effort than you’re using right now to extract meaning from the little squiggles on your computer screen we call “letters.” Much of the learning process, in turn, involves finding meaningful correlations between the bits of sensory data and weaving those into your representations—thus you’ve learned that when you get the bits of visual data that normally assemble into a coffee cup, you can reach for it and get the bits of tactile data that normally assemble into the feeling of picking up the cup, followed by certain sensations of movement, followed by certain sensations of taste, temperature, etc. corresponding to drinking the coffee.
That’s why Kant included the “thing in itself” in his account: there really does seem to be something out there that gives rise to the data we assemble into our representations. It’s just that the window we’re looking through might as well be a funhouse mirror:  it imposes so much of itself on the data that trickles through it that it’s almost impossible to draw firm conclusions about what’s “out there” from our representations.  The most we can do, most of the time, is to see what representations do the best job of allowing us to predict what the next series of fragmentary sensory images will include. That’s what science does, when its practitioners are honest with themselves about its limitations—and it’s possible to do perfectly good science on that basis, by the way.
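For the programmatically inclined, here is a minimal sketch in Python of that last point: judging candidate representations purely by how well they predict the next bit of incoming data. The two toy models and the stream of observations are invented for illustration and stand in for nothing in particular.

```python
# Two candidate "representations" of a stream of observations, judged only
# by how well each predicts the next value. The models and the data are
# invented for illustration.

observations = [1, 2, 4, 8, 16, 32]

def model_steps(prev):
    """Guess that each observation exceeds the last one by a fixed step."""
    return prev + 2

def model_doubles(prev):
    """Guess that each observation doubles the last one."""
    return prev * 2

def prediction_error(model, data):
    """Total absolute error when the model predicts each next value from the last."""
    return sum(abs(model(a) - b) for a, b in zip(data, data[1:]))

for model in (model_steps, model_doubles):
    print(model.__name__, prediction_error(model, observations))

# The doubling model scores zero error on this stream, so it gets kept, not
# because it is "the truth," but because it has predicted better so far.
```

Nothing in the sketch tells us what the stream “really is”; it only tells us which story about it has earned the right to be used for now.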
It’s possible to do quite a lot intellectually on that basis, in fact. From the golden age of ancient Greece straight through to the end of the Renaissance, a field of scholarship that’s almost completely forgotten today—topics—was an important part of a general education, the kind of thing you studied as a matter of course once you got past grammar school. Topics is the study of those things that can’t be proved logically, but are broadly accepted as more or less true, and so can be used as “places” (in Greek, topoi) on which you can ground a line of argument. The most important of these are the commonplaces (literally, the common places or topoi) that we all use all the time as a basis for our thinking and speaking; in modern terms, we can think of them as “things on which a general consensus exists.” They aren’t truths; they’re useful approximations of truths, things that have been found to work most of the time, things to be set aside only if you have good reason to do so.
Science could have been seen as a way to expand the range of useful topoi. That’s what a scientific experiment does, after all: it answers the question, “If I do this, what happens?” As the results of experiments add up, you end up with a consensus—usually an approximate consensus, because it’s all but unheard of for repetitions of any experiment to get exactly the same result every time, but a consensus nonetheless—that’s accepted by the scientific community as a useful approximation of the truth, and can be set aside only if you have good reason to do so. To a significant extent, that’s the way science is actually practiced—well, when it hasn’t been hopelessly corrupted for economic or political gain—but that’s not the social role that science has come to fill in modern industrial society.
I’ve written here several times already about the trap into which institutional science has backed itself in recent decades, with the enthusiastic assistance of the belligerent scientific materialists mentioned earlier in this post. Public figures in the scientific community routinely like to insist that the current consensus among scientists on any topic must be accepted by the lay public without question, even when scientific opinion has swung around like a weathercock in living memory, and even when unpleasantly detailed evidence of the deliberate falsification of scientific data is tolerably easy to find, especially but not only in the medical and pharmaceutical fields. That insistence isn’t wearing well; nor does it help when scientific materialists insist—as they very often do—that something can’t exist or something else can’t happen, simply because current theory doesn’t happen to provide a mechanism for it.
Too obsessive a fixation on that claim to authority, and the political and financial baggage that comes with it, could very possibly result in the widespread rejection of science across the industrial world in the decades ahead. That’s not yet set in stone, and it’s still possible that scientists who aren’t too deeply enmeshed in the existing order of things could provide a balancing voice, and help see to it that a less doctrinaire understanding of science gets a voice and a public presence.
Doing that, though, would require an attitude we might as well call epistemic modesty: the recognition that the human capacity to know has hard limits, and the unqualified absolute truth about most things is out of our reach. Socrates was called the wisest of the Greeks because he accepted the need for epistemic modesty, and recognized that he didn’t actually know much of anything for certain. That recognition didn’t keep him from being able to get up in the morning and go to work at his day job as a stonecutter, and it needn’t keep the rest of us from doing what we have to do as industrial civilization lurches down the trajectory toward a difficult future.
Taken seriously, though, epistemic modesty requires some serious second thoughts about certain very deeply ingrained presuppositions of the cultures of the West. Some of those second thoughts are fairly easy to reach, but one of the most challenging starts with a seemingly simple question: is there anything we experience that isn’t a representation? In the weeks ahead we’ll track that question all the way to its deeply troubling destination.

Perched on the Wheel of Time

Wed, 2017-02-01 19:08
There's a curious predictability in the comments I field in response to posts here that talk about the likely shape of the future. The conventional wisdom of our era insists that modern industrial society can’t possibly undergo the same life cycle of rise and fall as every other civilization in history; no, no, there’s got to be some unique future awaiting us—uniquely splendid or uniquely horrible, it doesn’t even seem to matter that much, so long as it’s unique. Since I reject that conventional wisdom, my dissent routinely fields pushback from those of my readers who embrace it.
That’s not surprising in the least, of course. What’s surprising is that the pushback doesn’t surface when the conventional wisdom seems to be producing accurate predictions, as it does now and then. Rather, it shows up like clockwork whenever the conventional wisdom fails.
The present situation is as good an example as any. The basis of my dissident views is the theory of cyclical history—the theory, first proposed in the early eighteenth century by the Italian historian Giambattista Vico and later refined and developed by such scholars as Oswald Spengler and Arnold Toynbee, that civilizations rise and fall in a predictable life cycle, regardless of scale or technological level. That theory’s not just a vague generalization, either; each of the major writers on the subject set out specific stages that appear in order, showed that these have occurred in all past civilizations, and made detailed, falsifiable predictions about how those stages can be expected to occur in our civilization. Have those panned out? So far, a good deal more often than not.
In the final chapters of his second volume, for example, Spengler noted that civilizations in the stage ours was about to reach always end up racked by conflicts that pit established hierarchies against upstart demagogues who rally the disaffected and transform them into a power base. Looking at the trends visible in his own time, he sketched out the most likely form those conflicts would take in the Winter phase of our civilization. Modern representative democracy, he pointed out, has no effective defenses against corruption by wealth, and so could be expected to evolve into corporate-bureaucratic plutocracies that benefit the affluent at the expense of everyone else. Those left out in the cold by these transformations, in turn, end up backing what Spengler called Caesarism—the rise of charismatic demagogues who challenge and eventually overturn the corporate-bureaucratic order.
These demagogues needn’t come from within the excluded classes, by the way. Julius Caesar, the obvious example, came from an old upper-class Roman family and parlayed his family connections into a successful political career. Watchers of the current political scene may be interested to know that Caesar during his lifetime wasn’t the imposing figure he became in retrospect; he had a high shrill voice, his morals were remarkably flexible even by Roman standards—the scurrilous gossip of his time called him “every man’s wife and every woman’s husband”—and he spent much of his career piling up huge debts and then wriggling out from under them. Yet he became the political standard-bearer for the plebeian classes, and his assassination by a conspiracy of rich Senators launched the era of civil wars that ended the rule of the old elite once and for all.
Thus those people watching the political scene last year who knew their way around Spengler, and noticed that a rich guy had suddenly broken with the corporate-bureaucratic consensus and called for changes that would benefit the excluded classes at the expense of the affluent, wouldn’t have had to wonder what was happening, or what the likely outcome would be. It was those who insisted on linear models of history—for example, the claim that the recent ascendancy of modern liberalism counted as the onward march of progress, and therefore was by definition irreversible—who found themselves flailing wildly as history took a turn they considered unthinkable.
The rise of Caesarism, by the way, has other features I haven’t mentioned. As Spengler sketches out the process, it also represents the exhaustion of ideology and its replacement by personality. Those of my readers who watched the political scene over the last few years may have noticed the way that the issues have been sidelined by sweeping claims about the supposed personal qualities of candidates. The practically content-free campaign that swept Barack Obama into the presidency in 2008—“Hope,” “Change,” and “Yes We Can” aren’t statements about issues, you know—was typical of this stage, as was the emergence of competing personality cults around the candidates in the 2016 election.  In the ordinary way of things, we can expect even more of this in elections to come, with messianic hopes clustering around competing politicians until the point of absurdity is well past. These will then implode, and the political process collapse into a raw scramble for power at any cost.
There’s plenty more in Spengler’s characterization of the politics of the Winter phase, and all of it’s well represented in today’s headlines, but the rest can be left to those of my readers interested enough to turn the pages of The Decline of the West for themselves. What I’d like to discuss here is the nature of the pushback I tend to field when I point out that yet again, predictions offered by Spengler and other students of cyclic history turned out to be correct and those who dismissed them turned out to be smoking their shorts. The responses I field are as predictable as—well, the arrival of charismatic demagogues at a certain point in the Winter phase, for example—and they reveal some useful glimpses into the value, or lack of it, of our society’s thinking about the future in this turn of the wheel.
Probably the most common response I get can best be characterized as simple incantation: that is to say, the repetition of some brief summary of the conventional wisdom, usually without a shred of evidence or argument backing it up, as though the mere utterance is enough to disprove all other ideas.   It’s a rare week when I don’t get at least one comment along these lines, and they divide up roughly evenly between those that insist that progress will inevitably triumph over all its obstacles, on the one hand, and those that insist that modern industrial civilization will inevitably crash to ruin in a sudden cataclysmic downfall on the other. I tend to think of this as a sort of futurological fundamentalism along the lines of “pop culture said it, I believe it, that settles it,” and it’s no more useful, or for that matter interesting, than fundamentalism of any other sort.
A little less common and a little more interesting are a second class of arguments, which insist that I can’t dismiss the possibility that something might pop up out of the blue to make things different this time around. As I pointed out very early on in the history of this blog, these are examples of the classic logical fallacy of argumentum ad ignorantiam, the argument from ignorance. They bring in some factor whose existence and relevance is unknown, and use that claim to insist that since the conventional wisdom can’t be disproved, it must be true.
Arguments from ignorance are astonishingly common these days. My readers may have noticed, for example, that every few years some new version of nuclear power gets trotted out as the answer to our species’ energy needs. From thorium fission plants to Bussard fusion reactors to helium-3 from the Moon, they all have one thing in common: nobody’s actually built a working example, and so it’s possible for their proponents to insist that their pet technology will lack the galaxy of technical and economic problems that have made every existing form of nuclear power uneconomical without gargantuan government subsidies. That’s an argument from ignorance: since we haven’t built one yet, it’s impossible to be absolutely certain that they’ll have the usual cascading cost overruns and the rest of it, and therefore their proponents can insist that those won’t happen this time. Prove them wrong!
More generally, it’s impressive how many people can look at the landscape of dysfunctional technology and failed promises that surrounds us today and still insist that the future won’t be like that. Most of us have learned already that upgrades on average have fewer benefits and more bugs than the programs they replace, and that products labeled “new and improved” may be new but they’re rarely improved; it’s starting to sink in that most new technologies are simply more complicated and less satisfactory ways of doing things that older technologies did at least as well at a lower cost.  Try suggesting this as a general principle, though, and I promise you that plenty of people will twist themselves mentally into pretzel shapes trying to avoid the implication that progress has passed its pull date.
Even so, there’s a very simple answer to all such arguments, though in the nature of such things it’s an answer that only speaks to those who aren’t too obsessively wedded to the conventional wisdom. None of the arguments from ignorance I’ve mentioned are new; all of them have been tested repeatedly by events, and they’ve failed. I’ve lost track of the number of times I’ve been told, for example, that the economic crisis du jour could lead to the sudden collapse of the global economy, or that the fashionable energy technology du jour could lead to a new era of abundant energy. No doubt they could, at least in theory, but the fact remains that they don’t. 
It so happens that there are good reasons why they don’t, varying from case to case, but that’s actually beside the point I want to make here. This particular version of the argument from ignorance is also an example of the fallacy the old logicians called petitio principii, better known as “begging the question.” Imagine, by way of counterexample, that someone were to post a comment saying, “Nobody knows what the future will be like, so the future you’ve predicted is as likely as any other.” That would be open to debate, since there’s some reason to think we can in fact predict some things about the future, but at least it would follow logically from the premise.  Still, I don’t think I’ve ever seen anyone make that claim. Nor have I ever seen anybody claim that since nobody knows what the future will be like, say, we can’t assume that progress is going to continue.
In practice, rather, the argument from ignorance is applied to discussions of the future in a distinctly one-sided manner. Predictions based on any point of view other than the conventional wisdom of modern popular culture are dismissed with claims that it might possibly be different this time, while predictions based on the conventional wisdom of modern popular culture are spared that treatment. That’s begging the question: covertly assuming that one side of an argument must be true unless it’s disproved, and that the other side can’t be true unless it’s proved.
Now a case can be made that we can in fact know quite a bit about the shape of the future, at least in its broad outlines. The heart of that case, as already noted, is the fact that certain theories about the future do in fact make accurate predictions, while others don’t. This in itself shows that history isn’t random—that there’s some structure to the flow of historical events that can be figured out by learning from the past, and that similar causes at work in similar situations will have similar outcomes. Apply that reasoning to any other set of phenomena, and you’ve got the ordinary, uncontroversial basis for the sciences. It’s only when it’s applied to the future that people balk, because it doesn’t promise them the kind of future they want.
The argument by incantation and the argument from ignorance make up most of the pushback I get. I’m pleased to say, though, that every so often I get an argument that’s considerably more original than these. One of those came in last week—tip of the archdruidical hat to DoubtingThomas—and it’s interesting enough that it deserves a detailed discussion.
DoubtingThomas began with the standard argument from ignorance, claiming that it’s always possible that something might possibly happen to disrupt the cyclic patterns of history in any given case, and therefore the cyclic theory should be dismissed no matter how many accurate predictions it scored. As we’ve already seen, this is handwaving, but let’s move on.  He went on from there to argue that much of the shape of history is defined by the actions of unique individuals such as Isaac Newton, whose work sends the world careening along entirely new and unpredicted paths. Such individuals have appeared over and over again in history, he pointed out, and was kind enough to suggest that my activities here on The Archdruid Report were, in a small way, another example of the influence of an individual on history. Given that reality, he insisted, a theory of history that didn’t take the actions of unique individuals into account was invalid.
Fair enough; let’s consider that argument. Does the cyclic theory of history fail to take the actions of unique individuals into account?
Here again, Oswald Spengler’s The Decline of the West is the go-to source, because he’s dealt with the sciences and arts to a much greater extent than other researchers into historical cycles. What he shows, with a wealth of examples drawn from the rise and fall of many different civilizations, is that the phenomenon DoubtingThomas describes is a predictable part of the cycles of history. In every generation, in effect, a certain number of geniuses will be born, but their upbringing, the problems that confront them, and the resources they will have available to solve those problems, are not theirs to choose. All these things are produced by the labors of other creative minds of the past and present, and are profoundly influenced by the cycles of history.
Let’s take Isaac Newton as an example. He happened to be born just as the scientific revolution was beginning to hit its stride, but before it had found its paradigm, the set of accomplishments on which all future scientific efforts would be directly or indirectly modeled. His impressive mathematical and scientific gifts thus fastened onto the biggest unsolved problem of the time—the relationship between the physics of moving bodies sketched out by Galileo and the laws of planetary motion discovered by Kepler—and resulted in the Principia Mathematica, which became the paradigm for the next three hundred years or so of scientific endeavor.
Had he been born a hundred years earlier, none of those preparations would have been in place, and the Principia Mathematica wouldn’t have been possible. Given the different cultural attitudes of the century before Newton’s time, in fact, he would almost certainly have become a theologian rather than a mathematician and physicist—as it was, he spent much of his career engaged in theology, a detail usually left out by the more hagiographical of his biographers—and he would be remembered today only by students of theological history. Had he been born a century later, equally, some other great scientific achievement would have provided the paradigm for emerging science—my guess is that it would have been Edmund Halley’s successful prediction of the return of the comet that bears his name—and Newton would have had the same sort of reputation that Carl Friedrich Gauss has today: famous in his field, sure, but a household name? Not a chance.
What makes the point even more precise is that every other civilization from which adequate records survive had its own paradigmatic thinker, the figure whose achievements provided a model for the dawning age of reason and for whatever form of rational thought became that age’s principal cultural expression. In the classical world, for example, it was Pythagoras, who invented the word “philosophy” and whose mathematical discoveries gave classical rationalism its central theme, the idea of an ideal mathematical order to which the hurly-burly of the world of appearances must somehow be reduced. (Like Newton, by the way, Pythagoras was more than half a theologian; it’s a common feature of figures who fill that role.)
To take the same argument to a far more modest level, what about DoubtingThomas’ claim that The Archdruid Report represents the act of a unique individual influencing the course of history? Here again, a glance at history shows otherwise. I’m a figure of an easily recognizable type, which shows up reliably as each civilization’s Age of Reason wanes and it begins moving toward what Spengler called the Second Religiosity, the resurgence of religion that inevitably happens in the wake of rationalism’s failure to deliver on its promises. At such times you get intellectuals who can communicate fluently on both sides of the chasm between rationalism and religion, and who put together syntheses of various kinds that reframe the legacies of the Age of Reason so that they can be taken up by emergent religious movements and preserved for the future.
In the classical world, for example, you got Iamblichus of Chalcis, who stepped into the gap between Greek philosophical rationalism and the burgeoning Second Religiosity of late classical times, and figured out how to make philosophy, logic, and mathematics appealing to the increasingly religious temper of his time. He was one of many such figures, and it was largely because of their efforts that the religious traditions that ended up taking over the classical world—Christianity to the north of the Mediterranean, and Islam to the south—got over their early anti-intellectual streak so readily and ended up preserving so much of the intellectual heritage of the past.
That sort of thing is a worthwhile task, and if I can contribute to it I’ll consider this life well spent. That said, there’s nothing unique about it. What’s more, it’s only possible and meaningful because I happen to be perched on this particular arc of the wheel of time, when our civilization’s Age of Reason is visibly crumbling and the Second Religiosity is only beginning to build up a head of steam. A century earlier or a century later, I’d have faced some different tasks.
All of this presupposes a relationship between the individual and human society that fits very poorly with the unthinking prejudices of our time. That’s something that Spengler grappled with in his book, too;  it’s going to take a long sojourn in some very unfamiliar realms of thought to make sense of what he had to say, but that can’t be helped.
We really are going to have to talk about philosophy, aren’t we? We’ll begin that stunningly unfashionable discussion next week.

How Great the Fall Can Be

Wed, 2017-01-25 13:43
While I type these words, an old Supertramp CD is playing in the next room. Those of my readers who belong to the same slice of an American generation I do will likely remember the words Roger Hodgson is singing just now, the opening line from “Fool’s Overture”:
“History recalls how great the fall can be...”
It’s an apposite quote for a troubled time.
Over the last year or so, in and among the other issues I’ve tried to discuss in this blog, the US presidential campaign has gotten a certain amount of air time. Some of the conversations that resulted generated a good deal more heat than light, but then that’s been true across the board since Donald Trump overturned the established certainties of American political life and launched himself and the nation on an improbable trajectory toward our current situation. Though the diatribes I fielded from various sides were more than occasionally tiresome, I don’t regret making the election a theme for discussion here, as it offered a close-up view of issues I’ve been covering for years now.
A while back on this blog, for example, I spent more than a year sketching out the process by which civilizations fall and dark ages begin, with an eye toward the next five centuries of North American history—a conversation that turned into my book Dark Age America. Among the historical constants I discussed in the posts and the book was the way that governing elites and their affluent supporters stop adapting their policies to changing political and economic conditions, and demand instead that political and economic conditions should conform to their preferred policies. That’s all over today’s headlines, as the governing elites of the industrial world cower before the furious backlash sparked by their rigid commitment to the failed neoliberal nostrums of global trade and open borders.
Another theme I discussed in the same posts and book was the way that science and culture in a civilization in decline become so closely identified with the interests of the governing elite that the backlash against the failed policies of the elite inevitably becomes a backlash against science and culture as well. We’ve got plenty of that in the headlines as well. According to recent news stories, for example, the Trump administration plans to scrap the National Endowment for the Arts, the National Endowment for the Humanities, and the Corporation for Public Broadcasting, and get rid of all the federal offices that study anthropogenic climate change.
Their termination with extreme prejudice isn’t simply a matter of pruning the federal bureaucracy, though that’s a factor. All these organizations display various forms of the identification of science and culture with elite values just discussed, and their dismantling will be greeted by cheers from a great many people outside the circles of the affluent, who have had more than their fill of patronizing lectures from their self-proclaimed betters in recent years. Will many worthwhile programs be lost, along with a great deal that’s less than worthwhile?  Of course.  That’s a normal feature of the twilight years of a civilization.
A couple of years before the sequence of posts on dark age America, for that matter, I did another series on the end of US global hegemony and the rough road down from empire. That sequence also turned into a book, Decline and Fall. In the posts and the book, I pointed out that one of the constants of the history of democratic societies—actual democracies, warts and all, as distinct from the imaginary “real democracy” that exists solely in rhetoric—is a regular cycle of concentration and diffusion of power. The ancient Greek historian Polybius, who worked it out in detail, called it anacyclosis.
A lot can be said about anacyclosis, but the detail that’s relevant just now is the crisis phase, when power has become so gridlocked among competing power centers that it becomes impossible for the system to break out of even the most hopelessly counterproductive policies. That ends, according to Polybius, when a charismatic demagogue gets into power, overturns the existing political order, and sets in motion a general free-for-all in which old alliances shatter and improbable new ones take shape. Does that sound familiar? In a week when union leaders emerged beaming from a meeting with the new president, while Democrats are still stoutly defending the integrity of the CIA, it should.
For that matter, one of the central themes of the sequence of posts and the book was the necessity of stepping back from global commitments that the United States can no longer afford to maintain. That’s happening, too, though it’s being covered up just now by a great deal of Trumped-up bluster about a massive naval expansion. (If we do get a 350-ship navy in the next decade, I’d be willing to bet that a lot of those ships will turn out to be inexpensive corvettes, like the ones the Russians have been using so efficiently as cruise missile platforms on the Caspian Sea.)  European politicians are squawking at top volume about the importance of NATO, which means in practice the continuation of a scheme that allows most European countries to push most of the costs of their own defense onto the United States, but the new administration doesn’t seem to be buying it.
Mind you, I’m far from enthusiastic about the remilitarization of Europe. Outside the brief interval of enforced peace following the Second World War, Europe has been a boiling cauldron of warfare since its modern cultures began to emerge out of the chaos of the post-Roman dark ages. Most of the world’s most devastating wars have been European in origin, and of course it escapes no one’s attention in the rest of the world that it was from Europe that hordes of invaders and colonizers swept over the entire planet from the sixteenth through the nineteenth centuries, as often as not leaving total devastation in their wake. In histories written a thousand years from now, Europeans will have the same sort of reputation that Huns and Mongols have today—and it’s only in the fond fantasies of those who think history has a direction that those days are definitely over.
It can’t be helped, though, for the fact of the matter is that the United States can no longer afford to foot the bill for the defense of other countries. Behind a facade of hallucinatory paper wealth, our nation is effectively bankrupt. The only thing that enables us to pay our debts now is the status of the dollar as the world’s reserve currency—this allows the Treasury to issue debt at a breakneck pace and never have to worry about the cost—and that status is trickling away as one country after another signs bilateral deals to facilitate trading in other currencies. Sooner or later, probably in the next two decades, the United States will be forced to default on its national debt, the way Russia did in 1998.  Before that happens, a great many currently overvalued corporations that support themselves by way of frantic borrowing will have done the same thing by way of the bankruptcy courts, and of course the vast majority of America’s immense consumer debt will have to be discharged the same way.
That means, among other things, that the extravagant lifestyles available to affluent Americans in recent decades will be going away forever in the not too distant future. That’s another point I made in Decline and Fall and the series of posts that became raw material for it. During the era of US global hegemony, the five per cent of our species who lived in the United States disposed of a third of the world’s raw materials and manufactured products and a quarter of its total energy production. That disproportionate share came to us via unbalanced patterns of exchange hardwired into the global economy, and enforced at gunpoint by the military garrisons we keep in more than a hundred countries worldwide. The ballooning US government, corporate, and consumer debt load of recent years was an attempt to keep those imbalances in place even as their basis in geopolitics trickled away. Now the dance is ending and the piper has to be paid.
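For readers who like to see the arithmetic behind that kind of claim spelled out, here is a minimal sketch in Python of what those round figures imply per person. It uses nothing but the numbers quoted in the paragraph above—five per cent of world population, a third of materials and manufactures, a quarter of energy—so the function names are mine and the result is purely illustrative, not an independent estimate of actual consumption.

# Rough arithmetic behind the figures above, using only the round numbers
# quoted in the text: 5% of world population, 1/3 of raw materials and
# manufactured products, 1/4 of energy production. Illustrative only.

def per_capita_multiple(pop_share, consumption_share):
    # How many times the world-average per-capita share this group consumed.
    return consumption_share / pop_share

def ratio_to_rest_of_world(pop_share, consumption_share):
    # Per-capita consumption relative to the average person outside the group.
    inside = consumption_share / pop_share
    outside = (1.0 - consumption_share) / (1.0 - pop_share)
    return inside / outside

US_POP_SHARE = 0.05        # "the five per cent of our species"
MATERIALS_SHARE = 1.0 / 3  # "a third of the world's raw materials and manufactured products"
ENERGY_SHARE = 0.25        # "a quarter of its total energy production"

for label, share in (("materials", MATERIALS_SHARE), ("energy", ENERGY_SHARE)):
    print("%s: %.1fx the world average per person, %.1fx the average person elsewhere"
          % (label, per_capita_multiple(US_POP_SHARE, share),
             ratio_to_rest_of_world(US_POP_SHARE, share)))
# materials: 6.7x the world average per person, 9.5x the average person elsewhere
# energy: 5.0x the world average per person, 6.3x the average person elsewhere

In other words, those round figures work out to the average American consuming several times what the average person elsewhere on the planet got to consume—which is exactly the imbalance the rest of the paragraph describes.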
There’s a certain bleak amusement to be had from the fact that one of the central themes of this blog not that many years back—“Collapse Now and Avoid the Rush”—has already passed its pull date. The rush, in case you haven’t noticed, is already under way. The fraction of US adults of working age who are permanently outside the work force is at an all-time high; so is the fraction of young adults who are living with their parents because they can’t afford to start households of their own. There’s good reason to think that the new administration’s trade and immigration policies may succeed in driving both those figures down, at least for a while, but of course there’ll be a price to be paid for that—and those industries and social classes that have profited most from the policies of the last thirty years, and threw their political and financial weight behind the Clinton campaign, will be first in line to pay it. Vae victis!*
More generally, the broader landscape of ideas this blog has tried to explore since its early days remains what it is. The Earth’s economically accessible reserves of fossil carbon dwindle day by day; with each year that passes, on average, the amount of coal, oil, and natural gas burnt exceeds the amount that’s discovered by a wider margin; the current temporary glut in the oil markets is waning so fast that analysts are predicting the next price spike as soon as 2018. Talk of transitioning away from fossil fuels to renewable energy, on the one hand, or nuclear power on the other, remains talk—I encourage anyone who doubts this to look up the amount of fossil fuels burnt each year over the last two decades and see if they can find a noticeable decrease in global fossil fuel consumption to match the much-ballyhooed buildout of solar and wind power.
The industrial world remains shackled to fossil fuels for most of its energy and all of its transportation fuel, for the simple reason that no other energy source in this end of the known universe provides the abundant, concentrated, and fungible energy supply that’s needed to keep our current lifestyles going. There was always an alternative—deliberately downshifting out of the embarrassing extravagance that passes for normal lifestyles in the industrial world these days, accepting more restricted ways of living in order to leave a better world for our descendants—but not enough people were willing to accept that alternative to make a difference while there was still a chance.
Meanwhile the other jaw of the vise that’s tightening around the future is becoming increasingly visible just now. In the Arctic, freak weather systems have sucked warm air up from lower latitudes and brought the normal process of winter ice formation to a standstill. In the Antarctic, the Larsen C ice shelf, until a few years ago considered immovable by most glaciologists, is in the process of loosing an iceberg the size of Delaware into the Antarctic Ocean. I look out my window and see warm rain falling; here in the north central Appalachians, in January, it’s been most of a month since the thermometer last dipped below freezing. The new administration has committed itself to do nothing about anthropogenic climate change, but then, despite plenty of talk, the Obama administration didn’t do anything about it either.
There’s good reason for that, too. The only way to stop anthropogenic climate change in its tracks is to stop putting greenhouse gases into the atmosphere, and doing that would require the world to ground its airlines, turn its highways over to bicycles and oxcarts, and shut down every other technology that won’t be economically viable if it has to depend on the diffuse intermittent energy available from renewable sources. Does the political will to embrace such changes exist? Since I know of precisely three climate change scientists, out of thousands, who take their own data seriously enough to cut their carbon footprint by giving up air travel, it’s safe to say that the answer is “no.”
So, basically, we’re in for it.
The thing that fascinates me is that this is something I’ve been saying for the whole time this blog has been appearing. The window of opportunity for making a smooth transition to a renewable future slammed shut in the early 1980s, when majorities across the industrial world turned their backs on the previous decade’s promising initiatives toward sustainability, and bought into the triumphalist rhetoric of the Reagan-Thatcher counterrevolution instead. Since then, year after weary year, most of the green movement—with noble exceptions—has been long on talk and short on action.  Excuses for doing nothing and justifications for clinging to lifestyles the planet cannot support have proliferated like rabbits on Viagra, and most of the people who talked about sustainability at all took it for granted that the time to change course was still somewhere conveniently off in the future. That guaranteed that the chance to change course would slide steadily further back into the past.
There was another detail of the post-Seventies sustainability scene that deserves discussion, though, because it’s been displayed with an almost pornographic degree of nakedness in the weeks just past. From the early days of the peak oil movement in the late 1990s on, a remarkably large number of the people who talked eagerly about the looming crisis of our age seemed to think that its consequences would leave them and the people and things they cared about more or less intact. That wasn’t universal by any means; there were always some people who grappled with the hard realities that the end of the fossil fuel age was going to impose on their own lives; but all things considered, there weren’t that many, in comparison to all those who chattered amiably about how comfortable they’d be in their rural doomsteads, lifeboat communities, Transition Towns, et al.
Now, as discussed earlier in this post, we’ve gotten a very modest helping of decline and fall, and people who were enthusiastically discussing the end of the industrial age not that long ago are freaking out six ways from Sunday. If a relatively tame event like the election of an unpopular president can send people into this kind of tailspin, what are they going to do the day their paychecks suddenly turn out to be worth only half as much in terms of goods and services as before—a kind of event that’s already become tolerably common elsewhere, and could quite easily happen in this country as the dollar loses its reserve currency status?
What kinds of meltdowns are we going to get when internet service or modern health care get priced out of reach, or become unavailable at any price?  How are they going to cope if the accelerating crisis of legitimacy in this country causes the federal government to implode, the way the government of the Soviet Union did, and suddenly they’re living under cobbled-together regional governments that don’t have the money to pay for basic services? What sort of reaction are we going to see if the US blunders into a sustained domestic insurgency—suicide bombs going off in public places, firefights between insurgent forces and government troops, death squads from both sides rounding up potential opponents and leaving them in unmarked mass graves—or, heaven help us, all-out civil war?
This is what the decline and fall of a civilization looks like. It’s not about sitting in a cozy earth-sheltered home under a roof loaded with solar panels, living some close approximation of a modern industrial lifestyle, while the rest of the world slides meekly down the chute toward history’s compost bin, leaving you and yours untouched. It’s about political chaos—meaning that you won’t get the leaders you want, and you may not be able to count on the rule of law or even the most basic civil liberties. It’s about economic implosion—meaning that your salary will probably go away, your savings almost certainly won’t keep their value, and if you have gold bars hidden in your home, you’d better hope to Hannah that nobody ever finds out, or it’ll be a race between the local government and the local bandits to see which one gets to tie your family up and torture them to death, starting with the children, until somebody breaks and tells them where your stash is located.
It’s about environmental chaos—meaning that you and the people you care about may have many hungry days ahead as crazy weather messes with the harvests, and it’s by no means certain you won’t die early from some tropical microbe that’s been jarred loose from its native habitat to find a new and tasty home in you. It’s about rapid demographic contraction—meaning that you get to have the experience a lot of people in the Rust Belt have already, of walking past one abandoned house after another and remembering the people who used to live there, until they didn’t any more.
More than anything else, it’s about loss. Things that you value—things you think of as important, meaningful, even necessary—are going to go away forever in the years immediately ahead of us, and there will be nothing you can do about it.  It really is as simple as that. People who live in an age of decline and fall can’t afford to cultivate a sense of entitlement. Unfortunately, for reasons discussed at some length in one of last month’s posts, the notion that the universe is somehow obliged to give people what they think they deserve is very deeply engrained in American popular culture these days. That’s a very unwise notion to believe right now, and as we slide further down the slope, it could very readily become fatal—and no, by the way, I don’t mean that last adjective in a metaphorical sense.
History recalls how great the fall can be, Roger Hodgson sang. In our case, it’s shaping up to be one for the record books—and those of my readers who have worked themselves up to the screaming point about the comparatively mild events we’ve seen so far may want to save some of their breath for the times ahead when it’s going to get much, much worse.

*In colloquial English: “It sucks to lose.”

The Hate that Dare Not Speak its Name

Wed, 2017-01-18 15:12
As the United States stumbles toward the last act of its electoral process two days from now, and the new administration prepares to take over the reins of power from its feckless predecessor, the obligatory caterwauling of the losing side has taken on an unfamiliar shrillness. Granted, the behavior of both sides in the last few decades of American elections can be neatly summed up in the words “sore loser”; the Republicans in 1992 and 2008 behaved not one whit better than the Democrats in 1980 and 2000.  I think it’s fair, though, to say that the current example has plunged well past the low-water mark set by those dismal occasions. The question I’d like to discuss here is why that should be.
I think we can all admit that there are plenty of reasons why Americans might reasonably object to the policies and appointments of the incoming president, but the same thing has been true of every other president we’ve had since George Washington’s day. Equally, both of our major parties have long been enthusiastic practitioners of the fine art of shrieking in horror at the other side’s behavior, while blithely excusing the identical behavior on their side.  Had the election last November gone the other way, for example, we can be quite certain that all the people who are ranting about Donald Trump’s appointment of Goldman Sachs employees to various federal offices would be busy explaining how reasonable it was for Hillary Clinton to do exactly the same thing—as of course she would have.
That said, I don’t think reasonable differences of opinion on the one hand, and the ordinary hypocrisy of partisan politics on the other, explain the extraordinary stridency, the venom, and the hatred being flung at the incoming administration by its enemies. There may be many factors involved, to be sure, but I’d like to suggest that one factor in particular plays a massive role here.
To be precise, I think a lot of what we’re seeing is the product of class bigotry.
Some definitions are probably necessary here. We can define bigotry as the act of believing hateful things about all the members of a given category of people, just because they belong to that category. Thus racial bigots believe hateful things about everyone who belongs to races they don’t like, religious bigots do the same thing to every member of the religions they don’t like, and so on through the dismal chronicle of humanity’s collective nastiness.
Defining social class is a little more difficult to do in the abstract, as different societies draw up and enforce their class barriers in different ways. In the United States, though, the matter is made a good deal easier by the lack of a fully elaborated feudal system in our nation’s past, on the one hand, and on the other, the tolerably precise dependency of how much privilege you have in modern American society on how much money you make. Thus we can describe class bigotry in the United States, without too much inaccuracy, as bigotry directed against people who make either significantly more money than the bigot does, or significantly less. (Of course that’s not all there is to social class, not by a long shot, but for our present purposes, as an ostensive definition, it will do.)
Are the poor bigoted against the well-to-do? You bet. Bigotry directed up the social ladder, though, is far more than matched, in volume and nastiness, by bigotry directed down. It’s a source of repeated amusement to me that rich people in this country so often inveigh against the horrors of class warfare. Class warfare is their bread and butter. The ongoing warfare of the rich against the poor, and of the affluent middle and upper middle classes against the working class, creates and maintains the vast disparities of wealth and privilege in contemporary American society. What upsets the rich and the merely affluent about class warfare, of course, is the thought that they might someday be treated the way they treat everyone else.
Until last year, if you wanted to experience the class bigotry that’s so common among the affluent classes in today’s America, you pretty much had to be a member of those affluent classes, or at least good enough at passing to be present at the social events where their bigotry saw free play. Since Donald Trump broke out of the Republican pack early last year, though, that hindrance has gone by the boards. Those who want to observe American class bigotry at its choicest need only listen to what a great many of the public voices of the well-to-do are saying about the people whose votes and enthusiasm have sent Trump to the White House.
You see, that’s a massive part of the reason a Trump presidency is so unacceptable to so many affluent Americans:  his candidacy, unlike those of all his rivals, was primarily backed by “those people.”
It’s probably necessary to clarify just who “those people” are. During the election, and even more so afterwards, the mainstream media here in the United States have seemingly been unable to utter the words “working class” without sticking the labels “white” in front and “men” behind. The resulting rhetoric seems to be claiming that the relatively small fraction of the American voting public that’s white, male, and working class somehow managed to hand the election to Donald Trump all by themselves, despite the united efforts of everyone else.
Of course that’s not what happened. A huge majority of white working class women also voted for Trump, for example.  So, according to exit polls, did about a third of Hispanic men and about a quarter of Hispanic women; so did varying fractions of other American minority voting blocs, with African-American voters (the least likely to vote for Trump) still putting something like fourteen per cent in his column. Add it all up, and you’ll find that the majority of people who voted for Trump weren’t white working class men at all—and we don’t even need to talk about the huge number of registered voters of all races and genders who usually turn out for Democratic candidates, but stayed home in disgust this year, and thus deprived Clinton of the turnout that could have given her the victory.
Somehow, though, pundits and activists who fly to their keyboards at a moment’s notice to denounce the erasure of women and people of color in any other context are eagerly cooperating in the erasure of women and people of color in this one case. What’s more, that same erasure went on continuously all through the campaign. Those of my readers who followed the media coverage of the race last year will recall confident proclamations that women wouldn’t vote for Trump because his words and actions had given offense to feminists, that Hispanics (or people of color in general) wouldn’t vote for Trump because social-justice activists denounced his attitudes toward illegal immigrants from Mexico as racist, and so on. The media took these proclamations as simple statements of fact—and of course that was one of the reasons media pundits were blindsided by Trump’s victory.
The facts of the matter are that a great many American women don’t happen to agree with feminists, nor do all people of color agree with the social-justice activists who claim to speak in their name. For that matter, may I point out to my fellow inhabitants of Gringostan that the terms “Hispanic” and “Mexican-American” are not synonyms? Americans of Hispanic descent trace their ancestry to many different nations of origin, each of which has its own distinctive culture and history, and they don’t form a single monolithic electoral bloc. (The Cuban-American community in Florida, to cite only one of the more obvious examples, very often votes Republican and played a significant role in giving that electoral vote-rich state to Trump.)
Behind the media-manufactured facade of white working class men as the cackling villains who gave the country to Donald Trump, in other words, lies a reality far more in keeping with the complexities of American electoral politics: a ramshackle coalition of many different voting blocs and interest groups, each with its own assortment of reasons for voting for a candidate feared and despised by the US political establishment and the mainstream media.  That coalition included a very large majority of the US working class in general, and while white working class voters of both genders were disproportionately more likely to have voted for Trump than their nonwhite equivalents, it wasn’t simply a matter of whiteness, or for that matter maleness.
It was, however, to a very great extent a matter of social class. This isn’t just because so large a fraction of working class voters generally backed Trump; it’s also because Trump saw this from the beginning, and aimed his campaign squarely at the working class vote. His signature red ball cap was part of that—can you imagine Hillary Clinton wearing so proletarian a garment without absurdity?—but, as I pointed out a year ago, so was his deliberate strategy of saying (and tweeting) things that would get the liberal punditocracy to denounce him. The tones of sneering contempt and condescension they directed at him were all too familiar to his working class audiences, who have been treated to the same tones unceasingly by their soi-disant betters for decades now.
Much of the pushback against Trump’s impending presidency, in turn, is heavily larded with that same sneering contempt and condescension—the unending claims, for example, that the only reason people could possibly have chosen to vote for Trump was because they were racist misogynistic morons, and the like. (These days, terms such as “racist” and “misogynistic,” in the mouths of the affluent, are as often as not class-based insults rather than objective descriptions of attitudes.) The question I’d like to raise at this point, though, is why the affluent don’t seem to be able to bring themselves to come right out and denounce Trump as the candidate of the filthy rabble. Why must they borrow the rhetoric of identity politics and twist it (and themselves) into pretzel shapes instead?
There, dear reader, hangs a tale.
In the aftermath of the social convulsions of the 1960s, the wealthy elite occupying the core positions of power in the United States offered a tacit bargain to a variety of movements for social change.  Those individuals and groups who were willing to give up the struggle to change the system, and settled instead for a slightly improved place within it, suddenly started to receive corporate and government funding, and carefully vetted leaders from within the movements in question were brought into elite circles as junior partners. Those individuals and groups who refused these blandishments were marginalized, generally with the help of their more compliant peers.
If you ever wondered, for example, why environmental groups such as the Sierra Club and Friends of the Earth changed so quickly from scruffy fire-breathing activists to slickly groomed and well-funded corporate enablers, well, now you know. Equally, that’s why mainstream feminist organizations by and large stopped worrying about the concerns of the majority of women and fixated instead on “breaking the glass ceiling”—that is to say, giving women who already belong to the privileged classes access to more privilege than they have already. The core demand placed on former radicals who wanted to cash in on the offer, though, was that they drop their demands for economic justice—and American society being what it is, that meant that they had to stop talking about class issues.
The interesting thing is that a good many American radicals were already willing to meet them halfway on that. The New Left of the 1960s, like the old Left of the between-the-wars era, was mostly Marxist in its theoretical underpinnings, and so was hamstrung by the mismatch between Marxist theory and one of the enduring realities of American politics. According to Marxist theory, socialist revolution is led by the radicalized intelligentsia, but it gets the muscle it needs to overthrow the capitalist system from the working classes. This is the rock on which wave after wave of Marxist activism has broken and gone streaming back out to sea, because the American working classes are serenely uninterested in taking up the world-historical role that Marxist theory assigns to them. All they want is plenty of full time jobs at a living wage.  Give them that, and revolutionary activists can bellow themselves hoarse without getting the least flicker of interest out of them.
Every so often, the affluent classes lose track of this, and try to force the working classes to put up with extensive joblessness and low pay, so that affluent Americans can pocket the proceeds. This never ends well.  After an interval, the working classes pick up whatever implement is handy—Andrew Jackson, the Grange, the Populist movement, the New Deal, Donald Trump—and beat the affluent classes about the head and shoulders with it until the latter finally get a clue. This might seem  promising for Marxist revolutionaries, but it isn’t, because the Marxist revolutionaries inevitably rush in saying, in effect, “No, no, you shouldn’t settle for plenty of full time jobs at a living wage, you should die by the tens of thousands in an orgy of revolutionary violence so that we can seize power in your name.” My readers are welcome to imagine the response of the American working class to this sort of rhetoric.
The New Left, like the other American Marxist movements before its time, thus had a bruising face-first collision with cognitive dissonance: its supposedly infallible theory said one thing, but the facts refused to play along and said something very different. For much of the Sixties and Seventies, New Left theoreticians tried to cope with this by coming up with increasingly Byzantine redefinitions of “working class” that excluded the actual working class, so that they could continue to believe in the inevitability and imminence of the proletarian revolution Marx promised them. Around the time that this effort finally petered out into absurdity, it was replaced by the core concept of the identity politics currently central to the American left: the conviction that the only divisions in American society that matter are those that have some basis in biology.
Skin color, gender, ethnicity, sexual orientation, disability—these are the divisions that the American left likes to talk about these days, to the exclusion of all other social divisions, and especially to the exclusion of social class.  Since the left has dominated public discourse in the United States for many decades now, those have become the divisions that the American right talks about, too. (Please note, by the way, the last four words in the paragraph above: “some basis in biology.” I’m not saying that these categories are purely biological in nature; every one of them is defined in practice by a galaxy of cultural constructs and presuppositions, and the link to biology is an ostensive category marker rather than a definition. I insert this caveat because I’ve noticed that a great many people go out of their way to misunderstand the point I’m trying to make here.)
Are the divisions listed above important when it comes to discriminatory treatment in America today? Of course they are—but social class is also important. It’s by way of the erasure of social class as a major factor in American injustice that we wind up in the absurd situation in which a woman of color who makes a quarter million dollars a year plus benefits as a New York stockbroker can claim to be oppressed by a white guy in Indiana who’s working three part time jobs at minimum wage with no benefits in a desperate effort to keep his kids fed, when the political candidates that she supports and the economic policies from which she profits are largely responsible for his plight.
In politics as in physics, every action produces an equal and opposite reaction, and so absurdities of the sort just described have kindled the inevitable blowback. The Alt-Right scene that’s attracted so much belated attention from politicians and pundits over the last year is in large part a straightforward reaction to the identity politics of the left. Without too much inaccuracy, the Alt-Right can be seen as a network of young white men who’ve noticed that every other identity group in the country is being encouraged to band together to further its own interests at their expense, and responded by saying, “Okay, we can play that game too.” So far, you’ve got to admit, they’ve played it with verve.
That said, on the off chance that any devout worshippers of the great god Kek happen to be within earshot, I have a bit of advice that I hope will prove helpful. The next time you want to goad affluent American liberals into an all-out, fist-pounding, saliva-spraying Donald Duck meltdown, you don’t need the Jew-baiting, the misogyny, the racial slurs, and the rest of it.  All you have to do is call them on their class privilege. You’ll want to have the popcorn popped, buttered, and salted first, though, because if my experience is anything to go by, you’ll be enjoying a world-class hissy fit in seconds.
I’d also like to offer the rest of my readers another bit of advice that, again, I hope will prove helpful. As Donald Trump becomes the forty-fifth president of the United States and begins to push the agenda that got him into the White House, it may be useful to have a convenient way to sort through the mix of signals and noise from the opposition. When you hear people raising reasoned objections to Trump’s policies and appointments, odds are that you’re listening to the sort of thoughtful dissent that’s essential to any semblance of democracy, and it may be worth taking seriously. When you hear people criticizing Trump and his appointees for doing the same thing his rivals would have done, or his predecessors did, odds are that you’re getting the normal hypocrisy of partisan politics, and you can roll your eyes and stroll on.
But when you hear people shrieking that Donald Trump is the illegitimate result of a one-night stand between Ming the Merciless and Cruella de Vil, that he cackles in Russian while barbecuing babies on a bonfire, that everyone who voted for him must be a card-carrying Nazi who hates the human race, or whatever other bit of over-the-top hate speech happens to be fashionable among the chattering classes at the moment—why, then, dear reader, you’re hearing a phenomenon as omnipresent and unmentionable in today’s America as sex was in Victorian England. You’re hearing the voice of class bigotry: the hate that dare not speak its name.

The Embarrassments of Chronocentrism

Wed, 2017-01-11 11:09
It's a curious thing, this attempt of mine to make sense of the future by understanding what’s happened in the past. One of the most curious things about it, at least to me, is the passion with which so many people insist that this isn’t an option at all. In any other context, “Well, what happened the last time someone tried that?” is one of the first and most obviously necessary questions to ask and answer—but heaven help you if you try to raise so straightforward a question about the political, economic, and social phenomena of the present day.
In previous posts here we’ve talked about thoughtstoppers of the “But it’s different this time!” variety, and some of the other means people these days use to protect themselves against the risk of learning anything useful from the hard-earned lessons of the past. This week I want to explore another, subtler method of doing the same thing. As far as I’ve been able to tell, it’s mostly an issue here in the United States, but here it’s played a remarkably pervasive role in convincing people that the only way to open a door marked PULL is to push on it long and hard enough.
It’s going to take a bit of a roundabout journey to make sense of the phenomenon I have in mind, so I’ll have to ask my readers’ forbearance for what will seem at first like several sudden changes of subject.
One of the questions I field tolerably often, when I discuss the societies that will rise after modern industrial civilization finishes its trajectory into history’s compost heap, is whether I think that consciousness evolves. I admit that until fairly recently, I was pretty much at a loss to know how to respond. It rarely took long to find out that the questioner wasn’t thinking about the intriguing theory Julian Jaynes raised in The Origin of Consciousness in the Breakdown of the Bicameral Mind, the Jungian conception Erich Neumann proposed in The Origins and History of Consciousness, or anything of the same kind. Nor, it turned out, was the question usually based on the really rather weird reinterpretations of evolution common in today’s pop-spirituality scene. Rather, it was political.
It took me a certain amount of research, and some puzzled emails to friends more familiar with current left-wing political jargon than I am, to figure out what was behind these questions. Among a good-sized fraction of American leftist circles these days, it turns out it’s become a standard credo that what drives the kind of social changes supported by the left—the abolition of slavery and segregation, the extension of equal (or more than equal) rights to an assortment of disadvantaged groups, and so on—is an ongoing evolution of consciousness, in which people wake up to the fact that things they’ve considered normal and harmless are actually intolerable injustices, and so decide to stop.
Those of my readers who followed the late US presidential election may remember Hillary Clinton’s furious response to a heckler at one of her few speaking gigs:  “We aren’t going back. We’re going forward.” Underlying that outburst is the belief system I’ve just sketched out: the claim that history has a direction, that it moves in a linear fashion from worse to better, and that any given political choice—for example, which of the two most detested people in American public life is going to become the nominal head of a nation in freefall ten days from now—not only can but must be flattened out into a rigidly binary decision between “forward” and “back.”
There’s no shortage of hard questions that could be brought to bear on that way of thinking about history, and we’ll get to a few of them a little later on, but let’s start with the simplest one: does history actually show any such linear movement in terms of social change?
It so happens that I’ve recently finished a round of research bearing on exactly that question, though I wasn’t thinking of politics or the evolution of consciousness when I launched into it. Over the last few years I’ve been working on a sprawling fiction project, a seven-volume epic fantasy titled The Weird of Hali, which takes the horror fantasy of H.P. Lovecraft and stands it on its head, embracing the point of view of the tentacled horrors and multiracial cultists Lovecraft liked to use as images of dread. (The first volume, Innsmouth, is in print in a fine edition and will be out in trade paper this spring, and the second, Kingsport, is available for preorder and will be published later this year.)
One of Lovecraft’s few memorable human characters, the intrepid dream-explorer Randolph Carter, has an important role in the fourth book of my series. According to Lovecraft, Carter was a Boston writer and esthete of the 1920s from a well-to-do family, who had no interest in women but a whole series of intimate (and sometimes live-in) friendships with other men, and decidedly outré tastes in interior decoration—well, I could go on. The short version is that he’s very nearly the perfect archetype of an upper-class gay man of his generation. (Whether Lovecraft intended this is a very interesting question that his biographers don’t really answer.) With an eye toward getting a good working sense of Carter’s background, I talked to a couple of gay friends, who pointed me to some friends of theirs, and that was how I ended up reading George Chauncey’s magisterial Gay New York: Gender, Urban Culture, and the Making of the Gay Male World, 1890-1940.
What Chauncey documents, in great detail and with a wealth of citations from contemporary sources, is that gay men in America had substantially more freedom during the first three decades of the twentieth century than they did for a very long time thereafter. While homosexuality was illegal, the laws against it had more or less the same impact on people’s behavior that the laws against smoking marijuana had in the last few decades of the twentieth century—lots of people did it, that is, and now and then a few of them got busted. Between the beginning of the century and the coming of the Great Depression, in fact, most large American cities had a substantial gay community with its own bars, restaurants, periodicals, entertainment venues, and social events, right out there in public.
Nor did the gay male culture of early twentieth century America conform to current ideas about sexual identity, or the relationship between gay culture and social class, or—well, pretty much anything else, really. A very large number of men who had sex with other men didn’t see that as central to their identity—there were indeed men who embraced what we’d now call a gay identity, but that wasn’t the only game in town by a long shot. What’s more, sex between men was by and large more widely accepted in the working classes than it was further up the social ladder. In turn-of-the-century New York, it was the working class gay men who flaunted the camp mannerisms and the gaudy clothing; upper- and middle-class gay men such as Randolph Carter had to be much more discreet.
So what happened? Did some kind of vast right-wing conspiracy shove the ebullient gay male culture of the early twentieth century into the closet? No, and that’s one of the more elegant ironies of this entire chapter of American cultural history. The crusade against the “lavender menace” (I’m not making that phrase up, by the way) was one of the pet causes of the same Progressive movement responsible for winning women the right to vote and breaking up the fabulously corrupt machine politics of late nineteenth century America. Unpalatable as that fact is in today’s political terms, gay men and lesbians weren’t forced into the closet in the 1930s by the right.  They were driven there by the left.
This is the same Progressive movement, remember, that made Prohibition a central goal of its political agenda, and responded to the total failure of the Prohibition project by refusing to learn the lessons of failure and redirecting its attentions toward banning less popular drugs such as marijuana. That movement was also, by the way, heavily intertwined with what we now call Christian fundamentalism. Some of my readers may have heard of William Jennings Bryan, the supreme orator of the radical left in late nineteenth century America, the man whose “Cross of Gold” speech became the great rallying cry of opposition to the Republican corporate elite in the decades before the First World War. He also joined the prosecution in the equally famous Scopes Monkey Trial, arguing the case against a schoolteacher who had dared to affirm Darwin’s theory of evolution in public.
The usual response of people on today’s left to such historical details—well, other than denying or erasing them, which is of course quite common—is to insist that this proves that Bryan et al. were really right-wingers. Not so; again, we’re talking about people who put their political careers on the line to give women the vote and weaken (however temporarily) the grip of corporate money on the US political system. The politics of the Progressive era didn’t assign the same issues to the categories “left” and “right” that today’s politics do, and so all sides in the sprawling political free-for-all of that time embraced some issues that currently belong to the left, others that belong to the right, and still others that have dropped entirely out of the political conversation since then.
I could go on, but let’s veer off in another direction instead. Here’s a question for those of my readers who think they’re well acquainted with American history. The Fifteenth Amendment, which granted the right to vote to all adult men in the United States irrespective of race, was ratified in 1870. Before then, did black men have the right to vote anywhere in the US?
Most people assume as a matter of course that the answer must be no—and they’re wrong. Until the passage of the Fifteenth Amendment, the question of who did and didn’t have voting rights was a matter for each state to decide for itself. Fourteen states either allowed free African-American men to vote in Colonial times or granted them that right when first organized. Later on, ten of them—Delaware in 1792, Kentucky in 1799, Maryland in 1801, New Jersey in 1807, Connecticut in 1814, New York in 1821, Rhode Island in 1822, Tennessee in 1834, North Carolina in 1835, and Pennsylvania in 1838—either denied free black men the vote or raised legal barriers that effectively kept them from voting. Four other states—Massachusetts, Vermont, New Hampshire, and Maine—gave free black men the right to vote in Colonial times and maintained that right until the Fifteenth Amendment made the whole issue moot. Those readers interested in the details can find them in The African American Electorate: A Statistical History by Hanes Walton Jr. et al., which devotes chapter 7 to the subject.
So what happened? Was there a vast right-wing conspiracy to deprive black men of the right to vote? No, and once again we’re deep in irony. The political movements that stripped free American men of African descent of their right to vote were the two great pushes for popular democracy in the early United States, the Democratic-Republican party under Thomas Jefferson and the Democratic party under Andrew Jackson. Read any detailed history of the nineteenth century United States and you’ll learn that before these two movements went to work, each state set a certain minimum level of personal wealth that citizens had to have in order to vote. Both movements forced through reforms in the voting laws, one state at a time, to remove these property requirements and give the right to vote to every adult white man in the state. What you won’t learn, unless you do some serious research, is that in many states these same reforms also stripped adult black men of their right to vote.
Try to explain this to most people on the leftward end of today’s American political spectrum, and you’ll likely end up with a world-class meltdown, because the Jeffersonian Democratic-Republicans and the Jacksonian Democrats, like the Progressive movement, embraced some causes that today’s leftists consider progressive, and others that they consider regressive. The notion that social change is driven by some sort of linear evolution of consciousness, in which people necessarily become “more conscious” (that is to say, conform more closely to the ideology of the contemporary American left) over time, has no room for gay-bashing Progressives and Jacksonian Democrats whose concept of democracy included a strict color bar. The difficulty, of course, is that history is full of Progressives, Jacksonian Democrats, and countless other political movements that can’t be shoehorned into the Procrustean bed of today’s political ideologies.
I could add other examples—how many people remember, for example, that environmental protection was a cause of the far right until the 1960s?—but I think the point has been made. People in the past didn’t divide up the political causes of their time into the same categories left-wing activists like to use today. It’s practically de rigueur for left-wing activists these days to insist that people in the past ought to have seen things in today’s terms rather than the terms of their own time, but that insistence just displays a bad case of chronocentrism.
Chronocentrism? Why, yes.  Most people nowadays are familiar with ethnocentrism, the insistence by members of one ethnic group that the social customs, esthetic notions, moral standards, and so on of that ethnic group are universally applicable, and that anybody who departs from those things is just plain wrong. Chronocentrism is the parallel insistence, on the part of people living in one historical period, that the social customs, esthetic notions, moral standards, and so on of that period are universally applicable, and that people in any other historical period who had different social customs, esthetic notions, moral standards, and so on should have known better.
Chronocentrism is pandemic in our time. Historians have a concept called “Whig history”; it got that moniker from a long line of English historians who belonged to the Whig, i.e., Liberal Party, and who wrote as though all of human history was to be judged according to how well it measured up to the current Liberal Party platform. Such exercises aren’t limited to politics, though; my first exposure to the concept of Whig history came via university courses in the history of science. When I took those courses—this was twenty-five years ago, mind you—historians of science were sharply divided between a majority that judged every scientific activity in every past society on the basis of how well it conformed to our ideas of science, and a minority that tried to point out just how difficult this habit made the already challenging task of understanding the ideas of past thinkers.
To my mind, the minority view in those debates was correct, but at least some of its defenders missed a crucial point. Whig history doesn’t exist to foster understanding of the past.  It exists to justify and support an ideological stance of the present. If the entire history of science is rewritten so that it’s all about how the currently accepted set of scientific theories about the universe rose to their present privileged status, that act of revision makes currently accepted theories look like the inevitable outcome of centuries of progress, rather than jerry-rigged temporary compromises kluged together to cover a mass of recalcitrant data—which, science being what it is, is normally a more accurate description.
In exactly the same sense, the claim that a certain set of social changes in the United States and other industrial countries in recent years results from the “evolution of consciousness,” unfolding on a one-way street from the ignorance of the past to a supposedly enlightened future, doesn’t help make sense of the complicated history of social change. It was never supposed to do that. Rather, it’s an attempt to backstop the legitimacy of a specific set of political agendas here and now by making them look like the inevitable outcome of the march of history. The erasure of the bits of inconvenient history I cited earlier in this essay is part and parcel of that attempt; like all linear schemes of historical change, it falsifies the past and glorifies the future in order to prop up an agenda in the present.
It needs to be remembered in this context that the word “evolution” does not mean “progress.” Evolution is adaptation to changing circumstances, and that’s all it is. When people throw around the phrases “more evolved” and “less evolved,” they’re talking nonsense, or at best engaging in a pseudoscientific way of saying “I like this” and “I don’t like that.” In biology, every organism—you, me, koalas, humpback whales, giant sequoias, pond scum, and all the rest—is equally the product of a few billion years of adaptation to the wildly changing conditions of an unstable planet, with genetic variation shoveling in diversity from one side and natural selection picking and choosing on the other. The habit of using the word “evolution” to mean “progress” is pervasive, and it’s pushed hard by the faith in progress that serves as an ersatz religion in our time, but it’s still wrong.
It’s entirely possible, in fact, to talk about the evolution of political opinion (which is of course what “consciousness” amounts to here) in strictly Darwinian terms. In every society, at every point in its history, groups of people are striving to improve the conditions of their lives by some combination of competition and cooperation with other groups. The causes, issues, and rallying cries that each group uses will vary from time to time as conditions change, and so will the relationships between groups—thus it was to the advantage of affluent liberals of the Progressive era to destroy the thriving gay culture of urban America, just as it was to the advantage of affluent liberals of the late twentieth century to turn around and support the struggle of gay people for civil rights. That’s the way evolution works in the real world, after all.
This sort of thinking doesn’t offer the kind of ideological support that activists of various kinds are used to extracting from various linear schemes of history. On the other hand, that difficulty is more than balanced by a significant benefit, which is that linear predictions inevitably fail, and so by and large do movements based on them. The people who agreed enthusiastically with Hillary Clinton’s insistence that “we aren’t going back, we’re going forward” are still trying to cope with the hard fact that their political agenda will be wandering in the wilderness for at least the next four years. Those who convince themselves that their cause is the wave of the future are constantly being surprised by the embarrassing discovery that waves inevitably break and roll back out to sea. It’s those who remember that history plays no favorites who have a chance of accomplishing their goals.

How Not To Write Like An Archdruid

Wed, 2017-01-04 11:59
Among the occasional amusements I get from writing these weekly essays are earnest comments from people who want to correct my writing style. I field one of them every month or so, and the latest example came in over the electronic transom in response to last week’s post. Like most of its predecessors, it insisted that there’s only one correct way to write for the internet, trotted out a set of canned rules that supposedly encapsulate this one correct way, and assumed as a matter of course that the only reason I didn’t follow those rules is that I’d somehow managed not to hear about them yet.
The latter point is the one I find most amusing, and also most curious. Maybe I’m naive, but it’s always seemed to me that if I ran across someone who was writing in a style I found unusual, the first thing I’d want to do would be to ask the author why he or she had chosen that stylistic option—because, you know, any writer who knows the first thing about his or her craft chooses the style he or she finds appropriate for any given writing project. I field such questions once in a blue moon, and I’m happy to answer them, because I do indeed have reasons for writing these essays in the style I’ve chosen for them. Yet it’s much more common to get the sort of style policing I’ve referenced above—and when that happens, you can bet your bottom dollar that what’s being pushed is the kind of stilted, choppy, dumbed-down journalistic prose that I’ve deliberately chosen not to write.
I’m going to devote a post to all this. Partly, that’s because I write what I want to write about, the way I want to write about it, for the benefit of those who enjoy reading it; those who don’t are encouraged to remember that there are thousands of other blogs out there that they’re welcome to read instead. Partly, though, the occasional thudding of what Giordano Bruno called “the battering rams of infants, the catapults of error, the bombards of the inept, and the lightning flashes, thunder, and great tempests of the ignorant”—now there was a man who could write!—raises issues that are central to the occasional series of essays on education I’ve been posting here.
Accepting other people’s advice on writing is a risky business—and yes, that applies to this blog post as well as any other source of such advice. It’s by no means always true that “those who can, do; those who can’t, teach,” but when we’re talking about unsolicited writing advice on the internet, that’s the way to bet.  Thus it’s not enough for some wannabe instructor to tell you “I’ve taught lots of people” (taught them what?) or “I’ve helped lots of people” (to do what?)—the question you need to ask is what the instructor himself or herself has written and where it’s been published.
The second of those matters as much as the first. It so happens, for example, that a great many of the professors who offer writing courses at American universities publish almost exclusively in the sort of little literary quarterlies that have a circulation in three figures and pay contributors in spare copies. (It’s not coincidental that these days, most of the little literary quarterlies in question are published by university English departments.) There’s nothing at all wrong with that, if you dream of writing the sort of stories, essays, and poetry that populate little literary quarterlies.
If you want to write something else, though, it’s worth knowing that these little quarterlies have their own idiosyncratic literary culture. There was a time when the little magazines were one of the standard stepping stones to a successful writing career, but that time went whistling down the wind decades ago. Nowadays, the little magazines have gone one way, the rest of the publishing world has gone another, and many of the habits the little magazines encourage (or even require) in their writers will guarantee prompt and emphatic rejection slips from most other writing venues.
Different kinds of writing, in other words, have their own literary cultures and stylistic customs. In some cases, those can be roughly systematized in the form of rules. That being the case, is there actually some set of rules that are followed by everything good on the internet?
Er, that would be no. I’m by no means a fan of the internet, all things considered—I publish my essays here because most of the older venues I’d prefer no longer exist—but it does have its virtues, and one of them is the remarkable diversity of style to be found there. If you like stilted, choppy, dumbed-down journalistic prose of the sort my commenter wanted to push on me, why, yes, you can find plenty of it online. You can also find lengthy, well-argued essays written in complex and ornate prose, stream-of-consciousness pieces that out-beat the Beat generation, experimental writing of any number of kinds, and more. Sturgeon’s Law (“95% of everything is crap”) applies here as it does to every other human creation, but there are gems to be found online that range across the spectrum of literary forms and styles. No one set of rules applies.
Thus we can dismiss the antics of the style police out of hand. Let’s go deeper, though. If there’s no one set of rules that internet writing ought to follow, are there different rules for each kind of writing? Or are rules themselves the problem? This is where things get interesting.
One of the consistent mental hiccups of American popular culture is the notion that every spectrum consists solely of its two extremes, with no middle ground permitted, and that bit of paralogic gets applied to writing at least as often as to anything else. Thus you have, on the one hand, the claim that the only way to write well is to figure out what the rules are and follow them with maniacal rigidity; on the other, the claim that the only way to write well is to throw all rules into the trash can and let your inner genius, should you happen to have one of those on hand, spew forth the contents of your consciousness all anyhow onto the page. Partisans of those two viewpoints snipe at one another from behind rhetorical sandbags, and neither one of them ever manages more than a partial victory, because neither approach is particularly useful when it comes to the actual practice of writing.
By and large, when people write according to a rigidly applied set of rules—any rigidly applied set of rules—the result is predictable, formulaic, and trite, and therefore boring. By and large, when people write without paying any attention to rules at all, the result is vague, shapeless, and maundering, and therefore boring. Is there a third option? You bet, and it starts by taking the abandoned middle ground: in this case, learning an appropriate set of rules, and using them as a starting point, but departing from them wherever doing so will improve the piece you’re writing.
The set of rules I recommend, by the way, isn’t meant to produce the sort of flat PowerPoint verbiage my commenter insists on. It’s meant to produce good readable English prose, and the source of guidance I recommend to those who are interested in such things is Strunk and White’s deservedly famous The Elements of Style. Those of my readers who haven’t worked with it, who want to improve their writing, and who’ve glanced over what I’ve published and decided that they might be able to learn something useful from me, could do worse than to read it and apply it to their prose.
A note of some importance belongs here, though. There’s a thing called writer’s block, and it happens when you try to edit while you’re writing. I’ve read, though I’ve misplaced the reference, that neurologists have found that the part of the brain that edits and the part of the brain that creates are not only different, they conflict with one another. If you try to use both of them at once, your brain freezes up in a fairly close neurological equivalent of the Blue Screen of Death, and you stop being able to write at all. That’s writer’s block. To avoid it, NEVER EDIT WHILE YOU’RE WRITING.
I mean that quite literally. Don’t even look at the screen if you can’t resist the temptation to second-guess the writing process. If you have to, turn the screen off, so you can’t even see what you’re writing. Eventually, with practice, you’ll learn to move smoothly back and forth between creative mode and editing mode, but if you don’t have a lot of experience writing, leave that for later. For now, just blurt it all out without a second thought, with all its misspellings and garbled grammar intact.
Then, after at least a few hours—or better yet, after a day or so—go back over the mess, cutting, pasting, adding, and deleting as needed, until you’ve turned it into nice clean text that says what you want it to say. Yes, we used to do that back before computers; the process is called “cut and paste” because it was done back then with a pair of scissors and a pot of paste, the kind with a little spatula mounted on the inside of the lid to help you spread the stuff; you’d cut out the good slices of raw prose and stick them onto a convenient sheet of paper, interspersed with handwritten or freshly typed additions. Then you sat down and typed your clean copy from the pasted-up mess thus produced. Now you know how to do it when the internet finally dries up and blows away. (You’re welcome.)
In the same way, you don’t try to write while looking up rules in Strunk & White. Write your piece, set it aside for a while, and then go over it with your well-worn copy of Strunk & White in hand, noting every place you broke one of the rules of style the book suggests you should follow. The first few times, as a learning exercise, you might consider rewriting the whole thing in accordance with those rules—but only the first few times. After that, make your own judgment call: is this a place where you should follow the rules, or is this a place where they need to be bent, broken, or trampled into the dust? Only you, dear reader-turned-writer, can decide.
A second important note deserves to be inserted at this point, though. The contemporary US public school system can be described without too much inaccuracy as a vast mechanism for convincing children that they can’t write. Rigid rules imposed for the convenience of educators rather than the good of the students, part of the industrial mass-production ethos that pervades public schools in this country, leave a great many graduates so bullied, beaten, and bewildered by bad pedagogy that the thought of writing something for anybody else to read makes them turn gray with fear. It’s almost as bad as the terror of public speaking the public schools also go out of their way to inflict, and it plays a comparable role in crippling people’s capacity to communicate outside their narrow circles of friends.
If you suffer from that sort of educational hangover, dear reader, draw a deep breath and relax. The bad grades and nasty little comments in red ink you got from Mrs. Melba McNitpick, your high school English teacher, are no reflection of your actual capacities as a writer. If you can talk, you can write—it’s the same language, after all. For that matter, even if you can’t talk, you may be able to write—there’s a fair number of people out there who are nonverbal for one reason or another, and can still make a keyboard dance.
The reason I mention this here is that the thought of making an independent judgment about when to follow the rules and when to break them fills a great many survivors of American public schools with dread. In far too many cases, students are either expected to follow the rules with mindless obedience and given bad grades if they fail to do so, or given no rules at all and then expected to conform to unstated expectations they have no way to figure out, and either of these forms of bad pedagogy leaves scars. Again, readers who are in this situation should draw a deep breath and relax; having left Mrs. McNitpick’s class, you’re not subject to her opinions any longer, and should ignore them utterly.
So how do you decide where to follow the rules and where to fold, spindle, and mutilate them? That’s where we walk through the walls and into the fire, because what guides you in your decisions regarding the rules of English prose is the factor of literary taste.
Rules can be taught, but taste can only be learned. Does that sound like a paradox? Au contraire, it simply makes the point that only you can learn, refine, and ripen your literary taste—nobody else can do it for you, or even help you to any significant extent—and your sense of taste is therefore going to be irreducibly personal. When it comes to taste, you aren’t answerable to Mrs. McNitpick, to me, to random prose trolls on the internet, or to anyone else. What’s more, you develop your taste for prose the same way you develop your taste for food: by trying lots of different things, figuring out what you like, and paying close attention to what you like, why you like it, and what differentiates it from the things you don’t like as much.
This is applicable, by the way, to every kind of writing, including those kinds at which the snobs among us turn up their well-sharpened noses. I don’t happen to be a fan of the kind of satirical gay pornography that Chuck Tingle has made famous, for example, but friends of mine who are tell me that in that genre, as in all others, there are books that are well written, books that are tolerable, and books that trip over certain overelongated portions of their anatomy and land face first in—well, let’s not go there, shall we? In the same way, if your idea of a good read is nineteenth-century French comedies of manners, you can find a similar spectrum extending from brilliance to bathos.
Every inveterate reader takes in a certain amount of what I call popcorn reading—the sort of thing that’s read once, mildly enjoyed, and then returned to the library, the paperback exchange, or whatever electronic Elysium e-books enter when you hit the delete button. That’s as inevitable as it is harmless. The texts that matter in developing your personal taste, though, are the ones you read more than once, and especially the ones you read over and over again. As you read these for the third or the thirty-third time, step back now and then from the flow of the story or the development of the argument, and notice how the writer uses language. Learn to notice the really well-turned phrases, the figures of speech that are so apt and unexpected that they seize your attention, the moments of humor, the plays on words, the  passages that match tone and pacing to the subject perfectly.
If you’ve got a particular genre in mind—no, let’s stop for a moment and talk about genre, shall we? Those of my readers who endured a normal public school education here in the US probably don’t know that this is pronounced ZHON-ruh (it’s a French word) and it simply means a category of writing. Satirical gay pornography is a genre. The comedy of manners is a genre. The serious contemporary literary novel is a genre.  So are mysteries, romance, science fiction, fantasy, and the list goes on. There are also nonfiction genres—for example, future-oriented social criticism, the genre in which nine of my books from The Long Descent to Dark Age America have their place. Each genre is an answer to the question, “I just read this and I liked it—where can I find something else more or less like it?”
Every genre has its own habits and taboos, and if you want to write for publication, you need to know what those are. That doesn’t mean you have to follow those habits and taboos with the kind of rigid obedience critiqued above—quite the contrary—but you need to know about them, so that when you break the rules you do it deliberately and skillfully, to get the results you want, rather than clumsily, because you didn’t know any better. It also helps to read the classics of the genre—the books that established those habits and taboos—and then go back and read books in the genre written before the classics, to get a sense of what possibilities got misplaced when the classics established the frame through which all later works in that genre would be read.
If you want to write epic fantasy, for example, don’t you dare stop with Tolkien—it’s because so many people stopped with Tolkien that we’ve got so many dreary rehashes of something that was brilliantly innovative in 1949, complete with carbon-copy Dark Lords cackling in chorus and the inevitable and unendearing quest to do something with the Magic McGuffin that alone can save blah blah blah. Read the stuff that influenced Tolkien—William Morris, E.R. Eddison, the Norse sagas, the Kalevala, Beowulf. Then read something in the way of heroic epic that he probably didn’t get around to reading—the Ramayana, the Heike Monogatari, the Popol Vuh, or what have you, and think through what those have to say about the broader genre of heroic wonder tale in which epic fantasy has its place.
The point of this, by the way, isn’t to copy any of these things. It’s to develop your own sense of taste so that you can shape your own prose accordingly. Your goal, if you’re at all serious about writing, isn’t to write like Mrs. McNitpick, like your favorite author of satirical gay pornography or nineteenth-century French comedies of manners, or like me, but to write like yourself.
And that, to extend the same point more broadly, is the goal of any education worth the name. The word “education” itself comes from the Latin word educatio, from ex-ducere, “to lead out or bring out;” it’s about leading or bringing out the undeveloped potentials that exist inside the student, not shoving some indigestible bolus of canned information or technique down the student’s throat. In writing as in all other things that can be learned, that process of bringing out those undeveloped potentials requires the support of rules and examples, but those are means to an end, not ends in themselves—and it’s in the space between the rules and their inevitable exceptions, between the extremes of rigid formalism and shapeless vagueness, that the work of creation takes place.
That’s also true of politics, by the way—and the conventional wisdom of our time fills the same role there that the rules for bad internet prose do for writing. Before we can explore that, though, it’s going to be necessary to take on one of the more pervasive bad habits of contemporary thinking about the relationship between the present and the past. We’ll tackle that next week.
********************
In not wholly unrelated news, I’m pleased to announce that Merigan Tales, the anthology of short stories written by Archdruid Report readers set in the world of my novel Star’s Reach, is now in print and available for purchase from Founders House. Those of my readers who enjoyed Star’s Reach and the After Oil anthologies won’t want to miss it.

A Leap in the Dark

Wed, 2016-12-28 17:13
A few days from now, 2016 will have passed into the history books. I know a fair number of people who won’t mourn its departure, but it’s pretty much a given that the New Year celebrations here in the United States, at least, will demonstrate a marked shortage of enthusiasm for the arrival of 2017.
There’s good reason for that, and not just for the bedraggled supporters of Hillary Clinton’s failed and feckless presidential ambitions. None of the pressures that made 2016 a cratered landscape of failed hopes and realized nightmares have gone away. Indeed, many of them are accelerating, as the attempt to maintain a failed model of business as usual in the teeth of political, economic, and environmental realities piles blowback upon blowback onto the loading dock of the new year.
Before we get into that, though, I want to continue the annual Archdruid Report tradition and review the New Year’s predictions that I made at the beginning of 2016. Those of my readers who want to review the original post will find it here. Here’s the gist.
“Thus my core prediction for 2016 is that all the things that got worse in 2015 will keep on getting worse over the year to come. The ongoing depletion of fossil fuels and other nonrenewable resources will keep squeezing the global economy, as the real (i.e., nonfinancial) costs of resource extraction eat up more and more of the world’s total economic output, and this will drive drastic swings in the price of energy and commodities—currently those are still headed down, but they’ll soar again in a few years as demand destruction completes its work. The empty words in Paris a few weeks ago will do nothing to slow the rate at which greenhouse gases are dumped into the atmosphere, raising the economic and human cost of climate-related disasters above 2015’s ghastly totals—and once again, the hard fact that leaving carbon in the ground means giving up the lifestyles that depend on digging it up and burning it is not something that more than a few people will be willing to face.
“Meanwhile, the US economy will continue to sputter and stumble as politicians and financiers try to make up for ongoing declines in real (i.e., nonfinancial) wealth by manufacturing paper wealth at an even more preposterous pace than before, and frantic jerryrigging will keep the stock market from reflecting the actual, increasingly dismal state of the economy.  We’re already in a steep economic downturn, and it’s going to get worse over the year to come, but you won’t find out about that from the mainstream media, which will be full of the usual fact-free cheerleading; you’ll have to watch the rates at which the people you know are being laid off and businesses are shutting their doors instead.” 
It’s almost superfluous to point out that I called it. It’s been noted with much irritation by other bloggers in what’s left of the peak oil blogosphere that it takes no great talent to notice what’s going wrong, and point out that it’s just going to keep on heading the same direction. This I cheerfully admit—but it’s also relevant to note that this method produces accurate predictions. Meanwhile, the world-saving energy breakthroughs, global changes in consciousness, sudden total economic collapses, and other events that get predicted elsewhere year after weary year have been notable by their absence.
I quite understand why it’s still popular to predict these things: after all, they allow people to pretend that they can expect some future other than the one they’re making day after day by their own actions. Nonetheless, the old saying remains true—“if you always do what you’ve always done, you’ll always get what you’ve always gotten”—and I wonder how many of the people who spend each year daydreaming about the energy breakthroughs, changes in consciousness, economic collapses, et al, rather than coming to grips with the rising spiral of crises facing industrial civilization, really want to deal with the future that they’re storing up for themselves by indulging in this habit.
Let’s go on, though.  At the beginning of 2016, I also made four specific predictions, which I admitted at the time were long shots. One of those, specific prediction #3, was that the most likely outcome of the 2016 presidential election would be the inauguration of Donald Trump as President in January 2017. I don’t think I need to say much about that, as it’s already been discussed here at length.  The only thing I’d like to point out here is that much of the Democratic party seems to be fixated on finding someone or something to blame for the debacle, other than the stark incompetence of the Clinton campaign and the failure of Democrats generally to pay attention to anything outside the self-referential echo chambers of affluent liberal opinion. If they keep it up, it’s pretty much a given that Trump will win reelection in 2020.
The other three specific long-shot predictions didn’t pan out, at least not in the way that I anticipated, and it’s only fair—and may be helpful, as we head further into the unknown territory we call 2017—to talk about what didn’t happen, and why.
Specific prediction #1 was that the next tech bust would be under way by the end of 2016. That’s happening, but not in the way I expected. Back in January I was looking at the maniacally overinflated stock prices of tech companies that have never made a cent in profit and have no meaningful plans to do so, and I expected a repeat of the “tech wreck” of 2000. The difficulty was simply that I didn’t take into account the most important economic shift between 2000 and 2016—the de facto policy of negative interest rates being pursued by the Federal Reserve and certain other central banks.
That policy’s going to get a post of its own one of these days, because it marks the arrival of a basic transformation in economic realities that’s as incomprehensible to neoliberal economists as it will be challenging to most of the rest of us. The point I want to discuss here, though, is a much simpler one. Whenever real interest rates are below zero, those elite borrowers who can get access to money on those terms are being paid to borrow. Among many other things, this makes it a lot easier to stretch out the downward arc of a failing industry. Cheaper-than-free money is one of the main things that kept the fracking industry from crashing and burning from its own unprofitability once the price of oil plunged in 2014; there’s been a steady string of bankruptcies in the fracking industry and the production of oil from fracked wells has dropped steadily, but it wasn’t the crash many of us expected.
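For readers who like the arithmetic spelled out, here’s a minimal sketch of why sub-zero real rates amount to being paid to borrow. The numbers are invented purely for illustration—a 1% nominal borrowing cost against 2% inflation, not actual Federal Reserve figures—and the calculation is the back-of-the-envelope version, nothing more.

```python
# Back-of-the-envelope look at negative real interest rates.
# The real rate is roughly the nominal rate minus inflation; when it goes
# negative, the loan gets repaid in dollars worth less than the dollars borrowed.

def real_repayment(principal, nominal_rate, inflation_rate, years=1):
    """Repayment measured in today's purchasing power."""
    nominal_owed = principal * (1 + nominal_rate) ** years
    return nominal_owed / (1 + inflation_rate) ** years

# Hypothetical well-connected borrower: 1% nominal cost of money, 2% inflation.
print(round(real_repayment(100.0, 0.01, 0.02), 2))  # 99.02 -- less than the 100 borrowed
```

Roll that kind of debt over year after year and the point about stretching out the downward arc of a failing industry follows directly.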
The same thing is happening, in equally slow motion, with the current tech bubble. Real estate prices in San Francisco and other tech hotspots are sliding, overpaid tech employees are being systematically replaced by underpaid foreign workers, the numbers are looking uglier by the week, but the sudden flight of investment money that made the “tech wreck” so colorful sixteen years ago isn’t happening, because tech firms can draw on oceans of relatively cheap funding to turn the sudden popping of the tech bubble into the slow hiss of escaping air. That doesn’t mean that the boom-and-bust cycle has been cancelled—far from it—but it does mean that shoveling bad money after good has just become a lot easier. Exactly how that will impact the economy is a very interesting question that nobody just now knows how to answer.
Let’s move on. Specific prediction #2 was that the marketing of what would inevitably be called “the PV revolution” would get going in a big way in 2016. Those of my readers who’ve been watching the peak oil scene for more than a few years know that ever since the concept of peak oil clawed its way back out of its long exile in the wilderness of the modern imagination, one energy source after another has been trotted out as the reason du jour why the absurdly extravagant lifestyles of today’s privileged classes can roll unhindered into the future. I figured, based on the way that people in the mainstream environmentalist movement were closing ranks around renewables, that photovoltaic solar energy would be the next beneficiary of that process, and would take off in a big way as the year proceeded.
That this didn’t happen is not the fault of the solar PV industry or its cheerleaders in the green media. Naomi Oreskes’ strident insistence a while back that raising questions about the economic viability of renewable energy is just another form of climate denialism seems to have become the party line throughout the privileged end of the green left, and the industrialists are following suit. Elon Musk, whose entire industrial empire has been built on lavish federal subsidies, is back at the feed trough again, announcing a grandiose new plan to manufacture photovoltaic roof shingles; he’s far and away the most colorful of the would-be renewable-energy magnates, but others are elbowing their way toward the trough as well, seeking their own share of the spoils.
The difficulty here is twofold. First, the self-referential cluelessness of the Democratic party since the 2008 election has had the inevitable blowback—something like 1000 state and federal elective offices held by Democrats after that election are held by Republicans today—and the GOP’s traditional hostility toward renewable energy has put a lid on the increased subsidies that would have been needed to kick a solar PV feeding frenzy into the same kind of overdrive we’ve already seen with ethanol and wind. Solar photovoltaic power, like ethanol from corn, has a disastrously low energy return on energy invested—as Pedro Prieto and Charles Hall showed in their 2015 study of real-world data from Spain’s solar PV program, the EROEI on large-scale grid photovoltaic power works out in practice to less than 2.5—and so, like nuclear power, it’s only economically viable if it’s propped up by massive and continuing subsidies. Lacking those, the “PV revolution” is dead in the water.
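Since EROEI figures get thrown around loosely, a quick worked example may help show what a number like 2.5 actually implies. This is illustrative arithmetic of my own, not something taken from Prieto and Hall’s study; the comparison figure of 30 for a high-EROEI conventional source is a round number chosen purely for contrast.

```python
# EROEI = energy returned / energy invested.
# The share of gross output left over for society is 1 - 1/EROEI.

def net_energy_fraction(eroei):
    """Fraction of gross energy output not consumed in getting the energy."""
    return 1.0 - 1.0 / eroei

print(net_energy_fraction(2.5))   # 0.6   -> only 60% of gross output is net energy
print(net_energy_fraction(30.0))  # ~0.97 -> roughly 97% net for a high-EROEI source
```

The lower the EROEI, the bigger the slice of every unit produced that has to be plowed straight back into producing the next one, which is why an energy source down in that range needs outside subsidies to look viable at all.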
The second point, though, is the more damaging. The “recovery” after the 2008-2009 real estate crash was little more than an artifact of statistical manipulation, and even negative interest rates haven’t been able to get a heartbeat going in the economy’s prostrate body. As most economic measurements not subject to fiddling by the enthusiastic accountants of the federal government slide steadily downhill, the economic surplus needed to support any kind of renewables buildout at all is rapidly trickling away. Demand destruction is in the driver’s seat, and the one way of decreasing fossil fuel consumption that affluent environmentalists don’t want to talk about—conservation—is the only viable option just now.
Specific prediction #4 was that the Saudi regime in Arabia would collapse by the end of 2016. As I noted at the time, the replacement of the Saudi monarchy with some other form of government is for all practical purposes a done deal. Of the factors I cited then—the impending bankruptcy of a regime that survives only by buying off dissent with oil money, the military quagmires in Yemen, Syria, and Iraq that have the Saudi military and its foreign mercenaries bogged down inextricably, and the rest of it—none have gone away. Nor has the underlying cause, the ongoing depletion of the once-immense oil reserves that have propped up the Saudi state so far.
That said, as I noted back in January, it’s anyone’s guess what cascade of events will send the Saudi royal family fleeing to refuges overseas while mobs rampage through their abandoned palaces in Riyadh, and some combination of mid-level military officers and Muslim clerics piece together a provisional government in their absence. I thought that it was entirely possible that this would happen in 2016, and of course it didn’t. It’s possible at this point that the price of oil could rise fast enough to give the Saudi regime another lease on life, however brief. That said, the winds are changing across the Middle East; the Russian-Iranian alliance is in the ascendant, and the Saudis have very few options left. It will be interesting, in the sense of the apocryphal Chinese curse, to see how long they survive.
So that’s where we stand, as 2016 stumbles down the ramp into time’s slaughterhouse and 2017 prepares to take its place in the ragged pastures of history. What can we expect in the year ahead?
To some extent, I’ve already answered that question—but only to some extent. Most of the factors that drove events in 2016 are still in place, still pressing in the same direction, and “more of the same” is a fair description of the consequences. Day after day, the remaining fossil fuel reserves of a finite planet are being drawn down to maintain the extravagant and unsustainable lifestyles of the industrial world’s more privileged inmates. Those remaining reserves are increasingly dirty, increasingly costly to extract and process, increasingly laden with a witch’s brew of social, economic, and environmental costs that nobody anywhere is willing to make the fossil fuel industry cover, and those costs don’t go away just because they’re being ignored—they pile up in society, the economy, and the biosphere, producing the rising tide of systemic dysfunction that plays so large and unmentioned a role in daily life today.
Thus we can expect still more social turmoil, more economic instability, and more environmental blowback in 2017. The ferocious populist backlash against the economic status quo that stunned the affluent in Britain and America with the Brexit vote and Trump’s presidential victory, respectively, isn’t going away until and unless the valid grievances of the working classes get heard and addressed by political establishments around the industrial world; to judge by examples so far, that’s unlikely to happen any time soon. At the same time, the mismatch between the lifestyles we can afford and the lifestyles that too many of us want to preserve remains immense, and until that changes, the global economy is going to keep on lurching from one crisis to another. Meanwhile the biosphere is responding to the many perturbations imposed on it by human stupidity in the way that systems theory predicts—with ponderous but implacable shifts toward new conditions, many of which don’t augur well for the survival of industrial society.
There are wild cards in the deck, though, and one of them is being played right now over the North Pole. As I write this, air temperatures over the Arctic ice cap are 50°F warmer than usual for this time of year. A destabilized jet stream is sucking masses of warm air north into the Arctic skies, while pushing masses of Arctic air down into the temperate zone. As a result, winter ice formation on the surface of the Arctic Ocean has dropped to levels that were apparently last seen before our species got around to evolving—and a real possibility exists, though it’s by no means a certainty yet, that next summer could see most of the Arctic Ocean free of ice.
Nobody knows what that will do to the global climate. The climatologists who’ve been trying to model the diabolically complex series of cascading feedback loops we call “global climate” have no clue—they have theories and computer models, but so far their ability to predict the rate and consequences of anthropogenic climate change has not exactly been impressive. (For what it’s worth, by the way, most of their computer models have turned out to be far too conservative in their predictions.) Nobody knows yet whether the soaring temperatures over the North Pole this winter are a fluke, a transitory phenomenon driven by the unruly transition between one climate regime and another, or the beginning of a recurring pattern that will restore the north coast of Canada to the conditions it had during the Eocene, when crocodiles sunned themselves on the warm beaches of northern Greenland. We simply don’t know.
In the same way, the populist backlash mentioned above is a wild card whose effects nobody can predict just now. The neoliberal economics that have been welded into place in the industrial world for the last thirty years have failed comprehensively, that’s clear enough.  The abolition of barriers to the flow of goods, capital, and population did not bring the global prosperity that neoliberal economists promised, and now the bill is coming due. The question is what the unraveling of the neoliberal system means for national economies in the years ahead.
There are people—granted, these are mostly neoliberal economists and those who’ve drunk rather too freely of the neoliberal koolaid—who insist that the abandonment of the neoliberal project will inevitably mean economic stagnation and contraction. There are those who insist that the abandonment of the neoliberal project will inevitably mean a return to relative prosperity here in the US, as offshored jobs are forced back stateside by tax policies that penalize imports, and the US balance of trade reverts to something a little closer to parity. The fact of the matter is that nobody knows what the results will be. Here as in Britain, voters faced with a choice between the perpetuation of an intolerable status quo and a leap in the dark chose the latter, and the consequences of that leap can’t be known in advance.
Other examples abound. The US president-elect has claimed repeatedly that the US under his lead will get out of the regime-change business and pursue a less monomaniacally militaristic foreign policy than the one it’s pursued under Bush and Obama, and would have pursued under Clinton. The end of the US neoconservative consensus is a huge change that will send shockwaves through the global political system. Another change, at least as huge, is the rise of Russia as a major player in the Middle East. Another? The remilitarization of Japan and its increasingly forceful pursuit of political and military alliances in East and South Asia. There are others. The familiar order of global politics is changing fast. What will the outcome be? Nobody knows.
As 2017 dawns, in a great many ways, modern industrial civilization has flung itself forward into a darkness where no stars offer guidance and no echoes tell what lies ahead. I suspect that when we look back at the end of this year, the predictable unfolding of ongoing trends will have to be weighed against sudden discontinuities that nobody anywhere saw coming.  We’re not discussing the end of the world, of course; we’re talking events like those that can be found repeated many times in the histories of other failing civilizations.  That said, my guess is that some of those discontinuities are going to be harsh ones.  Those who brace themselves for serious trouble and reduce their vulnerabilities to a brittle and dysfunctional system will be more likely to come through in one piece.
Those who are about to celebrate the end of 2016, in other words, might want to moderate their cheering when it’s over. It’s entirely possible that 2017 will turn out to be rather worse—despite which I hope that the readers of this blog, and the people they care about, will manage to have a happy New Year anyway.

A Season of Consequences

Wed, 2016-12-21 11:33
One of the many advantages of being a Druid is that you get to open your holiday presents four days early. The winter solstice—Alban Arthuan, to use one term for it in the old-fashioned Druid Revival traditions I practice—is one of the four main holy days of the Druid year. Though the actual moment of solstice wobbles across a narrow wedge of the calendar, the celebration traditionally takes place on December 21.  Yes, Druids give each other presents, hang up decorations, and enjoy as sumptuous a meal as resources permit, to celebrate the rekindling of light and hope in the season of darkness.
Come to think of it, I’m far from sure why more people who don’t practice the Christian faith still celebrate Christmas, rather than the solstice. It’s by no means necessary to believe in the Druid gods and goddesses to find the solstice relevant; a simple faith in orbital inclination is sufficient reason for the season, after all—and since a good many Christians in America these days are less than happy about what’s been done to their holy day, it seems to me that it would be polite to leave Christmas to them, have our celebrations four days earlier, and cover their shifts at work on December 25th in exchange for their covering ours on the 21st. (Back before my writing career got going, when I worked in nursing homes to pay the bills, my Christian coworkers and I did this as a matter of course; we also swapped shifts around Easter and the spring equinox. Religious pluralism has its benefits.)
Those of my readers who don’t happen to be Druids, but who are tempted by the prospect just sketched out, will want to be aware of a couple of details. For one thing, you won’t catch Druids killing a tree in order to stick it in their living room for a few weeks as a portable ornament stand and fire hazard. Druids think there should be more trees in the world, not fewer! A live tree or, if you must, an artificial one, would be a workable option, but a lot of Druids simply skip the tree altogether and hang ornaments on the mantel, or what have you.
Oh, and most of us don’t do Santa Claus. I’m not sure why Santa Claus is popular among Christians, for that matter, or among anyone else who isn’t a devout believer in the ersatz religion of Consumerism—which admittedly has no shortage of devotees just now. There was a time when Santa hadn’t yet been turned into a poorly paid marketing consultant to the toy industry; go back several centuries, and he was the Christian figure of St. Nicholas; and before then he may have been something considerably stranger. To those who know their way around the traditions of Siberian shamanism, certainly, the conjunction of flying reindeer and an outfit colored like the famous and perilous hallucinogenic mushroom Amanita muscaria is at least suggestive.
Still, whether he takes the form of salesman, saint, or magic mushroom, Druids tend to give the guy in the red outfit a pass. Solstice symbolism varies from one tradition of Druidry to another—like almost everything else among Druids—but in the tradition I practice, each of the Alban Gates (the solstices and equinoxes) has its own sacred animal, and the animal that corresponds to Alban Arthuan is the bear. If by some bizarre concatenation of circumstances Druidry ever became a large enough faith in America to attract the attention of the crazed marketing minions of consumerdom, you’d doubtless see Hallmark solstice cards for sale with sappy looking cartoon bears on them, bear-themed decorations in windows, bear ornaments to hang from the mantel, and the like.
While I could do without the sappy looking cartoons, I definitely see the point of bears as an emblem of the winter solstice, because there’s something about them that too often gets left out of the symbolism of Christmas and the like—though it used to be there, and relatively important, too. Bears are cute, no question; they’re warm and furry and cuddlesome, too; but they’re also, ahem, carnivores, and every so often, when people get sufficiently stupid in the vicinity of bears, the bears kill and eat them.
That is to say, bears remind us that actions have consequences.
I’m old enough that I still remember the days when the folk mythology surrounding Santa Claus had not quite shed the last traces of a similar reminder. According to the accounts of Santa I learned as a child, naughty little children ran a serious risk of waking up Christmas morning to find no presents at all, and a sorry little lump of coal in their stockings in place of the goodies they expected. I don’t recall any of my playmates having that happen to them, and it never happened to me—though I arguably deserved it rather more than once—but every child I knew took it seriously, and tried to moderate their misbehavior at least a little during the period after Thanksgiving. That detail of the legend may still survive here and there, for all I know, but you wouldn’t know it from the way the big guy in red is retailed by the media these days.
For that matter, the version I learned was a pale shadow of a far more unnerving original. In many parts of Europe, when St. Nicholas does the rounds, he’s accompanied by a frightening figure with various names and forms. In parts of Germany, Switzerland, and Austria, it’s Krampus—a hairy devil with goat’s horns and a long lolling tongue, who prances around with a birch switch in his hand and a wicker basket on his back. While the saint hands out presents to good children, Krampus is there for the benefit of the others; small-time junior malefactors can expect a thrashing with the birch switch, while the legend has it that the shrieking, spoiled little horrors at the far end of the naughty-child spectrum get popped into the wicker basket and taken away, and nobody ever hears from them again.
Yes, I know, that sort of thing’s unthinkable in today’s America, and I have no idea whether anyone still takes it with any degree of seriousness over in Europe. Those of my readers who find the entire concept intolerable, though, may want to stop for a moment and think about the context in which that bit of folk tradition emerged. Before fossil fuels gave the world’s industrial nations the temporary spate of abundance that they now enjoy, the coming of winter in the northern temperate zone was a serious matter. The other three seasons had to be full of hard work and careful husbandry, if you were going to have any particular likelihood of seeing spring before you starved or froze to death.
By the time the solstice came around, you had a tolerably good idea just how tight things were going to be by the time spring arrived and the first wild edibles showed up to pad out the larder a bit. The first pale gleam of dawn after the long solstice night was a welcome reminder that spring was indeed on its way, and so you took whatever stored food you could spare, if you could spare any at all, and turned it into a high-calorie, high-nutrient feast, to provide warm memories and a little additional nourishment for the bleak months immediately ahead.
In those days, remember, children who refused to carry their share of the household economy might indeed expect to be taken away and never be heard from again, though the taking away would normally be done by some combination of hunger, cold, and sickness, rather than a horned and hairy devil with a lolling tongue. Of course a great many children died anyway.  A failed harvest, a longer than usual winter, an epidemic, or the ordinary hazards of life in a nonindustrial society quite regularly put a burst of small graves in the nearest churchyard. It was nonetheless true that good children, meaning here those who paid attention, learned fast, worked hard, and did their best to help keep the household running smoothly, really did have a better shot at survival.
One of the most destructive consequences of the age of temporary abundance that fossil fuels gave to the world’s industrial nations, in turn, is the widespread conviction that consequences don’t matter—that it’s unreasonable, even unfair, to expect anyone to have to deal with the blowback from their own choices. That’s a pervasive notion these days, and its effects show up in an astonishing array of contexts throughout contemporary culture, but yes, it’s particularly apparent when it comes to the way children get raised in the United States these days.
The interesting thing here is that the children aren’t necessarily happy about that. If you’ve ever watched a child systematically misbehave in an attempt to get a parent to react, you already know that kids by and large want to know where the limits are. It’s the adults who want to give tests and then demand that nobody be allowed to fail them, who insist that everybody has to get an equal share of the goodies no matter how much or little they’ve done to earn them, and so on through the whole litany of attempts to erase the reality that actions have consequences.
That erasure goes very deep. Have you noticed, for example, that year after year, at least here in the United States, the Halloween monsters on public display get less and less frightening? These days, far more often than not, the ghosts and witches, vampires and Frankenstein’s monsters splashed over Hallmark cards and window displays in the late October monster ghetto have big goofy grins and big soft eyes. The wholesome primal terrors that made each of these things iconic in the first place—the presence of the unquiet dead, the threat of wicked magic, the ghastly vision of walking corpses, whether risen from the grave to drink your blood or reassembled and reanimated by science run amok—are denied to children, and saccharine simulacra are propped up in their places.
Here again, children aren’t necessarily happy about that. The bizarre modern recrudescence of the Victorian notion that children are innocent little angels tells me, if nothing else, that most adults must go very far out of their way to forget their own childhoods. Children aren’t innocent little angels; they’re fierce little animals, which is of course exactly what they should be, and they need roughly the same blend of gentleness and discipline that wolves use on their pups to teach them to moderate their fierceness and live in relative amity with the other members of the pack.  Being fierce, they like to be scared a little from time to time; that’s why they like to tell each other ghost stories, the more ghoulish the better, and why they run with lolling tongues toward anything that promises them a little vicarious blood and gore. The early twentieth century humorist Ogden Nash nailed it when he titled one of his poems “Don’t Cry, Darling, It’s Blood All Right.”
Traditional fairy tales delighted countless generations of children for three good and sufficient reasons. First of all, they’re packed full of wonderful events. Second, they’re positively dripping with gore, which as already noted is an instant attraction to any self-respecting child. Third, they’ve got a moral—which means, again, that they are about consequences. The selfish, cruel, and stupid characters don’t get patted on the head, given the same prize as everyone else, and shielded from the results of their selfishness, cruelty, and stupidity; instead, they get gobbled up by monsters, turned to stone by witches’ curses, or subjected to some other suitably grisly doom. It’s the characters who are honest, brave, and kind who go on to become King or Queen of Everywhere.
Such things are utterly unacceptable, according to the approved child-rearing notions of our day. Ask why this should be the case and you can count on being told that expecting a child to have to deal with the consequences of its actions decreases its self-esteem. No doubt that’s true, but this is another of those many cases where people in our society manage not to notice that the opposite of one bad thing is usually another bad thing. Is there such a thing as too little self-esteem? Of course—but there is also such a thing as too much self-esteem. In fact, we have a common and convenient English word for somebody who has too much self-esteem. That word is “jerk.”
The cult of self-esteem in contemporary pop psychology has thus produced a bumper crop of jerks in today’s America. I’m thinking here, among many other examples, of the woman who made the news a little while back by strolling right past the boarding desk at an airport, going down the ramp, and taking her seat on the airplane ahead of all the other passengers, just because she felt she was entitled to do so. When the cabin crew asked her to leave and wait her turn like everyone else, she ignored them; security was called, and she ignored them, too. They finally had to drag her down the aisle and up the ramp like a sack of potatoes, and hand her over to the police. I’m pleased to say she’s up on charges now.
That woman had tremendous self-esteem. She esteemed herself so highly that she was convinced that the rules that applied to everyone else surely couldn’t apply to her—and that’s normally the kind of attitude you can count on from someone whose self-esteem has gone up into the toxic-overdose range. Yet the touchstone of excessive self-esteem, the gold standard of jerkdom, is the complete unwillingness to acknowledge the possibility that actions have consequences and you might have to deal with those, whether you want to or not.
That sort of thing is stunningly common in today’s society. It was that kind of overinflated self-esteem that convinced affluent liberals in the United States and Europe that they could spend thirty years backing policies that pandered to their interests while slamming working people face first into the gravel, without ever having to deal with the kind of blowback that arrived so dramatically in the year just past. Now Britain is on its way out of the European Union, Donald Trump is mailing invitations to his inaugural ball, and the blowback’s not finished yet. Try to point this out to the people whose choices made that blowback inevitable, though, and if my experience is anything to go by, you’ll be ignored if you’re not shouted down.
On an even greater scale, of course, there’s the conviction on the part of an astonishing number of people that we can keep on treating this planet as a combination cookie jar to raid and garbage bin to dump wastes in, and never have to deal with the consequences of that appallingly shortsighted set of policies. That’s as true in large swathes of the allegedly green end of things, by the way, as it is among the loudest proponents of smokestacks and strip mines. I’ve long since lost track of the number of people I’ve met who insist loudly on how much they love the Earth and how urgent it is that “we” protect the environment, but who aren’t willing to make a single meaningful change in their own personal consumption of resources and production of pollutants to help that happen.
Consequences don’t go away just because we don’t want to deal with them. That lesson is being taught right now on low-lying seacoasts around the world, where streets that used to be well above the high tide line reliably flood with seawater when a high tide meets an onshore wind; it’s being taught on the ice sheets of Greenland and West Antarctica, which are moving with a decidedly un-glacial rapidity through a trajectory of collapse that hasn’t been seen since the end of the last ice age; it’s being taught in a hundred half-noticed corners of an increasingly dysfunctional global economy, as the externalized costs of technological progress pile up unnoticed and drag economic activity to a halt; and of course it’s being taught, as already noted, in the capitals of the industrial world, where the neoliberal orthodoxy of the last thirty years is reeling under the blows of a furious populist backlash.
It didn’t have to be learned that way. We could have learned it from Krampus or the old Santa Claus, the one who was entirely willing to leave a badly behaved child’s stocking empty on Christmas morning except for that single eloquent lump of coal; we could have learned it from the fairy tales that taught generations of children that consequences matter; we could have learned it from any number of other sources, given a little less single-minded a fixation on maximizing self-esteem right past the red line on the meter—but enough of us didn’t learn it that way, and so here we are.
I’d therefore like to encourage those of my readers who have young children in their lives to consider going out and picking up a good old-fashioned collection of fairy tales, by Charles Perrault or the Brothers Grimm, and use those in place of the latest mass-marketed consequence-free pap when it comes to storytelling time. The children will thank you for it, and so will everyone who has to deal with them in their adult lives. Come to think of it, those of my readers who don’t happen to have young children in their lives might consider doing the same thing for their own benefit, restocking their imaginations with cannibal giants and the other distinctly unmodern conveniences thereof, and benefiting accordingly.
And if, dear reader, you are ever tempted to climb into the lap of the universe and demand that it fork over a long list of goodies, and you glance up expecting to see the jolly and long-suffering face of Santa Claus beaming down at you, don’t be too surprised if you end up staring in horror at the leering yellow eyes and lolling tongue of Krampus instead, as he ponders whether you’ve earned a thrashing with the birch switch or a ride in the wicker basket—or perhaps the great furry face of the Solstice bear, the beast of Alban Arthuan, as she blinks myopically at you for a moment before she either shoves you from her lap with one powerful paw, or tears your arm off and gnaws on it meditatively while you bleed to death on the cold, cold ground.
Because the universe doesn’t care what you think you deserve. It really doesn’t—and, by the way, the willingness of your fellow human beings to take your wants and needs into account will by and large be precisely measured by your willingness to do the same for them.
And on that utterly seasonal note, I wish all my fellow Druids a wonderful solstice; all my Christian friends and readers, a very merry Christmas; and all my readers, whatever their faith or lack thereof, a rekindling of light, hope, and sanity in a dark and troubled time.

Why the Peak Oil Movement Failed

Wed, 2016-12-14 16:12
As I glance back across the trajectory of this blog over the last ten and a half years, one change stands out. When I began blogging in May of 2006, peak oil—the imminent peaking of global production of conventional petroleum, to unpack that gnomic phrase a little—was the central theme of a large, vocal, and tolerably well organized movement. It had its own visible advocacy organizations, it had national and international conferences, it had a small but noticeable presence in the political sphere, and it showed every sign of making its presence felt in the broader conversation of our time.
Today none of that is true. Of the three major peak oil organizations in the US, ASPO-USA—that’s the US branch of the Association for the Study of Peak Oil and Gas, for those who don’t happen to be fluent in acronym—is apparently moribund; Post Carbon Institute, while it still plays a helpful role from time to time as a platform for veteran peak oil researcher Richard Heinberg, has otherwise largely abandoned its former peak oil focus in favor of generic liberal environmentalism; and the US branch of the Transition organization, formerly the Transition Town movement, is spinning its wheels in a rut laid down years back. The conferences ASPO-USA once hosted in Washington DC, with congresscritters in attendance, stopped years ago, and an attempt to host a national conference in southern Pennsylvania fizzled after three years and will apparently not be restarted.
Ten years ago, for that matter, opinion blogs and news aggregators with a peak oil theme were all over the internet. Today that’s no longer the case, either. The fate of the two most influential peak oil sites, The Oil Drum and Energy Bulletin, is indicative. The Oil Drum simply folded, leaving its existing pages up as a legacy of a departed era. Energy Bulletin, for its part, was taken over by Post Carbon Institute and given a new name and theme as Resilience.org. It then followed PCI in its drift toward the already overcrowded environmental mainstream, replacing the detailed assessment of energy futures that was the staple fare of Energy Bulletin with the sort of uncritical enthusiasm for an assortment of vaguely green causes more typical of the pages of Yes! Magazine.
There are still some peak oil sites soldiering away—notably Peak Oil Barrel, under the direction of former Oil Drum regular Ron Patterson.  There are also a handful of public figures still trying to keep the concept in circulation, with the aforementioned Richard Heinberg arguably first among them. Aside from those few, though, what was once a significant movement is for all practical purposes dead. The question that deserves asking is simple enough: what happened?
One obvious answer is that the peak oil movement was the victim of its own failed predictions. It’s true, to be sure, that failed predictions were a commonplace of the peak oil scene. It wasn’t just the overenthusiastic promoters of alternative energy technologies, who year after year insisted that the next twelve months would see their pet technology leap out of its current obscurity to make petroleum a fading memory; it wasn’t just their exact equivalents, the overenthusiastic promoters of apocalyptic predictions, who year after year insisted that the next twelve months would see the collapse of the global economy, the outbreak of World War III, the imposition of a genocidal police state, or whatever other sudden cataclysm happened to have seized their fancy.
No, the problem with failed predictions ran straight through the movement, even—or especially—in its more serious manifestations. The standard model of the future accepted through most of the peak oil scene started from a set of inescapable facts and an unexamined assumption, and the combination of those things produced consistently false predictions. The inescapable facts were that the Earth is finite, that it contains a finite supply of petroleum, and that various lines of evidence showed conclusively that global production of conventional petroleum was approaching its peak for hard geological reasons, and could no longer keep increasing thereafter.
The unexamined assumption was that geological realities rather than economic forces would govern how fast the remaining reserves of conventional petroleum would be extracted. On that basis, most people in the peak oil movement assumed that as production peaked and began to decline, the price of petroleum would rise rapidly, placing an increasingly obvious burden on the global economy. The optimists in the movement argued that this, in turn, would force nations around the world to recognize what was going on and make the transition to other energy sources, and to the massive conservation programs that would be needed to deal with the gap between the cheap abundant energy that petroleum used to provide and the more expensive and less abundant energy available from other sources. The pessimists, for their part, argued that it was already too late for such a transition, and that industrial civilization would come apart at the seams.
As it turned out, though, the unexamined assumption was wrong. Geological realities imposed, and continue to impose, upper limits on global petroleum production, but economic forces have determined how much less than those upper limits would actually be produced. What happened, as a result, is that when oil prices spiked in 2007 and 2008, and then again in 2014 and 2015, consumers cut back on their use of petroleum products, while producers hurried to bring marginal petroleum sources such as tar sands and oil shales into production to take advantage of the high prices. Both those steps drove prices back down. Low prices, in turn, encouraged consumers to use more petroleum products, and forced producers to shut down marginal sources that couldn’t turn a profit when oil was less than $80 a barrel; both these steps, in turn, sent prices back up.
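For readers who like to see a mechanism laid out step by step, here is a deliberately crude sketch of that feedback loop in Python. It’s nothing more than a toy “cobweb” model of my own devising; the $80 threshold, the supply and demand figures, the function name, and the rest of the numbers are placeholders chosen to make the swing visible, not claims about actual oil markets.

    # A toy model of the boom-and-bust loop described above. Producers decide how
    # much marginal production (tar sands, oil shales) to bring online based on
    # last year's price; consumers respond to the price they actually face; the
    # price then swings to whatever level clears the market. All figures invented.

    def run_cycle(years: int = 12, price: float = 60.0) -> None:
        for year in range(years):
            # Marginal sources only look profitable when last year's price
            # cleared an assumed ~$80-a-barrel threshold.
            marginal = max(0.0, 0.3 * (price - 80.0))
            supply = 75.0 + marginal  # conventional output stays near its ceiling
            # Demand is modeled as 100 - 0.3 * price, so the market-clearing
            # price is whatever makes demand equal the supply on hand.
            price = (100.0 - supply) / 0.3
            print(f"year {year:2d}: supply {supply:5.1f}, price ${price:6.2f}")

    run_cycle()

Run it and you get the seesaw rather than the one-way climb: prices rise until the marginal sources come online, the resulting glut knocks prices back down, the marginal sources shut off again, and around it goes.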
That doesn’t mean that peak oil has gone away. As oilmen like to say, depletion never sleeps; each time the world passes through the cycle just described, the global economy takes another body blow, and the marginal petroleum sources cost much more to extract and process than the light sweet crude on which the oil industry used to rely. The result, though, is that instead of a sudden upward zoom in prices that couldn’t be ignored, we’ve gotten wild swings in commodity prices, political and social turmoil, and a global economy stuck in creeping dysfunction that stubbornly refuses to behave the way it did when petroleum was still cheap and abundant. The peak oil movement wasn’t prepared for that future.
Granting all this, failed predictions aren’t enough by themselves to stop a movement in its tracks. Here in the United States, especially, we’ve got an astonishing tolerance for predictive idiocy. The economists who insisted that neoliberal policies would surely bring prosperity, for example, haven’t been laughed into obscurity by the mere fact that they were dead wrong; au contraire, they’re still drawing their paychecks and being taken seriously by politicians and the media. The pundits who insisted at the top of their lungs that Britain wouldn’t vote for Brexit and Donald Trump couldn’t possibly win the US presidency are still being taken seriously, too. Nor, to move closer to the activist fringes, has the climate change movement been badly hurt by the embarrassingly linear models of imminent doom it used to deploy with such abandon; the climate change movement is in deep trouble, granted, but its failure has other causes.
It was the indirect impacts of those failed predictions, rather, that helped run the peak oil movement into the ground. The most important of these, to my mind, was the way that those predictions encouraged people in the movement to put their faith in the notion that sometime very soon, governments and businesses would have to take peak oil seriously. That’s what inspired ASPO-USA, for example, to set up a lobbying office in Washington DC with a paid executive director, when the long-term funding for such a project hadn’t yet been secured. On another plane, that’s what undergirded the entire strategy of the Transition Town movement in its original incarnation: get plans drawn up and officially accepted by as many town governments as possible, so that once the arrival of peak oil becomes impossible to ignore, the plan for what to do about it would already be in place.
Of course the difficulty in both cases was that the glorious day of public recognition never arrived. The movement assumed that events would prove its case in the eyes of the general public and the political system alike, and so made no realistic plans about what to do if that didn’t happen. When it didn’t happen, in turn, the movement was left twisting in the wind.
The conviction that politicians, pundits, and the public would be forced by events to acknowledge the truth about peak oil had other consequences that helped hamstring the movement. Outreach to the vast majority that wasn’t yet on board the peak oil bandwagon, for example, got far too little attention or funding. Early on in the movement, several books meant for general audiences—James Howard Kunstler’s The Long Emergency and Richard Heinberg’s The Party’s Over are arguably the best examples—helped lay the foundations for a more effective outreach program, but the organized followup that might have built on those foundations never really happened. Waiting on events took the place of shaping events, and that’s almost always a guarantee of failure.
One particular form of waiting on events that took a particularly steep toll on the movement was its attempts to get funding from wealthy donors. I’ve been told that Post Carbon Institute got itself funded in this way, while as far as I know, ASPO-USA never did. Win or lose, though, begging for scraps at the tables of the rich is a sucker’s game.  In social change as in every other aspect of life, who pays the piper calls the tune, and the rich—who benefit more than anyone else from business as usual—can be counted on to defend their interest by funding only those activities that don’t seriously threaten the continuation of business as usual. Successful movements for social change start by taking effective action with the resources they can muster by themselves, and build their own funding base by attracting people who believe in their mission strongly enough to help pay for it.
There were other reasons why the peak oil movement failed, of course. To its credit, it managed to avoid two of the factors that ran the climate change movement into the ground, as detailed in the essay linked above—it never became a partisan issue, mostly because no political party in the US was willing to touch it with a ten foot pole, and the purity politics that insists that supporters of one cause are only acceptable in its ranks if they also subscribe to a laundry list of other causes never really got a foothold outside of certain limited circles. Piggybacking—the flipside of purity politics, which demands that no movement be allowed to solve one problem without solving every other problem as well—was more of a problem, and so, in a big way, was pandering to the privileged—I long ago lost track of the number of times I heard people in the peak oil scene insist that this or that high-end technology, which was only affordable by the well-to-do, was a meaningful response to the coming of peak oil.
There are doubtless other reasons as well; it’s a feature of all things human that failure is usually overdetermined. At this point, though, I’d like to set that aside for a moment and consider two other points. The first is that the movement didn’t have to fail the way it did. The second is that it could still be revived and gotten back on a more productive track.
To begin with, not everyone in the peak oil scene bought into the unexamined assumption I’ve critiqued above. Well before the movement started running itself into the ground, some of us pointed out that economic factors were going to have a massive impact on the rates of petroleum production and consumption—my first essay on that theme appeared here in April of 2007, and I was far from the first person to notice it. The movement by that time was so invested in its own predictions, with their apparent promise of public recognition and funding, that those concerns didn’t have an impact at the time. Even when the stratospheric oil price spike of 2008 was followed by a bust, though, peak oil organizations by and large don’t seem to have reconsidered their strategies. A mid-course correction at that point, wrenching though it might have been, could have kept the movement alive.
There were also plenty of good examples of effective movements for social change from which useful lessons could have been drawn. One difficulty is that you won’t find such examples in today’s liberal environmental mainstream, which for all practical purposes hasn’t won a battle since Richard Nixon signed the Clean Air Act. The struggle for the right to same-sex marriage, as I’ve noted before, is quite another matter—a grassroots movement that, despite sparse funding and strenuous opposition, played a long game extremely well and achieved its goal. There are other such examples, on both sides of today’s partisan divide, from which useful lessons can be drawn. Pay attention to how movements for change succeed and how they fail, and it’s not hard to figure out how to play the game effectively. That could have been done at any point in the history of the peak oil movement. It could still be done now.
Like same-sex marriage, after all, peak oil isn’t inherently a partisan issue. Like same-sex marriage, it offers plenty of room for compromise and coalition-building. Like same-sex marriage, it’s a single issue, not a fossilized total worldview like those that play so large and dysfunctional a role in today’s political nonconversations. A peak oil movement that placed itself squarely in the abandoned center of contemporary politics, played both sides against each other, and kept its eyes squarely on the prize—educating politicians and the public about the reality of finite fossil fuel reserves, and pushing for projects that will mitigate the cascading environmental and economic impacts of peak oil—could do a great deal to reshape our collective narrative about energy and, in the process, accomplish quite a bit to make the long road down from peak oil less brutal than it will otherwise be.
I’m sorry to say that the phrase “peak oil,” familiar and convenient as it is, probably has to go.  The failures of the movement that coalesced around that phrase were serious and visible enough that some new moniker will be needed for the time being, to avoid being tarred with a well-used brush. The crucial concept of net energy—the energy a given resource provides once you subtract the energy needed to extract, process, and use it—would have to be central to the first rounds of education and publicity; since it’s precisely equivalent to profit, a concept most people grasp quickly enough, that’s not necessarily a hard thing to accomplish, but it has to be done, because it’s when the concept of net energy is solidly understood that such absurdities as commercial fusion power appear in their true light.
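Since the profit analogy carries most of the weight here, a few lines of arithmetic may help make it concrete. This is an illustrative sketch of my own, with function names and sample figures chosen for the example rather than drawn from the peak oil literature; treat the numbers as rough placeholders, not measurements of any real resource.

    # Net energy treated exactly like profit: gross energy delivered is revenue,
    # the energy spent finding, extracting, and processing it is cost, and the
    # difference is what society actually gets to run itself on. EROEI (energy
    # returned on energy invested) is the corresponding "profit margin."
    # The sample figures below are rough illustrative placeholders only.

    def net_energy(energy_out: float, energy_in: float) -> float:
        return energy_out - energy_in

    def eroei(energy_out: float, energy_in: float) -> float:
        return energy_out / energy_in

    sources = {
        "conventional light crude (illustrative)": (30.0, 1.0),
        "tar sands (illustrative)": (5.0, 1.0),
        "corn ethanol (illustrative)": (1.3, 1.0),
    }

    for name, (out_units, in_units) in sources.items():
        print(f"{name}: net {net_energy(out_units, in_units):5.1f}, "
              f"EROEI {eroei(out_units, in_units):5.1f}")

Line resources up that way and it’s easy to see why a source that barely breaks even on the energy books can’t stand in for one that used to return its investment thirtyfold, however impressive the gross figures look; that’s the light in which schemes that ignore the energy cost of getting energy need to be read.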
It probably has to be said up front that no such project will keep the end of the industrial age from being an ugly mess. That’s already baked into the cake at this point; what were once problems to be solved have become predicaments that we can, at best, only mitigate. Nor could a project of the sort I’ve very roughly sketched out here expect any kind of overnight success. It would have to play a long game in an era when time is running decidedly short. Challenging? You bet—but I think it’s a possibility worth serious consideration.
***********************
In other news, I’m delighted to announce the appearance of two books that will be of interest to readers of this blog. The first is Dmitry Orlov’s latest, Shrinking the Technosphere: Getting a Grip on the Technologies that Limit Our Autonomy, Self-Sufficiency, and Freedom. It’s a trenchant and thoughtful analysis of the gap between the fantasies of human betterment through technological progress and the antihuman mess that’s resulted from the pursuit of those fantasies, and belongs on the same shelf as Theodore Roszak’s Where the Wasteland Ends: Politics and Transcendence in Postindustrial Society and my After Progress: Religion and Reason in the Twilight of the Industrial Age. Copies hot off the press can be ordered from New Society here.
Meanwhile, Space Bats fans will want to know that the anthology of short stories and novellas set in the world of my novel Star’s Reach is now available for preorder from Founders House here. Merigan Tales is a stellar collection, as good as any of the After Oil anthologies, and fans of Star’s Reach won’t want to miss it.

The Fifth Side of the Triangle

Wed, 2016-12-07 11:27
One of the things I’ve had occasion to notice, over the course of the decade or so I’ve put into writing these online essays, is the extent to which repeating patterns in contemporary life go unnoticed by the people who are experiencing them. I’m not talking here about the great cycles of history, which take long enough to roll over that a certain amount of forgetfulness can be expected; the repeating patterns I have in mind come every few years, and yet very few people seem to notice the repetition.
An example that should be familiar to my readers is the way that, until recently, one energy source after another got trotted out on the media and the blogosphere as the excuse du jour for doing nothing about the ongoing depletion of global fossil fuel reserves. When this blog first got under way in 2006, ethanol from corn was the excuse; then it was algal biodiesel; then it was nuclear power from thorium; then it was windfarms and solar PV installations; then it was oil and gas from fracking. In each case, the same rhetorical handwaving about abundance was deployed for the same purpose, the same issues of net energy and concentration were evaded, and the resource in question never managed to live up to the overblown promises made in its name—and yet any attempt to point out the similarities got blank looks and the inevitable refrain, “but this is different.”
The drumbeat of excuses du jour has slackened a bit just now, and that’s also part of a repeating pattern that doesn’t get anything like the scrutiny it deserves. Starting when conventional petroleum production worldwide reached its all-time plateau, in the first years of this century, the price of oil has jolted up and down in a multiyear cycle. The forces driving the cycle are no mystery: high prices encourage producers to bring marginal sources online, but they also decrease demand; the excess inventories of petroleum that result drive down prices; low prices encourage consumers to use more, but they also cause marginal sources to be shut down; the shortfalls of petroleum that result drive prices up, and round and round the mulberry bush we go.
We’re just beginning to come out of the trough following the 2015 price peak, and demand is even lower than it would otherwise be, due to cascading troubles in the global economy. Thus, for the moment, there’s enough petroleum available to supply everyone who can afford to buy it. If the last two cycles are anything to go by, though, oil prices will rise unsteadily from here, reaching a new peak in 2021 or so before slumping down into a new trough. How many people are paying attention to this, and using the current interval of relatively cheap energy to get ready for another period of expensive energy a few years from now? To judge from what I’ve seen, not many.
Just at the moment, though, the example of repetition that comes first to my mind has little to do with energy, except in a metaphorical sense. It’s the way that people committed to a cause—any cause—are so often so flustered when initial successes are followed by something other than repeated triumph forever. Now of course part of the reason that’s on my mind is the contortions still ongoing on the leftward end of the US political landscape, as various people try to understand (or in some cases, do their level best to misunderstand) the implications of last month’s election. Still, that’s not the only reason this particular pattern keeps coming to mind.
I’m also thinking of it as the Eurozone sinks deeper and deeper into political crisis. The project of European unity had its initial successes, and a great many European politicians and pundits seem to have convinced themselves that of course those would be repeated step by step, until a United States of Europe stepped out on the international stage as the world’s next superpower. It’s pretty clear at this point that nothing of the sort is going to happen, because those initial successes were followed by a cascade of missteps and a populist backlash that’s by no means reached its peak yet.
More broadly, the entire project of liberal internationalism that’s guided the affairs of the industrial world since the Berlin Wall came down is in deep trouble. It’s been enormously profitable for the most affluent 20% or so of the industrial world’s population, which is doubtless a core reason why that same 20% insists so strenuously that no other options are possible, but it’s been an ongoing disaster for the other 80% or so, and they are beginning to make their voices heard.
At the heart of the liberal project was the insistence that economics should trump politics—that the free market should determine policy in most matters, leaving governments only an administrative function. Of course that warm and cozy abstraction “the free market” meant in practice the kleptocratic corporate socialism of too-big-to-fail banks and subsidy-guzzling multinationals, which proceeded to pursue their own short-term benefit so recklessly that they’ve driven entire countries into the ground. That’s brought about the inevitable backlash, and the proponents of liberal internationalism are discovering to their bafflement that if enough of the electorate is driven to the wall, the political sphere may just end up holding the Trump card after all.
And of course the same bafflement is on display in the wake of last month’s presidential election, as a great many people who embraced our domestic version of the liberal internationalist idea were left dumbfounded by its defeat at the hands of the electorate—not just by those who voted for Donald Trump, but also by the millions who stayed home and drove Democratic turnout in the 2016 election down to levels disastrously low for Hillary Clinton’s hopes. A great many of the contortions mentioned above have been driven by the conviction on the part of Clinton’s supporters that their candidate’s defeat was caused by a rejection of the ideals of contemporary American liberalism. That some other factor might have been involved is not, at the moment, something many of them are willing to hear.
That’s where the repeating pattern comes in, because movements for social change—whether they come from the grassroots or the summits of power—are subject to certain predictable changes, and if those changes aren’t recognized and countered in advance, they lead to the kind of results I’ve just been discussing. There are several ways to talk about those changes, but the one I’d like to use here unfolds, in a deliberately quirky way, from the Hegelian philosophy of history.
That probably needs an explanation, and indeed an apology, because Georg Wilhelm Friedrich Hegel has been responsible for more sheer political stupidity than any other thinker of modern times. Across the bloodsoaked mess that was the twentieth century, from revolutionary Marxism in its opening years to Francis Fukuyama’s risible fantasy of the End of History in its closing, where you found Hegelian political philosophy, you could be sure that someone was about to make a mistaken prediction.
It may not be entirely fair to blame Hegel personally for this. His writings and lectures are vast heaps of cloudy abstraction in which his students basically had to chase down inkblot patterns of their own making. Hegel’s great rival Arthur Schopenhauer used to insist that Hegel was a deliberate fraud, stringing together meaningless sequences of words in the hope that his readers would mistake obscurity for profundity, and more than once—especially when slogging through the murky prolixities of Hegel’s The Phenomenology of Spirit—I’ve suspected that the old grouch of Frankfurt was right. Still, we can let that pass, because a busy industry of Hegelian philosophers spent the last century and a half churning out theories of their own based, to one extent or another, on Hegel’s vaporings, and it’s this body of work that most people mean when they talk about Hegelian philosophy.
At the core of most Hegelian philosophies of history is a series of words that used to be famous, and still has a certain cachet in some circles: thesis, antithesis, synthesis. (Hegel himself apparently never used those terms in their later sense, but no matter.) That’s the three-step dance to the music of time that, in the Hegelian imagination, shapes human history. You’ve got one condition of being, or state of human consciousness, or economic system, or political system, or what have you; it infallibly generates its opposite; the two collide, and then there’s a synthesis which resolves the initial contradiction. Then the synthesis becomes a thesis, generates its own antithesis, a new synthesis is born, and so on.
One of the oddities about Hegelian philosophies of history is that, having set up this repeating process, their proponents almost always insist that it’s about to stop forever. In the full development of the Marxist theory of history, for example, the alternation of thesis-antithesis-synthesis starts with the primordial state of primitive communism and then chugs merrily, or rather far from merrily, through a whole series of economic systems, until finally true communism appears—and then that’s it; it’s the synthesis that never becomes a thesis and never conjures up an antithesis. In exactly the same way, Fukuyama’s theory of the end of history argued that all history until 1991 or so was a competition between different systems of political economy, of which liberal democratic capitalism and totalitarian Marxism were the last two contenders; capitalism won, Marxism lost, game over.
Now of course that’s part of the reason that Hegelianism so reliably generates false predictions, because in the real world it’s never game over; there’s always another round to play. There’s another dimension of Hegelian mistakenness, though, because the rhythm of the dialectic implies that the gains of one synthesis are never lost. Each synthesis becomes the basis for the next struggle between thesis and antithesis out of which a new synthesis emerges—and the new synthesis is always supposed to embody the best parts of the old.
This is where we move from orthodox Hegelianism to the quirky alternative I have in mind. It didn’t emerge out of the profound ponderings of serious philosophers of history in some famous European university. It first saw the light in a bowling alley in suburban Los Angeles, and the circumstances of its arrival—which, according to the traditional account, involved the miraculous appearance of a dignified elderly chimpanzee and the theophany of a minor figure from Greek mythology—suggest that prodigious amounts of drugs were probably involved.
Yes, we’re talking about Discordianism.
I’m far from sure how many of my readers are familiar with that phenomenon, which exists somewhere on the ill-defined continuum between deadpan put-on and serious philosophical critique. The short form is that it was cooked up by a couple of young men on the fringes of the California Beat scene right as that was beginning its mutation into the first faint adumbrations of the hippie phenomenon. Its original expression was the Principia Discordia, the scripture (more or less) of a religion (more or less) that worships (more or less) Eris, the Greek goddess of chaos, and its central theme is the absurdity of belief systems that treat orderly schemes cooked up in the human mind as though these exist out there in the bubbling, boiling confusion of actual existence.
That may not seem like fertile ground for a philosophy of history, but the Discordians came up with one anyway, probably in mockery of the ultraserious treatment of Hegelian philosophy that was common just then in the Marxist-existentialist end of the Beat scene. Robert Shea and Robert Anton Wilson proceeded to pick up the Discordian theory of history and weave it into their tremendous satire of American conspiracy culture, the Illuminatus! trilogy. That’s where I encountered it originally in the late 1970s; I laughed, and then paused and ran my fingers through my first and very scruffy adolescent beard, realizing that it actually made more sense than any other theory of history I’d encountered.
Here’s how it works. From the Discordian point of view, Hegel went wrong for two reasons. The first was that he didn’t know about the Law of Fives, the basic Discordian principle that all things come in fives, except when they don’t. Thus he left off the final two steps of the dialectical process: after thesis, antithesis, and synthesis, you get parenthesis, and then paralysis.
The second thing Hegel missed is that the synthesis is never actually perfect.  It never succeeds wholly in resolving the conflict between thesis and antithesis; there are always awkward compromises, difficulties that are papered over, downsides that nobody figures out at the time, and so on. Thus it doesn’t take long for the synthesis to start showing signs of strain, and the inevitable response is to try to patch things up without actually changing anything that matters. The synthesis thus never has time to become a thesis and generate its own antithesis; it is its own antithesis, and ever more elaborate arrangements have to be put to work to keep it going despite its increasingly evident flaws; that’s the stage of parenthesis.
The struggle to maintain these arrangements, in turn, gradually usurps so much effort and attention that the original point of the synthesis is lost, and maintaining the arrangements themselves becomes too burdensome to sustain. That’s when you enter the stage of paralysis, when the whole shebang grinds slowly to a halt and then falls apart. Only after paralysis is total do you get a new thesis, which sweeps away the rubble and kickstarts the whole process into motion again.
There are traditional Discordian titles for these stages. The first, thesis, is the state of Chaos, when a group of human beings look out at the bubbling, boiling confusion of actual existence and decide to impose some kind of order on the mess. The second, antithesis, is the state of Discord, when the struggle to impose that order on the mess in question produces an abundance of equal and opposite reactions. The third, synthesis, is the state of Confusion, in which victory is declared over the chaos of mere existence, even though everything’s still bubbling and boiling merrily away as usual. The fourth, parenthesis, is the state of Consternation,* in which the fact that everything’s still bubbling and boiling merrily away as usual becomes increasingly hard to ignore. The fifth and final, paralysis, is the state of Moral Warptitude—don’t blame me, that’s what the Principia Discordia says—in which everything grinds to a halt and falls to the ground, and everyone stands around in the smoldering wreckage rubbing their eyes and wondering what happened.
*(Yes, I know, Robert Anton Wilson called the last two stages Bureaucracy and Aftermath. He was a heretic. So is every other Discordian, for that matter.)
Let’s apply this to the liberal international order that emerged in the wake of the Soviet Union’s fall, and see how it fits. Thesis, the state of Chaos, was the patchwork of quarrelsome nations into which our species has divided itself, which many people of good will saw as barbarous relics of a violent past that should be restrained by a global economic order. Antithesis, the state of Discord, was the struggle to impose that order by way of trade agreements and the like, in the teeth of often violent resistance—the phrase “WTO Seattle” may come to mind here. Synthesis, the state of Confusion, was the self-satisfied cosmopolitan culture that sprang up among the affluent 20% or so of the industrial world’s population, who became convinced that the temporary ascendancy of policies that favored their interests was not only permanent but self-evidently right and just.
Parenthesis, the state of Consternation, was the decades-long struggle to prop up those policies despite the disastrous economic consequences those policies inflicted on everyone but the affluent. Finally, paralysis, the state of Moral Warptitude, sets in when populist movements, incensed by the unwillingness of the 20% to consider anyone else’s needs but their own, surge into the political sphere and bring the entire project to a halt. It’s worth noting here that the title “moral warptitude” may be bad English, but it’s a good description for the attitude of believers in the synthesis toward the unraveling of their preferred state of affairs. It’s standard, as just noted, for those who benefit from the synthesis to become convinced that it’s not merely advantageous but also morally good, and to see the forces that overthrow it as evil incarnate; this is simply another dimension of their Confusion.
Am I seriously suggesting that the drug-soaked ravings of a bunch of goofy California potheads provide a better guide to history than the serious reflections of Hegelian philosophers? Well, yes, actually, I am. Given the track record of Hegelian thought when it comes to history, a flipped coin is a better guide—use a coin, and you have a 50% better chance of being right. Outside of mainstream macroeconomic theory, it’s hard to think of a branch of modern thought that so consistently turns out false answers once it’s applied to the real world.
No doubt there are more respectable models that also provide a clear grasp of what happens to most movements for social change—the way they lose track of the difference between achieving their goals and pursuing their preferred strategies, and generally end up opting for the latter; the way that their institutional forms become ends in themselves, and gradually absorb the effort and resources that would otherwise have brought about change; the way that they run to extremes, chase off potential and actual supporters, and then busy themselves coming up with increasingly self-referential explanations for the fact that the only tactics they’re willing to consider are those that increase their own marginalization in the wider society, and so on. It’s a familiar litany, and will doubtless become even more familiar in the years ahead.
For what it’s worth, though, it’s not necessary for the two additional steps of the post-Hegelian dialectic, the fourth and fifth sides of his imaginary triangle, to result in the complete collapse of everything that was gained in the first three steps. It’s possible to surf the waves of Consternation and Moral Warptitude—but it’s not easy. Next week, we’ll explore this further, by circling back to the place where this blog began, and having a serious talk about how the peak oil movement failed.
*************
In other news, I’m delighted to report that Retrotopia, which originally appeared here as a series of posts, is now in print in book form and available for sale. I’ve revised and somewhat expanded Peter Carr’s journey to the Lakeland Republic, and I hope it meets with the approval of my readers.
Also from Founders House, the first issue of the new science fiction and fantasy quarterly MYTHIC has just been released. Along with plenty of other lively stories, it’s got an essay of mine on the decline and revival of science fiction, and a short story, "The Phantom of the Dust," set in the same fictive universe as my novel The Weird of Hali: Innsmouth, and pitting Owen Merrill and sorceress Jenny Chaudronnier against a sinister mystery from colonial days. Subscriptions and single copies can be ordered here.