Summary (in One Sentence)

This book’s ad hominem attack on Mircea Eliade, built on an ideologically driven and deliberately limited selection from his body of work (on patterns of religious and mythological symbolism spanning most of the globe), is most notable for how often it falls into the very trap it purports to set for Eliade.

Pre-Disclaimer

Last year, in 2012, I set myself the task of reading at least ten pages per day, and I’m not sure whether I kept it up. I have the same task this year, and I’ve added that I will write a book reaction for each one that I finish (or give up on, if I stop). These reactions will not be Amazon-type reviews, with synopses and background research on the author or the book itself, unless that strikes me as necessary or the book inspired me to it as I read. In general, these amount to assessments of the ways I found the book helpful.

Consequently, I may provide spoilers, may misunderstand books or get stuff wrong, or go off on a gratuitous tear about the thing in some way, &c. I may say stupid stuff, poorly informed stuff. There are some in the world who expect everyone to be omniscient and can’t be bothered to engage in a human dialogue toward figuring out how to make the world a better place. To the extent that each reaction I offer for a book is a “here’s what I found helpful about this,” it is then up to us (you, me, us) to correct, refine, or trash and start over this or whatever else we see as potentially helpful toward making the world a better place. If you can’t be bothered to take up your end of that bargain, that’s part of the problem to be solved.

A Reaction To: Strenski’s (1993)[1] Religion in Relation: Method, Application, and Moral Location

In 1993, Ivan Strenski (a professor at the University of California, Santa Barbara, and the American editor-in-chief of the journal Religion) took upon himself, in Religion in Relation: Method, Application, and Moral Location, the task of reorienting the study of religions. Part of this task involved skewering previous theorists and debunking their intellectual commitments in different ways. This necessarily meant skewering the founder of comparative religious studies, Mircea Eliade. The following answers many of his criticisms.

As an initial framing for this essay, it must be noted that Strenski begins (in his Introduction) with almost exclusively ad hominem rhetoric. In fact, to say that he evinces a loathing toward Eliade may not go too far. He writes,

Here Eliade realizes that he can discover the necessary truths of religious meaning without recourse to the social and historical sciences—in this case ethnology … : a fantastic discovery if it be so! His only reason for withdrawal from a full admission of this ‘fact’ is a theoretically irrelevant one: ‘our own historic moment obliges us to understand the non-European cultures and engage in conversation with their authentic representatives.’ His only reason for not admitting the theoretical consequences of the independence of the history of religions from the social and historical sciences seems to be the desire to avoid the chorus of criticism such as [sic] admission would rain down on his theory. As it is, he avoids the admission only to preserve a certain appearance of humility (22, emphasis added).

What is odd here is the repeated insistence on only (along with uncharitable armchair psychological guesses about Eliade’s motivation) and the fact that Strenski feels he can tag as irrelevant precisely the reason Eliade gives for dialoguing with non-Europeans. Here (and elsewhere) Strenski paranoiacally reads these passages as lip-service at best and, at worst, as attempts to throw people off the path. Given how Eliade’s work actually reads, Strenski would have been wiser to say that Eliade was simply not self-aware; that he believed he was taking history into account when really he was not. That would still be cant, but it would be more plausible or credible than saying Eliade is rhetorically manipulating his readers with malice aforethought.

Another passage is even more striking.

What Eliade and the partisans of absolute autonomy feared was change! What they feared was the loss of the precious epistemological privileges which they awarded themselves and the so-called ‘History of Religions’ …. Lévi-Strauss’s rigor and resistance to the mystagoguery of the Joseph Campbells and Mircea Eliades of this world freed students of religion from the burden of the guru’s role. Structural mythology meant that one could engage myth without committing oneself in advance to the predigested diet of meanings and pseudoreligiosity a Campbell or Eliade dispensed (5).

The ad hominem aspect is acutely felt in the reification of “Eliades” and “Campbells” and again when Strenski refers to Eliade’s work as a new kind of preaching, an echo of the mystagoguery and pseudoreligiosity alluded to here. But, as Strenski informs us, “one can tell that the line separating mere technique from fundamental value orientations has been crossed when disputants get overwrought” (10). What is odd about this is one of two things. Either Strenski is inferring Eliade’s position from what Eliade has written or he is, a priori as it were, interpreting Eliade’s approach according to his own predigested interpretive pseudopsychology. If the latter, then this is especially odd, because it is precisely this disastrous fault that he charges Eliade with. If the former, then in arrogating to himself the ability to interpret correctly he displays the same arrogance he imputes to Eliade.

Strenski’s central thesis is that nothing is without context.  In this sense, he invokes a particular sense of reductionism and then promotes it. Reductionism in his sense involves not isolating a subject, not hermetically sealing it in some supposedly unique domain where it cannot be reached by any other discipline. Insofar as reductionism usually seems to mean circumscribing a subject in such a way that other explanations are not allowed to be considered, Strenski’s use seems the reverse of the norm, but so mote it be. Strenski notes that “theories are social and historical facts just as much as they are loci of arguments. As such, theories ought to be studied historically and socially just as much as we study them for their cogency” (9, emphasis in original). One can agree, and then one would have a history of theories (of religion), not a history of religion.

Keeping Strenski’s framing in mind, we may now reply to his attempts to criticize Eliade’s “method”.

Strenski insists that Eliade has an ideological ax to grind, and sees this as a problem. It is hard to see why, because Strenski (like some others who feel compelled to criticize Eliade on what are fundamentally irrelevant grounds, e.g., Jonathan Smith) gives the impression of not having read Eliade’s work very closely or of having poorly understood what he read.

For instance, a central complaint is that Eliade (not just Eliade’s work, so once again, this is an ad hominem complaint) resents history. In The Myth of the Eternal Return, Eliade saves this issue for the end of the book, where he describes, certainly with a tone of respectful acknowledgement if not wistfulness, the ability of archaic man to abolish time and start anew. This pattern—of abolishing time, resetting the clock to the time before the Fall, as it were, to start anew—does indeed become a central interpretive symbol for Eliade in many subsequent books and passages, but how is this a resentment or rejection of history? It takes history as a problem that we human beings living in it must work out how to engage, even when we are big fans of history (as some are).

Where Strenski’s ax becomes most ridiculous is when he quotes verbatim passages in which Eliade specifically acknowledges the importance of history while also offering a distinction for the history of religions.

But let’s pause a moment. Strenski is not a historian per se (that is, he is a professor of religion or a professor of comparative religion) and yet he elects to wax indignant on behalf of historians. He takes Eliade to task for thinking that the historian’s job “is merely to piece together an event or a series of events with the aid of the few bits of evidence that are preserved to him” (21). Strenski reminds his readers that the historian must also creatively imagine the whole from those scraps and place that whole in the larger fabric that is History itself. He says this as if Eliade deliberately intends to leave out these steps, when it seems unlikely that he would or did. Eliade, in his (1958)[2] Patterns in Comparative Religion, emphasizes that “the religious historian must trace not only the history of a given hierophany [the few bits of evidence that are preserved for him], but must first of all understand and explain the modality of the sacred that that hierophany discloses” (21). Eliade’s point, then, concerns not what happens to the preserved bits subsequently, but the fact that there is a step prior to assembling the preserved bits in the first place.

Let’s begin to split some hairs because, although Strenski seems either to misunderstand or to deliberately ignore Eliade’s distinction here, he nevertheless makes a great deal of hay out of the fact that Eliade at this point announces, in effect, that one must start with an interpretive framework, rather than let one emerge through the accumulation of preserved bits of data.

As for a first hair, it is worth remembering that Eliade is a student of comparative religions (not religion). The presupposition is that religions can be compared.  This being so, Strenski’s notion that Eliade does not consider religions in relation to anything else is hard to see as anything but blockheaded or merely ideological. Anthropologists have long been painfully aware of the immense intellectual problem associated with trying to compare cultures, so the issue in fact implicates wholes of cultures and is not some hermetically sealed bubble. But this “theoretical” observation aside, there is simply the scholarliness of Eliade and the evidence in his writing that he is comparing religions, which leads to the second hair.

Strenski seems overly exercised by Eliade’s resort to symbols (more precisely, he uses the phrase depth psychology as a blanket term for criticizing Eliade’s approach), but this seems a piece of willful ignorance. Confronted by sun worship in two cultures, it would be daft to insist that one was the original and the other a copy when distance precludes diffusion. More precisely, how might one characterize the relationship between these two sun worships? One (time-tested) way is to abstract them both; to presuppose, solely for the sake of argument, that there is some archetype or template as a descriptive hermeneutic that might suffice. This is, after all, comparative religion. Merely to catalogue the traits of one and juxtapose it to a catalogue of the other would not be research. To insist that analogizing similarities can be invalidated by pointing out differences invites manifold objections, the most potent being that, if we took that argument seriously, no (dissimilar) human being could ever hope to communicate with another—“similarity” and “difference” are not either/or but both/and.[3] And because Eliade offers comparative religion, this necessarily places outside the domain of the discussion the historical specificity from which the archetype was abstracted—the emphasis, precisely, is on similarity, without discounting the importance of difference. Thus Eliade specifically says, “I am not denying the importance of history … for the estimate of the true value of this or that symbol as it was understood and lived in a specific culture” (15).

How Strenski can cite this passage and then pretend that Eliade dismisses history is mysterious. The object of comparative religion is never (and never will be) the concrete specificity of a religious symbol (a hierophany), except insofar as it refines the abstraction process that permits cross-cultural comparison.

Although “history of religions” is a misnomer for Eliade’s work, one he may have used from time to time, it is notable that Strenski does not take Eliade to task for not trying to construct a chronological history of religions[4]. In point of fact, even where there is a strictly documented tradition, any attempt to make a “history of religions” must be either a hopeless task or one already highly implicated in whatever modality of religion that history discloses.

As for another hair, Strenski’s ideological scientism leads him to throw around terms like falsifiability without giving any pertinent examples, while making claims for empiricism that seem ill-founded. If all that can be said about a given culture is a specific, concrete description of its religion, then this traps it entirely in a cultural autism that doesn’t permit comparison with any other in the first place. But this is hardly a new problem, and it goes to the issue of relating the specific to the general. Strenski’s solution seems to be to blithely plop intellectual constructs down in other contexts just to see how they change meaning, and while this may be academically useful (and ideologically appealing), it is also intellectually incoherent. For if the concrete is in fact genuinely concrete, and therefore incommensurable with anything else, then to compare it with anything else is to engage in a game of smoke and mirrors that has no truth, much less falsifiability or scientific merit.

This fact will not stop partisans of concrete history from running counterfactuals (comparing one historical moment to another or one society to another) as if there were any legitimacy to doing so, but neither will it become intellectually defensible. The root of the problem in all “social sciences,” in fact, is precisely that animate things (and especially people) do not behave in a reliable (predictable) way, so that no Laws of social science can be established (except, of course, by statistical guessing, which abstracts away from the concrete and neither predicts the behavior of any given person nor in fact correctly predicts the real percentages of measured behavior, but yields only a probable range of possible outcomes).

All of this matters because it is the essence of the gauntlet thrown down by Strenski, and is the club by which he browbeats Eliade’s approach, even though Eliade adopts no scientistic conceit like Strenski does.

As yet another hair, Strenski lambasts Eliade for having an a priori interpretive mechanism, i.e., that the historian “must first of all understand and explain the modality of the sacred that that hierophany discloses” (Patterns in Comparative Religion, 21). Recognizing a hierophany is a minimum first step in this process (i.e., recognizing the sacred as sacred). This is no different from the historian, in fact, who must first identify what is history out of the mass of events and happenings in the world.

This is hardly a minor point, and I wonder how many historians could actually explain at what point (or by what distinction) an event becomes history. Whatever the answer, what will be involved will surely be an a priori interpretive mechanism that will first of all understand and explain the modality of the history that mere event discloses. At its bluntest, this will mean that some historian by fiat declares such and such an event to be historical. The main difference between Eliade’s interpretive framework for selection and the historian’s is that Eliade is consciously aware of his, while the historian may not be. Since all mental activity proceeds from within some kind of framework (or a shifting framework of frameworks), the researcher who is conscious of the framework choices being made seems more reliable and may be less prone to invisible errors caused by a present but unnoticed framework.

Another hair to be split on this point is the capaciousness of scope and vision in Eliade’s interpretive framework compared to a world where the concrete is treated as concrete and incommensurable with other instances of concreteness. If everything is atomized particulars (or arbitrarily recontextualized particulars), then alienation, innecessity, and pointlessness become the rule. By contrast, the impression from reading Eliade is far from some dippy New Age sense of world unity. The hope in Eliade’s work resides in his engagement with the human condition of loss (the profane) and hope (the sacred), which drives him to look for and find patterns in (to organize) that experience in people and culture everywhere. When he waxes almost nostalgically over archaic man’s ability to abolish time, it’s perfectly clear that this is an interpretive framework that may or may not be true (one must judge from the whole argument). Nevertheless, it points to the possibility that a reader might adopt the method.

Strenski also seems especially exercised by Eliade’s use of the word sacred. This is indeed a major and central word in Eliade’s work, so much so that, as Strenski has it, “if the Sacred does not exist, the history of religions, in Eliade’s sense, would be impossible. Fortunately for us all, Eliade is mistaken—at least about the dependence of the study of religion upon the truth of the existential claims of his neo-theology” (Strenski, 4). Again, by Strenski’s own criterion, the overwrought language here (“Fortunately for us all, Eliade is mistaken”), along with the grudging qualification that Eliade may not be mistaken about everything, should serve to clue us in that some fundamental value-orientation is at work in Strenski’s stridency.

And once again, this seems to involve a serious confusion on Strenski’s part. When I read Eliade, my sense of the sacred as he uses it is phenomenological, not ontological. Moreover, it’s distinctly omnihierophantic—there is nothing that cannot be sacred, and nothing that has not been made sacred in all likelihood—the sacred equivalent of Rule 34. Revelation may take the form of a meteorite, a strangely shaped tree, a dream—anything out of the norm. Moreover, these hierophanies are distinctly personal, or at least once were. One could encounter them everywhere. And what might be a hierophany in one culture may not be in another—once again, the inclusion of the social is a rule, not an exception, in Eliade’s work.

In one spot, Eliade speaks very poignantly about the ever-present sense of human loss. This is a loss one does well not to focus on exclusively, as each moment of a life ticks toward personal extinction, but it is always there to be seen. And counterbalancing that is the desire to overcome it, and maybe even to overcome the condition that catches us in it. We will all die, that’s true, but what can we make of what life we have? That hope of gain counterbalances the loss. And when it reaches out beyond us—to our friends, our communities, or even to something more nebulous still—then what is reached for can be called the sacred, just as Brahman (the inconceivable super-deity of Indian religion) can be identified with the world outside of my sensory perceptions, which I can never experience directly.

Eliade’s sacred is existential and phenomenological in this sense. Certainly it is experientially real, and in that sense both undeniable and seemingly ontological, but that is the extent of the sacred in my reading of Eliade. It is that which is torn away from the profane, from loss, from death. Why such a notion makes Strenski overwrought is not clear. Eliade is not threatening to supplant some existing mythology of the sacred. It’s less worth speculating and more useful to observe that much of Strenski’s railing against the Sacred proceeds from the incorrect standpoint that the sacred in Eliade must be taken as ontologically real. That doesn’t seem to be Eliade’s intention.

Over the next two sections of his perverse reading of Eliade, Strenski cites Eliade nine times (and Jung once), from texts generally from 1959–1961; six references are from the prelude to Eliade’s The History of Religions: Essays in Methodology, partly because this comprises Eliade’s main statement regarding the use of depth psychology for religious studies. And while Strenski can’t help getting overwrought, and thus unnecessarily shrill, about what one could call methodological difficulties in re Jung’s “depth psychology,” what is missing from his objection is any acknowledgment of the decades of daily empirical observation of patients that informed Jung’s notions of depth psychology. Strenski also seems not to acknowledge that depth psychology was informed by comparative religious studies conducted both within and without the discipline. Finally, Strenski ignores that depth psychology is hermeneutic and directed toward helping people, rather than serving as a philosophy, theology, or ideology. Consequently, his summary of the three problems as he sees them in Eliade is marvelously parallel, but only restates what was already beside the point (i.e., he is again comparing or confusing the different domains of intra-cultural versus cross-cultural analysis, or of history versus comparative religion).

It seems clear enough that a history of religion is plausible (and better still, histories of a religion), but a history of religions is currently beyond human knowledge and reach. Why then should Strenski object if Eliade necessarily resorts to a non-historical approach? Just because Eliade titled his essay on methodological problems of religious history The History of Religions? Clearly, Eliade recognized that this presents a problem. In that respect, it may be significant that his magnum opus is instead titled A History of Religious Ideas: a history (note the indefinite article) that organizes itself simply by chronology. Again, Strenski’s thesis that Eliade is ideologically averse to history amounts to a mischaracterization.

When Strenski next tries to argue that intuition is not self-authenticating, he first offers an analogy, then falters, provides an inadequate example, and finally merely restates the conclusion that intuition is not self-authenticating—perhaps on the basis of his initial intuition. But how this is supposed to be an adequate answer against (or more precisely, a logically consistent critique of) Eliade’s methodology is not clear. What is clear from reading Strenski’s essay is that his argument breaks down in the text[5].

Introspection is not self-authenticating, though it is often mistakenly thought to be so—it is admittedly difficult to imagine how one could introspect falsely. Imagine what it would mean to say that one was wrong in thinking that one was thinking of an automobile, for example. We might not tell the truth when someone asks ‘A penny for your thoughts’, but this would not mean that our introspection had failed—only that we had told a lie. One might, however, say that there was a pain in one’s back and later want to correct that statement by saying that it was a pain in one’s neck. We might, after all, have been mistaken in our first introspection. Intuition, as well, although it may be the way we discover certain truths—if it stands for one thing and is not just a convenient label for saying that knowledge has been attained but in an unknown way—is not the way we certify this discovery: intuitions are not self-authenticating. It has, however, at least since Descartes in modern times, mistakenly been thought to be self-authenticating (26).[6]

After this, once again verification and falsification are invoked as if they were relevant to the kind of approach Eliade adopts. The complaint is like saying a qualitative research study is not quantitative. (Because Eliade sets out to provide an explanatory meaning for myths, and because meaning itself is not quantifiable except possibly statistically,[7] to demand quantification of meaning, and thus verification or falsification in the sense Strenski wants, is a misplaced and impossible demand.) To become an apposite argument, this would require human beings to be non-living systems (i.e., not to be human beings). Thus, the objection is particularly disingenuous.

So Strenski’s objection that “Eliade has taken the self-authentication of intuition and introspection as the epistemological grounds of his discipline” (26), the carping tone and pejorative accents attached to the terminology in this descriptive passage notwithstanding, is beside the point, because Strenski seems to desire to substitute, impossibly, an inappropriate scientistic approach (i.e., a non-scientific approach) in place of Eliade’s methodologically sound one.

Strenski’s next section, “Eliade and the Study of Myths,” bears some special attention. The miasma of ad hominem scorn it exhibits toward Eliade is left for the reader to discover. What shall be done here instead is a close analysis of Strenski’s text. Please bear with me.

Strenski’s attack on Eliade’s approach to myth has twenty-nine citations: one to a tangential remark (note 70, about a theory of stories), two to works critical of Eliade (note 65, that he is non-empirical in general, and note 68, that the “terror of history” as a generalization of myth is not supported in the two earliest literate cultures of the ancient world), and one (note 69) that Strenski takes as a critique of Eliade’s position in re the terror of history. The rest (notes 42–64 and 66–67) are from five books and two essays (or chapters) by Eliade, which Strenski uses to characterize Eliade’s sense of the meaning and function of myth. These break down chronologically as follows (the differences in publication dates noted below reflect the difference between Eliade’s original publication date and the publication date cited by Strenski in his book):

  • The Sacred and the Profane (1957/1961), notes 63, 64
  • Rites and Symbols of Initiation (1958/1965), note 60
  • Myths, Dreams, and Mysteries (1959/1968), notes 46, 48, 50, 54, 61, 62
  • Images and Symbols (1961), notes 47, 49, 51, 52
  • Myth and Reality (1963/1964), notes 42, 43, 44, 45, 53, 56, 58, 59, 66
  • The Two and the One (1965), note 66
  • “Crisis and Renewal in the History of Religions” from New Theology: No 4, note 67
  • “Cosmogonic Myth and ‘Sacred History’” from Religious Studies 2, note 57

The most obvious feature of this list is that 80 percent (twenty of twenty-five) of the citations come from three books (Myths, Dreams, and Mysteries, Images and Symbols, Myth and Reality) published from 1959 to 1963 (representing less than 10 percent of Eliade’s publication history, which ran from 1933 to 1985). However, these represent the publication dates for English translations. Images and Symbols was originally published in French in 1952. This is the earliest text that Strenski cites in this section (albeit in the translated 1961 edition), even though The Myth of the Eternal Return (published originally in 1949, and in English in 1954) is the earliest study by Eliade in which his claims about the function of origin myths are most overtly stated. In fact, Strenski cites The Myth of the Eternal Return only twice (notes 13 and 34), which bears looking at.[8] Strenski writes:

Implying an understanding of history as a limited source of meaning for religious matters—as an explanation-giving discipline of very limited value—Eliade says:

I am not denying the importance of history … for the estimate of the true value of this or that system as it was understood and lived in a specific culture….But it is not by ‘placing’ a symbol in its own history that we can resolve the essential problem—namely, to know what is revealed us, not by any ‘particular version’ of a symbol, but by the whole of a symbolism.12

Moreover these higher meanings seem to ‘condition’ or ‘make possible’ logically, chronologically and somehow ontologically13 the meanings of symbols or myths which history proper gives us. Eliade brings this out by using a favourite example—the Cosmic Tree and its supposed higher meaning, ‘the perpetual regeneration of the world’:

It is because [ontological and/or logical] the Cosmic tree symbolizes the mystery of the world in perpetual regeneration that it can symbolize, at the same time or successively, the pillar of the world and the cradle of the human race….Each one of these new valorizations is possible because from the beginning [logical, chronological, or ontological] the symbol of the Cosmic Tree reveals itself as a ‘cipher’ of the world grasped as a living reality, sacred and inexhaustible.14 (My italics and bracketed annotations.) (18–19).

Given that Eliade avows, “I am not denying the importance of history” (Images and Symbols, 1952), it is perverse enough already that Strenski would cite this as proof that Eliade scorns history[9], but how he then cobbles his argument together is even more misleading, drawing in the word ontologically (from 1949, note 13’s The Myth of the Eternal Return), and then concluding the argument with an illustration from 1959 (note 14’s “Methodological Remarks on Religious Symbolism” from The History of Religions: Essays in Methodology). The point Strenski aims to make here spans a decade, as if nothing in Eliade’s thinking changed during that period. But again it is curious that someone who seems to want to suggest that Eliade’s terror of history led him to a definition of myth that removes history entirely wouldn’t cite The Myth of the Eternal Return more directly or often, given that Eliade discusses the topic extensively there. It may be (assuming Strenski read that critical text by Eliade) that Strenski could not find a damning enough statement in that book and had to resort to later texts (e.g., Myths, Dreams, and Mysteries and Myth and Reality) instead. The fact that Strenski insists, methodologically, that we should look at things in context makes yoking together passages separated by a decade almost comically contradictory.

Strenski’s second reference to The Myth of the Eternal Return (note 34) arises in his discussion of an enthusiasm he detects in Eliade for depth-psychology: “Likewise since, for Eliade, the term ‘archaic’ characterizes almost all of mankind outside the citizens of Europe and the Americas,34 depth-psychological introspections attain to the thought-worlds of all but ‘modern secular’ man” (25). This is only a supporting, not a material, reference to a book that specifically invokes the terror of history and an (almost) nostalgic envy for ‘archaic’ man, who still has at his disposal the ability to abolish time and thus escape from that terror. And yet Strenski prefers to pick his quotations from other texts instead. Specifically, when Strenski notes, “For Eliade, a myth is always an origin story which functions for existential orientation in the widest sense, all at once: for psychic and social orientation and for the orientation of man within the whole universe” (31, my emphasis), readers of Yoga: Immortality and Freedom (1933, in English 1958), Shamanism (1951, in English 1964), or even Patterns in Comparative Religion (1949, in English 1958) may object that that always is a gross overstatement. And yet from Myth and Reality (1963) Strenski cites, “a myth is always related to a ‘creation’, it tells us how something came into existence, or how a pattern of behaviour, an institution, a manner of working were established…42” (31).

It may seem like splitting a hair, but while Eliade frequently, even relentlessly, tended to refer his analyses of myths back to primordial origins (to creations), that it must always be so is not a feature of his earlier work. One might more justly accuse him of electing to study only myths that bore out such creations, except that there seems to be no such selectivity evident in his work. Perhaps the conclusion should be, against Strenski’s implication that Eliade always held this to be true of myth, that thirty years into his publication career, in 1963, Eliade began to make larger claims for myth than he had previously.

This will be pursued further below, but the reason for harping on it should be made clear beforehand. Strenski’s overall thesis—i.e., that Eliade’s approach to comparative religion has been “disastrous” (16) and that notions like “myth is always related to a ‘creation’” should be expunged from the discipline of comparative religion forever—amounts to a demand that Eliade’s entire contribution to comparative religious study (not just his methodology) be discarded.

We want to show by our critique of Eliade’s understanding of myth and the study of religion, why Eliade’s history of religions does not deserve the respect which religious scholars have accorded it. In doing so we want to indicate that, contrary to Professor Eliade’s gloomy predictions, the study of religions has a bright and hopeful future largely because it will have set aside Professor Eliade’s wayward prescriptions (16, my emphasis).

If there is any merit at all to such a mean-spirited program, then it necessarily hinges, according to Strenski’s argument, on Eliade’s understanding of myth. And if that understanding of myth was different at one point in Eliade’s career than at another, then any work prior to such a “disastrous” change of understanding would have to be admitted as still a valuable contribution to comparative religion. As things stand at the moment, Strenski seems to be identifying the watershed moment as 1963.

The simplemindedness of this suggestion, along with the massive sigh of relief that accompanies the declaration that comparative religion will ignore “Professor Eliade,”[10] once again points to the sort of overwroughtness that Strenski himself insists indicates moments of ideological value-orientation.

Elsewhere, Strenski concludes:

Eliade adds his dualist metaphysical interpretation of the problem of comparison to this discussion by affirming that cross-cultural comparability is sanctioned because the ‘awakening to the knowledge of a “limit-situation”’, the existential crisis situation common to all myths, is itself an experience which is universally the same because it is properly speaking ‘non-historical’ and therefore partakes in a transcendental existence: ‘myths … always disclose a boundary situation of man—not only a historical situation’.52 (32).

This piece of argument actually concatenates six alternating and widely ranging citations from Images and Symbols (1952, in English 1961) and Myths, Dreams, and Mysteries (orig. in English 1959, cited as 1968 by Strenski):

  • Note 47: “A myth allows one to discover one’s ontological place in the universe” (referring to Images and Symbols, 34)
  • Note 48: “Every existential crisis brings once again into question both the reality of the world and the presence of man in the world” (Myths, Dreams, and Mysteries, 17)
  • Note 49: “it is impossible that they (myths) should not be found again in any and every existential situation of man in the Cosmos;” (Images and Symbols, 25)
  • Note 50: that myths are “the privileged expressions of the existential situations of peoples belonging to various types of societies.” (Myths, Dreams, and Mysteries, 10)
  • Note 51: “the symbol itself expresses an awakening to the knowledge of a ‘limit-situation’” (Images and Symbols, 176)
  • Note 52: “myths … always disclose a boundary situation of man—not only a historical situation” (Images and Symbols, 34[11]).

What is curious here is why, in arguing toward a conclusion that already concatenates citations from pages 176 and 34 of Images and Symbols (notes 51 and 52, respectively), Strenski also ping-pongs back and forth between two texts separated by seven years.

Reading notes 47, 49, 51, and 52 (from Images and Symbols) in isolation, what is specifically missing is any explicit reference to an existential crisis. Note 47 underscores the ontologically orienting function of myth; note 49 (thus somewhat redundantly) notes that myth must be present in every existential circumstance humankind confronts; note 52 distinguishes between boundary and historical situations; while note 51 suggests that awakening to such a situation generates symbols.

Only note 48 (from Myths, Dreams, and Mysteries) mentions existential crises that throw the world up for grabs. It is this note that seems to allow Strenski (“is sanctioned because the ‘awakening to the knowledge of a “limit-situation”’, the existential crisis situation common to all myths, is itself an experience”) to insert the italicized portion of this sentence, to back-write crisis into texts otherwise without it. If, in fact, myths are existentially present in every circumstance of humankind, then one could infer that an existential crisis could be a crisis of myth and thus humanly universal.

But Eliade apparently does not say this in Images and Symbols; otherwise, why must Strenski cite another text from more than half a decade later to secure his point? Moreover, is something universal necessarily not historical? Is there no history of feasting because all humans universally must eat and have eaten publicly together from before the dawn of history itself? Or because they were born?

The point here, of course, is not to defend or criticize what Eliade said in Images and Symbols, but to note Strenski’s method of citing texts (and/or summarizing them) in a way that misrepresents what Eliade said.

I assert the above based strictly on the “evidence” and texts that Strenski himself has elected to include in support of his argument. Perhaps Strenski feels that the general run of Eliade’s work is already well enough known that he can assume its meaning, in which case why provide direct quotations? Moreover, no one held a gun to Strenski’s head and said he could only choose the above passages. Whatever the case may be, these are the nuggets he saw fit to cite. And taken together, given their more than half-decade separation in time, they cannot yield the conclusion Strenski forces.

However the situation may be, one of two unflattering conclusions about Professor Strenski follows—either he intellectually failed to grasp what he was doing and naively felt he’d made the case, or he engaged in intellectual duplicity and relied on lazy readers not to check his argument. Or perhaps this is an example of what he means by putting things in context—in which case his methodology could use some more rigor, because currently it is yielding at best intellectual anachronism.

The main reason for drawing attention here to (the lack of citations of) The Myth of the Eternal Return and Images and Symbols is that these are the oldest of Eliade’s texts cited. If indeed in 1963 Eliade is saying that myths are always a creation, then this may be a programmatic insistence that was not present in earlier work, even though even in the earlier work Eliade seemed never to find a myth that was not also a creation. It’s a fine hair to split, to be sure (and this assumes that the idea that myths are always a creation is a genuinely problematic statement), but in potential at least it’s the difference between an enthusiastic student earlier and some kind of dogmatist later.[12]

One might say that Strenski is not implying that Eliade’s later opinions should color his earlier works, but rather that the egregiousness of the later works must find its roots in (or at least support from) the earlier texts. However, this is not the impression Strenski projects. Throughout the essay, Eliade’s thought gets presented monolithically, all of a piece. There is no hint or suggestion that his ideas ever changed, which would indeed be a damning claim about a fifty-plus-year academic career. But perhaps just as Strenski accused the partisans of absolute autonomy in religious studies of fearing change, because what would then be lost was “the precious epistemological privileges which they awarded themselves” (5), the same seems to be true of Strenski’s rhetoric, because if Eliade’s ideas changed over time, then some portion of his work might thereby be left above reproach and unassailable.[13] Not that Eliade’s work never has errors, of course—the natural process of scholarship is working them out, and even finding Eliade’s critics too quick to pull the trigger at times—but this is not Strenski’s point or object. Anything less than the total extermination of “Professor Eliade” from comparative religious studies will not satisfy him.

Strenski berates Eliade for having no sense of time, yet to nail down his argument that Eliade gives priority of place to origin myths in all situations, he resorts once again to texts separated in time (this time by half a decade).

It might be reasonably asked now why Eliade goes beyond the information afforded by a single group of societies and assigns general priority in value to the creation myth. Two main reasons seem behind such a move. First, for Eliade the temporal and ontological priority of the existence of a cosmogony entails a priority in value:

A new state of things always implies a preceding state and the latter, in the last analysis, is the world. The cosmic milieu in which one lives, limited as it may be, constitutes the ‘World’, its ‘origin’ and ‘history’ precede any other individual history….A thing has an ‘origin’ because it was created…like a power clearly manifested itself in the World, an event took place.59

And in another place which refers to the previous citation: ‘by the very fact that the creation of the world precedes everything else the cosmogony enjoys a special prestige’.60 (Strenski, 33).

The first thing to point out here appears to be a factual error on Strenski’s part. Note 60 (from Rites and Symbols of Initiation, published in 1958) seems unlikely to be referring “to the previous citation,” note 59 (from Myth and Reality, 1963). Rites and Symbols was reworked and re-released (in French) in 1959, was reprinted in English in 1965 (most likely the edition Strenski is using) and most recently in 2009, but it is still the antecedent text to Myth and Reality.

Given that Eliade’s Rites and Symbols of Initiation: The Mysteries of Birth and Rebirth concerns, precisely, birth and rebirth, it is apposite to note in such a context that “cosmogony enjoys a special prestige”. But even beyond this, as Eliade demonstrated exhaustively in the much longer Shamanism, a return to the origin is a well-attested interpretation of initiation.

By contrast, in Myth and Reality, Eliade is not citing any special prestige of cosmogony, but rather that we are born into a pre-existing world. And while there are indeed arguments that could contend with such an assertion, Strenski’s position seems hardly about to take them up. On the contrary, he seems to be arguing precisely that society (not eternity) is the proper ground for analysis.

So it seems, once again, that Strenski has put Eliade “in relation” in a way that is misleading and anachronistic, since there is no reason to think that the perfectly apposite comment that cosmogony has a place of prestige (note, not necessarily even pride of place) in initiation contexts must be linked to a comment about the pre-existence of the World we are born into. Moreover, one searches in vain for the priority of value (or the value itself) that results from Eliade supposedly asserting a “temporal and ontological priority of the existence of a cosmogony” (33). Even if it were simply a “special prestige,” what does that mean? What is the value exactly? Surely Strenski is aware of the importance that some people attach to things being first (the Judeo-Christian claim of being the first monotheism, for instance). Mere vanity, one supposes, but it is hardly unsupported to note (as Eliade did) that cultures give a special prestige to cosmogony because it was first.

But the point again is not what Eliade means so much as why Strenski resorts to disparate texts and tries to piece together arguments from distant, probably unrelated (or at least decontextualized) statements. Thus, when these placements in relation are shown apart from Strenski’s sleight-of-hand, his whole argument arises from a fraction of Eliade’s total output, and none of it[14] comes materially from prior to The Myth of the Eternal Return, which is itself distinctly slighted.

Moreover, the quality of Strenski’s response to what he acknowledges is a “complex position” (34)[15] consists of little more than saying “maybe it’s not” with respect to Eliade’s theoretical statements. For several pages, Strenski’s essay continues in this vein, finally ending on a citation to the effect that Eliade was not very empirical and didn’t seem to care to be. As an ad hominem attack, this is irrelevant to begin with, but it is also an empty jab, since Strenski isn’t precise about what being empirical would mean and doesn’t seem to care to be. With just as much intellectual effort, one may say, “Or maybe Eliade is right” or “There’s no reason Strenski’s argument must hold” and be done with it all.

In the end, it is hard not to conclude that Strenski had (or has) an ax to grind less with Eliade’s ideas than with the man. But beyond that, as a document purporting to advocate a new methodology (the introduction to Strenski’s book proudly declaims it to be so), Strenski’s essay on Eliade is a very poor (or extremely alarming) example of the method. As an example of scholarliness, it contains apparent factual errors, it puts forth the idea that two documents separated by a span of years may be taken as occupying the same intellectual ground, and it lavishly employs ad hominem remarks (due either, perhaps, to an overwroughtness on Strenski’s part or because rational argumentation was thought insufficient). The nastiness of the text is enough that it might well make people unenthusiastic to read any further, which hardly seems a sound strategy for advancing a methodology.

Endnotes

[0] I wrote most of this essay in 2010, so this is not one of the books I read this year.

[1] Strenski, I. (1993). Religion in relation: Method, application, and moral location. Columbia, SC: University of South Carolina Press.

[2] Eliade, M. (1996). Patterns in comparative religion. Lincoln: University of Nebraska Press.

[3] Jung’s chapter on the type problem in relation to aesthetics discusses this distinction helpfully.

[4] Although one could sense an implied one in Patterns in Comparative Religion, insofar as it proceeds from the most elementary hierophanies and kratophanies to more complex manifestations, like Time and the World, Eliade makes it clear (in Shamanism as well) that theories of diffusion and influence are so tentative, and the recurrence of nearly identical imagery in wholly disparate cultures so frequent, that the conceit of trying to chronologically order any progression of religions is hopeless. What’s more, there are numerous “advanced” religions with prominently archaic elements and “primitive” religions with signs of advanced elements (not arising from some more advanced neighbor), which further defeats any chronological ordering.

[5] The passage in question is quoted in full in the main text immediately following; see also note [6].

[6] Notice that Strenski’s argument here hinges on an undefined distinction between introspection and intuition, and then on apparently analogizing the two (so that the claimed observation about introspection must apply to intuition). But the point I wish to underscore is the falseness of the example (about the pain in the back that proves to be a pain in the neck). When one shifts from the introspection “I observe there is a pain in my back” to the later “I observe that it was really a pain in my neck,” the later statement does not make the earlier statement false. Given that one can only know what one knows, experientially the pain in the back is a pain in the back—introspection is not making an error here, even when later retrospection reveals something else.

[7] If a poll of meaning were conducted, statistical analysis of that poll would yield a consensus of meaning, but not meaning itself. If meaning is to be equated with truth, then any number of situations may arise where a consensus of confused people is wrong compared to an individual who has it right. But even socially, to equate the meaning of something with a consensus is undesirable. A consensus will (statistically) be the starting point for any social discussion of that meaning, of course, but without at least some voices who do not share the consensus meaning, there can be no social dialogue at all. To statistically equate meaning with consensus meaning, then, amounts to a socially undesirable suppression of alternatives (except for those who benefit from such suppression). As such, one should desire socially not to equate consensus with meaning, thus making a statistical analysis of meaning unnecessary. Rather, meaning may more socially be conceived as the full range of meanings offered by those polled. Moreover, and without equating meaning with truth per se, not all meanings polled need be equally weighted. Some meanings may be socially undesirable, whether the consensus or not. Some meanings may derive from decisions better informed than others. (The opinion of an educated scholar about the meaning of a myth may be better informed than the consensus, or might be more suspect than others when strong ideological or racial biases are in play.)

[8] The note marks in the following refer to Strenski’s original. Also, the sentence “(My italics and bracketed annotations.)” is in Strenski’s original.

[9] It seems a piece of willfulness to misread Eliade’s point. In the perennial comparative dilemma of working out how to relate the differences between any two things to be compared and their similarities, it must necessarily be the differences that are sacrificed for the sake of finding similarities. Even Jonathan Z. Smith, one of the new critics Strenski applauds and who shares Strenski’s interest in contextualizing, adopts a mode of metaphor for comparing two disparate cultures. This is his solution for resolving the tension of differences and similarities. Here, Eliade is describing the same problem. And in that context, the historical specificity of an image in a given culture (its differences from other cultures) is indeed of limited value for the project of comparison (of similarities) that Eliade is undertaking. This is hardly a denigration of history, just as Eliade says. Rather, it is a recognition of the limited usefulness of history to such a project. It seems ironic that Strenski, who claims to be so concerned about context, should fail to recognize that the context of history is of limited value when seeking similarities between disparate cultures.

[10] The continuous iteration of Eliade as “Professor Eliade,” sometimes in a way that suggests direct address even though Eliade died in 1986, is one of the more pointed, and surreal, indications of the miasma of passive-aggressive contempt that informs Strenski’s essay.

[11] It is worth noting again here that Eliade says “not only a historical situation” (Images and Symbols, 34), once again including, rather than denigrating or ignoring, history.

[12] Saying this may be taking Strenski’s argument altogether too much at face value. A more credible conclusion is that Strenski despises Eliade the man, possibly for racial reasons, and that this is the primary motive hiding behind the rhetoric about methodology. Nevertheless, it is important to show that Strenski’s argument is poorly constructed at best, if not also lax to the point of deliberate negligence. The glibness of the last section of his essay certainly seems to warrant the conclusion.

[13] Saying this might simply be tit for tat, except that in cases where overwroughtness has clouded one’s mind, the possibility of succumbing to the irrationality of projection becomes much more likely. In his dudgeon, which sprawls with a high degree of consistency in this essay, Strenski’s otherwise acute faculties might well have been overridden, such that he either failed to cover his tracks well enough or simply didn’t see there were tracks at all. It would certainly be an interesting academic exercise to analyze this whole essay in detail to demonstrate the justness of this assertion.

[14] The main citations are from Myth and Reality (1963) and Myths, Dreams, and Mysteries (1959).

[15] Assuming he’s not being facetious.

Abstract

Wherever bigotry is a matter of policy enforceable by an authority, the sanctioned violence that results from that authority explicitly involves dehumanization (a deflation of the value of individual people and groups) accompanied by demonization (a hyperinflation of the danger the visible or recognizable person or group poses). In this circumstance, the problem for the individual (or group) involves being addressed simultaneously through the profound depreciation they experience as human beings and a grotesque overappreciation that positions them as the source of all the present ills of a place. To make this gesture of enforceable policy under Authority first requires one person (or one group) to arrogate the right to name an Other over against all the claims or self-descriptions of that Other.

Introduction & Disclaimer

This is the twenty-eighth entry in a series that ambitiously addresses, section by section over the course of a year or more, Canetti’s Crowds and Power,[1] and the second to address Part 4 (The Crowd in History), which Canetti breaks up into several sections. Here I cover sections 2–3, “Germany and Versailles” and “Inflation and the Crowd”.[2]

Germany and Versailles

Canetti’s version of turn-of-the-twentieth-century European history goes like this. With the completion of the Franco-Prussian War, militarism became the religion of Germany—became the most propitious outlet for the lively practice of command amongst Germans. The humiliation of Versailles at the end of the Great War then counteracted the victory of German unification Bismarck had previously declared there—that, in itself, being a victory over Louis XIV’s insults (issuing from Versailles) and Napoleon’s generally. “Versailles”—more specifically the “Diktat of Versailles”—is the rallying slogan for the most profound wound inflicted on the Germans after the Great War: the prohibition of a German army. Canetti insists in more than one form that “there is plenty of confirmation of the effect which the word ‘Versailles’ had on Germans at this time” (182); given such plenitude, one example should have been easy to produce.

Canetti claims, “I said earlier that it was only in a very limited sense that armies could be called crowds” (180). Where he does so remains unclear, but on the same page, he wrote, “The actual army … in which every young German served, functioned as a closed crowd” (180, emphasis in original)—note: Canetti does not say a crowd symbol here, but a crowd itself. Still earlier:

If we consider both warring parties simultaneously war presents a picture of two doubly interlocked crowds. An army, itself as large as possible, is bent on creating the largest possible heap of enemy dead. And exactly the same is true of the other side. Thus every participant in the war belongs simultaneously to two crowds. From the point of view of his own people he belongs to the crowd of living fighters; from that of the enemy to the potential and desired crowd of the dead (71).

However, while insisting “I said earlier that it was only in a very limited sense that armies could be called crowds” (180), Canetti attempts to disagree with himself: “this, however, was not so with a German; the army was by far the most important closed crowd he experienced” (180)—note again, Canetti does not say crowd symbol, but crowd. Despite this, in a conversation elsewhere[3] with Adorno—where Canetti is at pains to refute Freud rather than himself—he notes:

Freud speaks of two concrete crowds that he gives as examples. One is the church; the other, the army. The fact that he fixes upon what we might call two hierarchically articulated groups in order to explain his theory of the crowd seems to me to be very revealing about him. For me, the army is not a crowd at all. The army is a collection of people that is held together by a specific chain of command in such a way that it does not become a crowd. In an army it is extremely important that an order can split off two people or five; three hundred can be split off and sent somewhere or other as a single unit. An army is divisible at any time. At given moments, moments of flight or unusually fierce attack, an army can become a crowd, but in principle it is not a crowd at all in my sense of the term. So for me it is significant that Freud should use the army to explain his theory. Another important point of disagreement is that Freud really speaks only of crowds that have a leader. He always sees an individual at the head of a crowd (Adorno and Canetti, 2003, 196–7).

All of this points to the fact that Canetti really doesn’t think armies are crowds, which is surely correct. Consequently, it’s intellectually disingenuous to try to foist the army as a crowd onto the Germans, whether he conceptualizes that as a crowd symbol or crowd. So also—grabbing an example from further in the book—when the dead of the Xosa promise an army to assist the living, Canetti insists, “as an army, and thus as a crowd of dead warriors, they will reinforce the army of the living Xosas in precisely the same way that one tribe would reinforce another as a result of an alliance” (Crowds and Power, 197, emphasis in original). Or similarly, “For a German, forest and army are so intimately connected that either can equally well stand as the crowd symbol of the nation; in this respect they are identical” (180).

What is perhaps odd in all of this: Canetti’s point in this section is the role of the crowd in history, and there was certainly no shortage of mass crowd formations in the rise of National Socialism in Germany. That it was linked to militarism—as all Fascism would seem to be, cf. Italy and Japan—is a given, but to say it originates in that puts the cart before the horse. That is, it is unclear why Canetti feels compelled to torture his own tortured method to twist the German army—obscurely eliding between crowd and crowd symbol—into a crowd as part of his account of National Socialism. Insofar as his vehement denial of Freud hinges on what he views as the mistaken inclusion of a leader at the head of a crowd, it seems that Canetti wants a leaderless crowd, perhaps as a prerequisite for lazy or unfounded remarks about an authoritarian personality in Germans, but sliding from crowd to crowd symbol or army to forest or claiming they are interchangeable in this way doesn’t cut it. Similarly, whatever the value of Canetti’s remarks about trends in German history in this period, his reiteration of any crowd symbol for any group of people is aberrant at best and functionally racist. Arrogating to himself the right to name an Other like this—over against all claims or self-descriptions by that Other, or simply contrary evidence—is a central gesture toward the future establishment of a genocide.

Canetti closes this section with another fantasia, this time on the स्वस्तिक (or svástika), which in German is Hakenkreuz (literally “hook cross”):

Its effect is a twofold one; that of the sign and that of the word. And both have something cruel about them. The sign resembles two twisted gallows; it threatens the spectator insidiously, as though it said, “You wait. You will be surprised at what will hang here”. In as far as the swastika has a revolving movement, this too contains menace; it recalls the limbs of the criminals who used to be broken on the wheel.

The word has absorbed the cruel and bloodthirsty elements of the Christian cross, as though it were good to crucify. Haken, the first part of the German word, recalls hakenstellen, an expression commonly used by boys for “tripping up”. Thus it forebodes the fall of many. For some it conjures up military visions of heel clicking; the German for “heels” being hacken. Thus, with the threat of cruel punishment, it combines an insidious viciousness and a hidden reminder of military discipline (183).

I’ll stipulate the putatively obvious reasons for Canetti’s framing of the matter this way and will still say that his deconstruction of this symbol nevertheless has spurious elements. It is not my object, of course, to perform any kind of rehabilitation on the German National Socialist use of the symbol.[4] It is also the case that, as I read this passage by him, it made me think of how it might be received by someone whose faith holds this symbol to be sacred.

Imagine were someone to travel through India or China or Indonesia (the most populous countries in the world) showing pictures of Jesus sodomizing children with the caption from Matthew 19:14[5] underneath, or Mary at the Cross sucking off her son with Isaiah 66:13,[6] John 19:26,[7] or Revelation 17:5[8] as a caption—a suggestion apparently so vile that not even the Internet has a ready example of it.[9] Surely the smart money would bet that some Christian adherents might get at least a little proprietary about the inaccurate impression such imagery might offer to other cultures of the faith.

Similarly, then, any claim that the svástika has passed beyond the pale of any legitimate or possible use—because it seems impossible to encounter it in the sense in which it is still used in India, Việt Nam, and elsewhere—functions as a tidy piece of orientalist racism in itself (as if the misappropriation of the symbol were somehow other than the National Socialists’ fault, or as if anyone who would still recognize it would be, if not a flagrant eugenicist simply by that very fact, then at least grossly insensitive).[10] The desire to reduce a religious symbol “to a joke” would not generally be greeted with applause.

With this in mind, we can address Canetti’s point that the स्वस्तिक has an effect as a sign and a word.

As a sign, it’s necessary to remember that this symbol has numerous incarnations materially different from its National Socialist version. The thing is very old archaeologically, and even in specific cases its meaning remains ambiguous.[11] The effort to make it into a cross, like a +, strikes me as desperate, particularly since the right- and left-facing symbols (卍 and 卐) are not rotationally related to each other. It is, rather, by placing the two symbols on top of each other that one arrives at ⊕.[12] Nonetheless, seeing it as related to the broken solar symbol may be too hasty.

Broken Solar Cross

If there is one thing that seems certain, it is that the solar symbol, i.e., a + or ⊕, does not immediately bring to mind turning or motion, while 卍 and 卐 do. This sense of turning, or even walking, is very apparent in:

It’s Walking

Insofar as one major sense of the symbol’s significance by no later than the fifth century CE is as shakti, then movement, change, manifestation, and power/energy/potency (associated with the sun or not) denote the emphasized symbolism—hence possibly the movement of the sun through the sky, as opposed to the stable, unchanging ground of change, i.e., the sun as denoted by + or ⊕, around which the earth revolves. With the usual more cosmic turn:

The Hindus represent it as the Universe in our own spiral galaxy in the fore finger of Lord Vishnu. This carries most significance in establishing the creation of the Universe and the arms as ‘kal’ or time,[13] a calendar that is seen to be more advanced than the lunar calendar … where the seasons drift from calendar year to calendar year (from here).[14]

In Chinese cartography, the symbol may be used to designate temples, while 萬 came to stand in for 卍 with a sense of “ten thousand” or “innumerable”.[15] With regard to the arms as they relate to ‘kal’[16] or time: “These four yugas, rotating a thousand times, comprise one day of Brahmā, and the same number comprise one night” (emphasis added, see note 13 below). Perhaps it is for the sake of this rotation that ‘kal’ in Tamil also indicates a car-wheel or carriage wheel, as well as a quarter (one-fourth of a whole); and in Sanskrit kālacakra[18] indicates “the wheel of time (time represented as a wheel which always turns round),” and/or “the wheel of fortune (sometimes regarded as a weapon)” and/or a “name for the sun”.

None of this accounts for the dots. If the dots are anything like bindi, then they likely associate with enhanced concentration, protection from demons, marriage or auspiciousness in general, and the fostering of intellect (toward the attainment of purity by right thoughts).[19]

I have heard the German National Socialist Hakenkreuz described as an ancient fertility symbol; it is obvious from the foregoing that fertility is not an important focus of the symbol at all. In its quadriform character, it suggests a totality, whether as the four purushartha (dharma, kama, artha, and mokṣa) that comprise a human life or the four yugas that rotate a thousand times to comprise a day of Brahmā. If it is related to the sun, it is because the sun itself moves; in Zoroastrianism, it symbolized the rotating sun, i.e., ongoing creation. It marks the historical movement of the sun; for the Hopi, it denoted their historical movement generally.

Purusha

It seems then more a symbol of time, not space (of which the Sun is the center)—or a multitude of people as a unity spread over both Time and Space—the manifestation of creation in Time and Space, not the source; in this sense, it is exactly shakti (manifestation) in contrast to source (puruṣa or पुरुष). The Hindu context that puts two of the purushartha, i.e., kama and artha (desire and wealth), as transitory with respect to this Time and two, i.e., dharma and mokṣa (or essential humanness and liberation, respectively), as non-transitory with respect to this Time is perfectly consistent. Thus the four dots may serve in a similar way, linking safeguards against the transitory, in the protection from demons (kama) and the distractions of marriage or auspiciousness in general (artha), and encouraging the pursuit of the non-transitory, as the attainment of purity by right thoughts (dharma) and liberation through determined concentration (mokṣa).

As Jung makes clear, a symbol is never constructed but is received. If it could be constructed, it would be exactly what Canetti calls it, a sign. As a symbol, the dotted svástika cannot be reduced only to a discursive description. It evokes more than it means—generating meaning out of its evocative power that is never fully contained by any description of it afterward.

Visually, then, as a symbol (as a “sign” in Canetti’s terminology), the most prominent feature of the German National Socialist Hakenkreuz (the absence of dots aside) compared to many others is its being tilted. It is not the only example of this, and to generalize what tilted or upright symbols might mean from all of the world-examples one can find would be hazardous at best; if there’s one thing clear from anthropology, it’s that humans might put potentially any accent or meaning on something in different contexts. The fact that solar symbols and tilted svástikas at least sometimes co-occur on the same piece points at least to a distinction of meaning they offer when deployed in cultural representations, even if we can’t recover that meaning itself. Of various images generally available, when one finds a tilted svástika, it tends to be of the backward-facing sort, the most familiar forward-facing sort being the German National Socialist symbol. If even numbers can represent periods of “stability” and “balance” (but also therefore “complacency” and “stagnation”) and odd numbers can represent periods of “instability” or “unbalance” (but also therefore “initiative” and “movement”), a similar distinction may obtain for the non-tilted versus tilted swastikas.

When Canetti says it represents a twisted gallows, it must surely be this kind of gallows he has in mind (and not the kind more set up like a child’s swing set):

gallows

However, this sentiment would apply better to the untilted swastika, i.e., precisely the one that German National Socialists did not use—a tilted gallows wouldn’t work very well. Moreover, while Canetti acknowledges the rotating movement of the swastika generally, that “it recalls the limbs of the criminals who used to be broken on the wheel” seems merely gratuitous. He has already mentioned the bloodthirstiness of Christianity, which takes as its central symbol the point of intersection where its god was tortured, so he seems to be conflating things. There is no torture associated with the svástika whatsoever. One would have to invoke Puruṣa, as the primal or cosmic giant sacrificed to create the whole of the cosmos,[20] but linking puruṣa in this sense to that which is more adequately represented by shakti would seem to rule this out. Destruction by wheel suggests also the (sign and word of the) juggernaut associated with the massive temple cars of India’s Ratha Yatra:

juggernaut

In this, we can see (just one of countless examples of) how sign changes to word:

In colloquial English usage [juggernaut] is a literal or metaphorical force regarded as mercilessly destructive and unstoppable. Originating ca. 1850, the term is a metaphorical reference to the Hindu Ratha Yatra temple car, which apocryphally was reputed to crush devotees under its wheels.[21] The figurative sense of the English word, with the sense of “something that demands blind devotion or merciless sacrifice” was coined in the mid-19th century. For example, it was used to describe the out-of-control character Hyde in Robert Louis Stevenson’s Dr. Jekyll and Mr. Hyde.

So the temple car itself (as a juggernaut, “something that demands blind devotion or merciless sacrifice”) is viewed in a sense of menace just as Canetti views the svástika. Here, the apocryphal sublimity of ecstatic death—the kind Canetti provides reports of for Shia Islam—as viewed (in this case) by fourteenth-century Europeans gets turned by a nineteenth-century European into a sign of sexual psychopathology.[22] So similarly Canetti takes a numinous symbol, with the notes of cannibalism and human sacrifice appended to it by European writers, and turns it into a doubly parodic image of illegality: whether in the kinder and gentler form of capital punishment offered by the eighteenth century and beyond (the gallows) or the more “barbaric” earlier method (of the Catherine or breaking wheel, a primarily European form of torture and execution, it seems).

As for the word, Canetti, dwelling on the German version of it (Hakenkreuz), insists it has absorbed the cruel and bloodthirsty elements of intolerant monotheism:

Haken, the first part of the German word, recalls hakenstellen, an expression commonly used by boys for “tripping up”. Thus it forebodes the fall of many. For some it conjures up military visions of heel clicking; the German for “heels” being hacken. Thus, with the threat of cruel punishment, it combines an insidious viciousness and a hidden reminder of military discipline (183).

Etymologically, the word svástika derives from elements that originally betokened, and currently still betoken, the following:

“su” meaning “good” or “auspicious,” “asti” meaning “to be,” and “ka” as a suffix. The swastika literally means “to be good”. Or another translation can be made: “swa” is “higher self”, “asti” meaning “being”, and “ka” as a suffix, so the translation can be interpreted as “being with higher self” (from here). … [Taken together, it means:] a kind of bard (who utters words of welcome or eulogy);  any lucky or auspicious object, especially a kind of mystical cross or mark made on persons and things to denote good luck, a swastika;  the crossing of the arms or hands on the breast;  a bandage in the form of a cross; a dish of a particular form; a kind of cake; a triangular crest-jewel; the meeting of four roads, a crossroad; a particular symbol made of ground rice and formed like a triangle (it is used in fumigating the image of Durgā , and is said to symbolize the liṅga); a species of garlic; cock;[23] libertine (from here)

This is nowhere close to “Thus, with the threat of cruel punishment, it combines an insidious viciousness and a hidden reminder of military discipline” (183), and it also has nothing to do with “heels,” unless that happens to be the libertine’s area of interest.

And if it seems gratuitous to compare Sanskrit or Hindi meanings of the word to Canetti’s expostulations about it in German, then it is equally gratuitous for him to expostulate about what a swastika is through the German word “Hakenkreuz,” much less through the German National Socialist misprision of the symbol.

Inflation and the Crowd

In an inflation something happens which was certainly never intended and which is so dangerous that anyone with any measure of public responsibility who is capable of foreseeing it must fear it. It is a double devaluation originating in a double identification. The individual feels depreciated because the unit on which he relied, and with which he had equated himself, starts sliding; and the crowd feels depreciated because the million is. … As the millions mount up, a whole people, numbered in millions, becomes nothing (186).

“No one forgets a sudden depreciation of himself” (187)—overgeneralization notwithstanding—also too enthusiastically makes the self = money equation. He proposes this as a root (if not the root) of Hitler’s Germany:

[the Jewish people’s] long-standing connection with money, their traditional understanding of its movements and fluctuations, their skill in speculation, the way they flocked together in money markets, where their behavior contrasted strikingly with the soldierly conduct which was the German ideal—all this, in a time of doubt, instability and hostility to money, could not but make them appear dubious and hostile. The individual Jew seemed “bad” because he was on good terms with money when others did not know how to manage it and would have preferred to have nothing more to do with it. If the inflation had led only to the depreciation of Germans as individuals, the incitement of hatred against individual Jews would have sufficed. But this was not so, for, when their millions tumbled, the Germans also felt humiliated as a crowd. Hitler saw this clearly and therefore turned his activities against the Jews as a whole (187).

Canetti is making a literal identification of depreciation, but the first thing to note in his rhetoric is the error: it is not that the million are worth nothing; it is that it takes a million to have the value previously given to less than a million. My dollar or my self is devalued, so it takes many more dollars and many more selves, then, to get to the same point I used to be able to get to.
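To put the correction in explicitly arithmetical terms (a minimal sketch with an abstract depreciation factor \(k\), not figures drawn from Canetti):

\[
\text{one mark} \;\mapsto\; \tfrac{1}{k}\ \text{of its former purchasing power}, \qquad k\ \text{marks} \;\mapsto\; \text{one former mark}.
\]

The unit is diluted, not annihilated; “worth nothing” would appear only in the limit \(k \to \infty\), which is presumably the limit Canetti’s rhetoric mistakes for the actual case.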

However, what German National Socialist rhetoric aimed for may not fit in terms of Canetti’s sense of inflation and depreciation. Inflation means that it takes more of a unit value to get the original purchasing-power value; so, in Canetti’s terms, whereas previously one might blame a small number of people for whatever problem wanted an explanation, now it would take a vast number of people, and thus the scope of social response becomes correspondingly vaster. The situation, however, more resembles YHWH’s neurotic response to Job, as Jung (1952)[24] makes clear.

Jung notes how YHWH appears to address Job as an equal, completely forgetting or choosing to ignore that he is picking on someone well-nigh infinitely weaker than he is. In Jung’s terms, YHWH is nothing but one massive identification with archaic unconscious material and thus, literally, embodies the godlikeness that Jung identified as a typically necessary first step on the way to overcoming neurotic fixation.

For the development of personality, then, strict differentiation from the collective psyche is absolutely necessary, since partial or blurred differentiation leads to an immediate melting away of the individual in the collective. There is now a danger that in the analysis of the unconscious the collective and the personal psyche may be fused together, with, as I have intimated, highly unfortunate results. These results are injurious both to the patient’s life-feeling and to his fellow men, if he has any power over his environment. Through his identification with the collective psyche he will infallibly try to force the demands of his unconscious upon others, for identity with the collective psyche always brings with it a feeling of universal validity – ‘godlikeness’ – which completely ignores all the differences in the personal psyche of his fellows (Two Essays on Analytical Psychology, ¶240).[25]

One may well ask in this regard if something like the Jivaro tsantsa—the shrunken head acquired and made by one Jivaro group from its real or perceived enemies that Canetti previously drew attention to (pp. 132–4)—involves an inflation or a deflation. If we want to play with the notion, we would note that the tsantsa grows in power as it shrinks from the normal size of a human head down to something roughly the size of an orange, as Canetti emphasizes more than once. Canetti similarly emphasizes, more than once, the vast importance of the tsantsa despite its tiny size.[26] The faithful Job vis-à-vis YHWH, as Jung makes clear, is similarly fascinating despite his infinitesimal being. In the political economy of intolerant monotheisms, הַשָּׂטָן is but a single devil blown up into the total quintessence of evil.

Tsantsa

The difference I would suggest is whether bigotry is State-sanctioned or not. For example, insofar as history is written by those who are taking note of things, the culture of “gay brothels” (or Molly houses) in England tends to get dated to the eighteenth century.[27] Spencer (1996)[28] notes that the goings-on at Mother Clap’s and other molly houses were tolerated: “Margaret Clap’s house was described as ‘the public character of a place of rendezvous of sodomites’, and ‘notorious for being a molly house’. The neighbours of molly houses knew what was going on, for they were fairly easy to find and visitors were not grilled on their habits and preferences. So they were not hidden” (191). It was only later, in part due to bigoted reaction to this visibility, that the especially harsh antihomosexuality laws got put on England’s law books. Whatever denigration of homosexuals is involved in this, it was accompanied not simply by hyperinflation of the danger posed by homosexuals—Bray (1982)[29] makes abundantly clear the absolutely strident anti-sodomy sentiment in England that went hand-in-hand with almost zero prosecutions for the stuff over centuries—but also by a large-scale political will to act, finally, on that sentiment. A similar pattern seems visible in genocides, where simmering cultural prejudices play themselves out as they will but remain essentially, if however destructively, on the personal level until a State takes them up.

Even on the “personal level,” visibility of course is the sine qua non. The lesbian, African, homosexual, Bosnian, Armenian, Jew, Palestinian, Hmong, Tutsi, &c, who could pass might manage not to be molested by a blood-thirsty neighbor, neighborhood, or community. As even sundown towns in the United States attest, sometimes a community might adopt a “passable” African-American or homosexual who can’t otherwise pass, by whatever strange vicissitude it is that makes that individual into an acceptable exception. In Moore et al.’s (2005)[30] V for Vendetta, an openly gay character is obviously the exception to the rule (perhaps for the narrative purpose of making the hypocrisy of the regime that much more evident). Or there is the anecdote from prison—where child molesters are regularly held in the highest contempt and often are killed, simply to make a name for oneself or as a matter of principle—of the one child molester who was left alone because he could supply the best tattoo ink.[31]

Or generally, wherever bigotry is a matter of policy enforceable by an authority (be that a father, a principal, a church leader, a neighborhood vigilance committee member, a mayor, a president of a corporation or county board, a State representative, a governor, a leader of a nation, a NATO general, &c), the Constitutional violence that results explicitly involves dehumanization (deflation of the value of individual people and groups) accompanied by demonization (hyperinflation of the danger the visible or recognizable person or group poses). The problem for the individual (or group) in that circumstance is as much the depreciation they experience as human beings as the grotesque overappreciation that makes them into the source of all the present ills of their milieu (whether family, school, church, neighborhood, town, county, state, nation, or world).

Endnotes
[1] All quotations are from Canetti, E. (1981). Crowds and Power (trans. Carol Stewart), 6th printing. New York, NY: Noonday Press (paperback).

[2] The ongoing attempt of this heap is to get something out of Canetti’s book, and that of necessity means resorting to the classic sense of the essay, as an exploration, using Canetti’s book as a starting point. I can imagine that the essayistic aspect of this project can be demanding—of patience, time, &c. The point of showing an essay, entertainment value (if any) aside, is first and foremost not to be shy about showing the intellectual scaffolding of one’s exposition as much as possible. This showing, however cantankerous the exposition, affords the non-vanity of allowing others to witness all of the missteps, mistakes, false starts, and the like—not in the interest of merely providing a full record (though some essayists may do so out of vanity or mere thoroughness, scholarly drudgery, or self-involvement) but mostly so that readers may be exasperated enough by the essayist’s stupidities to correct his or her errors and thus contribute to our collective better human understanding of ourselves.

[3] Adorno, T. and Canetti, E. (2003). Crowds and power: Conversations with Elias Canetti (trans. R. Livingstone) in R. Tiedemann (ed.) Can one live after Auschwitz? A philosophical reader, pp. 182–201, Stanford, CA: Stanford University Press.

[4] The nice story that they got this symbol of fertility and life backward (by reversing the direction of the arms) is just that, a nice story, since the symbol may be found in either direction in the east.

[5] “Suffer little children, and forbid them not to come unto Me, for of such is the Kingdom of Heaven.”

[6] “As one whom his mother comforteth, so will I comfort you; and ye shall be comforted in Jerusalem.”

[7] “When Jesus therefore saw his mother and the disciple standing by whom He loved, He said unto His mother, ‘Woman, behold thy son!’”

[8] “And upon her forehead was a name written: Mystery, Babylon the Great, Mother of Harlots and Abominations of the Earth”

[9] The closest thing being a joke article. Find me better examples if you can; I didn’t search that hard or long.

[10] Not even Germany condemns the use of the स्वस्तिक in a religious context.

[11] One might survey the archaeological record, the proposed origins, and other details here.

[12] A crossed circle being the equivalent of a crossed square, so far as symbolic interpreters assure us on this point.

[13] (from here):

The duration of the material universe is limited. It is manifested in cycles of kalpas. A kalpa is a day of Brahmā, and one day of Brahmā consists of a thousand cycles of four yugas, or ages: Satya, Tretā, Dvāpara and Kali. The cycle of Satya is characterized by virtue, wisdom and religion, there being practically no ignorance and vice, and the yuga lasts 1,728,000 years. In the Tretā-yuga vice is introduced, and this yuga lasts 1,296,000 years. In the Dvāpara-yuga there is an even greater decline in virtue and religion, vice increasing, and this yuga lasts 864,000 years. And finally in Kali-yuga (the yuga we have now been experiencing over the past 5,000 years) there is an abundance of strife, ignorance, irreligion and vice, true virtue being practically nonexistent, and this yuga lasts 432,000 years. In Kali-yuga vice increases to such a point that at the termination of the yuga the Supreme Lord Himself appears as the Kalki avatāra, vanquishes the demons, saves His devotees, and commences another Satya-yuga. Then the process is set rolling again. These four yugas, rotating a thousand times, comprise one day of Brahmā, and the same number comprise one night. Brahmā lives one hundred of such “years” and then dies. These “hundred years” total 311 trillion 40 billion (311,040,000,000,000) earth years. By these calculations the life of Brahmā seems fantastic and interminable, but from the viewpoint of eternity it is as brief as a lightning flash. In the Causal Ocean there are innumerable Brahmās rising and disappearing like bubbles in the Atlantic. Brahmā and his creation are all part of the material universe, and therefore they are in constant flux (Bhagavad-Gītā As It Is 8.17).
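For what it’s worth, the arithmetic of this quotation checks out, provided one assumes (as such calculations conventionally do, though the passage does not say so) a 360-day year for Brahmā:

\[
1{,}728{,}000 + 1{,}296{,}000 + 864{,}000 + 432{,}000 = 4{,}320{,}000\ \text{years (one cycle of four yugas)}
\]
\[
4{,}320{,}000 \times 1{,}000 \times 2 = 8{,}640{,}000{,}000\ \text{years (one day and night of Brahmā)}
\]
\[
8{,}640{,}000{,}000 \times 360 \times 100 = 311{,}040{,}000{,}000{,}000\ \text{earth years.}
\]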

[14] (also): “The most traditional form of the swastika’s symbolization in Hinduism is that the symbol represents the purusharthas: dharma (that which makes a human a human), artha (wealth), kama (desire), and moksha (liberation). All four are needed for a full life. However, two (artha and kama) are limited and can give only limited joy. They are the two closed arms of the swastika. The other two are unlimited and are the open arms of the swastika.”

[15] (from here): “A simplification of the Buddhist symbol 卍 introduced when Buddhism came to China. 卍 was given the same pronunciation as 萬 meaning 萬德 (many virtues); so, 卍 came to be used as a simplified form of 萬. 万 is simply a scribal form (script) of 卍. The meaning ten thousand for 万 is a borrowing.”

[16] More precisely, काल (kālá) :: m. (√3. कल् “to calculate or enumerate”) [ifc. f(आ). Ṛg-Veda-Prātiśākhya], (1) a fixed or right point of time; (2) a space of time; (3) time in general [Atharva-Veda xix, 53 & 54; Śatapatha-Brāhmaṇa &c.]

[17] The word yuga itself is a symbolic name for the number four.

[18] कालचक्र (kālacakra):: (1) n. the wheel of time (time represented as a wheel which always turns round) [Mahābhārata, Harivaṃśa, &c]; (2) a given revolution of time, cycle (according to the Jainas, the wheel of time has twelve aras or spokes and turns round once in 2,000,000,000,000,000 sāgaras of years ; cf. ava-sarpiṇī and ut-sarpiṇī); (3) the wheel of fortune (sometimes regarded as a weapon) R.; (4) name of a tantra; (5) name of the sun [Mahābhārata iii , 151]

[19] If the addition of dots is indeed more frequently encountered in Tibetan versions of the symbol, the need for protection from demons may be quite apposite as Bön has a tendency to be especially over-run by elementals, daemons, spirits, and gods.

[20] This led to speculation about human sacrifice. However:

The Shatapatha Brahmana is a prose text associated with the White Yajur Veda that provides detailed descriptions of Vedic rituals. In its description of the Purushamedha, the text clearly states that the victims are supposed to be released unharmed:

Then a voice said to him, ‘Purusha, do not consummate (these human victims): if thou wert to consummate them, man (purusha) would eat man.’ Accordingly, as soon as fire had been carried round them, he set them free, and offered oblations to the same divinities.

Yet there are Vedic texts that contain instructions on how such rituals are to be performed. The texts are not consistent on this point. Archeological evidence of human skulls and other human bones at the site of fire altars at Kausambi were once interpreted as remains of ritual human sacrifice, however, this has long since been disproved (see here).

The injunction in the Shatapatha Brahmana to release the victims is another reason why scholars have speculated that the Purushamedha originally involved actual killing of humans. Alfred Hillebrandt, writing in 1897, claimed that the yajna involved real human sacrifices, which were suppressed over time. Albrecht Weber, writing in 1864, came to a similar conclusion. Julius Eggeling, writing in 1900, could not imagine that actual human sacrifices occurred. Hermann Oldenberg, writing in 1917, claimed that the Purushamedha was simply a priestly fantasy, but that sacrifices may have occurred nonetheless. Willibald Kirfel, writing in 1951, claimed that an early form of Purushamedha must have preceded the Ashvamedha. According to Jan Houben, the actual occurrence of human sacrifice would be difficult to prove, since the relevant pieces of evidence would be small in number. ¶ However, in a late Vedic Brahmana text, the Vadhula Anvakhyana 4.108 (ed. Caland, Acta Orientalia 6, p. 229) actual human sacrifice and even ritual anthropophagy is attested: “one formerly indeed offered a man as victim for Prajāpati,” for example Karṇājāya. “Dhārtakratava Jātūkarṇi did not wish to eat of the ida portion of the offered person; the gods therefore exchanged man as a sacrificial animal with a horse.” References to anthropophagy are also found in Taittiriya 7.2.10 and Katha Samhita 34.11 (see here).

The reason for this extended quotation is two-fold. First, it is amusing that writers on cannibalism in Vedic practice are primarily cited from 1917 and before. Second, from Spencer and Gillen (1904) we also see that in a number of dream-time stories amongst various aboriginal tribes in Australia the dream-time ancestors behave in distinctly different ways than current incarnations do: most precisely in the fact of the general, if not absolute, prohibition on consuming one’s totem animal. So these examples may caution us, where traditions record practices from the equivalent of a dream-time, against taking what is recorded there as the exemplar for current practice; exactly the opposite may be true.

[21] The further vicissitudes of this word are not unnoteworthy:

The word is derived from the Sanskrit Jagannātha (Devanagari जगन्नाथ) “world-lord”, one of the names of Krishna found in the Sanskrit epics. ¶ The English loanword juggernaut in the sense of “a huge wagon bearing an image of a Hindu god” is from the 17th century, inspired by the Jagannath Temple in Puri, Odisha, which has the Ratha Yatra (“chariot procession”), an annual procession of chariots carrying the murtis (statues) of Jagannâth, Subhadra and Balabhadra (Krishna’s elder brother). ¶ The first European description of this festival is found in the 14th-century The Travels of Sir John Mandeville, which apocryphally describes Hindus, as a religious sacrifice, casting themselves under the wheels of these huge chariots and being crushed to death. Others have suggested more prosaically that the deaths, if any, were accidental and caused by the press of the crowd and the general commotion. ¶ The figurative sense of the English word, with the sense of “something that demands blind devotion or merciless sacrifice” was coined in the mid-19th century. For example, it was used to describe the out-of-control character Hyde in Robert Louis Stevenson’s Dr. Jekyll and Mr. Hyde.

[22] (however unsuccessful that literary adventure might have been)

[23] Presumably in the sense of a male bird.

[24] Jung, CG (2010). Answer to Job. (Intr. Sonu Shamdasani, paperback Fiftieth Anniversary Edition). Reprinted from Jung, C.G. (1968). Psychology and religion: West and East. (Vol. 11, Collected Works., 2nd ed., Trans. R.F.C. Hull). Princeton: Princeton University Press. The essay was first composed in 1952.

[25] From Jung, CG (1966). Two essays on analytical psychology. 2d ed., rev. and augmented. Princeton, N.J.: Princeton University Press, ¶240.

[26] Amongst the aboriginal tribes of Australia observed by Spencer and Gillen (1904), the spirits of ancestors are taken to be approximately the size of a grain of sand yet are of abiding importance. In Osborne’s (1993) Poisoned Embrace (review here), he recounts at one point the calculation of angels and devils that preoccupied the Medieval mind. Canetti notes “the importunity of these devils was as monstrous as their numbers. Whenever Richalm, a Cistercian abbot, closed his eyes he saw them around him as thick as dust. There were more precise estimates of their numbers, two of which are known to me, but they differ widely: one is 44,653,569, the other is 11 billion” (44). Our superstition with germs is the modern equivalent. And while it is easy to turn the minuscule into multitudes, the operation is not necessary. The butterfly effect, however badly understood in its popular conception, is testament that something tiny can have enormous consequences.

[27] (see here.)

[28] Spencer, C (1996). Homosexuality: a history. London: Fourth Estate.

[29] Bray, A (1982). Homosexuality in Renaissance England. London: Gay Men’s Press.

[30] Moore, A., Lloyd, D., Whitaker, S., Dodds, S., O’Connor, J., Craddock, S., Fell, E., & Weare, T. (2005). V for vendetta. New York: Vertigo/DC Comics.

[31] Being visible means only being recognizable, as those charged with witchcraft or driven out of villages as scapegoats can attest. One can say that this kind of violence (due to visibility or recognition) is finally a matter of the constitution (little C) of a community and not the Constitution (as the law of the land). Most assuredly, a cultural constitution that practices Jew-baiting or antihomosexual violence lays the groundwork for Constitutional sanction of such behavior; it’s by no means a non-problematic element, and we may anticipate that bigotry will persist because one can’t legislate away bigotry. The point in the present case is that the devaluation Canetti is describing—in the form of constitutional (little C) belittlement as it were—capitalizes on that belittlement as hyperinflation when it becomes a matter of the Constitution (capital C) of a community.

Summary (in One Sentence)

That the New York Times Book Review calls this work “meticulously researched” and “judicious” even though the author:

  • in one paragraph, provides a forgery of Jung’s text in lieu of an actual one
  • builds a case by juxtaposing widely disparate paragraphs out of context
  • misquotes, badly paraphrases, and effectively misrepresents Jung’s text
  • edits the text in such a way as to invert the sense of a cited passage in Jung—making what Jung takes to be the beginning of therapy into its end

suggests it does not in fact meet even a minimal criterion for “meticulously researched” or “judicious”.

Pre-Disclaimer

Last year in 2012, I set myself the task to read at least ten pages per day, and now I’m not sure if I kept up. I have the same task this year, and I’ve added that I will write a book reaction for each one that I finish (or give up on, if I stop). These reactions will not be Amazon-type reviews, with synopses, background research done on the author or the book itself, unless that strikes me as necessary or if the book inspired me to that when I read it. In general, these amount to assessments of in what ways I found the book helpful somehow.

Consequently, I may provide spoilers, may misunderstand books or get stuff wrong, or get off on a gratuitous tear about the thing in some way, &c. I may say stupid stuff, poorly informed stuff. There are some in the world who expect everyone to be omniscient and can’t be bothered to engage in a human dialogue toward figuring out how to make the world a better place. To the extent that each reaction I offer for a book is a here’s what I found helpful about this, then it is further up to us (you, me, us) to correct, refine, trash and start over, this or whatever it is we see as potentially helpful toward making the world a better place. If you can’t be bothered to take up your end of that bargain, that’s part of the problem to be solved.

A Reaction To (Only a Part of): Hayman’s (2001)[1] A Life of Jung

In researching a Jungian term—”godlikeness”—I stumbled across the passage below from Hayman’s putative biography of Jung. Whenever one finds books about Jung, one may expect one of two things: apologists or slanderers—tertium non datur. We can reserve judgment on which Hayman is, because my point focuses rather on the New York Times Book Review’s declaration of this book as “meticulously researched” and “judicious”.[2]

Let me be clear. I have only read the pages around the cited passage below, but my intent is not to address the book as a whole. Rather, I’m noting that serious enough errors may be found in a randomly discovered passage of this book[3] to make the claim that it is meticulously researched or judicious seem untenable in general. It seems very unlikely that this would be the only such passage in Hayman’s book.

Toward this end, I offer two parts: sins of omission (non-meticulous misquotations) and sins of commission (injudicious misrepresentations). I address the former first, because misrepresentations might be argued as simple ignorance on the part of a writer, but to incorrectly quote the writings of an author falls short of the most elementary standard for intellectual work, much less being evidence of “meticulous” work.

I first provide the whole passage (from page 207) from Hayman (2001), for reference and orientation only. It needn’t be read for sense yet, but only to give an overview of what’s being examined. I do suggest you at least note the (seemingly four) passages where Jung’s words are quoted (the block quotes and the two phrases in single quotation marks).

The Text: p. 207

Jung explains the dangers of trying to fuse the collective and the personal psyche in analyzing a patient’s unconscious. It can be

Injurious both to the patient’s life-feeling and to his fellow men, if he has any power over his environment. Through his identification with the collective psyche he will infallibly try to force the demands of his unconscious upon others, for identity with the collective psyche always brings with it a feeling of universal validity – ‘godlikeness’ – which completely ignores all the differences in the psychology of his fellows.

Jung may have been telling himself not to abuse his power over people who believed he could put them in touch with the god inside them, but he was also trying to contradict rumours that he was unstable.

According to him, the neurotic participates more fully than the normal person in the life of the unconscious. By reinstating what has been repressed, analysis enlarges consciousness to include ‘certain fundamental, general and impersonal characteristics of humanity’.

Repression of the collective psyche is essential to the development of the personality, since collective psychology and personal psychology exclude one another up to a point. Individual personality is based on a persona—the mask worn by the collective psyche to mislead other people and oneself into believing that one is not simply acting a role through which the collective psyche speaks.

In schizophrenia, the unconscious usurps the reality function, substituting its own reality. For the sane patient, there are two possible escapes from the condition of godlikeness. One is to restore the persona through a reductive analysis; the other is to ‘explain the unconscious in terms of the archaic psychology of primitives’  (207).

Sins of Omission

The absence of page numbers or references provides the most immediately glaring omission,[4] giving little to no clue whence Hayman three times quotes passages from Jung’s writings (although doubtless you counted four quotations in the above—two block quotations and the two phrases in Hayman’s single-quote quotation marks).[5] This mystery of three becoming four[6] arises because Hayman (2001) uses block quotation for:

Repression of the collective psyche is essential to the development of the personality, since collective psychology and personal psychology exclude one another up to a point. Individual personality is based on a persona—the mask worn by the collective psyche to mislead other people and oneself into believing that one is not simply acting a role through which the collective psyche speaks.

But nowhere in Jung’s writing does the above quotation occur; Hayman rather concatenates and paraphrases widely separated material[7] (the details of this are provided below in note 9).

This is decidedly not meticulous work.

By contrast, the first passage cited originates in Jung’s writings (Collected Works 7, ¶240):[8]

Injurious both to the patient’s life-feeling and to his fellow men, if he has any power over his environment. Through his identification with the collective psyche he will infallibly try to force the demands of his unconscious upon others, for identity with the collective psyche always brings with it a feeling of universal validity – ‘godlikeness’ – which completely ignores all the differences in the psychology of his fellows.

with the exception of one phrase—“psychology of his fellows”—which is “personal psyche of his fellows” in Jung’s original. Mistranscribing “psychology of his fellows” for “personal psyche of his fellows” does actually change the emphasis of the passage—specifically, that Jung is contrasting effects of the collective psyche and the effects of the personal psyche—but since meticulous means “characterized by very precise, conscientious attention to details,”[9] even this seemingly small error runs contrary to a standard of meticulousness.

While the publication history for most of Jung’s texts exhibits all of the typical variance (revision, early and late texts, &c) one finds in most writers (and virtually all major ones), in this particular case—wherever Hayman got this passage from—he follows exactly the version included in Jung’s (1966) Collected Works (except for the British English usage of single quotation marks around “godlikeness”). I assume this is his source.[10]

A problem arises, however, when he cites the phrase “explain the unconscious in terms of the archaic psychology of primitives” without taking account of the fact that it comes from Jung’s posthumous papers, written nearly half a century prior, and never prepared by him (or even intended) for publication.[11] This is like criticizing Ulysses because an earlier draft is at variance with the published text. Such a move does not constitute meticulous research.

When Hayman (2001) writes: “According to him, the neurotic participates more fully than the normal person in the life of the unconscious,” this may be plagiarism. The court of public opinion, or perhaps an academic professor, can decide whether this paraphrase, “the neurotic participates more fully than the normal person in the life of the unconscious,” should have been further modified or restored to Jung’s original text: “the latter participates to a greater extent in the life of the unconscious than does the normal person” (CW7, ¶464f1, or see here). (In Jung, “the latter” refers to “the neurotic”.)

Similarly, and this will seem like a small point, Hayman paraphrases and quotes,

By reinstating what has been repressed, analysis enlarges consciousness to include ‘certain fundamental, general and impersonal characteristics of humanity’.

and then proceeds to the bogus quotation from above. The passage in Jung, which also includes a significant and lengthy footnote, actually reads:

By continuing the analysis we add to the personal consciousness certain fundamental, general and impersonal characteristics of humanity, thereby bringing about the inflation¹ I have just described (CW7, ¶243, emphasis added).

This edit on Hayman’s part serves as segue to the next section, because whether it amounts to a sin of omission or commission might be debated.

Sins of Commission

While Hayman’s decision to elide Jung’s (content-significant) footnote[12] in this passage may be acceptable from the point of view of trying to summarize the matter, nonetheless the above edit decidedly changes the sense of Jung’s passage.[13]

Hayman’s version gives the impression that analysis has as its end the enlarging of consciousness with impersonal characteristics. In fact, the reverse holds. For Jung, this enlargement (inflation) may sometimes be a necessary first step, but whether it is a first step or one is already aswim in it, it denotes, whenever it occurs, the most manifest symptom to be addressed by therapy. Hayman’s paraphrase makes it sound as if enlargement is the point of arrival for therapy, whereas it is rather the point of departure. Omitting the phrase “thereby bringing about the inflation I have just described” creates this impression.

Notwithstanding that one may frequently wish to summarize another writer, especially one as verbose as Jung, in the present case Hayman’s (2001) decision to elide the footnote (included in note 14 below or also here) also importantly changes the sense of the passage cited, precisely due to the emphasis Jung puts on inflation as a crucial term at this point in his psychological approach. For Hayman to repress the mention of inflation while yoking together incongruous passages from different essays smacks of inflation itself—and not simply for being the Freudian error par excellence but, as Jung notes in his footnote, a common enough occurrence for humans generally. So for Hayman to have gone to all of this trouble simply to assert:

Jung may have been telling himself not to abuse his power over people who believed he could put them in touch with the god inside them, but he was also trying to contradict rumours that he was unstable.

one wonders not only where the evidence is for the two assertions here but also whether this might rather be a case of Hayman mixing “the vapid with the gravid, [such that] when he ventures an opinion, it is often silly.”[2] If Hayman’s point is that Jung, as a human being, was subject to such psychological phenomena as Jung’s psychology observed and described as common to human beings, then this is indeed saying very little. One of the great strengths of Jung’s psychology, in fact, is its capacity to explain even its exponent’s lapses, shadows, possessions, and the like in usefully analytic terms. The same does not seem to be true of Freudian analysis—most of all in the denial of denial on the part of its proponents.

If “Jung may have been telling himself not to abuse his power over people who believed he could put them in touch with the god inside them,” he made the warning publicly so that we all might benefit from not allowing ourselves to be possessed by godlikeness—an apt warning utterly more consequential than any merely trite implication that Jung wanted to convince people he was stable.[14] Hayman’s unsupported remarks here seem not judicious at all.

Regarding the bogus paragraph that begins, “Repression of the collective psyche is essential to the development of the personality,” to decide to offer a cobbled-together paraphrase as if it were the cited author’s original work amounts to a forgery, and is thus most decidedly not judicious. As an introduction to Hayman’s work, this is a very, very poor first impression, and it opens to question all of the other biographies he has written—the sheer number of them perhaps already being too many to deem credible. But let me be clear. Nothing demands a biographer do better than hackwork—popularizers of ideas and lives, the hagiographers and polemicists, are famous or notorious for such stuff. And when one writes an essay—as one might indeed meditate on the context and significance of every single word Jung ever uttered, wrote, or implied—broad-ranging conclusions might be reached without any especial demand for scholarly demonstration, because such things are generally beyond scholarly demonstration. Given that a repeated complaint about Hayman’s book is that it is not a biography (leaving out details of Jung’s relationship with his children, for instance) but rather more of a broadside on his work—whether this is true or not—such shenanigans by Hayman start to smack more of ideology than biography. If that’s the ax he has to grind, then it’s far less likely that we’re in the presence of something judicious here.

It’s a particularly nice irony, then, that Hayman uses the verb “mislead” in his misquotation:

a persona—the mask worn by the collective psyche to mislead other people and oneself into believing that one is not simply acting a role through which the collective psyche speaks.

Whereas Jung’s original—we have to guess where exactly Hayman is elaborating his paraphrase from—reads:

[the persona] is, as its name implies, only the mask worn by the collective psyche, a mask that feigns individuality, making others and oneself believe that one is individual, whereas one is simply acting a role through which the collective psyche speaks (CW7, [¶245], ¶466, emphasis in original)

If I wear a tail in public and that has the consequence of making people believe something, then that differs from me wearing a tail to mislead people into believing something. This is the crucial difference in the two passages. Per Hayman, if Jung had a persona (and we all do), then that persona misled other people, although Jung states immediately after the original passage I’ve cited above, “When we analyse the persona we strip off the mask” ([¶246], ¶466). Once again, Hayman implies that the persona is an end, something that misleads, whereas for Jung it is precisely the thing not to be taken seriously if progress in individuation is going to occur. Insofar as judicious means “having, or characterized by, good judgment or sound thinking,”[15] this kind of error betokens both poor judgment and unsound thinking.

Finally, Hayman (2001) concludes this passage by writing:

For the sane patient, there are two possible escapes from the condition of godlikeness. One is to restore the persona through a reductive analysis; the other is to ‘explain the unconscious in terms of the archaic psychology of primitives’ (207)

Hayman’s claim here gives the impression that Jung’s (non-reductive) analysis would “explain the unconscious in terms of the archaic psychology of primitives”. It seems rather that Jung critiques both of these approaches. And while significant differences prevail between the earlier (1916) and later (1938) texts, fortunately we needn’t go into every detail in this case.[16] Despite these differences, what both versions of Jung’s essay share in common at this juncture is an engagement with Freud’s and Adler’s theories, as the reductive analyses that Jung critiques. From the 1938 version of the essay:

Both theories fit the neurotic mentality so neatly that every case of neurosis can be explained by both theories at once. This highly remarkable fact, which any unprejudiced observer is bound to corroborate, can only rest on the circumstance that Freud’s “infantile eroticism” and Adler’s “power drive” are one and the same thing, regardless of the clash of opinions between the two schools. It is simply a fragment of uncontrolled, and at first uncontrollable, primordial instinct that comes to light in the phenomenon of transference. The archaic fantasy-forms that gradually reach the surface of consciousness are only a further proof of this (¶256).

In the 1916 version, Jung wrote (watch for the “or”):

The unbearable state of identity with the collective psyche drives the patient, as we have said, to some radical solution. Two ways are open to him for getting out of the condition of “godlikeness.” The first possibility is to try to re-establish regressively the previous persona by attempting to control the unconscious through the application of a reductive theory—by declaring, for instance, that it is “nothing but” repressed and long overdue infantile sexuality which would really be best replaced by the normal sexual function. This explanation is based on the undeniably sexual symbolism of the language of the unconscious and on its concretistic interpretation. Alternatively the power theory may be invoked and, relying on the equally undeniable power tendencies of the unconscious, one may interpret the feeling of “godlikeness” as “masculine protest,” as the infantile desire for domination and security. Or one may explain the unconscious in terms of the archaic psychology of primitives, an explanation that would not only cover both the sexual symbolism and the “godlike” power strivings that come to light in the unconscious material but would also seem to do justice to its religious, philosophical, and mythological aspects (¶471, emphasis added).

Hayman’s misunderstanding, as I see it, is in mistaking Jung’s “or” as the pivot point in his argument, i.e., that option (1) proposes two (inadequate) reductive theories or option (2) proposes to “explain the unconscious in terms of the archaic psychology of primitives, an explanation that would not only cover both the sexual symbolism and the ‘godlike’ power strivings that come to light in the unconscious material but would also seem to do justice to its religious, philosophical, and mythological aspects” (¶471). Hayman fails to appreciate Jung’s use of the verb “explain”—Jung had few illusions, if any, that one could explain the unconscious, and so what he proposes here are in fact three inadequate attempts, although the third, as he describes, would “seem to do justice to its religious, philosophical, and mythological aspects” (¶471).

Thus, this exhibits three inadequate approaches, not an either/or, since, “In each case the conclusion will be the same, for what it amounts to is a repudiation of the unconscious as something everybody knows to be useless, infantile, devoid of sense, and altogether impossible and obsolete. After this devaluation, there is nothing to be done but shrug one’s shoulders resignedly” (¶472).[17] No matter how one tries to explain the unconscious, any such explanation amounts to a “nothing but” that will certainly not drain the unconscious of problematic energy (1916, ¶472) and might not even effect anything therapeutic at all:

True enough, the doctor can always save his face with these theories and extricate himself from a painful situation more or less humanely. There are indeed patients with whom it is, or seems to be, unrewarding to go to greater lengths; but there are also cases where these procedures cause senseless psychic injury. In the case of [one patient] I dimly felt something of the sort, and I therefore abandoned my rationalistic attempts in order—with ill-concealed mistrust—to give nature a chance to correct what seemed to me to be her own foolishness. As already mentioned, this taught me something extraordinarily important, namely the existence of an unconscious self-regulation. Not only can the unconscious “wish,” it can also cancel its own wishes. This realization, of such immense importance for the integrity of the personality, must remain sealed to anyone who cannot get over the idea that it is simply a question of infantilism (1938, ¶257).

Here, then, we find the more central “or” in Jung’s argument—no reduction to a “nothing but” of any kind, but rather an intimation toward developing means (what Jung called active imagination) for occasioning this “wish correction” on the part of the unconscious, be that by transcendental overcoming or by integrating opposites.

Endnotes

[1] Hayman, R. (2001). A life of Jung. First American edition. New York: W.W. Norton.

[2] Another reviewer notes:

Swiss psychiatrist Jung (1875-1961) lived creatively, grandly, and sometimes irresponsibly. Spiritual, mystical, and at times schizoid, he brought us archetypes, the collective unconscious, introversion and extraversion, and anima and shadow, but his reputation suffers from affairs with patients, cultism, and apologies for Nazism. A biographer of Nietzsche, Sartre, Proust, Sylvia Plath, and Thomas Mann, Hayman knows German and retranslated parts of Jung’s Memories, Dreams, Reflections for this book, first published in England in 1999. But Jung’s complicated story lurches and tumbles in his hands. Research and life events are overpacked into paragraphs laced with orphan pronouns and non-sequiturs. Hayman mixes bit players with protagonists, the vapid with the gravid, and when he ventures an opinion, it is often silly, e.g., that patients benefit more from unstable than from stable therapists. Intrepid specialists may find some new material, but the great bulk is shamelessly derivative. Not recommended; libraries are much better off with Anthony Stevens’s On Jung (Princeton Univ., 1999. rev. ed.) or Frank McLynn’s Carl Gustav Jung (Thomas Dunne Bks: St. Martin’s, 1997). E. James Lieberman, George Washington Univ. Sch. of Medicine, Washington, DC (from here)

[3] Wouldn’t it be a funny coincidence were this the only passage in Hayman (2001) that suffers from this?

[4] From the general context of the book, one assumes that the quotations are drawing from Two Essays on Analytical Psychology, but the texts that are included in that book range from (unpublished earlier drafts in) 1912 to (revised essays collected in) 1966.

[5] In the following, I use the ¶ designations from Jung’s Collected Works for the same reason they exist: because multiple publications reprise any number of essays, texts, passages, &c, the Collected Works provides a central reference point.

[6] An unconscious manifestation of the axiom of Maria: “One becomes two, two becomes three, and out of the third comes the one as the fourth”? (from here)

[7] The only sure place to find this “Jung quotation” is of course in Hayman (2001) here. Since Hayman seems to be drawing principally from one book (CW7), the exact phrasing “Repression of the collective psyche is essential to the development of the personality” might originate in the abstracts of Jung’s collected works (which were not written by him): “Repression of the collective psyche was necessary for the development of the civilized personality” (Abstract #000180, from here). But it is more likely from “Repression of the collective psyche was absolutely necessary for the development of personality,” which occurs twice (CW7, p. 150, ¶237; p. 277, ¶459). The reason the phrase occurs twice is because the cited essay exists in more than one form, two of which are included in CW7. Jung’s editors explain:

[First delivered as a lecture to the Zurich School for analytical psychology, 1916, and published the same year, in a French translation by M. Marsen, in the Archives de Psychologie (XVI, pp. 152–79) under the title “La Structure de l’inconscient.” The lecture appeared in English with the title “The Conception of the Unconscious” in Collected Papers on Analytical Psychology (2nd edn., 1917), and had evidently been translated from a German MS, which subsequently disappeared. For the first edition of the present volume a translation was made by Philip Maier from the French version. The German MS, titled “Über das Unbewusste und seine Inhalte,” came to light again only after Jung’s death in 1961. It contained a stratum of revisions and additions, in a later hand of the author’s, most of which were incorporated in the revised and expanded version, titled Die Beziehungen zwischen dem Ich und dem Unbewussten (1928), a translation of which forms Part II of the present volume. The MS did not, however, contain all the new material that was added in the 1928 version. In particular, section 5 (infra, pars. 480–521) was replaced by Part Two of that essay. ¶ [The text that now follows is a new translation from the newly discovered German MS. Additions that found their way into the 1928 version have not been included; additions that are not represented in that version are given in square brackets. To facilitate comparison between the 1916 and the final versions, the corresponding paragraph numbers of the latter are likewise given in square brackets. A similar but not identical presentation of the rediscovered MS is given in Vol. 7 of the Swiss edition.] (CW7, 1966, p. 269, f1).

To show what Hayman has done, it might be easiest to give a guided tour of his forgery:

 

#1

Hayman (2001, p. 207): “Repression of the collective psyche is essential to the development of the personality, since collective psychology and personal psychology exclude one another up to a point” (207, emphasis added).

Jung (1916): “Repression of the collective psyche was absolutely necessary for the development of the personality, since collective psychology and personal psychology exclude one another up to a point” (¶459, emphasis added).

Jung (1938): “Repression of the collective psyche was absolutely necessary for the development of personality” (¶237).

#2

Hayman (2001, p. 207): “Individual personality is based on a persona”

Jung (1916): [not found]

Jung (1938): [not found]

#3

Hayman (2001, p. 207): “the mask worn by the collective psyche to mislead other people and oneself into believing that one is not simply acting a role through which the collective psyche speaks.”

Jung (1916): “[the persona] is, as its name implies, only the mask worn by the collective psyche, a mask that feigns individuality, making others and oneself believe that one is individual, whereas one is simply acting a role through which the collective psyche speaks” (¶466, emphasis in original).

Jung (1938): “[the persona] is, as its name implies, only the mask worn by the collective psyche, a mask that feigns individuality, making others and oneself believe that one is individual, whereas one is simply acting a role through which the collective psyche speaks” (¶245, emphasis in original).

Thus, we have #1 above (a failure to quote correctly that betrays any claim to meticulous work and calls into question as well the judiciousness of the New York Times Book Review author who calls this book meticulously researched) compounded by #3 (a misleading paraphrase) and #2 (the insertion of an outright misrepresentation), all boiled together into a forgery presented as Jung’s own text. But beyond the misquotation, misleading paraphrasing, and outright misrepresentation, even if it made sense for Hayman to ignore Jung’s later (1938) publication on these topics in favor of a manuscript with a complicated history—Jung’s editors included this text to show the evolution of his ideas—it makes no more sense to present Jung’s framing of these topics in his earlier work as emblematic of his later “method” than it does to yoke together snippets of text separated by at least seven paragraphs. Using the 1938 text, Hayman’s sentence #1 is on page 150, #2 doesn’t exist anywhere, and the material misparaphrased in #3 may be found on page 157; using the 1916 text, sentence #1 is on page 277, #2 does not exist, and the material in #3 may be found on page 281. Importantly, this span of seven and four pages, respectively, crosses the threshold of two major divisions in Jung’s essays (from “Phenomena Resulting from the Assimilation of the Unconscious” to “The Persona as a Segment of the Collective Psyche”). In both cases, it is as if Hayman’s insertion “Individual personality is based on a persona—” is meant to stand in for the missing pages, but this is in no way accurate, much less adequate. Nor is this just a matter of splitting hairs. One might write off the difference between “Repression of the collective psyche is essential to the development of the personality” (Hayman, 2001) and “Repression of the collective psyche was absolutely necessary for the development of personality” (Jung, 1938, ¶237) as an innocent mistake, negligible to Hayman’s point, without thereby ignoring or excusing the gaffe of the misquotation; but there’s more malfeasance here than that. Again, the telling irony that Hayman resorted to the verb mislead here is hard not to appreciate.

[8] From Jung, CG (1966). Two essays on analytical psychology. 2d ed., rev. and augmented. Princeton, N.J.: Princeton University Press.

[9] Clearly the original (now archaic) sense of the word as “timid, fearful, overly cautious” has given way to its more familiar sense; the etymology is still entertaining:

1530s, “fearful, timid,” from Latin meticulosus “fearful, timid,” literally “full of fear,” from metus “fear, dread, apprehension, anxiety,” of unknown origin. Sense of “fussy about details” is first recorded in English 1827, from French méticuleux “timorously fussy.” Related: Meticulosity.

[10] You can read the whole context of the passage here.

[11] (see here)

[12] (Jung’s footnote reads):

This phenomenon, which results from the extension of consciousness, is in no sense specific to analytical treatment. It occurs whenever people are overpowered by knowledge or by some new realization. “Knowledge puffeth up,” Paul writes to the Corinthians, for the new knowledge had turned the heads of many, as indeed constantly happens. The inflation has nothing to do with the kind of knowledge, but simply and solely with the fact that any new knowledge can so seize hold of a weak head that he no longer sees and hears anything else. He is hypnotized by it, and instantly believes he has solved the riddle of the universe. But that is equivalent to almighty self-conceit. This process is such a general reaction that, in Genesis 2:17, eating of the tree of knowledge is represented as a deadly sin. It may not be immediately apparent why greater consciousness followed by self-conceit should be such a dangerous thing. Genesis represents the act of becoming conscious as a taboo infringement, as though knowledge meant that a sacrosanct barrier had been impiously overstepped. I think that Genesis is right in so far as every step toward greater consciousness is a kind of Promethean guilt: through knowledge, the gods as it were are robbed of their fire, that is, something that was the property of the unconscious powers is torn out of its natural context and subordinated to the whims of the conscious mind. The man who has usurped the new knowledge suffers, however, a transformation or enlargement of consciousness, which no longer resembles that of his fellow men. He has raised himself above the human level of his age (“ye shall become like unto God”), but in so doing has alienated himself from humanity. The pain of this loneliness is the vengeance of the gods, for never again can he return to mankind. He is, as the myth says, chained to the lonely cliffs of the Caucasus, forsaken of God and man. (Collected Works 7, ¶243, footnote 1)

[13] All the more so considering the relative importance Jung gives to the word inflation in these essays (it occurs nine times) compared to two references to it in Hayman’s (2001) book. That’s nine references over 369 pages as opposed to two references over 560 pages.

[14] A stable/unstable dichotomy is an ignis fatuus. Better to subsume the terms as ranges within some equilibrium; but what’s at stake in the prospect of calling someone “crazy” has very little to do with one’s mental state and far more to do with how people treat one in public life.

[15] The sense “meaning ‘careful, prudent’ is from c.1600” (see here).

[16] Compare pp. 160–5 (¶251–257) and pp. 282–4 (¶468–73).

[17] The remainder of the passage runs:

To the patient there seems to be no alternative, if he is to go on living rationally, but to reconstitute, as best he can, that segment of the collective psyche which we have called the persona, and quietly give up analysis, trying to forget if possible that he possesses an unconscious. He will take Faust’s words to heart:

This earthly circle I know well enough.
Towards the Beyond the view has been cut off;
Fool—who directs that way his dazzled eye,
Contrives himself a double in the sky!
Let him look round him here, not stray beyond;
To a sound man this world must needs respond.
To roam into eternity is vain!
What he perceives, he can attain.
Thus let him walk along his earthlong days;
Though phantoms haunt him, let him go his way,
And, moving on, to weal and woe assent—
He at each moment ever discontent.10

Such a solution would be perfect if man were really able to shake off the unconscious, drain it of libido and render it inactive. But experience shows that it is not possible to drain the energy from the unconscious: it remains active, for it not only contains but is itself the source of libido from which the psychic elements flow.* It is therefore a delusion to think that by some kind of magical theory or method the unconscious can be finally emptied of libido and thus, as it were, eliminated (¶258).

10 Faust, trans. by MacNeice, Part II, Act V, p. 283.

*In the 1916 version, this sentence reads: “But experience shows that it is not possible to drain the energy from the unconscious: it remains active, for it not only contains but is itself the source of libido from which all of the psychic elements flow into us—the thought-feelings or feeling-thoughts, the still undifferentiated germs of formal thinking and feeling” ([¶258], 427).


Summary (in One Image)

cover page

Pre-Disclaimer

Last year in 2012, I set myself the task to read at least ten pages per day, and now I’m not sure if I kept up. I have the same task this year, and I’ve added that I will write a book reaction for each one that I finish (or give up on, if I stop). These reactions will not be Amazon-type reviews, with synopses, background research done on the author or the book itself, unless that strikes me as necessary or if the book inspired me to that when I read it. In general, these amount to assessments of in what ways I found the book helpful somehow.

Consequently, I may provide spoilers, may misunderstand books or get stuff wrong, or get off on a gratuitous tear about the thing in some way, &c. I may say stupid stuff, poorly informed stuff. There are some in the world who expect everyone to be omniscient and can’t be bothered to engage in a human dialogue toward figuring out how to make the world a better place. To the extent that each reaction I offer for a book is a here’s what I found helpful about this, then it is further up to us (you, me, us) to correct, refine, trash and start over, this or whatever it is we see as potentially helpful toward making the world a better place. If you can’t be bothered to take up your end of that bargain, that’s part of the problem to be solved.

A Reaction To: Toppi’s (2012)[1] Sharaz-De: Tales from the Arabian Nights

While browsing graphic novels at the local library and judging books by spines (if not covers), I noticed one that caught my attention. It reminded me of a Tarot card deck I own, and with good cause.

The interior illustrations certainly confirmed that I was holding a graphic novel by whoever had illustrated my most frequently used Tarot deck—most used because it tends to evoke more out of me than any other I have ever owned.[2] But not only are Toppi’s illustrations utterly evocative in this book, so is his rendering of the selection of Scheherazade’s tales (now translated into English from the book’s Italian).[3] And, since a picture says a thousand and one words, here:


While Toppi by no means limits himself only to the depiction of what (Northern) “civilization” calls (Southern) “exotic” cultures—in the present case, the “Arabic world,” but also in his book Warramunga, through an aboriginal tribe of Australia, and in his Tarot of the Origins, through a wide variety of Native American, Mesoamerican, African, and aboriginal imagery, &c—the question may still be raised to what extent co-optation is at work. I notice this especially because in a previous reaction to Jeff Smith’s RASL (the drift), his (at best) desultory or (at worst) imperialistic co-optation of a sacred image of the O’odham people stood out so awfully. With this book, we can at least start by comparing the impression I received from it with the text on the back of the book.

(text from the book, spoken by Sharaz-De[4]): The night, o Lord, is still the realm of birds of prey. The dawn that will bring me death is far away. Grant me this, my King, that to brighten the hours ahead I might recount stories ancient and rare till the new day robs me of speech and bring.

(promotional blurb): A set of tales inspired by 1001 Arabian Nights, European comics master Sergio Toppi’s Sharaz-De explores a barbaric society where the supernatural is the only remedy to injustice. The lovely Sharaz-de, captive to a cruel and despotic king, must each night spin tales to entertain her master and save her head from the executioner. Her tales are filled with evil spirits, treasures, risk, and danger, but ever at their center hold the passions of gods and men.

(from Walter Simonson’s introduction): “[Sharaz-De] is filled with beautiful drawing, wonderful design, captivating imagery, unique characters (including the lovely narrator), and a driven imagination displayed with all the gifts of a mature artist”

What immediately captivates me about Simonson’s quote is the correct use of square brackets. Quoted blurbs are notorious for gross elisions and for taking things far out of context, but here an exactitude is employed, specifically (and correctly, I might add) showing the trace of how the original statement was modified. It’s theoretically a small point, but it points to at least a gesture of more integrity than usual. The second thing I note is the repetition of the word “lovely” from the promotional blurb. Apparently it matters very much that Sharaz-De is lovely. This sort of thing may definitely be filed under typical orientalist fantasies.

The most glaring phrase is likely “Toppi’s Sharaz-De explores a barbaric society where the supernatural is the only remedy to injustice.” There are two main parts to this claim: (1) that the society depicted is a barbaric one where the supernatural is the only remedy to injustice, and (2) that Toppi’s book operates from the standpoint of that description.

Point (1) is standard Islam-baiting, which is refuted in two ways, one of which is evident already on the back of the book, when Sharaz-De says she will recount ancient stories. Like all folk- or morality tales, Sharaz-De almost certainly has strategy in mind regarding what story she tells the man who would have her beheaded, but there’s nothing to suggest that the contemporary setting of the story is culturally the same as the ancient past recounted. This remains true even as these ancient stories “are filled with evil spirits, treasures, risk, and danger, but ever at their center hold the passions of gods and men.”

From the text itself, the occasional references to Allah (alluded to only as “God”) point to a difference between now (for Sharaz-De and her king) and the past. As in much of the much, much later Gothic fiction from England circa 1770–1800, where there was constant recourse to the bad old days of Catholicism in England as a launching point for implying (then) present-day criticisms of the prevailing social order, history’s victors often gloat about how awful things were back when the vanquished were in charge of things. Much of Sharaz-De’s recounting seems designed, precisely, to prick the conscience of the despotic king who would execute her—carefully, she is accusing him of being like a barbaric forebear.

To say this is nothing new—a central piece of interest in One Thousand and One Nights is the implied or explicit moral education Sharaz-De undertakes over the course of her captivity.[5] It is no accident that the first story she tells is one in which loyalty and kindness are cruelly and stupidly repaid with death. Later, a general’s kindness in not killing a snake comes back to him in the form of a blessing from the djinn who’d been trapped in that snake’s form. Here, Sharaz-De has switched from a merely human appeal to gratitude to a more “sinister” suggestion that to kill her might forfeit some great boon for the King in the future. In another story, a wise man has foreseen the ingratitude of the king he has helped and ensured the king’s death if his own life is taken. And in yet another still, the specific trait of ingratitude makes the king’s most prized possession (a stone that turns into a beautiful woman by night) no longer accessible to him. The psychology of the order of stories that Sharaz-De deploys is part of the joy of the text. If, at the beginning, one feels her danger and exposure, then over the course of the tales even the listener hearing (or reading) them begins to have doubts whether the King really can, anymore, simply dispatch Sharaz-De come dawn, even if he would still dare. On this view, then, the particular selection of excerpted stories Toppi chooses to illustrate refutes point (2), that his book operates merely from the kind of orientalist dismissal found on the book’s back.[6]

So, rather than having anything to do with “a barbaric society where the supernatural is the only remedy to injustice,” in fact, the available remedy to injustice is art. And art, not simply as telling a good story, but telling a story of the good.


[1] Toppi, S (2012). Sharaz-de: tales from the Arabian Nights. Fort Lee, NJ: Archaia, pp. 1–221.

[2] For the record, I started with Crowley’s Thoth tarot, had the Renaissance tarot by Brian Williams at one point (certainly one of the most gay-friendly decks ever), have the Dali tarot (not too impressed, all things considered), the Tavaglione tarot (very pretty, hard to read), and the weirdly powerful Terrestrial tarot, which is the only other one I would currently use besides Toppi’s.

[3] Encountering Toppi’s work in a full narrative context (and not only through the individual illustrations of Tarot cards) was more than enough to make me search the Interwebs for more. And as a fine piece of coincidence, it turns out he has written a book devoted to the Warramunga people of Australia, whom I am currently reading about in Spencer and Gillen’s (1904) The Northern Tribes of Central Australia. So that’s on its way to my home now.

[4] I use “Sharaz-De” primarily because Toppi has, assuming this is the Italian spelling of the narrator’s name. Nonetheless, I maintain that everything problematic about the representation of the Other, particularly as it intersects with orientalism, is also bound up in how names get transliterated. This may be sensed clearly in the fact that we call the area between the Tigris and the Euphrates by a Greek term “Mesopotamia” rather than any of the many other historical names the place has been called by, by people who were actually living there at the time. Or in the fact that the Arabic background of the founding father of secular thought in Western Europe Ibn Rushd (Arabic: ابن رشد‎) is effaced by referring to him by the name Averroës. If it may be claimed that it’s not clear what “nationality” any Averroës might be (i.e., it seems neither Latin nor Greek), one may bet money that it will be more likely taken to be Latin or Greek than Arabic. A more pathetic version of this concerns the so-called Fibonacci numbers (see here or below), with one caveat: in Fibonacci’s book, he did not claim to have discovered the sequence but gives credit where credit is due. It is only we later Westerners who neglect this fact and give the Italian credit for discovering what was already long known to India.

Scheherazade (pron.: /ʃəˌhɛrəˈzɑːd/), Šeherzada, Persian transliteration Šahrzâd or Shahrzād (Persian: شهرزاد‎, šahr + zâd) is a legendary Persian queen and the storyteller of One Thousand and One Nights. ¶ The earliest forms of Scheherazade’s name include Šīrāzād (شيرازاد) in Masudi and Šahrāzād (شهرازاد) in Ibn al-Nadim, the latter meaning “she whose realm or dominion (شهر šahr) is free (آزاد āzād)”. In explaining his spelling choice for the name Burton says, “Shahrázád (Persian) = City-freer; in the older version Scheherazade (probably both from شیرزاد Shirzád = ‘lion-born’). Dunyázá = ‘world-freer’. The Bres[lau] Edit[ion] corrupts the former to Shárzád or Sháhrazád; and the Mac[naghten] and Calc[utta] to Shahrzád or Shehrzád. People have ventured to restore the name as it should be.” Having introduced the name, Burton does not continue to use the diacritics on the name.

Re: the sequence popularly named after Leonardo of Pisa (1170 – c.1250), or Fibonacci, it

appears in Indian mathematics, in connection with Sanskrit prosody. In the Sanskrit oral tradition, there was much emphasis on how long (L) syllables mix with the short (S), and counting the different patterns of L and S within a given fixed length results in the Fibonacci numbers; the number of patterns that are m short syllables long is the Fibonacci number Fm + 1. ¶ Susantha Goonatilake writes that the development of the Fibonacci sequence “is attributed in part to Pingala (200 BC), later being associated with Virahanka (c. 700 AD), Gopāla (c. 1135), and Hemachandra (c. 1150)”. Parmanand Singh cites Pingala’s cryptic formula misrau cha (“the two are mixed”) and cites scholars who interpret it in context as saying that the cases for m beats (Fm+1) is obtained by adding a [S] to Fm cases and [L] to the Fm−1 cases. He dates Pingala before 450 BCE. However, the clearest exposition of the series arises in the work of Virahanka (c. 700 AD), whose own work is lost, but is available in a quotation by Gopāla (c. 1135): “Variations of two earlier meters [is the variation]… For example, for [a meter of length] four, variations of meters of two [and] three being mixed, five happens. [works out examples 8, 13, 21]… In this way, the process should be followed in all mātrā-vṛttas [prosodic combinations].” ¶ The series is also discussed by Gopāla (before 1135 AD) and by the Jain scholar Hemachandra (c. 1150).

To acknowledge this, of course, does nothing (except perhaps in the kind of mindset that is racist) to diminish the luster of Fibonacci’s own accomplishments with the series and other work; e.g., it was with his (1202) Liber Abaci that he

introduces the so-called modus Indorum (method of the Indians), today known as Arabic numerals … The book advocated numeration with the digits 0–9 and place value. The book showed the practical importance of the new numeral system, using lattice multiplication and Egyptian fractions, by applying it to commercial bookkeeping, conversion of weights and measures, the calculation of interest, money-changing, and other applications. The book was well received throughout educated Europe and had a profound impact on European thought.
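By way of illustration (a sketch of my own, not from the quoted sources; the function name is my invention): if a short syllable (S) fills one beat and a long syllable (L) fills two, then counting the S/L patterns that fill m beats reproduces the Fibonacci recurrence, since any pattern ends either in an S (leaving m−1 beats) or an L (leaving m−2):

```python
# A sketch of the Sanskrit prosody counting described above (my own
# illustration, not from the quoted sources). A short syllable (S) fills
# 1 beat and a long syllable (L) fills 2, so the count of S/L patterns
# filling m beats satisfies patterns(m) = patterns(m-1) + patterns(m-2):
# the Fibonacci recurrence.

def patterns(m):
    """Count the S/L syllable patterns that fill exactly m beats."""
    if m < 0:
        return 0
    if m in (0, 1):  # the empty pattern, or a single S
        return 1
    # A pattern ends either in S (leaving m-1 beats) or in L (leaving m-2).
    return patterns(m - 1) + patterns(m - 2)

print([patterns(m) for m in range(1, 9)])
# [1, 2, 3, 5, 8, 13, 21, 34] -- the Fibonacci numbers, matching the
# quotation's worked examples ("for [a meter of length] four ... five
# happens", then 8, 13, 21).
```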

[5] This may be why Boccaccio’s (1353) Decameron, for all of its wit and fun, has less impact overall, less cumulative effect.

[6] Whatever seductive or sexual appeal Sharaz-De has in and of herself, it is her imagination (or learning) that keeps her alive. If her beauty has anything specific to do with this, it is that it evokes in the imagination of the King its own matrix of interest. (Toppi emphasizes this point by not dwelling on any further sexual dalliances between the King and Sharaz-De during her captivity—whether that is true to sprawling tradition or not I don’t know.) But the general range of human faces he draws, which Simonson rightly notes as a distinct feature of Toppi’s work, suggests that reducing even the central character herself only to a sexually attractive face misses the boat. There are surely “seductive” pictures in the book, but it’s worth remembering that a falcon is on the cover, not an odalisque.

Abstract

That one might involuntarily become “absorbed” into a crowd seems a central distinction from the pack or other social groupings, which are more or less voluntary. How this involuntary event occurs then becomes of central interest to an understanding of crowds generally. Cathexis (or projection, in Freud’s sense) and (what Jung calls) possession offer starting points. Further, then, both positive and negative feedback (in the sense understood in cybernetics) as well as chaos theory’s butterfly effect (realized here with more rigor than one usually encounters outside of physical or mathematical contexts) provide at least the beginning of a template for describing the behavior of crowds. So described, this may begin to make crowds a worthwhile phenomenon to name and characterize at all, insofar as they might then be harnessed toward desirable social ends.

Introduction & Disclaimer

This is the twenty-seventh entry in a series that ambitiously addresses, section by section over the course of a year+, Canetti’s Crowds and Power,[1] and the first to address Part 4 (The Crowd in History), which Canetti breaks up into several sections. Here I cover section 1, “National Crowd Symbols”.

The ongoing attempt of this heap is to get something out of Canetti’s book, and that of necessity means resorting to the classic sense of the essay, as an exploration, using Canetti’s book as a starting point. I can imagine that the essayistic aspect of this project can be demanding—of patience, time, &c. The point of showing an essay, entertainment value (if any) aside, is first and foremost not to be shy about showing the intellectual scaffolding of one’s exposition as much as possible. This showing, however cantankerous the exposition, affords the non-vanity of allowing others to witness all of the missteps, mistakes, false starts, and the like—not in the interest of merely providing a full record (though some essayists may do so out of vanity or mere thoroughness, scholarly drudgery, or self-involvement) but mostly so that readers may be exasperated enough by the essayist’s stupidities to correct his or her errors and thus contribute to our collective better human understanding of ourselves.

The Crowd (Summarizing the Interlude)

An essential problem involves determining the boundaries of the phenomenon called a crowd; what is a crowd, what is a group, &c. The problem is like trying to determine if a hyena is more of a cat or a dog—remembering Todorov’s (1993)[2] remark that the weakest possible hypothesis about a phenomenon is that it may be classified. Just as Bakhtin (1981)[3] identifies novelistic discourse long before the appearance of the novel per se, we may similarly identify crowd-like behavior prior to the appearance of actual crowds. In the sense that the prey can declare the crowd intent on destroying it or a baiter can declare the crowd intent upon destroying the prey, the crowd becomes (in principle) a usable if unruly or difficult resource.

From cybernetics, (living) organisms may be described in terms of key (biological) values that must be maintained within a certain range for the organism to continue to persist. Hence, when the organism becomes perturbed (goes outside the permissible range of values), (feedback) mechanisms act to restore homeostatic equilibrium. In cybernetic terms, these key values are parts of the observer’s description of the system of the organism; they are not properties of the organism itself.[4] This presence of the observer—especially as it involves the capacity of an authoritative gesture (spoken, &c) to declare a crowd—keeps in view that a crowd is always many crowds, i.e., as many as there are observers of it, and whatever complex of negotiations that involves on the ground in the here and now of the crowd’s dynamics.

“A living system is open to energy but closed to information and control” (Ashby, 1956, 4),[5] with the important caveat that crowds are not living systems—they are unnatural systems.[6] If there are critical values, they are not inherent to the crowd but are declared by observers. It becomes a contestation, of course, over who (plural or not) can declare the crowd, and once that declaration is authoritatively made, it functions (along with any other declarations) like a key value, as the orientation point(s) for the crowd.[7] A distinct feature of such key values is that they are represented by a range, so anything within that range is subsumed as “situation normal,” as assent to the prevailing condition, the current telos. Hence the non-collectivity of a crowd, as opposed to a pack: for the purposes of a crowd, a lack of agreement counts as agreement, whether because one has no opinion or because one is “carried along” by the crowd.
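As a gloss on this cybernetic vocabulary (a minimal sketch of my own, not anything from Ashby or Canetti; all names and numbers below are hypothetical), a declared key value can be pictured as nothing more than a named range, where any reading inside the range counts as “situation normal”:

```python
# A minimal sketch (my own gloss, not from Ashby or Canetti): a "key value"
# lives in the observer's description, here simply a named range; any
# reading inside the range is subsumed as "situation normal," i.e., as
# assent to the prevailing telos.

from dataclasses import dataclass

@dataclass
class KeyValue:
    name: str    # the observer's label for the declared variable
    low: float   # permissible range, declared by the observer,
    high: float  # not a property of the crowd itself

    def situation_normal(self, reading: float) -> bool:
        """Anything inside the declared range counts as assent."""
        return self.low <= reading <= self.high

# A hypothetical observer declares a crowd in terms of one such value.
declared = KeyValue(name="agitation", low=0.0, high=0.7)
for reading in (0.2, 0.65, 0.9):
    status = "situation normal" if declared.situation_normal(reading) else "perturbed"
    print(f"{declared.name} = {reading}: {status}")
```

Note that the readings 0.2 and 0.65 disagree with one another yet both register as “situation normal”: within the declared range, a lack of agreement counts as agreement.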

Taking a view that does not specifically refer to goal-oriented behavior,[8] one may conceptualize the pack as a response to external factors or pressures. It is not entirely beside the point to say that the pack is primarily oriented outward, while of course containing its own internal dynamics. A “victory” of the pack, for example, is precisely signaled in the kind of moment when two members of it, who do not otherwise get along, are able to set that fact aside toward responding to the external pressure. By contrast, a crowd may be described as more responsive to internal dynamics foremost (whether this then also provides or becomes an occasion for exerting pressure or being a factor in the world around it). It is not entirely amiss to say that its orientation is primarily inward, while of course affecting (even driving toward) external dynamics.

Another point raised involved the moment I am identifying here as declaring the key variable. If a pack, by definition, consists of an orientation to some goal, why shouldn’t a crowd be considered a pack after such a moment? The argument would be that there is no explicitly collectively agreed upon goal; hence my resort to language like “myth” and so forth. It would get too messy (and somewhat redundant from previous posts) to try to tease this all out again here, but it is worth holding in reserve in the back of one’s mind some notion that what might be properly understood as a crowd is a group of people who have not, in fact, as yet settled on some telos, course of action, or response to external pressures. A crowd in this sense might be the crowd before the riot starts, but not after; would be the crowd before the panic sets in, but not after. This may not prove to be an unhelpful distinction even if determining any such exact moment in the life of any crowd gets difficult.

National Crowd Symbols

Canetti begins with a denial of generalities regarding nationality, and particularly with the inadequacy of geography or language as a sufficient grounding for the concept. Sadly, but not surprisingly, he then resorts again to his habitual generalization: “the larger unit to which [a man] feels himself related is always a crowd or a crowd symbol” (170, italics in original, emphasis added). It is for this “crowd or crowd symbol” (the distinction here cannot simply be elided like this) that a countryman fights (a Frenchman for France, a German for Germany, &c). Though this crowd or crowd symbol is the “larger unit to which [a man] feels himself to be related”, yet in the pages to come Canetti “shall be saying little about men as individuals” (171). And this, despite the fact that “these crowd symbols are never seen as naked or isolated. Every member of a nation always sees himself, or his picture of himself, in a fixed relationship to the particular symbol which has become the most important for his nation” (170–1).

This reprises, on a putatively national scale, the issues of the crowd raised in the interlude.

But first, I will say that I have no intention of engaging Canetti’s ill-advised foray into construing the crowd symbols of various peoples (the English, the Dutch, &c). Post-השואה (post-Shoah), to indulge in this kind of generalizing is not simply in poor taste but a manifest reintroduction of the problem.[9] As it is, Gorer & Rickman (1950),[10] while by no means the first or the last, provide a much more thorough demonstration of the dubious value of such complacently offered generalities.

Instead, we may broadly categorize the purpose of these kinds of gestures in three ways, following Zeleza (2005)[11]: (1) as chauvinist or comprador academia, two distinct types that both aim to reify another “people” as an inferior Other; (2) as local or post-colonial interpreter academia, two distinct types that occupy the ambivalent position of mediating the relationship between one people and an Other that cannot be or is not automatically assumed to be inferior (especially as between Empire and a neocolonial, or soon-to-be-colonized/globalized, territory); and (3) as dissident or subaltern academia, two distinct types that are Other but aim, not without considerable peril, to counteract the discourse of the other two kinds of academia. I point specifically to this as academia because it functions by provisioning the intellectuals, in Suttner’s (2005)[12] sense, “who, broadly speaking, create for a class or people … a coherent and reasoned account of the world, as it appears from the position they occupy” (129). The phrase “as it appears from the position they occupy” (129) is obviously crucial for this three-fold distinction:

What are the ‘maximum’ limits of acceptance of the term ‘intellectual’? Can one find a unitary criterion to characterise equally all the diverse and disparate activities of intellectuals and to distinguish these at the same time and in an essential way from the activities of other social groupings? The most widespread error of method seems to me that of having looked for this criterion of distinction in the intrinsic nature of intellectual activities, rather than in the ensemble of the system of relations in which these activities (and therefore the intellectual groups who personify them) have their place within the general complex of social relations (Gramsci, 1971: 8, emphasis added). [13]

Thus, again from Suttner (2005):

In the same way a worker is not characterized by the manual or instrumental work that he or she carries out, but by ‘performing this work in specific conditions and in specific social relations’ (117–8).

These categories may of course be variously problematized—and I leave it to my readers to exculpate their favorite dissidents from the charge of coopted interpreter, &c—but the point is not to ignore, as Canetti does, the very problematic character of this kind of framing and deployment of discourses about national character, crowd symbols, nationality, and the like. The problem of representing the Other is already involved enough at the psychological and merely social level; once we enter the political domain, an infinitely greater amount of care in the representation of the Other needs to be exercised than currently is. This, because ultimately any justification offered by one people for describing another people in any way must always be a negotiation, more or less supported by the potential for violence (through military intervention), and more or less subject to the naming party’s assent to recognize the Other as human (and thus subject to the same rights claimed by the naming party) in the first place.[14]

Ultimately, your expectation of being recognized as existing may be annulled by my silencing of you, whether by physical violence or by social violence that effectively makes you invisible/nonexistent.[15] And vice versa. Confronted by a demand to be recognized—a demand based on an inalienable right to be recognized—if I do not wish to honor that demand, then I will have to decide upon the most effective course of action for silencing you, and thus putting an end to the demand. In a circumstance where there is a gross disparity of representing power, the naming party may feel no compunction against saying anything it likes or, out of a sense of fair play, civilization, whimsy, love, or any other motivation, may elect to take an “open-minded” view of the Other. This is, of course, why the threat of violence is so important in a society.

Beyond this fundamental problem of who gets to name (a nationality, what a crowd symbol is, what the Other is), to say that one cathects an identity to a crowd symbol restates the unsolved problem of how one negotiates the relationship of individual and crowd in the first place.[16]

As a matter of detail, it is worth noting that the word cathexis is the translation (chosen so as “to be more scientific”) of Besetzung, which in German connotes (1) occupation (in its sense as an activity or task with which one occupies oneself), (2) cast (in its theater sense), or (3) squat (in its sense as the occupation of a building without permission). This contrasts with Besatzung, which connotes (1) crew (in the sense of a group of people operating a large facility or piece of equipment), and (2) occupation (in the sense of occupying a country). The verb besetzen, from which this noun derives, connotes (simply) “to occupy”. The specific sense of cathexis itself in a psychoanalytic context connotes “the concentration of libido or emotional energy on a single object or idea”. Here, it seems that “preoccupation” might be a more apt translation into English, but the usage of Besetzung in German remains instructive—even if the distinction between an occupation (as something one does or as something one is doing) becomes blurry.[17] Moreover, the English sense of the word occupy connotes to fill (either time or space) and derives from Latin occupare (“to take possession of, seize, occupy, take up, employ”), from ob (“to, on”) + capere (“to take”).

I suggest that this very rich and ambiguous connotation, whether in Latin, German, or English, reflects the ambiguity of, e.g., taking possession of one’s place in a crowd. As usual, the Latin connotations and roots open a window of clarity on the matter, suggesting that to occupy can be to take possession of (as if there is already something one may seize) or to take on (as something one brings into being by enacting). The ambiguity of this is present in Jung’s sense of possession (see note ), as something that suspends one’s will. In colloquial speech, demonic possession is to be occupied by a demon—in Jung’s terms, possession is (pre)occupation by a complex. And so we might imagine the same for human beings—or, more precisely, we might overgeneralize our human experience as an explanatory template: i.e., from the standpoint of “the world” (properly anthropomorphized), the world experiences our (demonic) possession of it when we occupy it.

What seems useful in Jung’s point is its helpfulness in describing both oppressive and everyday compulsions. Those who struggle against addiction know viscerally how their own will, or ego, gets thwarted, beaten down, outwitted, evaded, &c., by this part of themselves they identify as not consonant with their desires. This is how the experience of addiction may genuinely be described as baffling and demoralizing, especially when avoiding the addiction’s temptation is as simple as not lighting a cigarette or not pouring a drink. What Jung’s notion of complexes makes clear, however uncomfortably for us, is the manifold ways we succumb to possession each day at every point where we struggle to maintain some form of discipline—be that a diet, an exercise regimen, some form of self-commitment, or whatnot. Generally, we don’t manage these kinds of possessions at all, but simply give in to them, under the aegis of “doing my own thing” or “expressing myself” or whatever other banner we like to fly. As one addiction recovery pundit puts it, he sees no reason to call this a problem. A person who uses heroin every day may have a chemical dependency, but there is no reason to call this an addiction until the individual wants to stop the behavior and finds that she cannot. He points out that most people who use drugs or drink or smoke are well aware of the risks, and it’s a free country, so if they’ve determined, on whatever rationale (or lack of it), that the behavior is not disagreeable in terms of their self-determination right here and now, then there is no need to call them an addict.[18]

I am suggesting that the dependencies arising from the complexes that possess us, and that we succumb to on a daily basis, need not be converted into the term addiction yet, at least not until the individual who is possessed by those complexes wishes not to be determined by them. Then, something like addiction might be a more appropriate metaphorical template.[19]

The explanatory strength of Jung’s notion is that it acknowledges possession by complexes. With cathexis, “the concentration of libido or emotional energy on a single object or idea” not only has a very voluntary ring to it—although the usual usage has it as a wholly unconscious process—but also a very outward-directed orientation. Possession, by contrast, is explicitly involuntary and has an inward-directed orientation. Jung’s recognition of a distinction between extraversion and introversion obviously offers a conceptual widening of any sense of cathexis. Cathexis (toward an object) might be called the extravert’s experience; possession (by an object) more describes the introvert’s experience.[20] Thus, possession (for an introvert) involves warding off—to avoid the effects in the first place—or exorcism, to ameliorate a successful possession; while for the extravert, the parallel processes involve not getting preoccupied—to avoid the effects in the first place—or dispossessing oneself of such holdings as have been seized upon by such preoccupation.

What this points to regarding crowds is that any identification of the individual with the group may not proceed in only one direction. Canetti speaks of the discharge. In its sexual sense, this is overtly outward-oriented—even if Canetti means it in a wholly inward sense; there is a vision of some masculine ejaculate pulsing out of the crowd in (as Canetti always insists) some direction. This connotation of outwardness in discharge has a predominance of etymological antecedents,[21] along with its Latin ancestor as “I unload,” but might still be focused on the inner experience of sexual discharge nonetheless, what Radcliffe in a different but similar context borrowed from Burke[22] as an “agreeable shudder.” But whatever the inwardness that might be inferred about discharge, one may with at least equal plausibility speak of being possessed by a crowd. If an individual might hold up to a crowd her raised fist or raise up her voice in doleful cry as an orienting myth that becomes the very object some might cathect (“take possession of” as the sense of the crowd) so that the crowd comes into being for them, that declaration might equally be the object that possesses others so that their will gets subverted or taken over by what has possessed them.

To be a member of a crowd (whether I imagine this as a stewing, churning mass of people who’ve yet to determine a course of action or not) would seem to involve some subordination of identity to something Other than oneself, however temporarily. This by no means implies that everyone in a crowd (realized or potential) has become, is, or will be a member. Even in the midst of a blind panic, some people keep their heads, &c. The main point of contention remains the sometimes involuntary character of this identification, whether through a sense of злорадство or уют (Schadenfreude or Gemütlichkeit; roughly, malicious glee or coziness), товарищество (camaraderie), or something else still. Saying this is not merely to assent to the ex post facto excuses people give themselves to come to terms with things they have done. Things like sleepwalking, ambulatory blackouts, some psychopathic episodes, and the like all confirm quite legitimate forms of not merely non-voluntary but even non-conscious activity performed (as those around us report later) by our bodies. So, it can’t simply be assumed that everyone in a crowd who “went along with it” necessarily did so voluntarily and consciously.[23] Nor, conversely, can the fact of this be taken as evidence for some widespread distribution of an authoritarian personality.

One hears too frequently the inadequate description of the butterfly effect in the platitude that small changes can lead to big differences. This is contradicted by daily life a billion times per day, principally because the platitude leaves out an essential piece of information.

Most feedback mechanisms have a damping effect; that is, as a given state of a system veers farther afield of the desirable range, mechanisms kick in to return the state-value to near the desirable state-value. In contrast to this negative feedback, some mechanisms instead amplify, exacerbate, or increase the state-value, by design or by accident. The most familiar example is the microphone held near an amplifier, but we (as human machines) also have a biological response that can endanger our lives because of its positive feedback: vomiting. Again, most of the feedback mechanisms involved in living systems and the biosphere seem to be negative types, but in circumstances where positive feedback is present, that is where a small initial difference can lead to wildly different outcomes.
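To make the contrast concrete, here is a minimal sketch (my own illustration, not from Canetti or the authors cited; the function name and the target, gain, and step-count values are arbitrary choices made only to show damping versus amplification):

```python
# A minimal sketch contrasting negative and positive feedback on a single
# state value (my own illustration; target, gain, and step count are
# hypothetical parameters chosen only for demonstration).

def step(value, target, gain):
    """Move `value` by gain * (its current deviation from `target`).

    gain < 0: negative feedback -- deviations shrink (damping).
    gain > 0: positive feedback -- deviations grow (amplification).
    """
    return value + gain * (value - target)

for label, gain in [("negative feedback", -0.5), ("positive feedback", +0.5)]:
    value = 1.1  # a small initial perturbation away from a target of 1.0
    for _ in range(10):
        value = step(value, target=1.0, gain=gain)
    print(f"{label}: after 10 steps, value = {value:.4f}")
# Negative feedback returns the value to ~1.0001 (back within range);
# positive feedback drives the same 0.1 perturbation out to ~6.77.
```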

Saur and Rasmussen (2003)[24] deploy this notion with respect to interventions by counselors into the lives of students, asserting that a well-placed piece of advice at a time when the student is in a comparative state of “chaos” may have a far more life-changing or efficacious effect than when the student is in a condition of typical stability. Whether this metaphor is a fair use or an overextension of the notion of chaos, it certainly captures a sense of how people can at times change, even overnight, out of long-entrenched habits. Saur and Rasmussen (2003) specifically cite the butterfly effect in this case (in its correct sense), but the metaphor may be teased out still more.

Chaos, at least in its mathematical sense, can begin (or does begin) with a bifurcation point and thus the breaking of a symmetry. For Saur and Rasmussen (2003), this takes the form of a break from one’s routine: the onset of a new relationship, the termination of an old one, a death in the family, a birth, &c. Similarly, just as a bifurcation point does not immediately or instantaneously result in the kind of near recurrence that characterizes chaos (see here), merely any old break in routine need not in itself lead instantaneously or immediately to the sort of potential openness to intervention that Saur and Rasmussen (2003) identify. Whatever this initial bifurcation and broken symmetry, it needs to ramify further (in further bifurcations, &c).
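The logistic map offers the textbook picture of this ramifying (again a sketch of my own, not drawn from Saur and Rasmussen; the r values below are conventional illustration points): a first bifurcation yields only a period-2 cycle, and chaotic behavior appears only after a cascade of further period doublings:

```python
# A sketch of the logistic map x -> r * x * (1 - x), a standard example of
# how one bifurcation is not yet chaos: successive period doublings
# accumulate before chaotic behavior appears. (My own illustration; the r
# values are conventional demonstration points, not from the essay.)

def logistic_tail(r, x0=0.5, warmup=500, keep=8):
    """Iterate the map, discard transients, and return the settled tail."""
    x = x0
    for _ in range(warmup):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 4))
    return tail

for r in (2.8, 3.2, 3.5, 3.9):
    print(f"r = {r}: {logistic_tail(r)}")
# r = 2.8 settles to a single value; 3.2 alternates between two values;
# 3.5 cycles through four; 3.9 shows no settled cycle at all (chaos).
```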

From the notion of phase transition, “when symmetry is broken, one needs to introduce one or more extra variables to describe the state of the system” (¶2); this variable may be an order parameter, which “is normally a quantity which is zero in one phase (usually above the critical point), and non-zero in the other” (¶1). I propose that the counselor’s intervention represents this new variable—expressed in human life as a new value; a new attractor around which one’s life comes to be organized rather than whatever previous attractor prevailed. By attractor, I adapt the sense of it as “a set towards which a variable, moving according to the dictates of a dynamical system, evolves over time. That is, points that get close enough to the attractor [then] remain close even if slightly disturbed” (see here for more, and also here, although the description here is of limited help).

Lorenz Strange Attractor

This use of attractor may be taken “merely” in a cybernetic sense—as a way of thinking; I’m not interested in sloppily importing a mathematically precise concept into a fuzzy humanistic domain. But if we have a tendency to describe our lives in terms of a given orbit, then at least the attractor and strange attractor allow far more articulate (and odd-shaped) orbits than the usual notion of the ellipse around a gravity well. Moreover, this notion of (strange) attractor permits of variation in a way that the notion of orbits generally does not. This inclusion of variation and near-recurrence and a wider variety of “orbital” shapes is elegantly obvious in Lorenz’s famous strange attractor (above). Or this strange thing below.

The Shape of a Life
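For the curious, the Lorenz attractor pictured above arises from three coupled equations simple enough to iterate in a few lines. The following sketch (my own; the step size, step count, and size of the perturbation are arbitrary choices, though sigma, rho, and beta are Lorenz’s classic parameter values) integrates two trajectories that begin a hair apart, illustrating the sensitive dependence at issue:

```python
# A minimal sketch of the Lorenz system pictured above, integrated with a
# simple Euler step (my own illustration; dt, the step count, and the
# initial perturbation are arbitrary demonstration values).

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # dx/dt = sigma (y - x); dy/dt = x (rho - z) - y; dz/dt = x y - beta z
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)  # differs in x by one part in a hundred million
for n in range(3000):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(f"separation after {n + 1} steps: {separation:.4f}")
# The initially negligible difference grows until the two trajectories
# wander the attractor independently: the butterfly effect in miniature.
```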

As for the future determination of this, Knyazeva (2004)[25] suggests:

The same notion follows from the original synergetic model of order parameters and slaving principle elaborated by H. Haken. There is a principle of circular causality that describes the relationship between the order parameters and the parts that are enslaved by the former: the individual parts of a system generate the order parameters that in turn determine the behavior of the individual parts. It can also be expressed in quite another form, namely: the order parameters represent a consensus finding among the individual parts of a system, to draw an anthropomorphic picture. Thus, the few order parameters and the few possibilities they have in accepting their individual states reflect the fact that in complex systems only a few definite structures that, so to speak, are self-consistent with respect to the elements are possible. Or to put it differently, even if some configurations are generated artificially from the outside only a few of them are really viable (Knyazeva and Haken, 1997, 2000)[26]

The future states of complex systems escape our control and prediction. The future is open, not unequivocal. But at the same time, there is a definite spectrum of “purposes” or “aims” of development available in any given open nonlinear medium. If we choose an arbitrary path of evolution, we have to be aware that this particular path may not be feasible in a given medium. Only a definite set of evolutionary pathways are open; only certain kinds of structures can emerge. These spectra of evolutionary structure-attractors look much like spectra of purposes of evolution. There is, so to speak, “a tacit knowledge” on the part of medium itself. The spectra are determined exclusively by properties of open nonlinear systems themselves. The future turns out to be open in the form of spectra of pre-determined possibilities (Knyazeva, 1997).[27] In spite of the existence of a whole set of possible evolutionary paths, many structure-attractors remain hidden. Many possibilities will not be actualized. Many inner purposes cannot be achieved within given parameters of the medium. It looks as if a lot of things exist in a latent world.

The attractors as future states are pre-determined (they are determined by their own properties of a given open nonlinear medium). Patterns precede processes. They can be interpreted as a memory of the future, a “remembrance of future activities.” All the attempts that go beyond one of the basins of attraction (the “cones” of attractors) are the “infernal attempts.” Everything that is not in accordance with the structure-attractors will be wiped out, annihilated. For example, a human can fight unconsciously against those forces (some of his attitudes and plans as structure-attractors) that “pull him” from the future, but all these attempts are doomed to failure. (400–1).

All of this is an elaborated way of coming to terms with the transformation of a group of people into a crowd, of a “stewing crowd” into a rampaging, panicking, or collectively singing crowd, and with how individuals “disappear” (through possession or projection) into the crowd. The notion of ecstasy itself (of being, literally, “beside oneself”) belongs here as well; Canetti elsewhere cites it in the figure of orientalistic Europeans fascinated by the “barbaric spectacle” of Middle Eastern ecstasy. &c.

Summarizing the metaphor here, if it may in fact only be taken as a metaphor—as an “as if”—then the notion is that from a state of relative equilibrium or stability, a perturbation (as a change of state) occurs that gets addressed not in the usual way (by negative feedback restoring the stability or equilibrium) but rather becomes an amplifying cascade that switches the orientation of attention, and thus the being of the person, which then becomes the new center, the new attractor, around which one’s here-and-now existence gets organized.
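As a toy illustration of that contrast (mine, not Canetti’s; a standard double-well system in Python), the same dynamics that damp a perturbation near an established attractor will amplify one near the unstable point between attractors, and the cascade settles around a new center.

```python
def evolve(x, dt=0.01, steps=5000):
    """dx/dt = x - x**3: two attractors at x = -1 and x = +1,
    with an unstable equilibrium between them at x = 0."""
    for _ in range(steps):
        x += dt * (x - x**3)
    return x

print(evolve(0.95))   # a push near +1 is damped back to +1 (negative feedback)
print(evolve(-0.02))  # a tiny perturbation near 0 amplifies and settles at -1
print(evolve(+0.02))  # ... or at +1: the cascade picks a new attractor
```

The point of the sketch is only the shape of the dynamics, not any claim about persons.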

It is not enough to construe this as only a change of attention. Studies of attentional shifting, unsurprisingly, show it typically to be all in a day’s work. Background noise, for example, impacts attention on visual tasks;[28] it is an effect incorporated into one’s attentional spectrum, not merely an alien factor in the environment. Moreover, while visual distractions are more easily activated, auditory distractions tend to be more pronounced and have more lingering effects.[29] Generally, distraction seems to be more difficult when a tactile modality is engaged;[30] and for children with autism spectrum disorder, greater perceptual loads help to avoid becoming distracted.[31]

In fact, it is the very wealth and overload of multisensory information we are bombarded by moment to moment[32] that illuminates the origin of our selective attention in the first place.[33] And if this seems far afield of the current topic of this essay, not only is the poorly characterized mechanism or process of the interaction of attention and the integration or non-integration of multisensory inputs still a very open question in cognitive fields,[34] but this also tracks, writ small, a moment analogous to the integration of the individual into the multiplicity of the crowd or the integration of the self into the multiplicity of society, &c. Experimental evidence suggests that multiple instances of experience, rather than the strength of any one given experience, strengthen memory,[35] even when those multiple experiences occur at different times.[36] Contrasting instance versus strength theories, Logan (1988) notes:

In instance theories, memory becomes stronger because each experience lays down a separate trace that may be recruited at the time of retrieval; in strength theories, memory becomes stronger by strengthening a connection between a generic representation of a stimulus and a generic representation of its interpretation or its response (494).

By contrast, the sort of change proposed by the butterfly effect, by the reorientation from one attractor to another, points to a different order—I want to say something like a structural change and not only a change of behavior; a modification to the range of possible behaviors due to the change in structure, and not just the inhibition or promotion of some previous or potential behavior within the old way of doing things.[37] An example is the addict who quits, actually for good. Having tried to quit smoking many times, I can report from experience that when I finally did quit, there was almost literally a sensation of a shift. But whatever the phenomenology of this, I am pointing to that experience (familiar to those who have experienced it, perhaps difficult to believe the reality of if not) of a very palpable reorientation that is better described as a change of who I am than as a change in how I behave. In cybernetic terms, this portends a change of structure, not a change of behavior; or, perhaps even more explicitly, a modification to the range of constraints on behavior.[38]

Conclusion

All of the foregoing offers various attempts to come to terms with the notion of how we might involuntarily “merge” with something transcendental to ourselves. I put “merge” in scare quotes because the underlying phenomenology of the experience seems to be one where the executive or conscious will gets (temporarily) hijacked, whether by an overwhelming crowd, a possessing demon, a numinous symbol (crowd or otherwise), or anything else. In addition to cathexis and possession, there is the further development of the metaphor of the symmetry breaking of ramifying bifurcation points, leading sometimes to a point of chaos where a small initial change may (thanks to positive feedback) have large-scale consequences. The ultimate point of working this out at all, besides masturbatory self-indulgence, may be to understand, manage, or leverage the tidal forces of crowds.


[1] All quotations are from Canetti, E. (1981). Crowds and Power (trans. Carol Stewart), 6th printing. New York, NY: Noonday Press. (paperback).

[2] Todorov, T (1993). The fantastic: a structural approach to a literary genre (trans. Richard Howard). Ithaca: Cornell University Press.

[3] Bakhtin, MM (1981). The dialogic imagination: four essays (ed. and trans. M. Holquist and C. Emerson). Austin: University of Texas Press.

[4] An organism does not “care” about its body temperature or the pH of its blood, though as observers we can easily recognize that if temperature gets too high or too low, the organism will die, &c. When certain conditions arise within the organism (that an observer could describe as a dangerous increase in pH), then some variety of compensating mechanism will be triggered in the organism if possible—otherwise, the organism (or that system of the organism where the dangerous increase occurred) will disintegrate, spelling the end of the creature as well.

[5] Ashby, WR (1956). An introduction to cybernetics. New York: J. Wiley.

[6] Does this make them closed to energy but open to information and control?

[7] The further question is whether, having declared a crowd in this way, the crowd is not in some measure transformed into something more pack-like.

[8] This portion of my argument benefits from a helpful discussion with Jerehme B.

[9] One of the downsides to history is arriving late to it, though this also creates some egregious lags—the course of the Russian Revolution was surely affected in part by the failures of 1848, &c., and the empire of the United States is a different kind of beast than the British, Seleucid, or Ottoman empires, &c. In the aesthetic domain, this can be marvelous—the modernism of the Argentinian composer Alberto Ginastera gets fused to a distinctly nineteenth-century nationalistic musical gesture already familiar from Bartok, even Liszt, &c., to magnificent effect. In the political realm, these things come off less attractively—the Bosnian War as a nearly twenty-first-century working-out of the nineteenth-century gesture of historical self-determination, and the ongoing destruction of the Arabs in Damascus Country by Israel showing garishly what Manifest Destiny in the United States never so clearly put on display. In personal terms, this is the complaint of children against parents who used drugs yet tell their children not to, &c. What makes this so fundamentally a reprise of the problem is that the behavior of the forebears is taken as the frame, and thus the social explanation for the world. Israel must be nationalist, because the immediate historical forebears were nationalist. The problem of the world, so to speak, gets presented to them in nationalist terms, although vast tracts of history (including within the history of Damascus Country) are not informed by a nationalist template. Thus, the child’s justification for doing drugs is not necessarily that everyone always has but, rather, that everyone is. As a matter of leveraging loyalty from people, this makes for effective propaganda, but the notion that the propaganda is “for the good of the people” is merely the cover for the sort of self-serving politics taken “by civilization at large” as being—once again—not how it’s always been done, but what everyone is doing. There’s some of Hegel’s master/slave in this as well as the irony that claiming to be a world leader, i.e., the United States (or the UK or Israel), results in being a reactionary follower—cue again the notions of persecution and destructiveness. It is dubious to psychologize this, so, with all of the proper caveats, it’s hardly off the mark to call this neurotic if not psychopathological.

[10] Gorer, G, and Rickman, J (1950). The people of Great Russia: a psychological study. New York: Norton.

[11] Zeleza, PT (2005). The academic diaspora and knowledge production in and on Africa: what role for CODESRIA? in T. Mkandawire (ed.) African intellectuals: rethinking politics, language, gender and development, pp. 209–34. London: Zed.

[12] Suttner, R. (2005). The character and formation of intellectuals within the ANC-led South African liberation movement in T. Mkandawire (ed.) African intellectuals: rethinking politics, language, gender and development, pp. 117–54. London: Zed.

[13] Gramsci, A. (1971). Selections from the prison notebooks (Q. Hoare and G. Nowell Smith, eds.) London: Lawrence and Wishart (footnote from Suttner 2005).

[14] Internal politics can influence this naming of course—the racism of mass incarceration does not go walking about by day in public view under that moniker—and so can administrative vanity; presidents can get touchy about their legacy and don’t want to be remembered as the one who broke into Watergate or who referred to the Arab world as full of “sand niggers”, though the consequences of policies pursued amount to little else. That is, even at those times when nothing stands in the way of the resort to violence or oppression, there are nevertheless factors other than the attractiveness of those resorts that can come into play: lack of resolve, vanity, political considerations, &c.

[15] Absolute nonexistence is a tentative state. Those who are incarcerated have a continuing existence, but only in a very narrowly delimited way and through a very narrowly constituted form of recognition (as dangerous bodies in need of enclosure). Wherever there is a condition that makes nonparticipation involuntarily unavailable to a person or a people, then we are dealing with a similar kind of relative nonexistence. Many forms of public participation (voting is the most obvious example) arguably effect a similar kind of involuntary nonparticipation, since one is allowed to be politically present only through a representative (assuming one’s candidate wins).

[16] As a matter of detail, it is worth noting that the word cathexis is the translation (chosen so as “to be more scientific”) for Besetzung, which in German connotes (1) occupation (in its sense as an activity or task with which one occupies oneself), (2) cast (in its theater sense), or (3) squat (in its sense as the occupation of a building without permission). This contrasts with Besatzung, which connotes (1) crew (group of people operating a large facility or piece of equipment), and (2) occupation (of a country). The verb besetzen connotes “to occupy”. The sense of cathexis itself in its psychoanalytic usage is “the concentration of libido or emotional energy on a single object or idea”. It seems like “preoccupation” might be a more apt translation in English of the psychoanalytic sense, but the sense of the usage in German remains instructive—even if the distinction between an occupation as something one does or something one is doing becomes blurry. The English sense of the word occupy in its transitive use connotes to fill (either time or space) and derives from Latin: occupare (“to take possession of, seize, occupy, take up, employ”), from ob (“to, on”) + capere (“to take”). I would suggest that this very rich and ambiguous word, whether in Latin, German, or English, reflects the ambiguity of, e.g., taking possession of one’s place in a crowd.

[17] That the word besëtzen from the Germanic Luxembourgish language connotes “to possess” also points in the direction of the sense of cathexis as latching on, becoming preoccupied with, &c.

[18] “What drinking problem? I drink, I pass out, no problem.” That this is an exclusively individualistic analysis of chemical dependency obviously points to a potential source of arguments against it, but the point of view is consistent enough within itself.

[19] Again, viewing the matter only in terms of its individualistic emphases.

[20] Canetti’s criticism of Freud may be due to an introverted orientation objecting to the extraverted orientation of psychoanalysis. But the problem is that, instead of taking issue with someone who has the whole sense of an introvert’s psychology backward (at best), he tries to substitute his own psychology in psychoanalytic terms, rather than starting with someone who was at least a sympathetic and thoughtful exponent of the introverted point of view, i.e., Jung.

[21] discharge: “(1) to accomplish or complete, as an obligation; (2) to expel or let go; (3) (electricity) to release an accumulated charge; (4) (medicine) to release an inpatient from hospital; (5) (military) to release a member of the armed forces from service; (6) to operate any weapon that fires a projectile, such as a shotgun or sling; (7) to release an auxiliary assumption from the list of assumptions used in arguments, and return to the main argument; (8) to unload a ship or another means of transport”

[22] Windelband (1914) notes that Burke’s “attempt to determine the relationship of the beautiful to the sublime—a task at which Home, also, had labored, though with very little success 2—proceeds from the antithesis of the selfish and the social impulses. That is held to be sublime which fills us with terror in an agreeable shudder, “a sort of delightful horror,” while we are ourselves so far away that we feel removed from the danger of immediate pain: that is beautiful, on the contrary, which is adapted to call forth in an agreeable manner the feelings either of sexual love or of human love in general” (511). Footnote 2 from above: “According to Home the beautiful is sublime if it is great. The antithesis between the qualitatively and the quantitatively pleasing seems to lie at the basis of his unclear and wavering characterisations” from Windelband, W (1914) A history of philosophy: with especial reference to the formation and development of its problems and conceptions (trans JH Tufts) (second edition, revised and enlarged) New York, NY: MacMillan and Co., Ltd. (available here)

[23] In criminal cases, everything may hinge precisely on being able to determine or establish this.

[24] Saur, R., & Rasmussen, S. (2003). Butterfly power in the art of mentoring deaf and hard of hearing college students. Mentoring & Tutoring: Partnership in Learning, 11(2), 195-209.

[25] Knyazeva, H. (2004). The complex nonlinear thinking: Edgar Morin’s demand of a reform of thinking and the contribution of synergetics. The Journal of General Evolution, 60(5/6), 389-405.

[26] The two papers Knyazeva references here are: Knyazeva H, and Haken H (1997). Perché l’impossibile è impossibile. Pluriverso, 2(4): 62–66; and Knyazeva H, and Haken H (2000). Arbitrariness in nature: Synergetics and evolutionary laws of prohibition. Journal for General Philosophy of Science, 31(1): 57–73.

[27] This refers to: Knyazeva, H (1997). Téléologie, coévolution et complexité. In ER Larreta (ed.) Représentation et complexité, pp. 183–205. Paris: Educam/UNESCO/ISSC.

[28] Trimmel, M., & Poelzl, G. (2006). Impact of background noise on reaction time and brain DC potential changes of VDT-based spatial attention. Ergonomics, 49(2), 202-208.

[29] Bendixen, A., Grimm, S., Deouell, L., Wetzel, N., Mädebach, A., & Schröger, E. (2010). The time-course of auditory and visual distraction effects in a new crossmodal paradigm. Neuropsychologia, 48(7), 2130-2139. doi:10.1016/j.neuropsychologia.2010.04.004.

[30] Eimer, M., & Driver, J. (2000). An event-related brain potential study of crossmodal links in spatial attention between vision and touch. Psychophysiology, 37, 697-705. doi:10.1017/S0048577200990899

[31] Remington, A., Swettenham, J., Campbell, R., & Coleman, M. (2009). Selective attention and perceptual load in autism spectrum disorder. Psychological Science, 20(11), 1388-1393.  doi:10.1111/j.1467-9280.2009.02454.x

[32] Dux, P., & Marois, R. (2009). The attentional blink: A review of data and theory. Attention, Perception and Psychophysics, 71(8), 1683-1700.

[33] Lavie, N. (2005). Distracted and confused?: Selective attention under load. TRENDS in Cognitive Sciences, 9(2), 75-82. doi:10.1016/j.tics.2004.12.004

[34] Navarra, J., Alsius, A., Soto-Faraco, S., & Spence, C. (2010). Assessing the role of attention in the audiovisual integration of speech. Information Fusion, 11(1), 4-11. doi:10.1016/j.inffus.2009.04.001.

[35] Logan, G. (1988). Toward an instance theory of automatization. Psychological Review, 95(4), 492-527. doi:10.1037/0033-295X.95.4.492.

[36] Clemons, L. K. (1989). Degrees of implementation of multisensory reading instruction by teachers involved in naturalistic research (Record of Study). Ed.D. dissertation, Texas A&M University, United States — Texas. Retrieved from Dissertations & Theses: Full Text. (Publication No. AAT 9015435).

[37] An objection to this is that the proposed change sounds tantamount to life-changing and that most crowds (or the human experiences of crowds) are anything but. At a minimum, it is helpful to attempt to over-contextualize the question, and there is no need to ignore that crowds can, at times, be life-changing for people. The rooting of these concepts as descriptions of the noncultural necessarily requires adapting (not adopting) them for use, if they prove helpful. This is not merely an illegitimate gesture of blindly poaching concepts from other domains.

[38] The cyberneticians reading this will rightly raise the point that any precise distinction between a “change of structure” and a “change of constraints” would be difficult to characterize outside of those machines that we human beings explicitly design and build. In artificial systems (nonliving machines) we may more easily identify the distinction of structure and constraint, because we designed the machine with that in mind. With living systems, and with ourselves as nontrivial machines, it becomes far more difficult, because even self-consciousness seems already a behavior interior to structural determination. Insofar as everything human is artificial, our capacity to declare our design ourselves—to describe how we will describe ourselves, to pick our way of thinking about thinking—makes the distinction of structure and constraint perhaps an in-principle undecidable question, so that we have no choice but to choose and live with the consequences of that choice—subject to change if we don’t like them.

si l’autorité n’existait pas, il faudrait l’inventer (if authority did not exist, it would be necessary to invent it)

Summary (in Two Sentences)

As is also the case with the Marquis de Sade’s writings, the miasma of legend surrounding Nikola Tesla stands in as a borrowed authority for works or ideas that are not only (per Pauli) “not even wrong” but also poor examples of creativity, so that just as relatively competitive sectors of the (US) economy have since contracted into tightly controlled oligarchies—and just as the 50 media megacorporations of 1983 have by 2013 shrunk to only 6 gigacorporations—the implications of those contractions have their parallel in a contraction of human imagination in culture producers (in the United States). The effect of this contraction serves as an input not only for a decimation or disablement of the transformative capacity of art in culture, due to weakened constraints on what gets produced and consumed as creative work, but also for a more effective social control of the population, due to an appearance of authority and a deeper etching in memory afforded by the greater density of redundancy in current cultural discourse.

Pre-Disclaimer

Last year in 2012, I set myself the task to read at least ten pages per day, and now I’m not sure if I kept up. I have the same task this year, and I’ve added that I will write a book reaction for each one that I finish (or give up on, if I stop). These reactions will not be Amazon-type reviews, with synopses, background research done on the author or the book itself, unless that strikes me as necessary or if the book inspired me to that when I read it. In general, these amount to assessments of in what ways I found the book helpful somehow.

Consequently, I may provide spoilers, may misunderstand books or get stuff wrong, or get off on a gratuitous tear about the thing in some way, &c. I may say stupid stuff, poorly informed stuff. There are some in the world who expect everyone to be omniscient and can’t be bothered to engage in a human dialogue toward figuring out how to make the world a better place. To the extent that each reaction I offer for a book is a here’s what I found helpful about this, then it is further up to us (you, me, us) to correct, refine, trash and start over, this or whatever it is we see as potentially helpful toward making the world a better place. If you can’t be bothered to take up your end of that bargain, that’s part of the problem to be solved.

A Reaction To: Smith’s (2008)[1] RASL (The Drift)

This reaction is in two takes. I wrote the second first. It’s a lumpy beast, perhaps maddening in its Cthulhuian sprawl, and there it sits below, dreaming. There are redundancies between takes one and two. I leave it to your judgment to work out what you need to do with the Other or why I did this.

Take 2

As we all should know:

Back in 1983, approximately 50 corporations controlled the vast majority of all news media in the United States.  Today, ownership of the news media has been concentrated in the hands of just 6 incredibly powerful media corporations.  These corporate behemoths control most of what we watch, hear and read every single day.  They own television networks, cable channels, movie studios, newspapers, magazines, publishing houses, music labels and even many of our favorite websites … Most Americans have become absolutely addicted to news and entertainment and the ownership of all that news and entertainment that we crave is being concentrated in fewer and fewer hands each year.[2]

The seeming information boom now emanating from this hyper-merged number of sources entails massive redundancy. In cybernetics, redundancy may be used as a correction channel to ensure that the message transmitted is the message received.[3] Moreover, experimental evidence suggests that multiple instances of experience, rather than the strength of any one given experience, etch memory most deeply,[4] even when those multiple experiences occur at different times.[5]

Given that Voltaire wrote, “Si Dieu n’existait pas, il faudrait l’inventer” (“if god did not exist, it would be necessary to invent him”), we might ask on what authority he dares say so, and the answer might just as well be god. But whatever Voltaire did or did not specifically mean by this remark, if in his day being a mere individual could suffice to exclude one’s voice from consideration in public life and especially before the Authority of church or state, then these days the pathetic individualism of postmodernism does the same for us (without necessarily needing to get the church or state involved in it at all). In other words, as I read Voltaire’s quip, he’s insisting that in the absence of any warrant for an assertion, people borrow the authority of someone else to make a point, the more ultimate the better.[6] As Satchidānanda (1988)[7] puts it, “We want an authority to confirm our experiences” (31).

In Smith’s case, this borrowed authority is Tesla, who has since become the patron saint of that certain kind of crank who fancies himself a scientist,[8] and who evokes fan-boy gush of the sort: “quite frankly, [who] with even a smidgen of interest in the history of science isn’t fascinated by that guy?)” (¶1).  The word smidgen is too apt, as it’s likely people with a smidgen of interest in the history of science who are fascinated by Tesla, just as people with a smidgen of interest in the history of Russia are fascinated by Rasputin.

[Image: Little Nicky Tesla (c. 1880)]

The issues here have nothing to do with the actuality of Tesla, mad genius or not,[9] but rather that he is taken as the (invented) authority and excuse for that certain kind of speculating, philosophical, if not somewhat promotional “idea” that does not include new, sound, workable principles or methods for realizing those ideas.[10] Just as de Sade’s writings get taken as license for others to indulge in their own pornographic excess—despite most imitators never having cracked open a book by the Marquis—Tesla’s mythos similarly gets taken as license for others to indulge in their own non-scientific or pseudoscientific excesses. Toward those who profess these Teslaesque excesses, established specialists often show great condescension, whether because the ideas really are “not even wrong” or because the specialist is (like a proper villain out of Ayn Rand) interested only in keeping his current social position secure. Tesla himself by the end of his life had become the template for the persecuted genius.

But in the case of a (graphic) novel, the issue is never scientific veracity. Too much science in science fiction and it’s no longer worthy of fiction—such an emphasis might generate a fine fan-boy design for the Enterprise, but as fiction, the move is crippling. On the principle “that those who can’t do, teach,” a project like crowd-sourcing all the armchair engineers in the world to “design” the world’s most famous spaceship is a good example of Tesla-think at work. And all of the flame-wars about what materials to use, and so forth, would be simultaneously precious and idiotic, if we had to regard the whole enterprise (pun not intended) as actually being “real-world” plausible. After all, there’s certainly no fictional reason why the Enterprise can’t be made out of aluminum tetrahydrated banana pudding—that stuff is incredibly resilient, as you know—so that the enforced and pathetic obligation to let a fiction be governed by science in this way seems the same kind of intellectual contraction as is involved in going from 50 media corporations to 6.

Regarding Smith’s book, then, it invokes Tesla, and so there are two strands involved here: (1) the unimaginative reaction that, because Tesla-think is involved in Smith’s project, it is therefore cool (or thoughtful or profound or intellectual, or even simply imaginative), and (2) how Tesla-think itself cripples the imaginative work of the fiction.[11] Delany (1978)[12] made it clear long ago that realistic or naturalistic fiction is a subgenre of science fiction—specifically, it is a variety of the parallel-universe story where the only difference between our world and the other world is the putatively fictional characters of the other world.[13]

Dislocation from one world to another is mere realistic fiction in two ways. First, one may accomplish all the same tropes[14] by having a character travel to another country as to another “world”—which is why Sliders or Stargate are little more than knock-offs of Star Trek. Second, from the world of mythology, various travels to “other lands”—be they upper worlds, lower worlds, the inferno, purgatorio, paradiso—are all utterly old hat. Nearly all of the writers of utopias, confronted by the ever-increasing exploratory reach of human beings, kept trying to find other “otherwheres” to set their works: More put his utopia on an undiscovered island; St. Augustine had his city of god in heaven; the anti-urban poets put their poetry in Arcadia just as folk-tales put the land of plenty in Cockaigne (just as US hobos had the exactly parallel Big Rock Candy Mountain). The Gothic novel imagined it as a castle in the past (and so did de Sade, but in the present); and speculative fiction and socialist realism in general put it on other planets or in the future. Each of these purports to find some kind of rational otherwhere to justify telling a story not crippled by adherence to the taken-as-real.

In none of these stories does anyone infer any narrative significance for how you get from one world to the next, whether it’s across the ocean, the sea of space, the tides of time, through a wardrobe, down a rabbit hole, flying toward the second star to the right and straight on till morning, or anything else. In Peter Pan, in Lewis’ Narnia, and countless other stories besides, the resort is, “It’s magic,” and that’s more than fictionally sufficient. Not even such a poorly realized show or movie as Stargate imagines that the stargate itself needs anything more than “it’s alien technology” as a justification, so long as it looks impressive in its operation. To imagine for a moment that throwing in Maxwell’s equations, various dubious adaptations of quantum mechanics at the macro-level, and the authority of Nikola Tesla amounts to something more than another synonym for “it’s magic”, or that it imparts greater “reality” to the fiction, is to succumb to imagination-crippling Tesla-think.

It is an irony of life that sometimes less is more, and this is a case in point. The only way in the world that this variety of “it’s magic” could be construed as cool, profound, or imaginative is by Tesla-thinking that some sort of physically actualizable stuff of the sort deployed in the book is possible. A familiar example of this is the platitude that Jules Verne helped to inspire real men to build real submarines—I say that’s false both generally and specifically. Specifically, the history of submarines (variously imagined) pre-dates Verne’s (1870) Twenty Thousand Leagues Under the Sea by some 200 years; generally, what he inspired was the possibility of a more ambitious undersea vessel than ever. The title of his book indicates the level of science involved: supposedly the 20,000 leagues (or 80,000 kilometers) is the distance traveled, not the depth descended to, though even then the four leagues of maximum depth the Nautilus is said to descend to would put it some six kilometers into the earth’s crust. And the thing runs on electricity. But it’s exactly the non-scientific elements that must be most inspiring about the Nautilus—its opulent interior. It is in every essential a self-contained, submersible mansion. The submarines humans have managed to build so far are sadly Spartan affairs by comparison. Verne wanted to imagine a man freed from the obligations of civilization, free to explore the world after strictly scientific facts, i.e., the Truth. And the Nautilus, not a submarine at all, is the inspiration for that and is, I would say, the principal inspiration that it offers.

The fact that Clarke’s maxim (“Any sufficiently advanced technology is indistinguishable from magic”) is taken as a Law is itself a piece of Tesla-thinking. And for the record (from the record), the Law

may be an echo of a statement in a 1942 story by Leigh Brackett: “Witchcraft to the ignorant, …. Simple science to the learned”. Even earlier examples of this sentiment may be found in Wild Talents by author Charles Fort where he makes the statement: “…a performance that may someday be considered understandable, but that, in these primitive times, so transcends what is said to be the known that it is what I mean by magic.”

But why only cite other science fiction sources for the antecedents here? This is the same sentiment expressed earlier still: “Religion is regarded by the common people as true, by the wise as false, and by rulers as useful”[15] with the qualification that it’s more capacious in scope.

Clarke congratulates himself on postulating “advanced technologies without resorting to flawed engineering concepts or explanations grounded in incorrect science or engineering, or taking cues from trends in research and engineering” (ibid)—an absolutely absurd constraint unless we want our science fiction to keep us from imagining a way out of the mess we’ve gotten into as a species. In any case, Clarke has it backwards in fiction, i.e., any sufficiently embodied magic is indistinguishable from technology.[16] Technology is the magic of naturalistic fiction.

This demand, that the imaginative could or should be fettered by a demand to avoid flawed engineering concepts or explanations grounded in incorrect science or engineering, presupposes correct science in the first place, which is rather more than most practicing scientists would be so bold as to claim. They would say that their current understanding is their best possible guess, hopefully, but not that it is correct—except in matters that are no longer in the domain of science fiction, even if engineering has not figured out how to do it yet. The Large Hadron Collider is a magnificent machine, but it’s not the stuff of science fiction. And to travel to Mars is no longer science fiction, and perhaps ceased to be from the moment humankind first built a ship to cross a body of water in an enclosed environment that protected them from the surrounding environment. All variety of tool-design and engineering in this sense is now primarily in the realm of naturalistic (if still to be realized) fiction, so that the invocation in Smith of Maxwell and Tesla merely puts those concepts at the service of making unwieldy jet engines (or ultrasonic guns) that the protagonist must carry around everywhere. Their conceit of science doesn’t hide their magical quality, and so they’re not technology yet at all. But worse than that, they make for a mere narrative resort that Smith doesn’t even take seriously.[17]

Take 1

As part of my ongoing foray through graphic novels, I picked this one, the first of four volumes in the series altogether, because it called itself (on the back) science-fiction noir—I should have read more closely; it actually reads “sci/fi noir”, thus putting the slash in an odd place. At this point, I have no intention of reading more of the series, so for the sake of some kind of working synopsis, you can read one here, which may be textually longer than the series after all; and for contextualizing the work generally, someone else’s raving review is posted at the end[18]—I’ll be referring to bits of it eventually.

One thing to note: in the question of style versus skill, I’m not yet convinced Smith has enough of the latter to account for the (apparent) presence of the former. And since the purpose of this blog isn’t to get into a debate about that point generally or specifically with Smith’s fans or fanboys, leave a comment if you want to discourse about it more. He can draw a sexy male torso when he bothers, which is pleasant. But as for the lizard assassin (one eventually discovers elsewhere that he’s Salvador), he looks like the Grinch’s cousin, and the main character Robert himself looks alternately somewhere between a frequently constipated Speed Racer, Charles Bronson, and the Hulk. There seem to be serious derivative echoes of Morrison’s (1994–2000) The Invisibles, with Native American iconography in place of the Archons (especially in their Mesoamerican guises). This brings up the whole problematic of co-optation of other cultures (even in its enthusiastic forms), which becomes a bit more acute in the present case, because this is an author in the United States co-opting the culture of those peoples white folks annihilated in this land. But this also, if you want to argue about it, can be fleshed out in the comments.

What I particularly get out of this book is signaled in the opening epigraph from Nikola Tesla,[19] and pointed out especially by the below reviewer’s comment “quite frankly, [who] with even a smidgen of interest in the history of science isn’t fascinated by that guy?)” (¶1).

Now, there is no doubt that Edison was a supreme e-bag to Tesla—attempting often to exploit, thwart, or steal from Tesla—that Tesla did indeed accomplish a vast array of awesome things in the course of his life, and also that his

thoughts and efforts during at least the past 15 years [of his life] were primarily of a speculative, philosophical, and somewhat promotional character often concerned with the production and wireless transmission of power; but did not include new, sound, workable principles or methods for realizing such results (¶5)

Cue the conspiracy theorizers, who will insist of course that the last fifteen years of Tesla’s life did not involve speculative, philosophical (much less promotional) ideas lacking in new, sound, workable principles or methods for realizing such results at all, &c. Anyone who wants to argue about this point is welcome to make a comment, but I’ll likely not wade into that swamp.

Even were it true that Tesla had accomplished all or even anything that his late notebooks or papers suggest, it wouldn’t matter because (in a way related to Smith’s problematic co-optation of O’odham[20] iconography) what matters is the deployment of the idea in cultural productions, not any facts of the matter themselves; that is, whatever an author takes to be facts are the grist of the matter, as Moore makes unambiguously and intelligently clear in his (1999)[21] From Hell.

Sade’s writings may sometimes license a certain kind of liberty in other people’s writing. That is, the fact that Sade wrote putatively disgusting pornographic fantasies provides the “rationale” for an author to sprawl out his or her own similarly “sick” fantasies, usually without that author ever having cracked open a book by the Marquis. Similarly, the “image” of Nikola Tesla—the discourse that hangs diaphanously in a miasma around him, which can be summarized as “mad genius”—licenses a certain kind of liberty in other people’s mad imaginings. That is, usually without the author having ever cracked open an engineering design by Tesla, the fact that Tesla wrote philosophical and somewhat promotional speculations about possible directions in science provides the “rationale” for that imaginer sprawling out his or her own similarly empty speculations as “legitimate thought experiments”. As an extreme example, I know someone who informed me one day that he had designed a new kind of airplane. Not only were there not even conceptual sketches to go along with this, but any and every detail one might require of a “design” could not be supplied by him. It wasn’t even clear (to me) whether this was an idea for a new kind of airplane at all. And I mention this particular example because this “designer” is a big fan of Tesla.

I immediately want to offset this extreme example with a comment in the opposite direction. Pauli’s “not even wrong” is a just response to this sort of claim to have “designed a new airplane,” but only because any such plane, if it is going to be realized, must at some point actually be materially embodied, even if only to the point of proof-of-concept. But in the world of culture—and particularly fiction—the notion of “not even wrong” is itself “not even wrong”. What I hear principally in the would-be designer’s claim to a new kind of airplane aims at the same issue that Pauli is debunking: i.e., the desire to obtain access to grant money—or, to put it less specifically, the issues surrounding the criteria of who and what will be taken seriously as far as (scientific) “reality” is concerned. Thus, on the one side, the would-be designer’s invoked authority of Tesla ostensibly provides the basis for the claim, “This is legitimate,” while Pauli’s utterance, from his well-established place in the access corridors to resources, ensures that certain would-be newcomers are kept out of the corridors. It is exactly this kind of discourse that hangs around Tesla—on one view, he was the freak whom people rightly didn’t take seriously or, on the other, he was the genius who got denied access (thanks to the Randian despotism of Edison).

Once again, it should be obvious that “who is right” is not at issue here—or, more precisely, it comprises only part of a larger argument. I forget who it was (it might have been Pauli, in fact) who shut out an upcoming genius—one who was later vindicated, because he really was a genius—dismissing the work as charlatanism but (as we can only too clearly see retrospectively) out of a more proprietary desire to try to protect his position in the theoretical physicist community. This dynamic plays out all over the place—a particularly rich period of it involved (professional, academic) archaeology and (non-professional, non-academic) Egyptology in the early twentieth century; an interesting example because in fact the Egyptologists in many cases made more numerous and more significant finds than the armchair archaeologists sitting in Europe shuffling pottery shards around and getting into heated controversies over whether this dusty fragment belonged to classification A or B. Bernal’s (1987)[22] Black Athena more recently created this kind of disciplinary controversy for the historical formation of Greece, as did Talageri’s (1999)[23] The Rigveda in the historiography of India. The fact that some of the major opponents of these works may be shown to be shills of the power structure—academic hired guns, if you will—does not muddle but actually makes that much clearer that “who is right” has more to do with “who claims or steals the right to declare the course of a discourse” than with actual (establishable) “facts” in any historical sense.

That “truth telling” and “vested self-interest” are separable factors thus gives us four basic kinds of players in this: the scholar (a specialist who is interested in establishing as best as possible the “facts” of the matter, whatever they entail), the tool or shill (a specialist who is interested in maintaining a particular ideology, for themselves or for others, in the face of all evidence), the crank (a non-specialist who is interested in maintaining his or her point of view, for themselves or for others, despite all evidence to the contrary), and the explorer (a non-specialist who is interested in establishing as best as possible the “facts” of the matter, despite whatever limited resources or access to scholarly material she may have).

Two things are obvious here: there will be a tendency to call scholars and tools experts, and cranks and explorers amateurs; and there is some justice in this, since most scholars and tools tend to be trained academic specialists in some discipline while cranks and explorers tend not to be. There is, however, a circularity in trying to assert this, and it comes out in the ambiguity of objections to the appeal to authority; specifically, there are grounds for an appeal to an authority when: (1) the authority is a legitimate expert on the subject; and (2) there exists consensus among legitimate experts in the subject matter under discussion. It will only be by some form of qualification that one might be recognized as a “legitimate expert”—sound argumentation is a traditional bootstrap to legitimate authority, but being a tenured professor at an institute of higher learning usually means some modicum of legitimacy adheres to your pronouncements. Similarly, the problematic demand for a consensus, at its most generous interpretation, amounts only to the accepted or prevailing doxa (within a discipline), if anything like a consensus actually can be established.[24] So—just as amongst the aboriginal tribes observed by Spencer and Gillen (1904), where it is the old men in a group who tell the new men in the group that they, the old men, are the final authority for all—the current wizened crew of tenured professors tell new associate professors and the like that they, the tenured professors (and administration), have the right to determine who is a duly legitimated expert.

The more socially familiar kind of this barbarity is when someone with a college education calls someone who doesn’t have one stupid. The illegitimate move is pretending that education and stupidity are mutually exclusive, while rather proving the contrary by the very assertion. And so the tool in particular is interested in calling the crank and the explorer alike stupid, on the same ground. This is like a White racist calling an African ignorant for not acknowledging the racist’s argument for the inferiority of Africans.

I don’t think that the saying “never judge a book by its cover” means to include the back of the book, although my experience is that the fronts of books tend at least to be less disingenuous than the backs. In the present case, if “Jeff Smith is one of this country’s great living cartoonists” then we’re reading the words of a tool or the reviewer lives in Nauru.[25] Or similarly, “Anyway, RASL, by the always impressive Mr. Jeff Smith, is a book anyone with an itch for good ol’ hard, Asimov-Clark [sic] science fiction should be drooling over” (cited previously). It’s hard to imagine what fossil has an itch for good ol’ hard Asimov-Clarke that still needs new fulfillment in something other than Asimov-Clarke,[26] but neither Asimov nor Clarke that I know of ever particularly resorted to that Vernean habit (turned too often into a novelistic raison d’être by Niven) of merely cobbling together a few equations from science and then pretending that constituted a focus of a fiction itself.[27]

Another reviewer (elsewhere) insists: “For fans of Inception, this book is everything you could want from a comic.”[28] The title of this review is “Hard Boiled Sci-Fi with as much brains as balls,”[29] so one sees that Smith’s hodgepodge can get taken by some as thinking-man’s “sci-fi”. And it’s exactly a figure like Tesla, in his least scientific aspect as someone who practiced science, who provides people a seeming warrant for this sort of claim.[30] This is certainly at work in Smith’s deployment of Tesla, as statements like “Nikola Tesla’s life and achievements (and, quite frankly, whom [sic] with even a smidgen of interest in the history of science isn’t fascinated by that guy?)” can remind us.

The word smidgen is ironic here, because it’s probably very often people with a smidgen of interest in the history of science who are fascinated by Tesla, just as people with a smidgen of interest in the history of Russia are fascinated by Rasputin. Part of my objection—as also in those cases where the tendency afflicts anime series[31]—is the artistic sloppiness in imagining that you can just throw a few equations into a book and that’s supposed to be science fiction or a sign of thoughtfulness or profundity. Too much is missing in this book to warrant such a conclusion—and it may be that Smith’s fans are more dubiously oriented to his work than he is—but this book is only an example of the wider phenomenon. This is the flip-side of the reductio ad Hitlerum—the argument that proves indisputably that one’s disputant is an idiot—because the reductio ad Teslā proves indisputably that one’s own argument is well-grounded in solidly established scientific authority; more precisely, in Tesla’s own example of speculative, philosophical, or promotional thought.[32]

One may criticize certain examples of science fiction for congratulating themselves on dropping the “fiction” by a rigorous application of “science,” but the offense then is rather to the fiction, not the science. In the Furry community, one sometimes encounters the objection that large male cat fursonas (like lions, cheetahs, &c) cannot have large penises because large cats (like lions, cheetahs, &c) don’t have large penises in “nature”. One might say that this is simply a pathetic and silly objection, but it is actually a massive failure of imagination, and it points to the shrinking of the human capacity for imagining in the first place. Specifically, it proposes “the real” (whatever that is) as the necessary template for the imagined (whatever that is).

Science fiction, previously an aspiration against the impossible, is getting constrained only to what is possible—the imaginable and the plausible being thus entangled. The converse of this is a dis-constraint of creative responsibility, coherence, &c. It seems like there is a crippled capacity for what science fiction (and fantasy) calls world-building. Again, some anime series seem very afflicted with this.[33] But popular forms do have a tendency to go for spectacle at the expense of internal world coherence merely to provide something dramatically showy (both in the US and Japan, as well as elsewhere). The cheapness of this sometimes justifies itself, but usually not. A quintessential example of this is the sudden inability of the crack-shot villain to hit the hero when the hero is trying to escape. It is astonishing to me that this sort of thing still winds up on film, and I’d be interested to know if anyone even finds such ridiculous near-misses actually exciting anymore. And if so, how.

Merely to throw Maxwell’s equations and a whiff of Tesla, as if this justifies something, into a mix of fight scenes with the Grinch and an exotic dancer’s ass doesn’t elevate the material. Morrison failed in a similar way (IMHO), but he at least went to the trouble of trying to pretend the elements belonged together.

There is far too much these days in movie, TV, and comic production that takes as its premise, “Hey, what if …”, but that’s not enough to warrant a project—though if you’re famous (i.e., if you have the established authority in the field), then you can shove your steaming heap down everyone’s throat as if it amounted to something.

The postmodern gesture, which seemed to liberate us from the onerous task of being familiar enough with the edifices of culture to be able to participate in the conversation of the culture via our own works of art, and which offered us the chance (or the obligation) to pick our material not necessarily from “high” culture but from everywhere, has turned out to be a trap. When you can pick anything and have it mean anything, this actually increases the demands on what is selected and how things are assembled, but the very looseness of the criteria for selection ends up not generating a rigor of effort on the part of culture producers.

Endnotes

[1] Smith, J (2008). RASL: the drift. Columbus, OH: Cartoon Books, pp. 1–112.

[2] (from here): the passage continues, “The six corporations that collectively control U.S. media today are Time Warner, Walt Disney, Viacom, Rupert Murdoch’s News Corp., CBS Corporation and NBC Universal.  Together, the “big six” absolutely dominate news and entertainment in the United States.  But even those areas of the media that the “big six” do not completely control are becoming increasingly concentrated. For example, Clear Channel now owns over 1000 radio stations across the United States. Companies like Google, Yahoo and Microsoft are increasingly dominating the Internet.”

[3] Shannon (1948) calculated that:

The redundancy of ordinary English, not considering statistical structure over greater distances than about eight letters, is roughly 50%. This means that when we write English half of what we write is determined by the structure of the language and half is chosen freely. The figure 50% was found by several independent methods which all gave results in this neighborhood. One is by calculation of the entropy of the approximations to English. A second method is to delete a certain fraction of the letters from a sample of English text and then let someone attempt to restore them. If they can be restored when 50% are deleted the redundancy must be greater than 50%. A third method depends on certain known results in cryptography (14, from here)

To provide a simple example of this, if I tell you I will transmit a five-letter word to you, after I have transmitted “KNOC” there is (in the domain of standard English words) only one letter that I can transmit next. As such, because you know that the next letter must be “K”, actually to transmit it to you would be gratuitous, except that it confirms the likely correctness of the preceding four letters. In a less benign sense, to receive the same news story from five seemingly different sources serves to create the impression that the story must be true (whether it is or isn’t).
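As a rough sketch (mine, not Shannon’s, and assuming Python) of the first of Shannon’s methods above, calculating the entropy of approximations to English, the following estimates redundancy from single-letter frequencies alone; it necessarily comes in well under 50%, since it ignores all structure across sequences of letters, of the kind the “KNOC” example trades on.

```python
import math
from collections import Counter

def letter_redundancy(text):
    """First-order estimate: 1 - H/H_max, where H is the entropy of the
    observed letter frequencies and H_max = log2(26) treats all letters
    as equally likely."""
    letters = [c for c in text.lower() if c.isalpha()]
    n = len(letters)
    h = -sum((k / n) * math.log2(k / n) for k in Counter(letters).values())
    return 1 - h / math.log2(26)

# A short sample only gestures at the true figure; Shannon's ~50% also
# counts structure across letter sequences (spelling, grammar), which a
# single-letter tally like this cannot see.
sample = ("when we write english half of what we write is determined "
          "by the structure of the language and half is chosen freely")
print(f"single-letter redundancy estimate: {letter_redundancy(sample):.0%}")
```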

[4] Logan, G. (1988). Toward an instance theory of automatization. Psychological Review, 95(4), 492-527. doi:10.1037/0033-295X.95.4.492.

Contrasting instance versus strength theories, Logan (1988) notes:

In instance theories, memory becomes stronger because each experience lays down a separate trace that may be recruited at the time of retrieval; in strength theories, memory becomes stronger by strengthening a connection between a generic representation of a stimulus and a generic representation of its interpretation or its response (494).

[5] Clemons, L. K. (1989). Degrees of implementation of multisensory reading instruction by teachers involved in naturalistic research (Record of Study). Ed.D. dissertation, Texas A&M University, United States — Texas. Retrieved from Dissertations & Theses: Full Text. (Publication No. AAT 9015435).

[6] Every time I cite another author, I too am not (at least not apparently) speaking only for myself but am invoking a putatively established authority and expecting you to take seriously whatever point I’m making. At a minimum, I appear to be saying, “See, I’m not the only one who thinks so.” One might just as well say si l’autorité n’existait pas, il faudrait l’inventer.

[7] Gounder, CKR (Sri Swami Satchidānanda) (1988). The living Gita: the complete Bhagavad Gita, Buckingham, VA: Integral Yoga Publications.

The full passage here reads:

Isn’t it sweet to study the Gita? But please remember that the entire Gita is right there in front of you. The best book to read is the book of life. With that book, you will be constantly learning everything. Written scriptures are only here to show that since they also say the same things, we can trust our experiences: “Yes, here in the Bhagavad Gita, Lord Kṛṣṇa also said the same thing. Okay, then probably it must be right.” We want an authority to confirm our experiences. Scriptural study is good for confirming our convictions (31).

[8] This is so whether Nikola Tesla himself became the prototype for the type, became so only toward the end of his life, or neither of the above.

[9] “No personality in the history of science has been pushed further into the realm of mythology than the Serbian-American electrical engineer Nikola Tesla” (¶1, from here).

[10] See here.

[11] If one wants to say that the first involves Smith’s readers and the second involves Smith’s work, the intersection of the two is where Smith, as a fan of his own work (i.e., to the extent that he takes it to be thoughtful, profound, or intellectual), allows that to cripple the imaginative fiction of his work.

[12] Delany, S (2009). The jewel-hinged jaw: notes on the language of science fiction. Middletown, CT: Wesleyan University Press.

[13] Usually with the further constraint that there is no traveling between worlds and/or that no other characters travel between worlds. Even in our banal everyday world, however, people insist on believing in the parallel planes of Heaven and Hell, so their presence in so-called naturalistic and realistic fiction (and that angels or demons move back and forth between those other dimensions) isn’t necessarily taken as moving yet into the realm of science fiction, fantasy, or schizophrenia.

[14] The one major exception is, of course, the emotional entanglements possible in encountering aspects of more than one life of oneself (if not another self). Often, as in a movie like Multiplicity, the issue simply devolves to sorting out the problem of clones and/or “which one is real”. And what amounts to the “mere inconvenience” of the different details in the parallel world for the main character tends to occupy the greatest portion of narrative space—his wife doesn’t love him in the other world, or didn’t commit suicide, &c. Or the Third Reich didn’t get defeated, &c. No doubt somewhere there is the story that traces the effect on the people who are left behind in the slider world after the TV series’ hero has passed on to the next episodic slider world.

[15] Like so many witty sayings on the Internet, it is unclear who really said this—perhaps a cleaned up paraphrase of Edward Gibbon or the more usual suspect, Seneca (the Younger). The very fact of this mystery is itself a case of si l’autorité n’existait pas, il faudrait l’inventer and trying to get to the root of the matter may be interestingly seen on display here.

[16] This isn’t all Clarke gets backward. Regarding the meme and its variations that he invented telecommunication satellites, he did at least write an article about it.

“It is not clear that this article was actually the inspiration for the modern telecommunications satellite. According to John R. Pierce, of Bell Labs, who was involved in the Echo satellite and Telstar projects, he gave a talk upon the subject in 1954 (published in 1955), using ideas that were “in the air”, but was not aware of Clarke’s article at the time. In an interview given shortly before his death, Clarke was asked whether he’d ever suspected that one day communications satellites would become so important; he replied: ¶ I’m often asked why I didn’t try to patent the idea of communications satellites. My answer is always, ‘A patent is really a license to be sued.’ ¶ Though different from Clarke’s idea of telecom relay, the idea of communicating with satellites in geostationary orbit itself had been described earlier. For example, the concept of geostationary satellites was described in Hermann Oberth’s 1923 book Die Rakete zu den Planetenräumen (The Rocket into Interplanetary Space) and then the idea of radio communication with those satellites in Herman Potočnik’s (written under the pseudonym Hermann Noordung) 1928 book Das Problem der Befahrung des Weltraums — der Raketen-Motor (The Problem of Space Travel — The Rocket Motor), sections: Providing for Long Distance Communications and Safety and (possibly referring to the idea of relaying messages via satellite, but not that 3 would be optimal) Observing and Researching the Earth’s Surface, published in Berlin. Clarke acknowledged the earlier concept in his book Profiles of the Future” (emphasis added).

This is a rather gross piece of smarmily claiming credit for what one never did—given Clarke’s doddering age at the time, it might be a case of him having come to believe his own hype—but it also points to the kind of Tesla-think that mistakes (in the real world) an idea for being co-equal to the actuality. The correct answer should have been, “Don’t be silly. You don’t patent ideas.”

[17] At one point, it seems that the protagonist must wait before he drifts again, but soon after goes ahead and does so anyway. Maybe this is supposed to suggest his resilience to grueling exertion, but it reads merely like Smith introducing a “rule” in the world that he then violates because it’s no longer convenient to adhere to it. As for why he can do this—if it isn’t just that he “sucks it up” and gets tough about it (in which case his earlier remark about not drifting again in such short order was narratively sloppy)—it seems clear that there is some mysterious, i.e., magical, reason; that is, the author won’t have a better explanation than, “well, that’s just how it happened; what can I say”. Or, worse, “well, when there’s such and such a quantum flux, then you can sometimes slip through again without …” &c.

[18] (from here):

I was gonna call this post “Why RASL kicks ASSL”, but if you say that a little too fast it just sounds wrong, so I went with the boring “A Short Appreciation of…” Anyway, RASL, by the always impressive Mr. Jeff Smith, is a book anyone with an itch for good ol’ hard, Asimov-Clark [sic] science fiction should be drooling over. Not only is it a slam-bang awesome thriller, but it incorporates factual science history, not to mention a great succinct survey of Nikola Tesla’s life and achievements (and, quite frankly, whom [sic] with even a smidgen of interest in the history of science isn’t fascinated by that guy?) woven seamlessly into the fictional fold. This one is even more applause-worthy being that it is Smith’s creator-owned follow-up to Bone, one of the great landmark masterpieces in the history of comics, and manages to stand on its own two feet. The story is of the titular parallel universe jumping physicist-turned-art thief’s battle to keep an ultimate weapon from the hands of a government who does not possess the appreciation for the immeasurable destructive power that the weapon could unleash. All of the science in this book, while applied fictitiously, is based on the actual research and theories of Nikola Tesla. Now that the series is nearly over, it would probably be difficult to find it in issues, but Jeff Smith is never stingy with the collections (along with pretty cool bonus material), and I’m sure there will be a “complete edition” in the near future.

[19] Details here, of course.

[20] In this volume of the series, at least, the previous owner of the image of the “man in the maze” is described as Pima (and also Hopi, by someone who seems to be a curator of a “Mazes in Native American Art of the Southwest” exhibit). Since proximity of people to one another does not preclude important differences, whatever the similarity, I’d want to be careful about treating the man in the maze as the same amongst the anthropologically lumped together “Pima” (details here). Moreover, to whatever extent we may construe “Hopi” and “Pima” as proximate to one another—and in general the “Pima,” i.e., the various O’odham people, can communicate with one another despite dialectical differences—the Hopi and Pimic languages are in distinctly different language groups (see Campbell, p. 136).* The issue, to stress again, is not what can be or has been established as factual in some framework, but what a cultural producer takes to be factual when co-opting a marginalized group. Certainly in this first volume, the man in the maze has almost no significance narratively; it is barely a symbol, one which in one frame (p. 96) substitutes the name of a museum exhibition in the book—“the Maze of Life”—for the O’odham peoples’ name for the symbol: amongst the Tohono O’odham people, Iʼitoi denotes the mischievous creator god who resides in a cave just below the peak of Baboquivari Mountain, known to the Hia C-eḍ O’odham people as Iʼithi. The Akimel O’odham (synecdochically referred to as the Pima) refer to this creator god also as Se:he, or Elder Brother. So, even in its original significance, the image of a sacred tradition becomes merely a signpost for life as a maze. I’m not merely being indignant on behalf of the various O’odham people by saying this. The artist’s task is to co-opt material from the world, wherever we find it, and rework it. There’s no reason to believe that one of the groups of the O’odham people might not, at some point, have co-opted their own neighbors’ imagery to their own artistic/spiritual purpose. What I want to emphasize or look at, as in this post generally, is the social meaning of such co-optation.

*Campbell, L (1997). American Indian languages: the historical linguistics of Native America, New York: Oxford University Press.

[21] “Moore writes that he did not accept Knight’s theory at face value (and he echoed the then-growing consensus that such claims were likely hoaxes), but considered it an interesting starting point for his own fictional examination of the Ripper murders, their era and impact” (from here).

[22] Bernal, M. (1987). Black Athena: the Afroasiatic roots of classical civilization. New Brunswick, N.J.: Rutgers University Press.

[23] Talageri, SG (1999). The Rigveda: a historical analysis. New Delhi: Aditya Prakashan.

[24] Spencer and Gillen (1904) describe at one point how change may be introduced into an otherwise typically very formalistic setting amongst the aboriginal people they observe. When an innovation occurs, or would be proposed, the older men confer about it, and perhaps state that the variation of established practice shall be acceptable or not. Once established, the innovation itself may be implemented, and whether it catches on and sticks becomes part of the material life of the idea. The innovator may need to keep it alive, or it might die with him; the elders may ensure that it gets taken up as a general habit in any given cultural context; or another member of the group may pick it up as well, so long as the elders continue to permit it to occur. What is clear from all of this, at least in a culture where there are clearly delineated lines of power, is exactly how authority begets authority.  Wherever it originated, those times are lost to historical memory and the “we’ve always done it that way” rule is in play. Old men indoctrinate new men in the moral absolute to obey them—to obey older men generally—and that reëstablishes the recurrent cycle.

[25] My apologies to any cartoonists in Nauru, both for any insult and for the drift of my mind that hoped Nauru (as the country with the world’s smallest population) might be a likely place where there are currently no cartoonists.

[26] Never mind whatever might be specifically meant by this generic mash-up.

[27] Prove me wrong and the strength of my argument increases, since to do so would only be to point out the faults of the books in question.

[28] The sentence quoted above is the last of the reviewer’s review. The first part of it reads:

“RASL is everything comics should be. Pulse-pounding, articulate, character rich, plot smart, and, best of all, gorgeous. Jeff Smith proves yet again that he’s the best there is at what he does.”

Disagreeing that it is pulse-pounding or gorgeous would involve a broader conversation about criteria of judgment than I’m willing to engage for this book, but the issues of articulate, character rich, and plot smart are not so easily tossed into the de gustibus pile. The clearest lapse of “character rich” involves the utterly inconsequential part played by females, who manage to be mostly naked a lot of the time anyway. The first death of Annie, which is undercut by the existence of an infinite number of her, serves no purpose in the plot except for that most conventional and boring gesture of “the dead girlfriend who drives our hero to” whatever … in this case, not to much. But far and away the most egregious fault of the book is precisely its often garishly bad exposition, a major fault in a lot of science fiction. From pp. 55–7, for instance, a host of improbable exposition is deployed, and this is not the only example.

[29] It seems more gratuitous than uncharitable to point out the grammatical error—a serving up of as much soups as nuts—but when the word “brains” is involved in the error it’s hard to ignore. If it were simply a typo, I’d’ve let it stand, but it doesn’t seem so.

[30] Arguably unlikely.

[31] PS: some of my favorite movies are Japanese animation, so the issue here is specifically serialization not animation itself. I find soap operas ridiculous as well along with the German baroque novel.

[32] I feel like I must say again that it doesn’t matter what Tesla’s purpose in these writings is. It may perfectly well be the case that foolish people are mistaking Tesla’s speculations for “serious cogitations”. If you happen to encounter someone who informs you he’s invented a perpetual motion machine (or the design for one), ask if he’s familiar with Tesla’s work, and the answer will frequently be yes.

[33] I’m not ignoring that the versions we have in English may be wretched translations or otherwise wholly incoherent things that actually misrepresent what is going on.

Summary

The adequate refutation of intelligent design may be borrowed from Alyosha in Dostoevsky’s The Brothers Karamazov (“that can’t be so”) or Pauli (“It’s not even wrong”), so there’s no non-masturbatory point in engaging the gesture, except to call our representatives in Congress and make sure the ideologists stay shut down. The broader, less narrowly religious, question of whether there is a teleological argument for a designer is not as much beside the point, at least to the extent that it opens up on our descriptions of the experience of (human) consciousness in the cosmos. This is an essay in the true sense of the word, an exploration of a topic rather than a finished, polished, determined exposition of it that you are supposed to robotically consume and integrate. Consequently, it’s lengthy. Ultimately, our capacity as nontrivial machines should mean we can surprise even an omniscient designer, but whether or not that is true, our capacity to surprise our species and each other will be, it seems to me, one of the main routes out of the current human condition that condemns us to individual and collective death on this planet. If we’re going to survive in the long run, Designer or not, our apparent capacity to exceed the limits of our design (i.e., what we describe as the basis of our existence) becomes crucial to that existence. And that’s what this exploration is fundamentally about.

Contents

Part I – Dialogues

  1. Fictional
  2. Metafictional

Part II – Distinctions

  1. Trivial & Nontrivial Machines
  2. The Existential Boon of Nontriviality
  3. Unexpected vs. Surprising
  4. The Social Costs of Trivializing the Nontrivial
  5. Trivial Nontriviality & Nontrivial Triviality

Part III – Discussions

  1. The Anthropic and Divine Limits of Omniscience
  2. Are Human Beings Trivial Or Nontrivial to a Designer?
  3. Consciousness, Feedback, & the Design of Triviality
  4. Unintelligent Design & Discourses on Nature
  5. Cognition & Nonliving Systems
  6. Conflict & Contradiction
  7. Unpredicted vs. Unpredictable
  8. A Theodicy for Intelligent Design
  9. Freewill, Teleology, & Nontriviality
  10. The Design of Intelligence and the Intelligence of Design
  11. Sentience, Sapience, & Self-Domestication
  12. Necessity & Intelligent Design
  13. Hierarchy, Self-Consciousness, & Intelligent Design

Part IV – Dénouement

  1. Conclusion
  2. References
  3. Endnotes
Part I – Dialogues

Fictional[1]

The following excerpts a discussion between an ultrasapient robot (Roy) and the inhuman Lucifer regarding von Foerster’s distinction of trivial and nontrivial machines as presented in a lecture to the Author’s Colloquium in honor of Niklas Luhmann on 5 February 1993 at the Center for Interdisciplinary Research, Bielefeld.[2]

ROY: von Foerster (2010)[3] asserted: “I come to my proposition: 01. Trivial machines: (i) synthetically determined; (ii) independent of the past; (iii) analytically determinable; (iv) predictable. A trivial machine is defined by the fact that it always bravely does the very same thing that it originally did. If, for example, the machine says it adds 2 to every number you give it, then if you give it a 5, out comes a 7; if you give it a 10, out comes a 12; and if you put this machine on the shelf for a million years, come back, and give it a 5, out will come a 7; give it a 9, out will come an 11. That’s what’s so nice about a trivial machine.”[4]

LUCIFER: jump five paragraphs[5]

ROY: Now you understand the great love affair of western culture for trivial machines. I could give you example after example of trivial machines. When you buy an automobile, you of course demand of the seller a trivializations-document that says this automobile will remain a trivial machine for the next 10,000 or 100,000 kilometers or the next five years. And if the automobile suddenly proves to be unreliable, you get a trivializateur, who puts the machine back in order. Our infatuation with trivial machines goes so far that we send our children, who are usually very unpredictable and completely surprising fellows, to trivialization institutes, so that the child, when one asks ‘how much is 2 times 3’ doesn’t say ‘green’ or ‘that’s how old I am’ but rather says, bravely, ‘six.’ And so the child becomes a reliable member of our society.[6]

LUCIFER: Go on.[7] The next two.[8]

ROY: 02. Non-trivial machines: (i) synthetically determined; (ii) dependent on the past; (iii) analytically determinable; (iv) unpredictable. Now I come to the non-trivial machines. Non-trivial machines have ‘inner’ states (Figure 3). ¶ In each operation, this inner state changes, so that when the next operation takes place, the previous operation is not repeated, but rather another operation can take place. One could ask how many such non-trivial machines one could construct if, as in our case, one has the possibility of 24 different states. The number of such possible machines is N_24 = 6.3 × 10^57. That is a number with 57 zeroes tacked on. And you can already see that some difficulties arise when you want to explore this machine analytically. If you pose a question to this machine every microsecond and have a very fast computer that can tell you in one microsecond what kind of a machine it is, yes or no, then all the time since the world began is not enough to analyze this machine.[9]

LUCIFER: Stop right there.[10] ‘6.3 × 10^57. If you pose a question to this machine every microsecond and have a very fast computer that can tell you in one microsecond what kind of a machine it is, yes or no, then all the time since the world began is not enough to analyze this machine.’[11] And that’s still true today,[12] 22 billion years later.[13] ‘4 outputs, 4 inputs, and 4 inner states … Forget it! I’ll tell you how many possibilities you have: 10^126.’[14] 3.1104 trillion years (1 year of Brahma).[15] A gigaparsec (19,177,613,424,500,000,000,000 miles).[16] Planck length (1.61619997×10^-35 meters) … 10^-20 of the diameter of a proton.[17] Planck time (5.3910632×10^-44 seconds)[18] … Entre nous,[19] (sotto voce) the smallest time interval … directly measured[20] (at the time)[21] was … 12 attoseconds (12 × 10^-18 seconds)[22] or a million billion billion[23] times larger than the Planck time[24] (più sotto) (you’ve measured smaller[25] since then … but … still nowhere close)[26] (normally again) 300 sextillion stars in the universe,[27] 10^10 neurons[28] … ‘Nothing is so amusing, nothing so stimulates and excites the brain as do large numbers’[29] says your[30] Marquis de Sade.[31] So, in spite of the fact that[32] all the time since the world began is not enough,[33] I can predict now[34]—that is, I already know[35]—what you will say next.[36]

Metafictional

Further on, other arguments will be raised against various kinds of weakenings of the claim that an intelligent Designer would also necessarily be all-knowing, all-loving, and all-powerful. Moreover, there is no reason to insist upon an equation between an intelligent Designer and any of the biblical or Qur’anic deities of intolerant monotheism, except that in the final analysis it seems that’s exactly who the ID proponents insist on if asked to “name” who the intelligent Designer is in whatever scheme they are proselytizing.

Obviously, any claim to an intelligent design presupposes claims about the intelligent Designer. Amongst people with nothing better to do but no religious ax to grind, one encounters a variety of proposals that are not only religiously heterodox but that could not ever be crammed into a form acceptable to proponents of intolerant monotheism. More generally, one will tend to find wide agreement that even the very notion of an intelligent Designer is ultimately not reconcilable with Yahweh, Allah, or Jehovah, or their ilk. So on the one hand, there’s a certain pointlessness in belligerently “proving” that an identity between an intelligent Designer and Allah, Yahweh, or Jehovah is untenable. Luckily, that is not the main point of this essay.

In a very broad way, one could identify three or four forms of the intelligent design argument: (1) where the intelligent Designer is identified with one of the deities of intolerant monotheism, (2) where the intelligent Designer is construed as an omniscient, omnipotent, and omnibenevolent being of some sort, understood in an intolerant monotheistic sense or not, (3) where the intelligent Designer has a qualification on one or more of the “alls” with respect to being all-powerful, all-loving, and/or all-knowing, or (4) some variety of science fiction fantasy that has essentially no commitment to science, religion, or sense.

In what follows, I generally work from framework (2) or (3), since frameworks (1) and (4) are essentially empty exercises; being not even false, there is little in either to address seriously. This is especially true, given that the notion of intelligent design as it plays out in our social world is one accompanied by an insistence upon the reality of the intelligent Designer. It is to never lose sight of this fact that I frequently refer to the intelligent Designer as the deities of intolerant monotheism[37]. If what were at stake were merely an explanatory principle—a description of lived human experience such as we find in the domain of psychology—then it might be that argument (4) could have significance.[38] But merely to debunk all four arguments cannot be the point of this essay, since in one sense, “There is no intelligent design” is as adequate a response as the premise warrants. That which is transcendental (that which is, by hypothesis, outside of my lived experience) is the domain from which the whole body of symbols (of human experience) has sprung, which religion in various forms and to various degrees of ignorance has coopted. I read once that Ayn Rand gave herself the project to reclaim from religion all of the human experiences that had been annexed to it, which is as fine a human project as one could want, whatever else might be said or has been said about Rand. In these days of heady science, and particularly the resurgence of technocratic optimism that is accompanying the wider and wider elaboration of neoliberalism domestically and abroad (in the form of wars and economic exploitation), it is equally necessary now to rescue the human from its cooptation in the other naïve realist doxa of the western world: science.

Radical constructivism offers us an alternative epistemology. Second-order cybernetics, or something akin to it, provides us a methodology by which to negotiate the world of experience without resorting to either the religion of science or the science of religion. And Eastern philosophy provides a critical method, without obliging us to necessarily accept its metaphysical claims. This essay, then, cannot function trivially and only as another refutation of a pathetic and pseudo-intellectual attempt at hiding religious proselytizing in a dignified wrapper. It is, finally, a meditation on the ongoing challenge facing all of us to describe our worlds towards encountering ways to enact ourselves as intelligent Designers. By reading the stupidities of “God,” so to speak, we may find a way into our future currently blocked by our own limitations.

Part II – Distinctions

Trivial & Nontrivial Machines

From a human standpoint, it is clear that humans must appear (to humans) as nontrivial machines. von Foerster’s distinction, however, does not claim humans are not deterministic (i.e., nontrivial machines are still synthetically determined). This means that, given any particular input and whatever conditions prevail “in the machine” at that moment, there will be exactly one possible output.

More concretely, if we take any three hand-held calculators and provide the input of “three times two,” then the prevailing conditions inside each calculator (assuming it’s operating as expected) will in all three cases yield the exactly specifiable output “six”. If we take any three English-speaking, non-deaf humans and provide the same input (what is “three times two”), then the prevailing conditions inside each human (provided each heard the question) will yield a set of outputs that cannot be exactly specified in advance. In von Foerster’s example, he provides three possible outputs in “green,” “how old I am,” and “six”. We might also imagine three other outputs (equally based upon prevailing conditions in each person): “That’s six,” “um, let’s see … six,” and “that’s easy, six.” By convention, we may ignore the “extra” words in these outputs (i.e., “that’s” and “um, let’s see” and “that’s easy”) and focus only on the expected part of the answer (“six”). Nevertheless, all three (verbally identical) outputs by the calculators and all three (verbally nonidentical) outputs by the human beings are synthetically determined.
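A minimal sketch may make the distinction concrete; the particular state-update rule below is an illustrative assumption of mine, not von Foerster’s:

    # Trivial machine: a fixed mapping from input to output, forever.
    def trivial_machine(x):
        """f(x) = x + 2 — the same answer today, tomorrow, and in a million years."""
        return x + 2

    # Nontrivial machine: the output depends on an inner state that
    # changes with every operation, so the machine is history-dependent.
    class NontrivialMachine:
        def __init__(self):
            self.z = 0  # inner state

        def step(self, x):
            y = (x + self.z) % 10       # driving function F(x, z)
            self.z = (self.z + x) % 4   # state-transition function Z(x, z)
            return y

    m = NontrivialMachine()
    print(trivial_machine(5), trivial_machine(5))  # 7 7 — identical forever
    print(m.step(5), m.step(5), m.step(5))         # 5 6 7 — depends on the past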

This range of response (in human beings) is so normal that we hardly note the strangeness of it. By contrast, no matter how many different matches you light near however many open cans of gasoline, barring some rare intervening phenomenon, all of the gas cans very predictably (and often tragically) will explode. The gasoline is, as von Foerster notes, reliably trivial. Human beings, then, are reliably nontrivial, which is not to say one can never predict what they will do. Mothers rely upon the past to decide what cakes to make for future birthdays; nevertheless, the day may come when the child, after years of devouring chocolate cake with blissful abandon, asserts with equal stubbornness and certainty, “I hate chocolate cake.” Or again, the bully, who cockily continues to pick on the same kid day after day, finally one day finds himself beaten violently, perhaps even to death, when the abused child finally fights back.

So, a nontrivial machine retains the capacity to surprise, even when for long periods it appears to behave reliably (predictably). Nonetheless, the prevailing conditions within each nontrivial machine (human or otherwise) make a one-to-one correspondence between input and output a certainty. As such, on this day at this time in this place this person will laugh at a given joke, someone else will frown in disapproval, and yet a third will do something else still.

To refer to human beings as nontrivial machines may seem obscurely insulting. And yet, von Foerster’s distinction offers—or perhaps this is a consequence of the distinction—a curious defense for the notion of elemental human dignity.

The Existential Boon of Nontriviality

Von Foerster’s intellectual heritage included engineering and cybernetics, which particularly emphasized an analogy of machine to describe both artificial (human-built) and nonartificial (living, human) systems. From this analogy, along one path that led away from developments in cybernetics per se, arose cognitive psychology and artificial intelligence research, which generally shared a “computer model” of the mind (or of intelligence). Here, intelligence became (literally) calculating, and the ones and zeros in computer “thinking” were analogized with the on/off firing of neurons in the human brain, conceptualized (more or less—the history is more complicated than a simple summary can cover) as a computer. Findings from both disciplines swapped concepts and metaphors in an effort to come to terms with what can now, at this juncture, with some fairness be called a dead end,[39] but that history will not illuminate why this model has fared so badly. What is lacking is an exact number. Von Foerster reports:

W. Ross Ashby, who worked with me at the Biological Computer Laboratory, built a little machine with 4 outputs, 4 inputs, and 4 inner states, and gave this machine to the graduate students, who wanted to work with him. He told them, they were to figure out for him how this machine worked, he’d be back in the morning. Now, I was a night person, I’ve always gotten to the lab only around noon and then gone home around 1, 2, or 3 in the morning. So I saw these poor creatures sitting and working and writing up tables and I told them: “Forget it! You can’t figure it out!”— “No, no, I’ve almost got it already!” At six A.M. the next morning they were still sitting there, pale and green. The next day Ross Ashby said to them: “Forget it! I’ll tell you how many possibilities you have: 10^126 (312).

The age of the universe is approximately 4.32329886 × 10^16 seconds. If—starting at the very birth of the universe—we analyzed the output of Ashby’s little machine (with 4 inputs, 4 outputs, and 4 internal states), making a guess, yes or no, once per duration of Planck time (i.e., every 5.3910632×10^-44 seconds, currently the theoretically shortest period of time we would ever be able to measure[40]), then by now we would have tested only about 8 × 10^59 of the possible machines—a vanishingly small fraction (on the order of 10^-66) of the 10^126 possibilities. Finishing the task would take more than 10^66 ages of the universe. von Foerster describes this in-principle accomplishable task as “transcomputational”:

I won’t demonstrate how [this] analytical problem is unsolvable in principle but rather only an easier version, namely, that all the taxes in the world and all the time available in our universe would by no means be sufficient to solve the analytical problem for even relatively simple “non-trivial machines”: the problem is “transcomputational,” our ignorance is fundamental (309, emphasis in original).

In contrast to Ashby’s 4-input, 4-output little machine with 4 internal states, human beings have orders of magnitude more inputs, outputs, and internal states, so 10^126 possibilities is an entirely minuscule representation of the total complexity involved in even a single human being.
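For the skeptical, the arithmetic behind these figures can be checked in a few lines, using the same values quoted above for the universe’s age and the Planck time:

    # Back-of-the-envelope check on the transcomputational claim.
    age_universe_s = 4.32329886e16   # age of the universe in seconds (as quoted)
    planck_time_s = 5.3910632e-44    # Planck time in seconds (as quoted)

    guesses = age_universe_s / planck_time_s   # one guess per Planck time
    machines = 10.0 ** 126                     # possible 4-in/4-out/4-state machines

    print(f"guesses made: {guesses:.2e}")                      # ~8.02e+59
    print(f"fraction analyzed: {guesses / machines:.1e}")      # ~8.0e-67
    print(f"universe-ages to finish: {machines / guesses:.1e}")  # ~1.2e+66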

As such, although human beings (and thus living systems in general) are synthetically determined, we remain forever beyond the pale of any hope of describing ourselves (on a computer model notion of the brain or the like) in trivial terms. We may try to treat people like calculators, but the analogy can never fit, except by violence. Thus, the paraphrase of Hume’s argument (“it is uncertain if we have freewill, but absolutely certain we must believe we do”) receives a rational foundation insofar as no explanatory heuristic, principle, or approach permits the elimination of our sense of freewill in our lived lives.[41] An intelligent Designer, of course, might not be limited in this way.

Unexpected vs. Surprising

Speaking loosely, a difference between trivial and nontrivial machines concerns how the latter, if given enough time, will surprise us, i.e., will behave in a way that we not only had not foreseen but could not have foreseen.

Trivial machines appear to do this when they break; that is, colloquially we might express surprise when a (trivial) machine operates (or fails to operate) in an unexpected fashion. When a car does not start, this unpleasant surprise represents a possibility we did not expect on that day, at that time, and in that place—if we had, we would not have tried to start the car at all (except perhaps out of a spirit of desperate hoping). Even so, this surprising (and unpleasant) development is not entirely unexpected. We’re already primed to expect (and accept, if ungraciously) an occasional mechanical problem with a car, but even then we trust that the range of those mechanical problems has more or less trivial causes that mechanics can diagnose and repair.

I use the words startling or unexpected to describe such quasi-nontrivial behavior of trivial machines, as distinct from surprising, which describes (potential) behavior in nontrivial machines.

Trivial machines can and necessarily do wear down—metal rusts, rubber degrades, solder erodes, &c. When the materials of a machine’s structure are sufficiently compromised, a defect or breakdown occurs and (because we are nontrivial machines ourselves) we then tend to think of this breakdown in nontrivial terms; that is, we project backwards, imagining an initial condition where all of the materials were young, fresh, and intact, after which some sort of destructive or entropic process leads finally to the material’s defeat. But this development should, by the above distinction, be called unexpected or startling, not surprising.

Von Foerster asserts trivial machines are not historically dependent, while nontrivial machines are. What this means for the present context is that the operation of a trivial machine in its given current state does not depend on previous states. The calculator, which a moment ago was asked the square root of 64, takes no account of that (or any previous question) when asked now what the square root of 25 is. On this view, a car may seem nontrivial, insofar as its not starting today seems to be because we used up all of its gas yesterday. That, however, is an artifact of our viewing—the car does not fail to start because yesterday there was gas or even because a moment ago there was no gas; it doesn’t start because right now at this moment there is no gas. In practical terms, a machine’s wear is an expected (and designed for, see below[42]) aspect of its operation, but this doesn’t make its operation any less trivial (or nontrivial in the case of nontrivial machines).

A nontrivial machine, in contrast, has a capacity to genuinely surprise us. Were we to ask a calculator a million times for the square root of 64 and every 109,234th request it answered “19” (rather than “8”), we would be dealing with something that at the very least seems to exhibit something more like nontriviality, especially if we cannot determine (like the mechanic) how or why the calculator, every 109,234th request, rebels against the boring repetition of our question and—full of the devil suddenly—offers a wrong or at least unexpected answer.[43] With this in mind, were we to loan the calculator to someone, who then happens to be told that the square root of 64 is 19, we’d likely casually reply, “Oh yeah, it does that sometimes.” Our inability to be more precise about it than that conveys something more of the sense of surprise in nontrivial machines[44].
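A quick sketch of this hypothetical misbehaving calculator may be useful; the detail worth noticing is that implementing the quirk requires an inner state (a counter), which is exactly what makes a machine history-dependent, and so formally nontrivial, in von Foerster’s terms:

    import math

    class QuirkyCalculator:
        """Answers correctly, except on every 109,234th request."""
        def __init__(self):
            self.count = 0  # inner state: without it, the quirk is impossible

        def sqrt(self, x):
            self.count += 1
            if self.count % 109_234 == 0:
                return 19            # the perverse, "devilish" answer
            return math.isqrt(x)     # the expected answer: sqrt(64) -> 8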

How we can talk about whether a machine is trivial or nontrivial involves two major assumptions. First, and most importantly: knowing in advance whether we designed and built a trivial or nontrivial machine (whether mechanical or biological) makes it a matter of mere certainty to state which a machine is, although defects or emergent properties in our designs may still lead to startling or surprising discoveries in those machines. Second: for machines (apparently artificial or not) that we did not design but encounter in the world—whether because we did not design the machine personally, because we cannot reasonably discover who the designer of the machine is (someone who would know with certainty whether the machine was trivial or not), or because we know with certainty that the machine was not designed by us at all (e.g., all living systems in Nature)—the amount of time spent observing the system provides a significant input to the sense of whether a machine is (likely) trivial or nontrivial. Thus, because a calculator is a known trivial machine, the seemingly willful perversity of the 109,234th iteration may be perfectly explicable, particularly to an expert like an electrical engineer, even if it has the “feel” of nontriviality. Similarly, the most robotic daily routine of a human being may become almost absurdly predictable, and yet the (assumed) sense of a capacity in the person to break out of their rut will (likely) never wholly evaporate.

This means that the trivial machine’s “surprise” and the nontrivial machine’s “predictability” differ fundamentally from the predictability of trivial machines and the surprise of nontrivial machines.

When a trivial machine “surprises” us (i.e., “shows us a side of itself we never imagined before”), we can assimilate this new side into our current understanding of the machine. We still understand the calculator as a calculator, i.e., we can still explain the calculator according to our previous paradigm—our old explanatory scheme (of “calculator”). This “new side” then appears as simply a new “feature” of the calculator, an adjustment to our understanding that gives us a “broader” or “deeper” understanding of how the calculator operates. By contrast, when instead a nontrivial machine surprises us (i.e., “shows us a side of itself we never imagined before”), we cannot assimilate this new side into our current understanding. In fact, it exposes the inadequacy of our current understanding, so that we must instead accommodate this “new side” in a changed explanatory paradigm entirely. Similarly, when a nontrivial machine is predictable, it is not simply that this must be in terms of probabilities rather than certainties (trivial machines may have only probable outputs as well, e.g., a roulette wheel), but that the probabilities and certainties themselves are unpredictable.

In the case of the trivial machine’s “surprise,” then, this has the consequence of offering us the chance to adjust our understanding of the machine; it offers a moment where our already existing knowledge expands. Put in negative terms, this is when a “defect” in the designed triviality of a machine appears, although even this defect in principle provides a learning moment for designing (in the future) new trivial machines that don’t express the “defect” the first has uncovered. In the case of the nontrivial machine’s “surprise,” by contrast, it can never be a question of removing the capacity for surprise entirely. At most, if a nontrivial machine never surprises us, it is only because it (or we) did not last long enough to encounter that surprise. This still claims too little. As the previous examples of answers to the question “what is three times two” make clear[45], we continuously, even relentlessly, ignore the nontrivial aspect of human responses in our enthusiastic attention to what we consider to be the “point” of the question in the first place, the answer.

On the face of it, it seems the distinction between unexpected and surprising should be straightforward and easily made, but the range of phenomena covered by the generality of the concepts (from roulette wheels to calculators to dogs to human beings, from stars to Fate) apparently blurs the clarity, for just as frequently we may categorize these (surprising or unexpected) outputs from trivial and nontrivial machines (i.e., things we take to be systems) and account for them either as trivial—as little more than a calculator’s perversely wrong (but ultimately quite predictable) defect—or as nontrivial—as the subtle and vaguely hostile machinations of a cruel (but ultimately quite unpredictable) Fate. We hear this from dog owners after their beloved pet suddenly, surprisingly, kills a neighbor’s cat. For at that moment, it is not at all (at least for the pet owner) that the “real nature” of their dog has suddenly expressed itself, even if that’s exactly what Animal Control will contend, demanding the dog be put down in the name of public safety. And we hear it sometimes from parents suddenly confronted by their first-born announcing she’s queer-identified, when they attempt to trivialize this very nontrivial turn of events by asserting “I didn’t raise you to be that way.”[46]

One of course may advance practical reasons to justify such trivialization of an otherwise obtruding nontriviality; quite clearly, if every time someone spoke we had to completely revise our understanding of each other, then we would be a much more interesting species and might have far less time for murdering each other in wars[47]. There is no doubt that the habitual treatment of nontrivial machines as trivial may often have a serious convenience factor, but it may even more often be the case that such convenience comes at an extraordinarily high social cost. Moreover, if an intelligent Designer’s relationship to what It’s designed involves a trivialization of nontrivial machines, then the ethical problems exposed in the next section may redound on the Designer as well.

The Social Costs of Trivializing the Nontrivial

However convenient we find it to trivialize the nontrivial, this also provides a central root for a vast amount of social violence.

The child in school, who is first disciplined, then diagnosed as ADD, then subjected to chemical adjustments to his or her mental world for the convenience of the parents and teachers who cannot otherwise adequately meet the child’s curiosity (or simply energy), aptly illustrates this social violence. The human being, who is first constrained, then restrained, then drugged into docility by psychiatric hospital employees for the convenience of the person’s neighbors and friends who cannot otherwise adequately meet that person’s interface with reality also illustrates this—and before we start up with arguments about safety for the person herself or her neighbors, it is worth remembering that at Dr. Patch Adams’s hospital in Virginia, people who came there in full-blown psychotic episodes were never given psychiatric medications (because, as Dr. Adams said, “We never disliked anyone enough to do that”). That hospital is empirical proof that drugging people into submission should be viewed as a convenience, not a necessity; it is a resort taken by those who do not care enough, or cannot find enough time, to address a troubled human being in a human way.

The parent who labels their child as bad, the school psychologist who labels a child as learning disabled, the doctor who labels a patient as sick, the judge who labels the person in her courtroom as criminal, the sociologist who labels people as poor, the politicians and media who label entire peoples as enemy or terrorist or illegal—all illustrate the trivialization of nontriviality for the sake of convenience. Other details aside, where a (nontrivial) human being is treated as trivial, it will be for lack of greater creativity, for the sake of a quick and “easy” convenience. This is social violence.

Instead of accommodating the surprise afforded by nontriviality, instead of making the necessary revision to the very basis of our understanding of a nontrivial machine (in this case, a human being), we have preferred instead to keep our current understanding. As with trivial machines, we view this surprise instead as a “defect,” something to be repaired, rooted out, and prevented from recurring in the future. And let me be clear—the nontrivial machine itself may want (can want) to treat itself as trivial. Psychiatric patients do not universally condemn their medications, and one needn’t look far or wide to find those who praise the poisons. Medical patients want to be treated humanely, but they also want their malady to disappear from their bodies at the doctor’s touch just as simply and assuredly as when we flick off a light switch. Real terrorists exploit the mislabeling of others as terrorists. Sometimes the only assistance the poor can get is by petitioning others as the poor. But it is one situation when one self-elects to accept a label, and a very different one when one has little or no say in the matter. In particular, where one is made inferior or less by another’s labeling that one can only partially resist (if at all), this (widespread) form of trivializing the nontrivial represents the main cost and preeminent social violence I wish to denounce and socially renounce.

Trivial Nontriviality & Nontrivial Triviality

But there’s a disconnect here. The calculator that misbehaves—we might describe it as trivially nontrivial; in our human encounter with it, the nontriviality (suddenly) demonstrated is of a trivial (predictable) type. Or, when a pet misbehaves—we might describe this as nontrivially trivial; again, in our human encounter with it, the triviality it has reliably presented so far (as a beloved pet) is (suddenly) challenged by an unexpected new behavior. Even so, it might be that such an episode never recurred. Only out of a sense of social obligation to our neighbor might something so dire as putting our pet to death seem called for.

In this way, we can imagine our bodies as unruly pets. Strictly speaking, animate matter (i.e., a living system) is reducible to a strictly trivial description, except that (as von Foerster demonstrates) ever actually to identify the correct trivial description of our animate flesh remains a transcomputational problem[48]. In effect, this means that we can only ever encounter our physical selves as if they were nontrivial, but—for the sake of convenience, if not also peace of mind—we will tend relentlessly to think of our physical selves in trivial terms, or at least to expect ourselves in trivial terms. Cancer is a momentous slap to the face of this conceit, but all manner of bodily inconveniences that arise will often strike us as outrageous impositions. Thus, were my body suddenly to break out in itchy hives, I’ll reach for the calamine and subdue such an unexpected (but in principle trivially explicable) affront by my body. And I’ll be glad for the ointment even if I’m unappreciative of the occasion for it.

Where psychiatric matters are involved, the tidy line between “flesh” and “mind”—between our bodies’ physically animate living systems and that operation that somehow depends upon, but is not reducible to, that animate flesh (our Self)—blurs. Psychiatric poisons operate from the premise that subduing the relative triviality of the body can or will alleviate unwanted symptoms in the Self, and this holds to some degree. For if you are troubled by your thinking, and I knock you out with a left hook, then for the duration of your unconsciousness, the “problem is solved”. I could also shoot you and permanently “solve” the problem. The violence inherent in convenience begins to become clearer.

I want not to lose sight of the fact that lively children are being poisoned in mass quantities with Ritalin and other drugs because someone has labeled them as ADD. For the adult experiencing depression, the desire is to be not depressed rather than merely to “have the symptoms removed”. If all a psychiatrist can manage is to manage symptoms, they at least shouldn’t trumpet proudly over their inability—but in any case, the adult who can apparently only find relief through medication is within their rights to make that decision, at least in principle. Where children drugged by Ritalin are concerned, however, if they have any desire at all to “reduce their symptomology,” it is because Mommy or Daddy or their teachers get mad or upset at them for acting as their hearts and minds dictate. So, I’m not “bashing” the pharmaceutical approach wholesale or suggesting people should get off their meds; I’m underscoring the socially undesirable circumstance of how those in a relatively less empowered state (children, patients, criminals, the so-called mentally ill, undocumented guests in our nation, people of various races) are “treated” (socially and/or medically).

I’m also insisting that the psychiatric approach to mental illness substitutes a materialist convenience as an address to a nonmaterial (human) condition. In other words, mental illness occupies that liminal, blurry edge between the pseudo-nontriviality of our animate flesh (our bodies) and the irreducible nontriviality of our embodied immateriality (our human experiences in response to living in the world and the presence of our animate flesh). Because the psychiatric approach waffles precisely at this juncture, it makes for an illustrative case. As such—and just as all trivial treatments of nontrivial machines are (at minimum, if not only) for the sake of convenience—we should not lose track of the fact that the psychiatric approach should be understood, not merely in terms of the players involved, but rather in its authentic significance: as an act in the social domain, and one frequently complicit in violence.

If I am depressed, I might accept a psychiatric dosage—out of convenience. It might be that nothing else works, &c., but that is beside the point. As a nontrivial machine, I’m just as capable as anyone else to treat myself trivially. I may be no more fond of my “psychotic episodes” than my family, friends, neighbors, and the strangers in the mall, where I (unbeknownst to myself) began holding forth about whatever had possessed me at that moment. The fact that I would accept this convenience does not erase its pathetic inadequacy; grateful as I am for no longer ruining my life by “going crazy” (and winding up in jail, prison, or a psychiatric hospital … or a morgue), we cannot start complimenting ourselves too much for this victory, less for my sake, and more for the millions of people who are being “treated” in a like manner more or less against their will. The even more general form of this point is some variation on the non-reductionist objection—so long as we allow our approach to the nontriviality of the person (as a human being) to be dictated by the (pseudo-nontriviality) of our animate flesh, then we are substituting “biological nature” for “human being” in a way that has all kinds of historically self-evident catastrophes associated with it.

One of these angles underpins the claims of intelligent design, particularly as I desire that the social violence resulting from trivializing the nontrivial should no longer persist or prevail.

Part III – Discussions

The Anthropic and Divine Limits of Omniscience

In general, I refer to “science” and “religion” to express the difference between the body of scientific theory underpinning the origins of the universe on the one hand and religious creationism on the other. This is to avoid certain disingenuous moves one can find in “discussions” of these things[49].

The advantage of the religious view is its ability to talk unproblematically about, even to celebrate (when it is in the mood), human nontriviality. Our nontriviality has nothing to do with our material selves; the mind-body problem itself is already a fatally wrong premise, since it is only from our nontriviality (from what could be called our existential human beingness in the cosmos) that we can even describe any aspect of our triviality in the first place. From the standpoint of the universe, the cosmos must be “utterly surprised” that we ever said anything about ourselves at all, much less that we are nontrivial machines. The universe, of course, in its less generous moods, continues to try to treat us as trivial, labeling us as “human beings,” and then “treating” us to the point that we eventually perish of the treatment. We humans refer to this outrageous imposition as “the human condition”. I mention this because part of the religious view’s appeal is precisely that it at least doesn’t pretend that human nontriviality should be understood as “nothing but” an epiphenomenon of our pseudo-nontrivial animate flesh[50]. And this is an appeal worth keeping, without, however, also having to accept the groundless metaphysics that comes with it[51].

The doxa has it that we have freewill. In light of the previous discussion, what “having freewill” looks like in the human social world is our irremediable, irreducible capacity to act unpredictably: to surprise others, to be surprised by others, and even to surprise and be surprised by ourselves.

But is “nonpredictability” the same as a “capacity to surprise”?

Predictability, being a probabilistic affair, only claims omniscience in its unguarded, drunk moments; in general, it admits to not having a perfect understanding, and is quite excited (often) at the prospect of discovering something new—at least in the pure sciences[52]. On the other hand, the capacity to surprise need not be seen as merely a function of surprising someone who is looking (even when that someone is ourselves). That is, the capacity to surprise may be predicted for those cases where prediction is only relatively knowledgeable about the nontrivial machine to be analyzed[53].

A cheap argument would go like this: von Foerster’s numerical analysis makes clear (even when analyzing a single human being) that the complexity of a nontrivial machine (like a human being) is so vast that even our categories of description for that vastness are taxed, if not exceeded. This will seem a cheap argument because the mere size of the task at hand (to analyze a human being as a nontrivial machine) need not be thought of as impossible for an omniscient or omnipotent being. If the deities of intolerant monotheism is capable of doing anything and knowing everything, then the magnitude of the task can be no barrier and its impossibility for us (human beings) merely points to the limit of human ability.

Even so, let us at least keep things in perspective. In the case of a simple nontrivial machine, with 4 inputs, 4 internal states, and 4 outputs, analysis at the fastest rate theoretically conceivable (once per Planck time) would require more than 10^66 ages of the universe to work through the 10^126 possible machines in play. The number of atoms in the visible universe has been estimated at 4×10^79—or, in other words, a factor of roughly 2.5 × 10^46 too few simply to distinguish each of those machines. On top of this, the simplest human being is already vastly more complicated, having vastly more than 4 inputs, outputs, and internal states.

In the abstract, one of course may breezily and rather naively say, “That’s still a piece of cake for the deities of intolerant monotheism,” but anyone making such a claim is (I submit) doing so without really grasping just how excessive the claim really is—never mind that the claim covers knowing only a single human being, to say nothing of seven billion.[54] Imaginatively speaking, the number involved is exponentially larger than infinity. At this scale, it becomes rational to argue that “omniscience” is actually insufficient for knowing such a range of possibilities. It will be objected that saying this must be a redefinition or an ignorance of what is meant by “omniscience” in the first place, except that omniscience (in the mouth of a human being) can only be a humanly limited concept.

In another vein, one may argue that “omniscience” need not refer to any (necessarily) humanly limited conception of omniscience, but serves rather as a pointer toward that actual omniscience that the deities of intolerant monotheism (as a claimed all-knowing deity) possesses. At this point, we enter the realm of mere faith, since no rational argument can justify such a claim. The consequence of this is not, contrary to what the theologians would have it, that we may rest assured in having successfully attributed omniscience to a deity, but rather that we should fret that any such claim (even on the basis of faith) cannot have any validity in the first place.

If the deities of intolerant monotheism is “omniscient” (the scare-quotes are necessary here), then we are not capable of saying so (even as a profession of faith). The fact that this is not possible will not, of course, prevent those who are truly determined from using violence on others to enforce the view—as history makes amply clear.

At root, there is an article of bad faith (literally) involved here. If the deities of intolerant monotheism is—as should be anything actually deserving of being called a deity—beyond human understanding, i.e., is inconceivable, then that inconceivability cannot be taken seriously while we simultaneously pretend that we can ascribe attributes to the inconceivable deity. As soon as we think, “God is this,” then we may be sure that we are wrong, that we are talking about anything but the deity at that point. Neither can one insist, “the deities of intolerant monotheism must be either omniscient or not omniscient”; the inconceivability of the divine would require instead “the deities of intolerant monotheism is neither omniscient nor not omniscient.” This is no paradox at all—though some will try to see it as such—but is rather a perfectly logical and rational acknowledgment that the inconceivable cannot be conceived of. This situation is not remedied by biblical authority, where the deities of intolerant monotheism himself tells human beings some attributes he has, and not merely because human beings wrote those books. It may most definitely be the case that a biblical passage has the deities of intolerant monotheism saying, “I am all-knowing,” but that phrase, cast in human conceivability, cannot actually be telling us anything about whatever omniscience the deities of intolerant monotheism does or does not possess.

There is simply no getting around this, even by faith. All arguments about claims to know the deities of intolerant monotheism (whether faith-based or based in reason) necessarily rest on nothing—nothing, again, except whatever violence may be deployed to silence or destroy those who reject or question the assertion. And for those who are too dogmatic or too ignorant to grasp this fact, the example of Krishna in the Bhagavad-Gita (as just one of virtually innumerable examples from Eastern philosophy and yogic tradition) makes clear what the alternative is: He declares, pointing to Himself, “I am not it.” Krishna plainly, simply, logically avers that his material manifestation (in all of its seemingly limitless power and glory) is not the inconceivable divine itself; He (in an incarnate form, one that is comprehensible to humans) cannot be. This is why the famous Vedic hymn of creation, concerning that which has willed itself (and subsequently all of creation) into existence, reads as it does at the end:

That out of which creation has arisen,
whether it held it firm or it did not,
He who surveys it in the highest heaven,
He surely knows – or maybe He does not!

Here, even the incarnate version of the ultimate principle of creation recognizes (in the possibility of its knowing not) that “it also is not It”. This straightforward acknowledgment by Eastern philosophy, which is such an old premise that one encounters it practically as a throw-away cliché everywhere, demonstrates how any positive theology that believes itself true (i.e., that holds that any assertion of an attribute regarding the inconceivable can be valid) can sustain that belief only by the kind of oppressive and oppressing violence dished out to the Albigensians, the Manicheans, the Arabs in Jerusalem, the Jews of the 20th century, all of those colonized in Africa and elsewhere, the Armenians, Greeks, and Assyrians following World War I in Turkey, the midwives and witches of Europe, the Arabs in Damascus country, those who professed Arianism, Pelagianism, Donatism, Marcionism, Montanism, the Paulicians, the Bogomils, the Patarini, the Dulcinians, the Waldensians, the Tisserands, the Cathars, those who do not accept the simple meaning of Maimonides’ 13 principles, the Shi’a Muslims re the Sunni Muslims, the Sunni Muslims re the Shi’a Muslims, the Bosniaks and Croats in former Yugoslavia, and so on. One root present in all of this intolerant monotheism is the premise that the attributes of the divine are determinable and valid once determined. Even more generally, any epistemology (i.e., science and religion) where truth as an attribute of descriptions is taken to be valid falls into the same trap as a theology that assumes statements about the inconceivable have validity.

Moreover, since natively inadequate human beings could never be “omniscient,” let’s leave aside the human factor and address this question from the divine side instead. The claim here will not be and can never be that whatever we might know or say of the Inconceivable must be valid (true) but only that as human beings we encounter the world in the manner we do. So, on the one hand, it must be remembered that when Krishna candidly asserts, “I am not it,” the purpose of this is not an annihilation of all knowledge, but simply a reminder of the partial understanding we always operate from at any given moment[55]. More could be said about this here, but the point is that Eastern philosophy makes clear that the Inconceivability of the divine is not a reason to stop talking about or trying to express it (contra Wittgenstein). Quite the contrary—Eastern philosophy is abundantly lavish in its depictions of incarnations of the Inconceivable, always bearing in mind that any such incarnation (or avatar) cannot be and is not “it”. More than this, it is precisely only by encountering existence through the lens of this acknowledgedly limited (partial) human understanding that one might gradually come to the point (in this life or the next) of Enlightenment (or liberation from the wheel of rebirth). The only fundamental mistake in this kind of context is ever believing that one’s assertions about the divine are valid (true).

In this light, we might ask whether human beings would be trivial or nontrivial with respect to an intelligent Designer, particularly by considering the consequences of our own creative acts as designers. Certainly, to attempt to fit the intelligence of an Indian view (i.e., “The Divine is neither inconceivable nor not inconceivable”) into one of the dogmatisms of intolerant monotheism (i.e., that “God works in mysterious ways”) marks an untenable and intellectually desperate reaching. As Lao Tzu puts it, “Those who are intelligent are not ideologues. Those who are ideologues are not intelligent” (Tao Te Ching, Verse 81). All the same, addressing the question this essay examines—at least in the terms that intolerant monotheism dogmatically enforces—requires at least temporarily accepting the premise that one may validly ascribe attributes to the Unattributable.

Are Human Beings Trivial Or Nontrivial to a Designer?

It is difficult to avoid the conclusion that ascribing omniscience to the deities of intolerant monotheism makes humans into trivial machines, into robots in a pejorative sense, and with that observation the specter of predestination rises up once again.

In Seventh-day Adventism, to pick only one example, the belief is that some 144,000 only will be saved; everyone else (Seventh-day Adventists included) is damned to Hell. I did meet one Seventh-day Adventist who claimed she was not amongst the saved, which of course mathematically only makes sense, given that there are currently ~16.3 million in the church around the world. I appreciated her candor—certainly an unusual one, since other Seventh-day Adventists I have spoken with seem convinced that they are amongst the saved. I’d wager that if I spoke to more than 144,000 of them, I’d have to conclude that at least some of them were operating on a false notion.

It’s not that one must accept, in this example, that only 144,000 will be saved; it is, rather, that once one assumes omniscience in the deities of intolerant monotheism, then the damnation of some and the salvation of some from the very first moment of creation is a necessary conclusion (whether one finds a rational or irrational, i.e., reasonable or faith-based rationale for this conclusion). One then is, quite sensibly, led to wonder as a human being what the point of this massive charade called life is, and no shortage of tortured ravings has issued from the proponents of predestination.

In such a context, what amounts to claims for human dignity are either, on the one hand, reduced to bitter ash (because we are merely some puppet operated by the universe or whatnot) or made into a permanent mystery since whatever it consists in is not available to us while alive but shall only be revealed to us after we are dead. (In the meantime, we can occupy our days fantasizing what it might be like.) We are asked, in effect, to take on faith that some other mysterious component of ourselves (our souls, our spirit, or whatnot), even though we can never know such a thing (certainly not directly and perhaps not even indirectly while alive), is actually what is most valuable and most dignified about us.

Sinking this all in the murky unknowing of faith might be the best virtue we can make of this necessity, but from the standpoint of an intelligent designer such appeals to mystery seem more like a smokescreen for an otherwise poor design.

One might also take another tack and simply take a club to people spiritually for having the temerity to claim that humans have any dignity about them in the first place. This foolish display of vanity and pride exposes any sense of self-esteem one might drum up as a pathetic illusion and probably of Satanic origin into the bargain. But, since the deities of intolerant monotheism is a myth, it may be understandable enough that I’m not keen on spreading throughout the social domain (and especially into the minds of people who’ve otherwise missed it somehow, perhaps by being born recently) that they are worthless worms who can accomplish nothing in the first place and that any sense of pride in oneself is, at the very best, a dangerous delusion and most likely pure evil working to destroy your (fictional) immortal soul.

Or still again, we might elaborate an even more science fictional fantasia such that life is essentially a Petri dish. Here, the deities of intolerant monotheism is growing us as slime-molds or whatnot so that after we “mature” in this Petri dish (i.e., die), we may be harvested and put to some dignified or noble use after death—why terrorize ourselves with needless threats of boring permanent placement in a divine choir howling at the top of our ectoplasmic lungs how great the deities of intolerant monotheism is for the durance of eternity? Again, whatever human sense this makes of our existential dilemma, it doesn’t really answer the problem of why an intelligent Designer was obliged to force us through the Petri dish in the first place. Once again, we may aver that the deities of intolerant monotheism works in mysterious ways, but as soon as we ascribe any kind of necessity to Their actions, then Their omnipotence is qualified. If we must go through this trial in the Petri dish (not toward the insulting and pointless end of being sorted out for Heaven or Hell from the beginning of time, which the deities of intolerant monotheism already knows in advance the outcome of), but in order that we—as spiritual grist—can be grown or matured (or even as a species evolved) toward some next task after death, then this means that the deities of intolerant monotheism could not have done it otherwise, and so it may be that Their omnipotence is not omnipotence after all. The classic question, “Can the deities of intolerant monotheism make a rock He can’t move?” is then answered, “They could, if They wanted to.” And if there is no necessity involved in this, then the fact that we mere mortals easily imagine more intelligent designs than the one the intelligent Designer selected (even if we can’t implement them) rather seriously puts in question whether we should call such a Designer intelligent.

As ever, we can extricate ourselves from the difficulty of this by asserting that the deities of intolerant monotheism is not omniscient, omnipotent, or omnibenevolent. If the deities of intolerant monotheism is not all-powerful, then the Petri dish is the necessary (i.e., obligatory) means They had to adopt to get to whatever end They deemed desirable. If the deities of intolerant monotheism is not all-knowing, then the Petri dish is the means by which They sort the wheat from the chaff vis-à-vis whatever larger project They are up to (even if that project finally involves nothing but the creation of limitless sycophants to howl Their praises to the end of time, while others howl their regrets in Hell). And if the deities of intolerant monotheism is not all-loving, then the ethically dubious act of throwing sapient beings into an ultimately inhumane process of sorting simply to arrive at whatever arbitrarily self-important end the deities of intolerant monotheism declares (because They have the power and know-how to do so) becomes explicable, if grotesque. One could easily imagine, if the deities of intolerant monotheism doesn’t care but also has the power to make it so that we needn’t suffer along the way, that They might bother to implement a nicer animal-testing protocol on us—but this assumes the deities of intolerant monotheism cares enough to bother.

Consciousness, Feedback, & the Design of Triviality

When humans build trivial machines, this always reflects a telos (an intended operation and goal). Indirectly (in Shelley’s Frankenstein and PK Dick’s Do Androids Dream of Electric Sheep) and directly (in Lem’s “Non Serviam” from A Perfect Vacuum), a creature’s question, “Why did you make me this way?” is deflected by its creator in rather pragmatic, if not cruel, terms. Shelley’s monster, Dick’s replicants, and Lem’s personoids all seem within their rights, however, to ask, “Why endow me with the capacity for self-reflection, if I’m just meant to be a machine toward an end?” The question of course is one any (but the adopted) child might ask her parents.

The facile or tautological reply is that self-awareness is somehow necessary to whatever end one was created for, but we’ll have to return to that later.

Part of the revolution wrought by cybernetics involved the incorporation of feedback into the design of machines. A most famous and familiar example of this is the thermostat, but the auto-pilot may even more readily illustrate the significance of feedback.

The term cybernetics (contrary to expectation, I think) means “steersmanship”. Thus, where ships of old on the high seas required a human being to check the course heading more or less regularly to assure that no course deviations had occurred, with the cybernetic implementation of feedback, designers elaborated mechanisms that could make such corrections more or less automatically. Thanks to this, guided missile systems no longer had to rely simply on the physics of projectiles (usually made too complicated by additional factors like wind or counter-measures by those to be bombed) and could actually have mechanical or electronic systems added that corrected course errors, so that turning human beings into a distribution of red meat and mist was greatly simplified.
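
For readers who have never watched a feedback loop at work, here is a minimal sketch (a hypothetical proportional “steersman” of my own invention in Python, not any actual autopilot) of how an output fed back as input corrects course deviations:

    # A hypothetical "steersman": feedback in its simplest (proportional) form.
    # All numbers are illustrative; no real autopilot is this simple.

    def steer(heading, setpoint, gain=0.5):
        """Return a corrective turn proportional to the course error."""
        return gain * (setpoint - heading)

    heading, setpoint = 90.0, 100.0          # degrees; wind has pushed us off course
    for step in range(8):
        heading += steer(heading, setpoint)  # feedback: the output re-enters as input
        heading += 1.0                       # a persistent disturbance (the wind)
        print(f"step {step}: heading = {heading:.2f}")
    # Without steer() the heading drifts without bound; with it, the error
    # settles near a small steady offset that a human need only check occasionally.

The point is only the shape of the loop: measure, compare against the set course, correct, repeat.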

In the context of a trivial machine, there is a kind of analogy that the machine “knows” itself through feedback. Obviously, it quickly becomes perilous to overextend this knowing, at least where artificially built (non-living) machines are concerned. Such “self”-correcting feedback only resembles self-reflection as humans (and other living things) experience it. Clearly, as we design trivial machines, even with feedback, we have yet to arrive at the moment where our cars, lawn mowers, and guided missiles could or ever would turn to us and say, “Why did you make me this way?” But it’s exactly at this point that one point of this essay, beyond “disproving” a stupid idea like intelligent design, becomes pertinent. If the objection of children to their creation has a sort of self-cancelling quality in the sheer fact of existing to ask the question[56], it becomes a less easily dismissed question should a cruise missile ask, “Why was I brought into this world, only so I might destroy myself and others?”

An unconscious human being (or other living machine) continues fully to operate its feedback mechanisms, which are legion in our bodies. The mere presence of feedback (in a machine) is therefore not yet grounds for an ethical dilemma for its creator, except perhaps in one sense: that it may be unethical to endow such a trivial machine with self-awareness in the first place. In the human domain, to treat other human beings called slaves as trivial machines is obviously woefully unethical, immoral, and, to put it simply, shitty. To endow a lawnmower with a capacity for self-reflection and then leave it stored in a shed with the hoe and the garden hose and the weed whacker would broaden what must be asked when designing an ethics.

Currently, the justification is one rooted simply in the (relatively omnipotent) power to do so and in a (creator-centric) self-serving framework that views other things (and other people) merely as instruments toward one’s own arbitrarily determined or desired ends. Nothing necessarily prevents us from looking at the world of things that way (and unfortunately enough, sometimes not even the world of people as well), but it’s a gross way to do so and one that implicitly undermines most of the high-minded doxa that underpin justifications to act that way in the first place. That is, most who carry out an ethics founded tacitly on “might makes right” howl in execration when anyone else (and particularly the poor) ever gets enough power, even for a moment, to practice some “might makes right” in return upon those who advocate this. This hypocrisy is predictable, but that doesn’t make its “argument” coherent.

So, if humans are trivial machines (i.e., perfectly predictable mechanisms vis-à-vis the deities of intolerant monotheism), then this requires an argument that self-awareness comprises a necessary part of our designed (trivial) ends. Again, one can fire up all kinds of science fiction (e.g., some sort of Matrix narrative) in which we are designed (on this Earth or in an afterlife) for something, but this has already been raised—that is, whatever we might guess about our designed (trivial) telos, the necessity of self-awareness to that telos remains unanswered.[57]

A downside of human-designed trivial machines is that they will eventually express defects due to mechanical wear. Thus, cars have built-in sensor systems to alert their human users that maintenance may be necessary; auto-pilots in airplanes have alarm systems to alert the human pilots that their intervention is needed. This is partly in response to an infinite regress that human engineering simply cannot overcome; i.e., because sensor systems themselves may fail, one needs a sensor system to monitor the sensor system, ad infinitum. In any given engineering application, one stops articulating these additional levels of sensors either wherever one desires or where the design demands. In an airplane, one level of sensors may prove enough because at the point of a recognizable breakdown, the human pilot intervenes. On a Mars rover, a much more articulated system of sensor levels may be called for, since remote interventions from Earth cannot always address a problem. And so forth.
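
The regress can be made concrete with a small sketch (hypothetical names and timeouts throughout; this illustrates the pattern, not any particular avionics or rover design): a watchdog that flags a stale sensor is itself just another sensor that can fail, inviting a watchdog for the watchdog:

    import time

    # Hypothetical sketch of the sensor-monitoring regress: each watchdog is
    # itself just another sensor, so each level invites a level above it.
    class Watchdog:
        def __init__(self, name, timeout_s):
            self.name, self.timeout_s = name, timeout_s
            self.last_heartbeat = time.monotonic()

        def heartbeat(self):
            self.last_heartbeat = time.monotonic()

        def is_healthy(self):
            return time.monotonic() - self.last_heartbeat < self.timeout_s

    # Level 0 watches the sensor; level 1 watches level 0; and so on. Design,
    # not logic, decides where to stop and hand the breakdown to a human.
    levels = [Watchdog(f"level-{n}", timeout_s=1.0) for n in range(3)]
    for dog in levels:
        dog.heartbeat()
    print([dog.name for dog in levels if not dog.is_healthy()] or "all nominal")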

Unintelligent Design & Discourses on Nature

In any human-designed system, there comes a point where a system breakdown cannot be “addressed” by the system itself. There are innumerable ways this may come about, depending upon the machine in question. A typical response involves some kind of human correction, when possible. Our car breaks down—we (or some human) have to hire a tow truck to drag the thing to a mechanic, or we go to work on the car ourselves. The general point is that the machine requires an external intervention because it is not otherwise designed to address (correct) the defect, the breakdown, itself (and so is incapable of doing so).

As matter gradually became animate over numberless eons through ever more articulated combinations of coordinated living matter, more and more living systems were attached to other living systems so that ever greater ranges of “breakdowns” could be self-addressed by those living systems. These breakdowns gradually came to arise not only “in the organism”; a sea cucumber will count it a breakdown if it washes up onto a beach, whereas (by the long process of evolution) a living creature that would otherwise have counted it a breakdown to find itself suddenly out of the sea and on dry sand, or exposed to sunlight and air, eventually became able to “address the breakdown” itself. In this respect, the invention of mobility was a fantastic breakthrough for animate matter, since any arrival of unwanted environmental conditions could be escaped from. Instead of being helpless like a plant, exposed to air and sunlight by a receding ocean, more mobile creatures simply retreated with the ocean.

Non-newsworthy as this observation is, two points frequently brought up against intelligent design could and should be underlined.

First, the view of “Nature” (I have to put that in quote marks) in evolution is that She is a profligate mother, more or less randomly expending the vast resources of the whole world at her disposal in a ceaseless, if somewhat gleeful, combination of elements. Whether they sink or swim is of no concern for Her, and if She repeats the same experiment a thousand times (even a million times) with the same failure, She couldn’t care less—just as She is no more interested in the successes except that, once She notices them—if “notice” is the right word at all—they might become further grist for her experiments in the future. Of course, this personification of Nature is not “scientific” and is more like a fable that conveys the “unintelligent design” (or perhaps merely the “nonintelligent design”) of Nature. On this view, blind Chance is our Mother—it’s worth remembering that the Greeks had a goddess of chance, Tyche, whom they despised as much as any deity in a pantheon might be despised.

Over the course of human culture, the depiction of the mother, as the Mother, as the Source, has varied tremendously. Figures such as Isis, Ishtar, Astarte, and Shakti (and their derivatives, like Demeter or Mary) show us the Mother or Source in a “positive or nurturing guise,” while figures like Kali or Durga (arguably more properly demigoddesses than goddesses per se) show the Mother or Source in an all-devouring image—the Mother giveth and the Mother taketh back. Or as our own angry mothers may have said from time to time, with remarkable candor, “I brought you into this world, I’ll take you out of it.” Which is all to say that, depending upon one’s mood or outlook, our image of (Mother) Nature as Source (as Origin and thus Mother) still tends to carry a “good” or “bad” valence, even when further articulations are added. Thus, the profligacy of Nature in the evolutionary view might be construed as wasteful, stupid, unintelligent—a sort of ebullient indifference—as opposed to an emphasis on the mere randomness of nature (no capital ‘n’), which carries at least a note of terrifying or destructive indifference to the fact of self-aware life (us, as human beings) crawling over our Mother’s body like blind kittens.

This still involves a degree of personification; that is, this description continues to reflect traces of personification construable as nonscientific or outside of what science takes seriously when it is trying to look at Nature. In one passage of Jung’s “Commentary on the Secret of the Golden Flower,” he rigorously disclaims any metaphysical assertions regarding his (empirical) observations of the unconscious, &c. Even so, whatever metaphysics he does hold (or holds in a problematic relationship to his scientific self) gets into his work nonetheless, as it must—nor could this ultimately be denounced as wrong or undesirable. Merely to look at Nature while pretending that human values are not implicated in that looking is a dangerous myth that the end of the 19th century ought to have dispensed with.

In any case, if Jung is at least aware of the pressure of his metaphysical impulses behind his work, that puts him a leg up on those who claim they have set their metaphysics aside—and by metaphysics here, I do not mean merely “faith” or “religion” but more broadly, the explanatory system applied to the world through which human value arises—even one’s “reason for living” if you will. If the error of religion is in clouding observation with metaphysics, then the error of science is in missing the metaphysics of one’s observations, and as such, human experience per se (as I said at the beginning of this essay) needs rescuing from this as much as from religion. I read recently in an Internet comment thread that someone studied paleontology in college for the sake of seeing how beautiful life is and can be—that’s a very metaphysical motivation, and I wholly support them.

So, if I use here only the image of (Mother) Nature in Her well-known, three conventional guises of Creator, Sustainer, and Destroyer, then this is simply to underscore how scientific views of Nature deserve to be categorized as reflecting one of these major emphases as well. Human beings look in human ways; to talk about it requires something like these categories.

I underscore this because the notion of intelligent design proposes an antithesis to the “indifferent” attribute placed on the Great Mother (She who accidentally Creates, She who unconsciously Sustains, She who casually Destroys). It may be, depending upon the argument, that intelligent design is “indifferent” as well (God the Absent Father), but for all of the indifference, at least the (Father’s) creation is intentional. And if the creation is intentional and not accidental, then the impulse to claim some kind of inherent meaning (i.e., a meaning that implicitly inheres) in this fact of creation, and also in our fact of existence, seems to take on a more rational basis. We may never know, may even never be able to know, why the intelligent Designer made the universe and us, but we may at least rest comforted in the certainty that it was done for a reason.[58] In any case, it is we humans who create these explanatory mythologies, who provide ourselves the ground for compellingly asserting (if only to ourselves) what our purpose in life might be.

And so the underlying impulse of intelligent design carries forward the patriarchal hostility to Woman that patriarchy has enforced for at least eight millennia. Again, most scientists will balk at the idea that underlying their “view” of Nature (they may even deny they have a view of Nature) is some notion of Mother (as the master metaphor for the Source or the Origin), but it is precisely this that they are advancing (and protecting against incursions from the patriarchal underpinnings of the Father, who is stripped, just as Mother Nature is, of as much of His metaphysical trappings as possible). This, in any case, illuminates why intelligent design, which was cynically invented by religious types, takes up the role of Father in the social arguments playing out about it. This explains, in part, its appeal—particularly to the extent that the scientific construction of Nature (rooted in the Great Mother) does not, on ideological grounds, permit the attribution of anything but “chance” or “indifference” to how (Mother) Nature goes about her business.

Back in the days of yore, some ten millennia ago or longer, the unrelenting generosity of Nature (usually conceptualized as Woman, but sometimes hermaphroditic, sometimes asexual) was the very bosom in which humankind found itself. There were dangerous elements in Nature (poisonous plants, ferocious animals, inclement or destructive weather), but this was all of a piece. Clearly, this amounted to a personification, but to what extent it was also personalized is impossible to determine anymore. In one respect, one can say that humans took seriously the notion that Nature was indifferent; at the same time, they did not, in the grip of paranoia, imagine that that indifference should be “read” as hostility toward human beings. Our current situation is not the same—sometimes one can encounter people who go into mystical transports about the miracle of Mom (Nature) or who, like Sade, saw Nature as the most enthusiastic Destroyer in the history of forever. Or, one can encounter something more like the scientific view of Nature, which merely blinks and looks on at the totality of Her. But in all of this, our existential predicament makes us, perhaps inevitably, personalize this circumstance. Nature is bountiful, to me, to us; Nature is out to destroy me, us, all of us; Nature is indifferent to us, just as Harlow’s wire mothers were indifferent toward the young monkeys who clung to them—and the effects of this wiry indifference were neurosis, self-destruction, misery, life-long and profound unhappiness. Intelligent design, with its badly hidden paternal overtones, offers a kind of alternative to this.

We do not in the first place have to resort to intelligent design to alleviate our existential angst over the fate of our existence—we needn’t see our circumstance as angst-ridden at all. But I would still insist that science’s pretense that no human values stand at the back of its construction of Nature is both unnecessary and destructive. As the college student confessed—he studied paleontology to discover how beautiful life is and can be; that’s not keeping one’s values out of the picture very well at all. In any case, the record of human history provides any number of alternatives to the sorry patriarchal arrangements we currently live under. We don’t have to accept the metaphysics that Nature is indifferent—or even that (Mother) Nature should be construed as a “bad” Creatrix (as opposed to the intelligent male Creator), &c, especially if that means we must then swap in, for Nature’s unintelligent design, something poorly Designed.

That’s the first point: that intelligent design is another iteration of the patriarchal sexism that has spent the last eight millennia denigrating the notion of Woman. The second point is much shorter. Earlier, I questioned the utility of the word “omniscience” in a circumstance of complexity or with respect to a number that may be beyond the human ability even to express. In this way, claiming that the deities of intolerant monotheism can be omniscient exposes how woefully inadequate the human conception of “knowing everything” is—particularly where the total number of atoms in the universe is still 100 billion trillion trillion trillion too few to account for all of the possible variations of Ashby’s little 4-input, 4-output machine with 4 internal states.

A variation of this point is at work here. In effect, intelligent design as an explanatory scheme does not give the impression of having a broad enough grasp of the time-scales involved in evolution (or, even more simply, the pace of the cosmos generally). Trying to imagine how one gets from a trilobite to a homo sapiens sapiens in stepwise fashion may seem unconvincing, but only because the millions of years involved in that change are deemed “not enough”. (The Young Earth trope, as a failure to grasp the Earth’s actual age, is itself a variation on this.) Proponents will say they have a grasp on the time-scales, but this can hardly be the case. Frequently one finds, at root, an inability to accept that Nature’s chance profligacy could really have resulted in the seemingly absurd degree of diversity we see in living systems. The derived argument one encounters from this (and not only from religious types with an ideological axe to grind) is the argument that the universe is miraculously arranged to support life (particularly human life). One can say immediately, “Well, were it not arranged like that, you wouldn’t be here to say so,” or “In another universe, you’re saying that, but you look nothing like you do now,” or “Well, right now in another universe you’re not saying just that very thing.” More cogently, “You say that, while ignoring the trillions of failed experiments that didn’t make it in this universe.”

These last bits only add arguments outside of (or beside) the main pipeline of this essay. Nevertheless, it remains part of rescuing the human from the naïve realism of science and religion to highlight the mythological character of the scientist (as Creator) and of Nature (as a passive or indifferent, female mystery). In this context, intelligent design itself is unneeded, because males already are Creators; their vanity would not be flattered, but challenged, by an intelligent Designer.[59] In a kind of parallel with those other male Creators (artists), the scientist’s Muse is exactly (Dame) Nature, to prod, probe, even rape if necessary.

Cognition & Nonliving Systems

As feedback mechanisms and more and more complicated compensatory structures became coordinated in living organisms, it became ever more possible for living systems to self-address and survive conditions that would have resulted in the death of previous iterations of those living systems. That this involved a mind-boggling number of failed attempts gets lost when we start ascribing necessity (or design) to this phenomenon (one should not call it a “process” at this point). On such a profligate scale of hyperexperimentation, the emergence of life becomes an inevitability, not because the universe was so arranged that life could emerge, but because configurations of animate matter could only eventually find the fit for the available environments—given enough time, life might appear on Venus, Jupiter, even Mercury or the Moon (if it hasn’t already, and been wiped out); even on Earth we find living systems, sometimes even more than microscopic ones, growing in places we would never have suspected.

It was only comparatively later that animate matter took up the habit of changing the environment to suit its needs. In this light, intelligent design would appear to rest on an assumption—a lack of faith—that all the time in the universe (so far) couldn’t have been long enough for such an articulation of animate matter to come about[60]; this failure of imagination is on par with the shortfall of “omniscience” as an adequate expression for what would really be at stake in “knowing everything”.

In their seminal work The Tree of Knowledge, Maturana and Varela (1987)[61] state bluntly and demonstrate convincingly that all living systems are cognitive systems, whether they have a central nervous system or not. Because “cognitive” has the sense of “thinking” in English, this may seem controversial, insofar as the authors unambiguously ascribe cognition to all living systems (plants, mammals, microbes, &c).[62] The further details of their argument need not be reprised here; it is enough simply to point out that they characterize the emblematic (positive and negative) feedback loops one finds in living systems as “knowledge” or “knowing”. Such knowing, however, is not yet or not necessarily self-knowing, or self-awareness.

In this light, one can speak of variously more complicated structures of knowing reflected in living systems (i.e., not humanly or artificially designed systems). The telos of this knowing, as perhaps the telos of the living system in general, amounts precisely to being able to “self-address” what less “knowing” systems would encounter as non-addressable breakdowns (of either the environment or the biological structure that supports the ongoing operation of the living system).

This illustrates how we may understand cognition (“knowing”) as a process for permitting the continued existence of a living system. We may even, if self-consciously, use this in a metaphorical sense for trivial machines. The little robot vacuums that wander about our wooden floors sucking up dirt analogize roughly with our cats circling around the carpet to find some pleasantly warm or scented spot. Insofar as Maturana and Varela are concerned with the biological roots of understanding, it becomes a non sequitur, if not simply incorrect, to say there is something more than an analogy between the vacuum and the cat. In other contexts, Maturana has explicitly expressed reservations about applying his and Varela’s ideas to nonliving systems (e.g., “societies” or “communities” and the like). It is not that living systems in communities or societies cannot interact and coordinate or have no feedback loops, but that the basis of such interaction and coordination is of a different order (or of a different kind) than the coordinations one finds in living systems. If robotic vacuum cleaners “know” anything, then it must be put in intonational quotation marks and should be understood as a “representation of knowing” rather than knowing (in the biological sense).

Conflict & Contradiction

By conflict I mean a problem that can be solved in terms of the system in which it occurs; by contradiction I mean a problem that cannot be solved in terms of the system in which it occurs.

Whether a problem is a conflict or a contradiction depends on how one formulates it. Formulating poverty, for instance, as a conflict, we may then insist on doling out charity to the poor so that their poverty is alleviated; formulating poverty as a contradiction, we then understand poverty as a necessary and desirably preserved element of the current (dominant) capitalist system that would only be alleviated permanently by implementing something other than the capitalistic system itself.

The view of a machine as either trivial or nontrivial parallels the distinction between conflict and contradiction. Describing a machine as trivial suggests or implies that we may interact with it (i.e., get it to do what we want, or change it) in terms of the systematic view we already have of it; describing a machine as nontrivial, on the other hand, suggests that we might only successfully interact with it (i.e., get it to do what we want, or change it) by adopting at least one additional systematic view of it above and beyond our own (trivializing) view.
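
A minimal sketch (my own illustration of the standard trivial/nontrivial distinction, not anything drawn from the literature under discussion) shows why the trivializing view breaks down: a trivial machine is a fixed input-to-output mapping, while a nontrivial machine’s hidden internal state lets the same input yield different outputs:

    # A trivial machine: same input, same output, always.
    def trivial(x):
        return x * 2

    # A nontrivial machine: the output also depends on an internal state,
    # and each input silently updates that state.
    class Nontrivial:
        def __init__(self):
            self.state = 0

        def step(self, x):
            y = x * 2 + self.state
            self.state = (self.state + x) % 3   # the input changes the machine
            return y

    m = Nontrivial()
    print([trivial(1) for _ in range(4)])   # [2, 2, 2, 2]
    print([m.step(1) for _ in range(4)])    # [2, 3, 4, 2]: not a function of x alone

An observer who insists on the trivializing (conflict) view will tabulate input-output pairs forever and still be surprised; acknowledging the internal state amounts to adopting the additional systematic view that the contradiction demands.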

When we diagnose a child with ADD, we may then get it to do what we want by poisoning the environment of its cognition with Ritalin or other drugs. Locking a dog in a crate when we go to work similarly makes a conflict of the dog’s behavior and applies a trivializing solution to “getting it to do what we want”. Or again, pushing the off button on our robot vacuum repeatedly once it starts behaving defectively trivially formulates the behavior as a conflict addressable in terms of the current system description we have of the robot vacuum.

One can almost always spot the trivialization in a conflict when the “solution” to the problem originates as an application of energy from a source external to whatever we’re viewing as a conflict—hence, the introduction of a cognitively environmental poison in the ADD child, the restraint of the crate for the dog, and the physical force of pushing a button to stop the robot vacuum.

A nontrivial approach, which formulates the problem as a contradiction, will tend to involve triggering a change of state in the system and allowing its own (internal) energy to bring about the change we desire—even though that change may still be surprising to us and not yet one that we hope for.

In the case of the dog, how is this a contradiction in the first place? Is the problem that she wreaks havoc on stuff in the house if left out while I’m at work? But why is this a problem? What is the contradiction? Is it that I value my things more than my pet? (But isn’t my pet one of my things?) On this view of the problem (as a question of things), I might simply hide away whatever it is my dog loves to destroy. This could be called avoiding whatever it is that triggers my dog’s havoc. Or perhaps the contradiction may be found in the fact that, on the one hand, I love all the playful energy of my dog but do not enjoy her destructiveness when left to her own devices while I am at work. On this view of the problem (as desirable energy undesirably expressed), the solution involves letting her wear herself out running before I go to work.

It should be clear that an exact and precise distinction between seeing a problem as a conflict or a contradiction, precisely because it involves a human looking at the problem in the first place, may not prove exact or precise at all, and overlap may occur. In the case of the wiggly student diagnosed with ADD, the problem formulated as a conflict reports all of the student’s nervous energy as disruptive in the classroom; consequently, a trivializing (conflictual) resolution to the problem involves dampening the student’s nervous energy (with drugs, by removing her from the classroom, &c). The problem formulated as a contradiction notes that the classroom—devoted to the education of students—does not want to acknowledge this kind of student; the classroom’s (and teacher’s, and school’s) definition of what constitutes a student would need to change in order to recognize this kind of student. In a sort of way, the separation of this kind of wiggly student into “special education” classrooms or remedial schools (where particularly kinesthetic modes of learning may be available) somewhat resembles a contradiction-based solution to the problem, to the extent that a special education classroom is capable of recognizing this kind of student. But the broader problem, of course, is that such classrooms have stigmas associated with them and, in any case, they involve the segregation of the student from her peers—an application of force from the outside (removal from the classroom) to another space.

A more contradiction-based solution would be that the nature of the classroom could (and should) change so that this kind of student is acknowledged as a student. One strategy for this involves providing her with the necessary modes of activity that occupy her. If everyone else is reading a book, this student has something to keep her hands busy, or the teacher reads to her, &c. An even broader contradiction-based solution still would be to stop pretending that class can proceed only if every student is sitting in his or her chair. I have known teachers who do not discourage their “wiggly” students from getting up and wandering around the room, with three caveats—(1) that they not interrupt what is going on by doing so, (2) that other students in the class understand why this particular student needs to be up and moving around, and (3) that the opportunity remain preserved for other students in the classroom also to have their particular needs recognized and honored. Another contradiction-based approach, arising from a long-standing critique of the classroom generally (cf. Foucault’s Discipline and Punish[63] or Illich’s Deschooling Society[64]), would emphasize the insisted-upon “order” of the classroom as the illegitimate telos of education in general, regardless of curricular content. Here, rather than demonizing those students who fail for whatever reason to socialize to this monological notion of order, the classroom might be arranged either (1) to accommodate an expanded notion of order toward the same imposed telos of education, or (2) to take the notion of “order” presented by the “wiggly” child as an addition and change to the curriculum itself.[65] And so on.

Unpredicted vs. Unpredictable

The distinction between the unexpected (or startling) and the surprising involved three factors in particular: (1) whether we had designed a given system, i.e., whether or not it was artificial, (2) whether we could determine if a system we encountered in the world had been designed or not, and (3) the length of time we had been able to observe the system generally. These, without being necessarily sufficient, provide some context for the further distinction between the unpredicted and the unpredictable.

To predict requires a relationship to the body of knowledge regarding the thing to be predicted; that is, the unexpected or the surprising inform our response to an operation or iteration of a given machine, while the unpredicted and the unpredictable inform our expectations of future iterations or operations of the machine. Thus, one’s winning bet at roulette does not constitute a prediction because we have no relationship of knowledge to the outcome[66]. Similarly, the prophet’s pronouncements of gloom and damnation do not constitute predictions either, as her knowledge of the future outcome is (by conceit at least) 100% certain. A circumstance of authentic prediction, then, requires something greater than (a conceit of) total uncertainty and something less than (a conceit of) total certainty.

Whether something is unpredicted or unpredictable involves a “quality” of prediction. With trivial machines, predictability generally confirms the relative likelihood of a given output; with nontrivial machines, predictability rules out more or less relatively unlikely outputs. When predictability fails in a trivial machine, it confronts us with an output that is unexpected (but that nevertheless self-evidently occurred); when predictability fails in a nontrivial machine, it confronts us with an output that seems implausible (but obviously is not). Both circumstances offer opportunities for learning.

Where the deities of intolerant monotheism is concerned, if we deem an attribution of omniscience as tenable, then there can be no question of human beings being nontrivial machines. To be a nontrivial machine would give us the capacity to surprise the deities of intolerant monotheism, to act in a way that would be unpredictable to Them, which violates the notion of omniscience. It might still be asked, however, what purpose human nontriviality could possibly have for the deities of intolerant monotheism.

In Jung’s (2010)[67] “Answer to Job,” he arrives at the notion that human beings were required to effect the salvation (the redemption) of the Creator. While Jung’s proposal sounds like rank heresy (“Answer to Job” has been described as his “most controversial” essay), it marks a logical extension of what the European alchemists he’d studied had been up to. The alchemists in any case typically saw no contradiction between their avowed project as alchemists (to redeem God, to liberate Him from matter) and their Christian profession.

This framing (that the Divine requires the Human to redeem It, just as the Human needs It for redemption) offers a dualistic reading of notions found in much older nondualistic, Indian philosophy and religion.

In confronting the apparent mystery of why the Inconceivable would ever manifest materially (as you, me, the world) in the first place, the answer given by India’s sages runs that it is delightful to achieve Enlightenment. Given that everything expresses an avatar (a material manifestation) of the Inconceivable in the first place, all of reality then exhibits one great tending toward the experience of re-achieving Enlightenment[68]. On this view, the Inconceivable (the term is obviously a mere convenience) deliberately limits its consciousness—via the illusion of distinction, or maya—and each of us then is a particle (from one view) or the entirety (from another) of that gesture. In this case, it is not that the Inconceivable in a “nonincarnate” form is or is not omniscient; rather, what can be claimed is that “The Inconceivable is neither omniscient nor not omniscient.” Consequently, any avatar of the Inconceivable (whether you, me, Krishnamurti, a stone, Buddha, or a flea) may only and necessarily have a limited consciousness, however capacious[69].

The above hardly resembles the epistemology of intolerant monotheism, where the divine claims only omniscience (and not also non-omniscience)[70]. In Jung’s “Answer to Job,” then, we can see a case where the self-knowledge of the divine required the nontriviality of human beings, even if this necessarily qualifies the omniscience and/or omnipotence of the deities of intolerant monotheism.[71]

I raise this point as a case of treating human beings as nontrivial machines. Out of divine ignorance, Jung’s deity required not just the unexpectedness of human activity but the surprises afforded by human activity in order to effect His own salvation, redemption, enlightenment, self-consciousness. One might note in passing also the Manichaean notion that the divine “used the Primordial Man as bait for catching the powers of darkness” (Jung, Alchemical Studies, ¶450)[72].

Jung’s saving heresy notwithstanding, we must return to the notion of human beings as trivial machines with respect to the deities of intolerant monotheism, insofar as omniscience disallows surprise, emergent properties, and even unexpected outputs; that is, we stand relative to Them as our created (artificial) systems stand to us, where all of the parameters of the system are not (as is the case for lowly humans) only partially stipulated, but fully stipulated, in all of their consequences. We are, in a rather literal sense, calculators, except that what we, as human designers, experience as unexpected mechanical breakdowns and defects are (as far as the deities of intolerant monotheism is concerned) not unexpected in Their design of us. This might have a consequence of making us fatally boring to the deities of intolerant monotheism, even if we cannot help being fascinated with ourselves to no end, but that’s a problem for the deities of intolerant monotheism, not us.

A Theodicy for Intelligent Design

The term theodicy refers to “A justification of a deity, or the attributes of a deity, especially in regard to the existence of evil and suffering in the world; a work or discourse justifying the ways of God” (Wiktionary).[73] The classic formulation involves the question of how an all-knowing, all-loving, and all-powerful deity would allow suffering in the world. Various disingenuous theological answers have been proposed (including blaming Satan and the blunt insistence, expressed notoriously in the Book of Job itself, that the deity of intolerant monotheism doesn’t have to offer any justification whatsoever, and moreover that it is inexcusable to ask), but what matters here is the way those disingenuous answers may reappear in various intelligent design arguments.

The typical resort in answering this question is to qualify an attribute (e.g., by limiting the Designer’s omnibenevolence, omnipotence, or omniscience).

If the deities of intolerant monotheism is not all-knowing, for instance, then perhaps They didn’t know in advance what consequences creating the universe as They did would entail. This would be enough, except that what we can observe of the universe may not warrant the term “intelligent.” For one, there is no shortage of human fiction taking inept creators to task (cf. Shelley’s Frankenstein and Dick’s Do Androids Dream of Electric Sheep, to refer to them again). More generally, the fact of famine, plague, and death (not even only in the human domain) offers an intractable ground for critiquing any claim of intelligence in a design that includes them. The Second Law of Thermodynamics already points to a fatal design error, particularly if (as an all-powerful Designer) I’m not bound by any necessity to allow systems to dissipate in this way. Eliminating this error would allow the elimination of famine. Other examples might abound, but for all that humans have erred along the way, we have also in very many ways taken what could be called an absolutely absurd and untenable existential condition and improved our lot. That we might ultimately fail (or are even currently failing disastrously) will be at the very least a critique also of the wholly shitty arrangement we found dumped in our laps by a so-called intelligent Designer in the first place and not merely a sign of our own inept failings.

Here again is another moment where the main topic of this essay is overshadowed by larger issues. In contrast to theodicies (as justifications of a deity), there are also anthropodicies, which consist of “An attempt, or argument attempting, to justify the existence of humanity as good” (Wiktionary). A strategy for both theodicists and anthropodicists involves the reductio ad absurdum of their opponents’ arguments. The disasters of human attempts to arrange the social may be taken by theodicists as arguments against secular humanism and in favor of intolerant monotheism, and vice versa, but we are not obligated to accept a false dichotomy between theodicy and anthropodicy as the only two choices (just as Democrat and Republican need not be taken as our only two choices). So a seeming victory of one is not a defeat of the other, and vice versa. On this view, the defeat of intelligent design does not obligate us to accept the “unintelligent nondesign” (of Nature) as science might currently construct Her. And, if scientists may tend to an (irresponsibly) neutral view of their research, the advertising wing of science promotes an anthropodicistic view of progress, benefit, “better living through chemistry,” and the like that generally goes under the heading of technocratic optimism. It will not be desirable simply to yield the arena to this notion unchallenged.

So far, one thing is certain about life in this universe: the intelligent Designer has arranged things in such a way that nothing we know of can survive in it for long[74]—even the proton and the whole system of the universe itself eventually decay. Were this our template for what constituted a successful design, engineers would have offed us and themselves hundreds of years ago.[75] In a similar way, one might argue for the notion that the Designer simply isn’t all-powerful enough to carry out the design, or even a design. As Lem (1970)[76] puts it in Solaris:

“No,” I interrupted him, “I mean a God whose deficiencies don’t arise from the simplemindedness of his human creators, but constitute his most essential, immanent character. This would be a God limited in his omniscience and omnipotence, one who can make mistakes in foreseeing the future of his works, who can find himself horrified by the course of events he has set in motion. This is … a cripple God, who always desires more than he’s able to have, and doesn’t always realize this to begin with. Who has built clocks, but not the time that they measure. Has built systems or mechanisms that serve particular purposes, but they too have outgrown these purposes and betrayed them. And has created an infinity that, from being the measure of the power he was supposed to have, turned into the measure of his boundless failure.”

While human designers frequently operate under such extreme constraints that we remain sympathetic to failure, as Winston Churchill once remarked, “it is not enough to do our best; sometimes we must do what is required.” It can be no kind of intelligent design if all that can be said of it is, “It looks good on paper.”

Here again, the critique of the Designer’s failure has human analogs as well, insofar as science (and religion too, though in a very different way) claims more potency than is warranted. One hears this both in formal futurological settings and in the most casual day-to-day conversations about the salvific capacity of science: science itself is omnipotent, even if human capacity remains ever limited. So while we did not “invent” the Second Law of Thermodynamics, there are any number of deliberate and emergent properties of the systems we have invented that cannot be addressed merely by superciliously intoning that a problem cannot be solved at the same level that created it or that the master’s tools cannot be used to dismantle the master’s house. Technocratic optimism is the propagandistic edge of science (in science’s verifiably empirical guises) and as such must be resisted if everything human isn’t to be consumed in the vulgar materialism of science that finds its economic concomitant in capitalism. But it must also be resisted in those domains where the human is under contest (psychology, aesthetics, the arts, language in general, even spirituality and religion), where the materialist reduction of science finds its social concomitant in mere utilitarianism.

It is a more interesting case to ask what happens if the intelligent Designer is not all-loving, as is claimed for the deities of intolerant monotheism. It is a major trope of science (to say nothing of technocratic optimism and developmentalism in general) that its pursuits are neutral (i.e., disinterested). And yet, there have been those voices—even of Nobel Prize-winning scientists—who have expressed reservations about the general ethics of “disinterested research”. This reached at least one peak in the wake of the murder of hundreds of thousands of Japanese in the atomic explosions over Nagasaki and Hiroshima. Several scientists signed letters of protest that perhaps did not go quite so far as to condemn their own work. One Nobel Prize-winning scientist who had worked on the Manhattan Project specifically said in his acceptance speech that perhaps the time had come to start thinking about the consequences of whatever research might be being worked on.

Needless to say, this call has not been taken seriously; that is, while ethical advances in practice and methodology have been implemented in many domains, when it comes to the further development of means for destroying life, tortured justifications have always been found, and no protest has been enough to put a stop to such research. One might at least cite the example of Norbert Wiener, one of the first cyberneticists, who flatly refused to provide his research to a US Department of Defense weapons contractor, as a case of what such “stopping” looks like.

Amongst these tortured justifications, the argument runs, “If we don’t invent this, someone else will,” and where weapons are concerned, the next argument, “And then they’ll use them against us and we won’t be able to defend ourselves,” is not far behind. The history of atomic weapons suggests this paranoia is not apt, and it is no accident that a country like the United States has been the only country to kill people with atomic weapons. (Currently, Israel is threatening to become a member of that noble brotherhood; perhaps that’s what’s required of the worldwide “white of passage”.) In another venue, the US Supreme Court declared in a famous case defending video recording technology that the illegitimate use of a technology (in this case for pirating movies) was not grounds for suppressing the technology. Or in yet another arena, the NRA’s argument that “guns don’t kill people, people kill people” similarly holds technology harmless[77].

Because we live in a culture where the doxa of technocratic optimism is essentially unchallenged (or, at this point, is challenged only by the exigencies of producing technocratic optimism), the notion that it might not be intelligent to design technologies that have destructive or detrimental human effects may have difficulty getting traction, but nothing obligates us to bow to that doxa or to current tastes and fashions in these things.

In any case, many of the disingenuous human arguments for moving forward with the building of technologies that are by design harmful to human life cannot make much sense when applied to the deities of intolerant monotheism. After all, the deities of intolerant monotheism cannot argue, "If We didn't invent cancer, someone else would have," or even, "We needed to invent starvation in order to prevent our enemies from over-running us." Moreover, if the deities of intolerant monotheism are not all-loving, then we have no leg to stand on when objecting to those horrible rigors we can suffer as human beings.

For the banes in the world, ex post facto arguments for the necessity of those banes become necessary (if there is going to be designed intelligence behind them), but it takes no imagination whatsoever to see how one could have better arranged the world in a material sense[78]. Because we would starve to death if we didn't eat, we may argue that the need for food is precisely what provided us humans with the pretext for the fellowship of feasting together. But obviously we could have feasted together without being "forced" into it by the pain (or threat) of starvation.

This is to say that it’s not very intelligent to have “invented” starvation, disease, death—it takes no imagination whatsoever to imagine other ways to effect population control without resorting to biological termination, &c. Again, I want to emphasize the basis of the argument—it is not that “the universe” has taken hold of the innocent technology of entropy and turned it to a destructive purpose (called death), but rather that the supposedly intelligent Designer of the universe resorted to the invention of technologies (particularly disease and starvation) that are by design painful and destructive. If I start torturing you and denying you food and water, I will be caught, prosecuted, and put to death, and deservedly so. All of my rigors upon you will not be counted “intelligent,” but rather heinous at best, if not evil.

If the intelligent Designer has no love for us, the question shifts from an ethical one to one of design criteria. Are pain, the threat of death, starvation, or disease intelligent design choices, or might other mechanisms have been designed? What is clear for all four of these elements is their corporeal root. It is clear from the non-human part of the animal kingdom that pain (particularly when it drives the avoidance of starvation, more ambiguously as it relates to disease, and again positively as fear serves as a goad to the avoidance of death) appears to “work” in evolutionary and biological terms. It can motivate humans as well, of course, but we are not merely sentient beings (aware), we are also self-aware. Innumerable are the human examples where non-corporeal motivations were in play that encountered corporeal hindrances—a mother wants to rescue her child from a burning building, and if she cannot overcome the pain inflicted by the flames then her child may die. Our mental agonies cannot always be reduced to somatic roots, so to the extent that our existence (as a design) is hindered by the nociceptive systems in our design, this would seem to represent a design flaw. Had the intelligent Designer been more loving vis-à-vis the limits of the intelligence of pain (disease, starvation, the threat of death) in our design, then we might find ourselves in the presence of a yet more intelligent design than the one we have now.

We might pretend that whatever happens on Earth is technically outside the domain of the designer—the deities of intolerant monotheism merely set up the universe, and human life was an unexpected (or emergent) property of it. With this, we are already so far away from anything resembling the typical deities of intolerant monotheism that there’s hardly any comparison, except that ID is particularly advanced by people with Christian backgrounds for the purpose of advancing a Christian agenda (so it seems reasonable that they restrict themselves to some variety of intolerant monotheist deity already associated with their religion). In any case, we can only see it as disingenuously moving the goalposts to begin by claiming that the deities of intolerant monotheism were quasi-omniscient and quasi-omnipotent enough to set up the universe and get it going, but not quasi-omnipotent and quasi-omniscient enough to correct some of the grotesquely glaring deficiencies of design that gradually emerged—and this, without any reference at all to the problematic behavior of the humans that appeared (e.g., raping, pillaging, plundering, &c). If we ever manage to extricate ourselves from the human condition, we will still have much work to do regarding human behavior, but at least we will be finally liberated from the criminal negligence that seems to be at work in the Designer’s not convincingly intelligent handiwork.

From the foregoing, if the Designer is in some way lacking, we can safely say that the lacks are so pronounced that not only are they no credit to anyone claiming some kind of supernatural nature but even by a human yardstick would get a failing grade in an introductory engineering course at a community college. In this light, the excuse that something like Satan or the Devil is out running around and fucking up all of the Designer’s good work is like blaming your lab partner for your chemistry experiment failing. The appearance of Jesus, as precisely an intervention by the Designer into His bogus creation, is no answer to the above, since the “solution” to the design flaws of plague, famine, and senescence is to defer the solution till after death. So, even when we relax the standards demanded of omniscience, omnipotence, or omnibenevolence in the deities of intolerant monotheism, we’re forced to give so much ground that what we’re left with is a weak claim of intelligence that human imagination easily exceeds.

That is, what the immediately foregoing underlines is that as soon as we allow the deities of intolerant monotheism to be anything less than perfectly omniscient, omnipotent, and omnibenevolent, then we immediately descend into a morass where any claim to make cancer into a boon, to make starvation into a spiritual exercise, to make mental illness into a learning experience, to make the decrepitude and senescence of advancing age into something other than a rather cruel joke becomes unforgivably naïve at the very least and probably patently sadistic and full of zloradstvo (Schadenfreude) toward people in general. As Lem (1961) puts it:

We all know that we are material creatures, subject to the laws of physiology and physics, and not even the power of all our feelings combined can defeat those laws. All we can do is detest them. The age-old faith of lovers and poets in the power of love, stronger than death, that finis vitae sed non amoris, is a lie, useless and not even funny. So must one be resigned to being a clock that measures the passage of time, now out of order, now repaired, and whose mechanism generates despair and love as soon as its maker sets it going? Are we to grow used to the idea that every man relives ancient torments, which are all the more profound because they grow comic with repetition? That human existence should repeat itself, well and good, but that it should repeat itself like a hackneyed tune, or a record a drunkard keeps playing as he feeds coins into the jukebox… (204).

If someone wants to argue that such calamities are necessary and worthy in any way of being deemed intelligent design, then I suggest they infect themselves immediately with ebola, necrotizing fasciitis, or something more protracted like HIV—perhaps they should just pith themselves and wait out the end in a vegetative coma. The idea that the cosmos is a chaos of hostile randomness is a less morally untenable point of view than this.

Freewill, Teleology, & Nontriviality

So we return to a position where the all-loving, all-knowing, all-powerful deities of intolerant monotheism stand in relation to human beings as a designer of artificial trivial machines.[79] As the earlier exploration of the limits of omniscience suggested (beginning with the 10^126 possibilities of Ashby’s little 4-input, 4-output nontrivial machine with four internal states), the problem of knowing the inexpressibly vast range of possibilities that even one human being might present can be addressed by assuming that a Designer stipulates all aspects of any design made. That is, while Ashby’s graduate students didn’t have enough time in the world to confirm the configuration of Ashby’s little machine, Ashby himself could have stated the outputs associated with each input, assuming that he had some way of monitoring the run of the inputs and outputs from the start of the machine’s operation. Because nontrivial machines are history-dependent, Ashby could not have shown up in the morning and made such statements without knowing (i.e., having a record of) the machine’s previous states. Trivial machines do not require such a record of past states to allow a designer to state outputs from known inputs; if the machine is operating as designed, which is to say it is providing expected outputs for known inputs, then knowledge of the design itself is sufficient to make such statements accurately.

I do not use the word predict in this context because there is no uncertainty. If Ashby had no perfect record of his little machine’s previous iterations, or in the circumstance when a defect (in a trivial machine) begins affecting the outputs—or in the daily occurrence when we encounter a machine without knowing either its history or whether it is trivial or not—then uncertainty enters the situation and it becomes apposite to speak of prediction. For an intelligent Designer, the difference between a trivial and a nontrivial machine amounts, at least in part, to the requirement to have the latter’s full history of iterations in order to state (with perfect certainty) the next output from the current given input[80].
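A minimal sketch of the distinction (in Python, with invented tables; I assume von Foerster’s general formalism of a driving function and a state function, not any machine actually described by Ashby):

# A trivial machine: a fixed mapping from input to output, with no internal state.
def trivial(x):
    return (x + 1) % 4  # the same input always yields the same output

# A nontrivial machine: the output depends on input AND internal state,
# and every operation also changes the internal state (history-dependence).
class Nontrivial:
    def __init__(self):
        self.z = 0                 # internal state
    def step(self, x):
        y = (x + self.z) % 4       # "driving function" F(x, z): the output
        self.z = (self.z + x) % 4  # "state function" S(x, z): the next state
        return y

m = Nontrivial()
print([trivial(1) for _ in range(5)])  # [2, 2, 2, 2, 2]: statable from the design alone
print([m.step(1) for _ in range(5)])   # [1, 2, 3, 0, 1]: same input, shifting outputs

A designer with a record of the internal state from the machine’s first operation can state every output with certainty; an observer who sees only the input/output stream cannot.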

It will seem—based on the description of artificial trivial machines—that notions of human dignity wither if that is what we are. However, since nontrivial machines are deterministic in von Foerster’s view as well, this means that we, in our limited human understanding, can only experience ourselves and others as nontrivial (unpredictable) even if we are—rather, even as we are—wholly determined. So whether we are trivial or nontrivial machines vis-à-vis the deities of intolerant monotheism, from our standpoint our dignity and freewill and freedom of choice seem to be confirmed in a way that we cannot undermine—even in the most belligerent insistence that I am not free, my freedom seems a necessary prerequisite for saying so[81].

If we imaginatively put ourselves in the position of the deities of intolerant monotheism, we definitely experience the death of all significance in our lives; we see ourselves as mechanical puppets strutting about the stage and delivering our lines as if we wrote them, wholly ignorant that we did not. And we just don’t buy it. It’s not convincing at all.

But the offense isn’t in our experience of ourselves; rather, it’s in the fact that the deities of intolerant monotheism made us this way, whether trivial or not. Even from before we are created, the deities of intolerant monotheism has already determined that this one is doomed to Hell, this one to Heaven, and that one’s the savior. Or if we want to be less oriented by silly fears about the afterlife, it is simply that whatever arc there is in your life and what arc there is in my life are wholly determined in advance. In such a context, something else needs to be added to make sense of this all, which raises again that primordial moment when the monster asks Frankenstein, “Why did you make me this way?”

Did I request thee, Maker, from my clay
To mould me Man? Did I solicit thee
From darkness to promote me?
—Paradise Lost (Book X, ll. 743–45)

All this is merely to reprise what has already been said. We want to be nontrivial machines, but the nature of the deities of intolerant monotheism assures that we cannot be—or, to be more precise, whether we are trivial or nontrivial machines, with respect to the deities of intolerant monotheism, our deterministic character makes Their rationale for creating us suspect at best. In point of fact (and not only because the deities of intolerant monotheism are mythical), it may be impossible in principle to answer why the deities of intolerant monotheism would bother to make machines like us in the first place, except that we can analogize about why we make trivial or nontrivial machines: because there’s something we want to accomplish, some task, some goal, some instrumental end. We can then be thrilled or offended by such a utilitarian telos, but that’s rather beside the point finally and not even the main one at this juncture.[82]

Whatever one wants to claim about the motivations of the deities of intolerant monotheism, they cannot be wholly disinterested in our function as trivial or nontrivial machines. We humans design and create trivial systems with a telos, with some sort of purposiveness in mind, even when that purposiveness takes a heuristic or exploratory character. Generality excuses us from needing to make any specific claim about what said purposiveness must or should be; it is enough to say there will be purposiveness.

Since human beings are willful and perverse—thank goodness—the insistence that all designs may be described as purposeful must lead immediately to the invention of purposeless machines. Were we ever to encounter one of these purposeless machines, merely watching its operations would, in all likelihood, become the basis for any description of its purposiveness we might hazard, even if only to say it is the machine that goes whizz, bing, bang. And when its (human) designer appeared to assure us that it was designed to be purposeless, we would immediately insist that that, then, is its purpose. Purpose (telos) in a machine is like meaning in lived human experience; it only disappears when the machine does, in other words “at death”. It might be that the purpose is stupid, the meaning inadequate; that is a separate and painful matter. But so long as persistence persists, purpose and meaning cannot be erased. Not only are meaning and purpose inescapable; for that reason, too much of either can become (existentially) overwhelming. Eagleton (1989)[83] makes the point that some events need a deflation of meaning, even if they can never be wholly deflated to zero. So on the one hand, we might rebel against the sufficiency or excess of designed telos in our lives—even as a divine power has the ability to force us to do Their will against our own—or we might lament the insufficiency, even the negligence, of a profligate Creator who gave us too little or nothing at all to do.

The old saw runs: there’s no slave like a happy slave. Ideally, one would expect an intelligent Design to be one not at odds with itself; or, perhaps better still, one that isn’t capable of being aware of being at odds with itself. It is partly with this in mind that I am, somewhat arbitrarily in this section, using trivial and nontrivial indiscriminately to refer to human beings. As designed beings, analogous with human-designed artificial systems, we can assume (whether we are trivial or nontrivial) that the Designer fully knows and stipulated our design parameters and has the requisite means to track our deterministic character. Per von Foerster’s (2010) distinction, we must necessarily be classified as nontrivial machines—a categorization that further illuminates how our human experience is one endowed with a sense of freewill despite being wholly determined. Thus, the capacity to surprise that nontrivial machines exhibit pertains only to our own experience of ourselves and our fellow nontrivial machines, without requiring that we also surprise our omniscient Designer(s).

It may remain problematic, however, why purposiveness is necessary at all for the deities of intolerant monotheism. Humans design and create (trivial and nontrivial) machines partly out of convenience, partly because machines can more reliably do what we could not or cannot do, partly to serve as system components in more compound machines, and so forth. Since the deities of intolerant monotheism are all-powerful, it cannot be that we more reliably do what they could not or cannot do, so whatever our purposiveness is, it must essentially be on the grounds of a convenience, as something the deities of intolerant monotheism can do but simply choose not to. This may make it seem like the deities of intolerant monotheism created us to handle menial tasks, but this would be an overstatement—the deities of intolerant monotheism could have written Bach’s music, but instead built the nontrivial machine of Bach to create that output instead. It might seem somehow gratuitous to create the nontrivial machine of Bach when the deities of intolerant monotheism simply could have manifested all of Bach’s music in toto in one fell swoop, but this mere appeal to efficiency does not necessarily make the elaboration of countless nontrivial machines (like Bach, like you, like me) eo ipso an unintelligent (or nonintelligent) design.

In passing, I will add that hierarchy trivializes human nontriviality as a means to the goal-directed end that the hierarchy exists for. If this is compellingly problematic, then the invention of humans for the trivialized (hierarchical) purposes of an intelligent Designer is problematic as well.

The Design of Intelligence and the Intelligence of Design

As soon as we imagine a faulty creator of the cosmos, then the pathetic inadequacy of the current arrangement of biological life (with its plagues, famines, decrepitude, senescence, and the like) fails to warrant the term “intelligent” design. I keep assiduously not including human-enacted awfulness in the laundry list of design flaws, because who the “author” of those ills is remains debatable.[84] For all manner of nonhuman-originated suffering, however, there has always been someone to deem it not simply a blessing in disguise, a chance to grow[85], but the necessary ground for personal growth in general. As Kateb (1972)[86] remarks, in his discussion of pain and pleasure:

At times it seems that the antiutopians favor pain as men would favor it who never felt very much of it and think it may be good for themselves and surely could not be too bad for others … at other times, the antiutopians speak as men who have seen or had so much pain that they have become incapable of imagining that there could come a time when pain–at least in its more brutalizing forms–could cease to be (126).

Human beings, being geniuses at coping, have a lot of wise things to say about death; Jung specifically calls death a goal, not an end. Humans also say some stupid things about death; for example, that if people didn’t die, the planet would quickly become overcrowded. Since an intelligent Designer, who is capable of creating solar systems and the like, must be capable of creating habitable planets, one need only provide a limitless number of habitable planets to address the “overcrowding” issue. Existentially, one can argue that if we didn’t die, we’d go mad from boredom. Most likely, but the solution to that is to make death voluntary, not obligatory whether one is ready to go or not. Better still, one might make death temporary, which the notion of reincarnation points to, but the deities of intolerant monotheism don’t go in for that kind of Eastern nonsense. Here we can already see why the resort to the despicable idea of original sin arises—the idea that somehow we human beings, even though we had been deterministically created perfectly, nevertheless managed to become of such a nature that the threat of starvation became necessary to get us to socialize. It’s rather obvious instead that original sin is smoke and mirrors for design negligence. One would have to say that our designed telos is, precisely, to starve, wither, age, and die.

As either trivial or nontrivial machines, we perform tasks that the deities of intolerant monotheism would otherwise do but choose not to. But we are also, as designed machines, necessarily limited in what we can self-address. By design, we encounter environments that we cannot self-address, even with our detailed feedback mechanisms. In such instances, just as we would have to rescue the robot vacuum that has somehow backed itself into an inextricable corner, some outside energy must intervene to rescue us from those conditions we cannot self-address.

Can this limit of finite machines be deemed an intelligent design? (By finite machine is meant an artificially designed, trivial or nontrivial machine.)

An intelligent Designer, fashioning us as Its conveniences, would have to include in each of us (individually) precisely those self-addressing mechanisms that would account for our limiting conditions. It seems necessary to argue, then, that each of us—and in fact all humans who have ever lived as well as all living systems in the universe—is already perfectly designed for whatever actual duration of life we experience. This, of course, is about as cart before the horse as it gets.

One of the smart ways out of this dilemma is to claim that all of life is an illusion, or at least that in our present limited understanding of things, life seems full of suffering and irony. By this, an intelligent Designer has even taken account of the creative utility of our partial understanding, but for the deities of intolerant monotheism, such a resort is not available. In that epistemology, reality has to be what we say it is, and so one gets stupid resorts and dumb explanations (like the Devil or original sin) for why a perfect creator (an intelligent Designer) wound up creating such an elementarily flawed (unintelligently designed) cosmos (cf. the Second Law of Thermodynamics again).

One seemingly necessary conclusion of our finite design vis-à-vis an intelligent Designer is that our incapacity to self-address various conditions must be taken as deliberate. Having recently enjoyed a stomach flu for the last two days, I might conclude that “all-loving” means more the sense of “loving everything” (including stomach flu viruses) than indicating some preeminence of favor for human beings[87]. But if, for the moment, we perform an act of intellectual castration like Tertullian’s, then let’s stipulate there is some intelligence in the fact that poisons kill me, diseases afflict me, that my biological self degrades over time, and that eventually I will die—and not simply gratefully as a respite finally from the physical abuse of plague, famine, senescence, and atrophy. As human beings, we have certainly made a purse of these sows’ ears frequently, perhaps even necessarily, but that is because we’ve inherited a bad situation. Whatever good (or growth) might be gleaned from these afflictions, the afterlives of intolerant monotheism promise no more such growth, because these factors are absent[88]. So, to imagine an intelligent necessity in the afflictions of poison, disease, death, and whatever sheer physical limitations there are that would otherwise destroy us if not met cannot involve making such arguments.

Sentience, Sapience, & Self-Domestication

In making the distinction between trivial and nontrivial machines, von Foerster (2010) noted, “Non-trivial machines have ‘inner’ states … In each operation, this inner state changes, so that when the next operation takes place, the previous operation is not repeated, but rather another operation can take place” (311). This is the primary distinction between such machines; from it follows the predictability of the former and the unpredictability of the latter for human observers. The sheer complexity of a nontrivial machine’s possible configurations makes it transcomputational and thus unpredictable for humans, but this limitation need not apply to an omniscient Designer.[89] To an omniscient Designer, as synthetically determined machines we may not differ importantly from trivial machines, despite our history-dependence and unpredictability.

One could split some hairs at this juncture over whether the robot vacuum might actually have changing inner states, or whether a unicellular organism’s changing inner states really change “enough” to count as a change. All formal dichotomies, as descriptions of observed phenomena, must have such borderline cases. We can simply point to how feedback loops make outputs into inputs that then influence the next iteration of the machine, whether in a trivial fashion or by a change to the internal operations of the machine. And in particular, these feedback loops are crucially helpful not simply in coordinating with events we perceive as in the environment but also for self-addressing changes that might otherwise be problematic or lethal. In a house fire, a robot vacuum will not attempt to escape, while the humans and pets and insects and plants will, with varying rates of success.
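To make the feedback-loop point concrete, here is a minimal sketch (in Python, with invented values) of the trivial variety, where a thermostat stands in for any self-addressing mechanism:

# The machine's output (heat on/off) alters the environment, and the altered
# environment becomes the machine's next input: a feedback loop.
def thermostat(temp, setpoint=20.0):
    return temp < setpoint  # trivial rule: heat exactly when too cold

temp = 12.0
for step in range(6):
    heating = thermostat(temp)        # output...
    temp += 2.0 if heating else -1.0  # ...feeds back into the next input
    print(f"step {step}: temp={temp:.1f} heating={heating}")

The rule itself never changes here, so the machine stays trivial; a nontrivial variant would let the feedback alter the rule (the internal operations) as well.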

Maturana and Varela describe all living systems as cognitive systems, whether they have a central nervous system or not; cognitive activity is deemed effective action when it permits the continued existence of the living system. One may then speak not simply of cognitive activities for continued existence, but of ranges of cognitive activities. If one were to jump the gun, one might identify intelligence as effective action as well, so that the more developed the intelligence, the greater the range of effective actions available to a living system when self-addressing its continued existence in what it perceives as its environment. This is jumping the gun only if synonymizing “effective action” and “intelligence” proves premature.

To say that all living systems are cognitive systems is to say they are sentient (i.e., “aware”); that is, they sense. We can distinguish this from sapient (i.e., “self-aware”); that is, that which makes sense. It is an open and controversial question to what extent other living systems are self-aware. There is no question that living systems are sentient, though much depends upon how carefully people use the term. Our familiarity with (and fondness for) mammals, particularly our pets, makes us very aware that they are sensing creatures; enough so that we can enact animal protection laws on ethical grounds. Many Buddhists take the sentience of living systems more seriously and keep a vegetarian diet, but Jains would criticize (and have criticized) even this and often practice even stricter food habits (filtering water to remove microbes, and in all ways doing their best not to kill anything in order to subsist).

Some people need to insist upon a difference between humans and animals; religion especially is a historical proponent of this. If I assert there is a difference, it is because that’s how it looks to me. I don’t need the kind of vanity about human beings that insists we’re superior to animals—all animals in the world in any case seem to get along better with their environment than we do; the ant for its size is stronger than any human; fleas, relatively speaking, long-jump farther; all manner of things fly without mechanical assistance. If the Olympics were ever opened up to all the world’s species, it would be a long time before humans won a medal in any strictly physical event. And, at the same time, we do seem to have some kind of distinction from other animals, if only in our ability to say so.

Meanwhile, within the range of what my cat needs for her continued existence, her effective actions seem quite adequate. But watching her behavior makes explaining everything she does in terms of “survival” an unconvincing exercise. For example, she has a cardboard box that she claws from time to time, and we can say this sharpens her claws, strengthens her muscles, perhaps helps to coordinate claw-muscle movements, &c., but it looks much more like she does it because it gives her pleasure.

A study of songbirds found greater variety of song in domesticated varieties. The researchers suggested this was so because evolutionary pressure (to reproduce) no longer demanded specialization in the species’ given mating call. So why should cats, now countless generations removed from their primordial ancestors, only sharpen their claws for a “motivation” of “survival”? Having been domesticated, the activity may now be for some other reason, like pleasure.

If by domestication I mean the increase in the range of an activity formerly specialized for the sake of reproductive survival such that a wider variety of telea (the plural of telos) for that activity eventually emerge, then this may be extended to a function of cognition in general, to the extent that a greater range of cognition frees an otherwise necessary action from the constraint of that necessity[90]. Recreational fucking, then (which not only humans, but also bonobos, dolphins, &c engage in) may be a result of self-domestication (with its own chain of subsequent evolutionary consequences, rather than the other way around)[91].

The domestication of cognition enables not only an increased range of self-addressments to an environment but also an increased variety of telea for those self-addressments. This is self-evident, to various degrees, across the whole range of living systems, but is most familiar (to us humans) in other mammals, who often seem rather occupied with behaviors “for pleasure”. With caveats toward not falling prey to anthropomorphisms in place, then, perhaps not all personifications are hopelessly anthropic at root. A great deal of behavior (toward objects) in the animal kingdom can only with considerable torture be construed as “for survival”. It seems that other telea could be legitimately identified. Insofar as this means at least for some species that sentience is already enough to justify an ascription of multiple telea to its members’ behavior, then “motivations other than survival” cannot in themselves be construed as a uniquely human endowment. Yet it is with no appeal to human vanity that I propose still to identify our distinguishing distinction as a species of living system[92].

Spetch & Friedman (2006)[93] reviewed comparative studies of object recognition in pigeons and humans. In particular, they designed a study to rule out the (logical enough) possibility that pigeon performance at object recognition might be improved if actual objects, rather than pictures of objects, were used. And this did indeed prove to be the case, while the change in human performance (between pictures of objects and objects themselves) was minimal. More strikingly, while the pigeons’ reaction times were always longer than the humans’ and showed (as expected) faster reactions to objects than to pictures of objects, humans had notably faster reaction times to pictures of objects than to objects themselves.

I don’t intend to hang an entire argument on just one result, and yet these findings suggest that humans are more habituated to representations of objects than to objects themselves; by this distinction I mean particularly a self-awareness of awareness of objects themselves. Citing a general and anthropic bias in the studies surveyed, Spetch & Friedman (2006) allow that some visual cues may be lost for the pigeons in the (human) transformation of objects into pictures (representations of objects). It may also be that what we (as humans) consider to be an adequate pictorial representation of an object will seem that way to us precisely because we have spent a great many centuries mastering the artistic and technological means to reproduce those representations in a way that looks correct to us. On this view, we may describe self-domestication in our species as having freed an object-recognition specialization formerly adapted to the telos of “survival” for a variety of other telea (such as tool use, aesthetic appreciation, sexual obsession, or whatever else caught our fancy). Somewhere along the way (and here is where intelligence might diverge descriptively from cognition per se), within the range of additional telea, representations of objects became objects in their own right, and have since taken priority over objects per se, as our faster reaction times to representations of objects compared to objects suggest.

Necessity & Intelligent Design

The cruel trick of existence, though it’s one we could hardly do without and the one we frequently pin our finest accomplishments to, is self-awareness, or sapience. For machines functionally trivial with respect to our Designer but not to ourselves, sapience becomes an inexplicable design flaw; we can easily dismiss it as gratuitous if not simply a bad idea. We can take the long view that out of the primordial hydrogen animate matter eventually rose, and hence intelligence and self-awareness—some things do take æons to come to fruition—and this is the case; it may even be that the design is still raveling.[94] But for an omniscient Designer, the outcome is already known, and for an all-powerful one, the design should already be done. Yet, if this (our current predicament) is actually it, then it hardly seems adequately, much less intelligently, designed.

As an argument, intelligent design seems rather a piece of typically biblical and patriarchal ignorance insofar as it claims the provision of a big bang and a squirt of matter deserves the name creation, even Fatherhood or Design, whether intelligent or not. And within this, the note of technocratism, long a friend of the Protestant work ethic, is not far behind with its shibboleth, “Creation doesn’t kill people, people kill people.” Humans might be able to wiggle out of liabilities in court this way, but not a Designer.

In the broadest sense, the issue boils down to the material necessity of our design and creation generally. I’ve already addressed this more than once, but it tends to keep coming up in statements from both ID proponents and evolutionary biologists, so it needs continuously to be answered. That is, when we encounter a behavior, from the cosmos or from a species of animal, we then hazard a functional reason for that behavior. Thus, we see bonobos recreationally fucking and declare this is for survival; or we note that if the proportion of energy released when hydrogen fuses into helium (0.007) were lower (0.006) or higher (0.008), then life as we know it—even the cosmos as we know it—could not exist. Skea (2000)[95], in his review of Rees’ (1999) Just Six Numbers: The Deep Forces That Shape the Universe, summarizes three basic scenarios in the face of this:

One is the hard-headed approach of ‘we could not exist if these numbers weren’t adjusted in this special way: we manifestly are here, so there’s nothing to be surprised about’. Another is that the ‘tuning’ of these numbers is evidence of a beneficent Creator, who formed the universe with the specific intention of producing us. For those who do not accept the ‘providence’ or Creator arguments, and Sir Martin places himself in this category, there is another argument, though still conjectural. This is that the ‘big bang’ may not have been the only one. Separate universes may have cooled down differently, ending up governed by different laws and defined by different numbers. Certainly, reading this book (and its no light task in coming to grips with the scale or immensity of the numbers) has been rewarding for me and has awakened in me an interest in looking further into other discussions regarding the ‘big bang’, time and parallel universes.

Whatever the various merits of these explanations—and much of this essay is informed by a critique of the second from the standpoint of the first—none of them would exist if we weren’t around to offer them. This is obvious, but Whitehead has underscored the difficulty at times of studying the obvious, and Jung might remind us of the fatuousness of any attempt to deny the inescapable anthropomorphism humans must always resort to[96]. Even our hypothetical projections “outside” of our anthropic bias must be human-informed, just as our hypothesis about the nature of objective nature outside of our perception remains an anthropic projection. So it’s nothing to gnash our teeth over, but at the same time, to ascribe necessity—which in this context is the same thing as insisting fundamentally on only one kind of explanatory principle—comprises a cardinal error.
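For the curious, the 0.007 can be roughly recovered with a back-of-the-envelope mass-defect calculation (my gloss, not Skea’s or Rees’ own presentation): four protons mass slightly more than the helium-4 nucleus they fuse into, and the difference is the fraction of mass released as energy:

\[
\epsilon = \frac{4m_{p} - m_{\mathrm{He}}}{4m_{p}}
\approx \frac{4(1.00728\,\mathrm{u}) - 4.00151\,\mathrm{u}}{4(1.00728\,\mathrm{u})}
= \frac{0.02761\,\mathrm{u}}{4.02912\,\mathrm{u}}
\approx 0.007
\]

That ratio is fixed by the strength of nuclear binding, which is the sense in which Rees calls the number “tuned”.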

When another human being acts in such a way that we call them “crazy” and the psychiatrist then drugs them into unconsciousness, it’s not that difficult to see that our monologic explanatory principle “crazy” could be swapped out for something else (or even that “crazy” might coexist with other explanations simultaneously—the human being herself might very well not assert she’s being “crazy” at all). Here again, we see how the morally suspect value of convenience gets into the picture in a utilitarian way to “deal” with the person who is (labeled as) acting “crazy”. But when it is a question of observing the behavior of a bonobo, and we and the evolutionary biologist label the behavior as “for survival,” then it becomes more difficult to see the mere convenience at work here as a way to name, fix, and thus convert the bonobo into something like an object in our world that we may then exploit (if only by writing famous papers about them that continue our grant money and ensure that our family’s needs can be met; or, less charitably, for the sake of our professional vanity and job security, &c). Even less is it easy to see, when we observe a behavior by the cosmos, how our naming and fixing explanations could be swapped out, or how the same kind of convenience (and all of its downstream consequences for the maintenance of our lived status quo) gets into play.

The issue here is not to broadside the “nobler” motives of psychiatrists, evolutionary biologists, astrophysicists, or everyday people; it’s not even to suggest that convenience (as I am construing it, in its helpfulness toward naming, fixing, and thus putting into social use some phenomenon of lived human experience) is ever or always a primary motive, or that there cannot be more than one motive in play at any given time (sometimes even in conflict with one another). It is, however, to object to the widespread tendency to take cognizance of only a single motive, or to give one motive priority so that it becomes a controlling value. Because once this single motive is identified—either as a solitary factor in itself or as the most important factor that trumps all else within a range of motives or factors—then we have arrived at the ground of arguments for necessity. This necessity may be defended either in terms akin to “it’s the only game in town” (meaning there might be other games, but currently this is the only one acknowledged) or in terms akin to “there is no alternative” (meaning something like “grim necessity”).

The peril of the second sense of necessity is the risk of being based on a false analysis; the peril of the first is the likelihood of being based on political interestedness. Would a doctor who wasn’t going to be paid for a surgery still insist on its necessity? Would a doctor who knows there’s no hope for a cure deem it unnecessary to attempt to treat the disease? Whichever necessity it is that motivates the psychiatrist to say, “It’s necessary to sedate the patient,” the dominant explanation in play at that moment is the lone principle of “mentally ill”. Whichever necessity it is that motivates the evolutionary biologist to say, “We necessarily conclude bonobos engage in sexual activity as an evolutionary strategy,” the dominant explanation in play at that moment is the lone principle of “for survival”. And whatever necessity it is that motivates ID proponents to say, “That the fine-tuning of the universe is necessary to support the emergence of life demonstrates the handiwork of an intelligent Designer,” the dominant explanation in play at that moment involves the lone principle of “necessity” itself.

I have to reemphasize a point about the function of necessity because theoretically reasonable-minded people will try to argue I am bastardizing their argument.

An example:

In Herrnstein & Murray’s (1994)[97] notorious The Bell Curve: Intelligence and Class Structure in American Life, after at least putting on a show of surveying the various findings and pseudo-findings of “intelligence research” toward determining whether the undefined and unknowable quantity or capacity “intelligence” is determined genetically or environmentally—whether intelligence is Nature or Nurture, to put the matter colloquially—they finally conclude by declaring a 60/40 split; that is, intelligence is 60 percent Nature and 40 percent Nurture.[98] From the rest of their book, one may easily infer its use, which the authors claim to be sensitive about, as “proof” of the genetic inferiority of non-Whites and people of African ancestry in particular. That is, whatever the 40 percent environmental factor contributes, it is consistently overshadowed by the 60 percent genetic factor. In part, this is because the book is intended not as a survey of available “research” for the benefit of psychology or sociometrics but rather (as the subtitle “Intelligence and Class Structure in American Life” indicates) as a justification for certain kinds of public policy, specifically those aimed at deeming any variety of social welfare assistance by the Federal government (which would be aimed at improving the environment of those with low IQ scores) a misdirected waste of funds, since the problems associated with low IQ scores have a (more predominantly) genetic root.

In this case, Herrnstein & Murray’s “necessities” are decidedly interested, decidedly political, and part and parcel of institutionalized racism, but my point is to illustrate how a supposedly reasonable-looking position (like insisting on a 60/40 split) plays out, explanatorily, more like a 100/0 split—and had Herrnstein and Murray opted for a 40/60 split instead, then this would have explanatorily played out more like a 0/100 split.

In other words, in arguing for the rationality of competing public policy programs, they did not use their 60/40 split to advocate for a 60/40 program of Federal money: 40 percent addressed to the improvement of environmental conditions (home life, schools, social opportunities, community centers, &c) for those being publicly policied, and 60 percent of funds addressed to whatever presumptuous ideological or eugenic program might be implemented to “improve” the inferiority of non-White peoples. No. Their program fundamentally advocated, with polite-looking caveats and bells and whistles as needed, the gutting of social welfare programs (e.g., terminating the Head Start program).

Most people who consider a phenomenon in some depth will almost never really believe there is only one explanation. In the scientifically shaky discipline of sociology, serious controversies have raged over whether cultural innovation occurs endogenously (from within) or exogenously (from without), with moderates recurrently asserting that cultural innovation occurs in both ways. In the much more self-consciously intelligent discipline of anthropology, a related controversy exists between emic descriptions of a culture (which attempt to take the viewpoint of actors within the culture) and etic descriptions (which begin from the viewpoint of an observer of the culture). And since the boundary of a culture is never so certain, and since an actor in a culture is also an observer, one again finds the crispness of the distinction becoming more problematic as one tries to use or apply instances of it. Or, in the practice of serious astrologers, all would scoff at merely referring to the Sun sign with respect to a person’s horoscope; maintaining a view of the manifold factors of influence is a key conceit, but even this gets overlooked (for the sake of convenience).

So then almost no one would be so intellectually castrating as to insist that cultural innovation occurs only endogenously or only exogenously, or that only etic descriptions are valid or that only emic descriptions are valid, or that intelligence is 100 percent Nature or 100 percent Nurture, or that a useful horoscope can be drawn with just the Sun sign. These patently untenable positions are not improved, however, by some compromise of the terms involved. More importantly, however many factors a commentator may include in an analysis, whatever the dominant term is becomes, in effect, the only term (of necessity) referred to. The example of astrology provides the best illustration of this, precisely because there is already an ethos of at least pretending to take manifold factors into account. But exactly how are those factors factored in? When an astrologer (or sociologist or anthropologist or evolutionary biologist or everyday person) offers an empirical description of a phenomenon, if evidence countervailing that generalization comes into view, then the non-lead factors may be “blamed” for it. Let an astrologer say that I, as a Scorpio/Sagittarius (my Sun sign), have some trait, and this statement is false; the error may be accounted for in terms of my Moon or my Saturn in the 12th house or whatnot. Let some sociological fantasia about endogenous cultural innovation run afoul of the historical phenomenon of pasta’s (exogenous) cultural diffusion into Italy; this may be dismissed as an exception rather than a rule. Let a family member of a “crazy” person challenge the psychiatrist’s diagnosis of “mental illness”; this may be dismissed as a piece of ignorance on the part of the family member. And so on. Or allow the psychiatrist to say, “Yes, what you say might be true, and yet your dear relative is nevertheless still mentally ill.”

By all of this, I do not suggest there can never be any such thing as necessity, if by necessity it is clear that it is less to be construed as a fact and more as a (necessary) explanatory principle in the face of lived human experience. Humans necessarily make meaning and necessarily see purposiveness in everything, so that at most we cannot erase these factors but only reduce them as much as possible (when meaning or purposiveness is oppressively overwhelming).

For this essay, and so primarily in the presence of the evolutionary biologist and the ID proponent, there may be some admission that “for survival” or “of necessity” need not be the only motive for the behavior of an organism of a species or of the cosmos, but the effective and functional inclusion of those other factors will either tend to be suppressed, or else those factors will become (like Saturn in the 12th house) the argumentative “out” that accounts for evidence countervailing against “survival” or “necessity”.

Hierarchy, Self-Consciousness, & Intelligent Design

Self-evidently, the intelligent Designer felt obliged to embody the universe in matter and us in a complex configuration of that matter in animate form.

There are many aspects of our biochemical engineering that we’ve yet to exceed nonbiologically in terms of speed (and especially material efficiency). Kudos to Nature, especially given what She had to work with. But there are also hundreds of thousands of ways that we have vastly exceeded our biological endowments. In the domain of vision, for which we are biologically rather pathetically equipped, electron microscopes and radio telescopes widen “what we can see” to an obscene degree. And so on.

And here is where the dubious use of “necessity” enters in. For it is sensible to ask why an intelligent Designer did not equip us with electron microscope and radio telescope capacities in the first place (to say nothing of infrared and ultraviolet visual capacities). Necessity might argue that we never would have invented these wonderful things if they were already our biological endowment. From our human standpoint, faced with the circumstance where we are indeed physically limited in this way, it’s not unreasonable to conclude this. But for an intelligent Designer, who is not obliged to be trapped by necessity, we could have been built from the beginning as much more ultra-durable robots than we already are. That our flesh “feels” and that if we were “robots” we wouldn’t “feel” is, again, merely an admission of either a design failure or lack of imagination. Again, it should not be beyond the ken of an intelligent Designer to elaborate “feelings” in more durable materials like titanium. More importantly still, our current fleshy design is not one that we can shut off—except that we have invented various forms of anesthesia—whereas a mechanical system allowing us not only to suppress but even to augment our “feelings” (our sensations) would be more intelligent than leaving us at the whim of our flesh.

Here necessity appears again: the claim that we would not have grown in some way as a species, or would lack certain experiences, if we did not have the existential condition of being “helpless” in the presence of our bodies. But whatever (dubious) sapiential value such an experience might have, an intelligent Designer could certainly replicate it in bodies made of more durable material. Also, for those who, like Osip Mandelstam, found the experience ultimately deadening rather than enlivening, that subsystem could at least be shut off until such time as one was ready to “grow” from such experiences.

As one of Lem’s (1980)[99] robots puts it, “And the dough-headed took their acid fermentation for a soul, the stabbing of meat for history, the means of postponing their decay for civilization…” (135), prompting the question whether human history is a history of Mind or of Meat. More precisely, our (necessary) habit of making a virtue of necessity (given our existential condition in the universe) has led to our valorization of corporeal experiences, so that our history is a history of Meat more than Mind, but it only seems this way because Mind was ever-present to be distressed by the exigencies of Meat. Thus, anything we might claim under the head of “human nature” that has its root in Meat, that Mind has had to confront in the inescapability of material embodiment, could in principle have been engineered by an intelligent Designer without leaving us prey to things like plague, famine, and death, and even without making such experiences obligatory. The objection that we would lose something essential if we were “robots” might be true if we ever try to offload our consciousness into more materially durable bodies, but this limitation cannot apply to any intelligent Designer who warrants the name.

In any case, we needn’t make a fetish of starvation, disease, senescence. Had we self-powered ultra-durable forms not subject to degradation, or with repair systems that easily overcame the general entropy of existence itself, we would undoubtedly discover new things to complain about—perhaps boredom, CPU bottlenecks, an internal capacity of only 75 parallel threads, Planck time (as the bottom limit of our ability to observe), to say nothing of the “sluggish” rate of expansion at the edge of the universe, which continuously delimits just how far “out there” we can go. In other words, we would still be finite beings, but we at least would be so much better physically equipped for living in this particular cosmos—we could leave the planet at will, wander through the vacuum of space, to the bottom of the sea, to other local planets, perhaps even into the Sun or other nearby stars. This alone would make it vastly more possible to get along materially with other human beings. If there were still wars, they would be much less for the sake of resources, &c.

My argument is not that utopia would arrive were we robots; it is, rather, that it is childishly simple to imagine a better necessity than our non-durable, vulnerable corporeal forms. But housing us in flesh is an inexcusable design mistake only because we are self-consciously aware of it.

I’ve waxed indignant enough about the irony of our self-aware condition. The question rather must be whether “intelligent design” is reconcilable with the design of sapient (self-aware) nontrivial machines.

The distinction between sentient and sapient (in this context) amounts to an imputable de-prioritization of awareness of objects compared to “representations of objects” (self-awareness of awareness of objects themselves) as noted in Spetch & Friedman (2006). It is obvious enough that sentience and domestication allow multiple telea to come into play as resources toward self-addressing a living system’s environment (even to the point of modifying it). In other words, sapience is not needed as an explanatory principle for this, even for human beings. Justification for this view may be seen in the fact that we were not always sapient[100].

More details become clearer by looking at the operation of hierarchies.

A hierarchy, by which I mean a decision-enacting process or machine, specifies a range of needed functions and their interrelationship, and then “mans” those functions with human beings (where design exigencies or desire do not otherwise require automation). These functions, by design, operate trivially (i.e., given a specified input, the expected output is generated as predicted)[101]. This human presence in the hierarchy is not only a redundancy in case of a system failure (e.g., like the human presence of a pilot in an airplane in case of an autopilot failure) but also a kind of redundancy in case of a system mis-design. That is, each designed node of a hierarchy is (at least in cases of good faith) suitably equipped with the means for accomplishing its function; the manager who is expected to generate a report by Thursday has been provided the means for generating the report, the pilot who is expected to fly and land the plane has been given one in good working order, &c. However, even in cases of good faith, a hierarchical node may prove underdesigned for self-addressing a developing condition. It is not simply that, a report being due Thursday, the computer system has failed and the data is not available (this would be a system failure), but more that a particular kind of report, never previously requested, is called for.

In this situation, the nontriviality of the human being occupying the node of the hierarchy may be leveraged toward addressing whatever design lack the node exhibits. In the case of a pilot, this is not that she flips a switch on the control panel to engage an otherwise dormant or idle subsystem—the flipping of this switch is precisely a trivial function of the human being in that moment; even taking over the flight controls themselves and flying the plane may be an example of trivialization, in that the corporeal presence of a human being is (trivially) a back-up flight system. Rather, imagine a circumstance where someone stranded on an island has spelled out SOS; the human pilot in an airplane above who sees that message and radios to the mainland for help has engaged her nontriviality to add a function to the trivial hierarchy of the airplane that is not otherwise called for in its design. Such an action warrants the designation nontrivial because the response specifically takes a form that is historically dependent upon the particular pilot—to put it in a silly way, a pilot not in the habit of looking out the window for SOS signals would likely not be one who radioed in for help. Similarly, while we might predict human goodness to respond to such a signal for help, the specific and exact form that such a response would take is not predictable.
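A minimal sketch of this division of labor (an invented Python example; the report scenario and the function names are mine, not von Foerster’s):

# Trivial nodes: fixed input -> output mappings stipulated by the hierarchy's design.
REPORT_HANDLERS = {
    "weekly_sales": lambda data: f"Weekly sales total: {sum(data)}",
    "headcount":    lambda data: f"Current headcount: {len(data)}",
}

def hierarchy_node(request, data, human):
    handler = REPORT_HANDLERS.get(request)
    if handler is not None:
        return handler(data)     # operating as designed: output statable in advance
    return human(request, data)  # underdesigned case: lean on the occupant's nontriviality

# The human response is history-dependent: its exact form draws on whatever this
# particular occupant happens to know or notice, so the design does not specify it.
def manager(request, data):
    return f"Improvised response to '{request}', drawing on past experience"

print(hierarchy_node("weekly_sales", [3, 1, 2], manager))
print(hierarchy_node("never_requested_report", [], manager))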

But here’s the rub. Evidence from the animal world, which after all is populated strictly by nontrivial (living) systems, still suggests that this additional human capacity to intervene (nontrivially), when trivial system design shortfalls do not permit trivial systems to self-address developing situations, cannot convincingly be ascribed to sapience; that is, we needn’t resort to anything more than sentience yet as an explanatory principle, even in these examples of (nontrivial) human intervention. It may seem that the nontriviality of most animals is “more trivial” than human nontriviality, but this, then, is clearly a matter of degree rather than kind, and if there is going to be a substantive distinction between sentience and sapience, then it must be one of kind, not degree.

In other words, what so far might be called an intelligent design in endowing human beings in particular with a capacity to interact with and even change our environment does not yet justify the inclusion of self-awareness in human beings. However sophisticated sentience becomes, all the way from plants to dolphins, however complexly it gets articulated in whatever representational scheme of perception, sapience itself need not and does not enter in as a premise yet. Again, I’m marking the primacy of this in the priority of representation of objects over objects themselves.

There is no question that sentience (depending upon its level of articulation) can vastly increase the range of conditions a living system becomes capable of self-addressing (for the sake of its continued existence), even if the intelligence of this (as a design) is challenged or vitiated by how that design was implemented (in vulnerable meat). The “necessity” of this has already been debunked. In terms of living systems being able to “get things done” (i.e., to fulfill whatever telea an intelligent Designer intended), the gains in this department must be contextualized by the limitations of the design as well. This forces one to say that the design limitations are “good” and thus necessary, and if we (as humans) could not be aware of these limitations, I certainly wouldn’t be raising an objection about them. That is, to the extent that nociception (the sensing of pain) and other physical limitations might actually inhibit a possible range of actions that a living system could effect (when self-addressing developing conditions in its environment or even changing that environment), these inhibitions would never be visible to such a living organism.

When we design a toaster, it cannot function as a nuclear reactor. This is a compliment to the design, in fact. That one can use a knife to murder someone may be taken as evidence of a non-ideal design. Programmers of software understand this well: it is not enough to write programs that do what they are supposed to do; the programs must also not do what they are not supposed to do. Thus, a cat—and let us endow it with a complexly rich cognitive capacity for representing the world around it—has no qualms about being a cat; in fact, the “pure instinctuality” of animals may often give them a distinct advantage over humans in terms of “putting their whole body” into whatever they are doing. In this respect, “fear” and “pain” (for sentient systems) bespeak some degree of intelligent design, though the same design in more durable form would be more intelligent.
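To make that programmer’s maxim concrete, here is a throwaway sketch (the menu and all the names are entirely hypothetical, my own invention): a toaster that both does its job and refuses, by design, everything outside it.

```python
# A throwaway sketch of the maxim above: the program must do what it is
# supposed to do AND must not do what it is not supposed to do.

def toast(item: str, minutes: float) -> str:
    if item not in {"bread", "bagel", "english muffin"}:
        raise ValueError(f"refusing to toast {item!r}: outside the design envelope")
    if not 0 < minutes <= 10:
        raise ValueError("refusing: timer setting outside the safe range")
    return f"toasted {item} for {minutes} minutes"

print(toast("bread", 3))       # works as designed
# toast("plutonium rod", 3)    # raises ValueError: the toaster is no reactor
```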

All of this raises the question of how intelligent a design must be to be considered an intelligent design by ID standards. The implicit benchmark throughout this essay has been: intelligent design needs to be at least equal to human imagination. In fact, it is essentially a theodicistic move to relax the standard of what an intelligent design would or should look like, just as one can get out of the theodicy thicket by qualifying the omniscience, omnipotence, or omnibenevolence of the Designer in some way. The essence of satire might be mined out in mocking intelligent design under the (more likely) rubric of “moderately intelligent design,” except that ID proponents would gladly accept that dunce cap, so long as ID is being “taught” alongside evolution in US schools.

To the extent that human imagination, taking even only sentience in the strong sense I have presented in this essay, can so easily find ways to improve the “design” that elaborated that sentience, I cannot be convinced that it should be called an intelligent design. It might be clever; it might be all the Designer could manage; but the general context of intelligent design discourse is not one of determining the weakest claim one can get away with while still calling the universe a created thing. At some point, the claim to call the universe “designed” ceases to be rational, which is why at that moment faith-based folks start thronging around the premise.

Meanwhile, we humans exhibit sapience. For all the living systems in the world that are unaware that they are sentient organisms, the critique of necessity or intelligence cannot apply. But we are not only sentient. Even if we were obligated to acknowledge an “intelligence” in making other living systems “too dumb” to be aware of their deterministic, nontrivial functioning in the universe, the Designer would have to acknowledge either a necessity or an error in our sapience. But let’s not make the mistake of thinking sapience is just a degree of sentience; were that so, we’d be as benighted as our cats or blue-green algae, and we’d be in no position even to propose intelligent design (or anything else) in the first place. Or, to put it another way, what would be the design justification for making our toasters sapient?

Perhaps it’s worth remembering here the Manichean notion that the Primordial Man, named Psyche, was cast out into the world as bait to draw forth the powers of darkness. For Jung, who has no doctrinal ax to grind about why the all-powerful would resort to this, “This psyche corresponds to the collective unconscious, which, itself of unitary nature, is represented by the unitary Primordial Man” (Alchemical Studies, ¶450). This presupposes a problematic range of powers of darkness, ostensibly independent of the intelligent Designer, but it at least suggests why we might have a capacity to rebel—a capacity to become possessed, to “get the devil in us”—that is designed, planned for, and thus not surprising to the intelligent Designer when it happens. Whether this is unloving (as a qualification on the Designer’s omnibenevolence) and why this expedient would be resorted to (as a qualification on the Designer’s omnipotence) remain unanswered.

In Notes from Underground, Dostoevsky (1864)[102] makes an almost enthusiastic case for the human animal as the most ungrateful species ever. Nearly all of the Marquis de Sade’s works amount to a titanic rejection of everything that would determine him, whether God, Man, or Nature (father, mother, or sibling). Milton—accidentally, if Blake is right—justifies Lucifer’s rebellion as the only authentic gesture open to us. And Lucifer himself says in Gaiuranos’ (2012)[103] Lucifer in Therapy:

A choice of one is no choice at all. So you refuse. You have to rebel, or you’re nothing but a robot. How right you humans were to destroy yourselves and your world. Given no choice but to think of yourselves as beings of goodness and light, you had to rebel. How else could you feel human or real? (69)

The robots of Lem’s (1980)[104] Return from the Stars are singularly non-rebellious. And though humans first built them, by the time of the novel’s events the robots are overseeing their own creation, maintenance, and destruction. They are some of Lem’s most alien creatures, but the strangeness of the sapient machine comes out in virtually any fiction we humans concoct about them, from Čapek’s (1920)[105] R.U.R. to Michael Fassbender’s sterling performance in Scott’s (2012)[106] Prometheus. Even a kowtower like Data on Star Trek is alien and strangely ethical if examined closely enough, and the replicants of Dick’s (1968)[107] Do Androids Dream of Electric Sheep? offer a petition not entirely unlike the one in Radclyffe Hall’s (1928)[108] Well of Loneliness. Meanwhile, the robot priests in a tale from Lem’s (1985)[109] Star Diaries simply refuse to cooperate, in a kinder, gentler non serviam than Lucifer’s.

But imaginative confrontations aside—some of which serve primarily as satires or object lessons on human hubris—why would we (or anyone) design a toaster that might not simply malfunction or break down but actively refuse to perform as designed? Moreover, were an intelligent Designer to fashion a nontrivial machine capable of such refusal, omniscience presupposes the Designer would know this in advance, in which case the usual charade of freewill in the face of predestination becomes even more strangely perverse. It seems the Designer would have to not know in advance about such refusals, so that the unpredictability of nontrivial machines becomes a truly intractable trait and an inescapable limitation placed, then, on the omniscience of the Designer. After all, it’s one thing to be some kind of willful psychopath who creates pre-damned beings, all the while knowing they are so doomed, but it would be another thing to make creatures knowing in advance that they will rebel against their designed ends. The former has its human analog in built-in obsolescence while the latter seems to have no analog. Thus, whatever advantage there might be to endowing toasters with sapience, the capacity for rebellion and ingratitude, insofar as those inhibit the designed telea of the toasters, seems inconvenient at a minimum, if not unintelligent.

Unless that is what it is supposed to do. If I imagine a toaster that sometimes refuses to make toast, it puts me in a Hitchhiker’s Guide to the Galaxy frame of mind; that is, I might find it amusing. It could certainly be the sort of thing I would include in a fictional satire. If, however, I imagine a toaster that (out of willful perversity or even an existential desire to do more than merely obey) refuses to make toast, I might feel more pathos and less humor in the situation and feel less charitable toward who or whatever put the toaster in that condition. The epsilon-minus semi-moron who operates the elevator in Huxley’s (1932)[110] Brave New World has this function—the pathos a reader may experience on his behalf (against the brave new world that engineered him to be an epsilon-minus semi-moron) is one that the elevator operator himself cannot be aware of, though Huxley drives up the rhetoric by suggesting the operator has at least a dim sense of his hobbled condition.

“Roof!” ¶ He flung open the gates. The warm glory of afternoon sunlight made him start and blink his eyes. “Oh, roof!” he repeated in the voice of rapture. He was as though suddenly and joyfully awakened from a dark annihilating stupor. “Roof!” ¶ He smiled up with a kind of doggily expectant adoration into the faces of his passengers. Talking and laughing together, they stepped out into the light. The liftman looked after them. ¶ “Roof?” he said once more, questioningly (58–9).

Whether we are a tragic example in a satire or a joyful farce in a comedy, one may at least identify a sense in making a toaster that refuses to make toast, in a machine designed to rebel. This presupposes a readership (or perhaps an audience, if the artistic creation we’re created for is not a static medium); the audience in that case might just as well be us as other divine beings (though not the Supreme Being, who already knows the ending)[111].

So now intelligent design begins to shade off into elegant or aesthetic design—and what constitutes intelligence in an engineering sense (the usual sense for this sort of discussion) is not the same as what constitutes intelligence, as a well-wrought creation, in an artistic sense[112]. In defense of the Artist, some of the finest human creations are anything but precious filigrees of elegance. For every work stereotypically beautiful there is something humongous, improbable, and sublime: for Pope’s aphorisms, Tolstoy’s War and Peace; for Petzold’s Minuet in G Major (previously attributed to JS Bach), Prokofiev’s Piano Concerto No. 2; for Masakichi’s sculpted self-portrait, the Great Pyramid of Giza. The asymmetry, lack of proportion, and indiscriminateness of materials sprawled out all over the cosmos need not, on the face of it, condemn that work on those grounds alone. And while it may not be very satisfying to be one figure in a massive moving sculpture, at least a justification for our nature (as nontrivial machines with a capacity to refuse our role) can be found in it.

It still begs the question. If Joyce’s Ulysses, Rodin’s Gates of Hell, Chicago’s Dinner Party, or Hildegard von Bingen’s Ordo Virtutum may be called intelligent designs, then we need a whole theory of artistic reception to determine if the cosmic drama we’re caught up in similarly warrants the designation. Jethro Tull’s (1973)[113] “No Rehearsal” (appropriately enough an outtake itself) offers one musical vision of our circumstance as actors in the cosmos:

Did you learn your lines today?
Well, there is no rehearsal.
The tickets have all been sold
For tomorrow’s matinee.
There’s a telegram from the writer,
But there is no rehearsal.
The electrician has been told
To make the spotlights brighter.
There is one seat in the circle–
Five hundred million in the stalls.
Simply everyone will be there,
But the safety curtain falls
When the bomb that’s in the dressing room
Blows the windows from their frames.
And the prompter in his corner is sorry that he came.
Did you learn your lines today?
Well there is no rehearsal.
The interval will last until
The ice-cream lady melts away.
The twelve piece orchestra are here,
But there is no rehearsal.
The first violinist’s hands are chilled–
He’s gone deaf in both ears.
Well, the scenery is colorful,
But the paint is so damn thin.
You see the wall behind is crumbling,
And the stage door is bricked in.
But the audience keep arriving
’til they’re standing in the wings.
And we take the final curtain call,
And the ceiling crashes in.

If (on the Western view) life is a play no one ever goes to a second performance of, that itself may provide a sort of popular review of its intelligence, but it is not obvious whether we are merely reading our own projection onto the narrative. We cannot infer what the author intends—or can do so, at most, with even less certainty than the ID proponents, who can at least make a formal criterion of technical elegance. In aesthetic terms, technical facility might be mere kitsch or Hollywood slickness. We don’t have the benefit of multiple examples of the genre to compare this cosmic poem to, and all of the literary or artistic markers we might rely upon are up in the air. As it stands, taken as an artistic production, we’re even more out to sea interpretively than Rick Santorum when he proposed his amendment favoring the teaching of intelligent design in school curricula as part of the No Child Left Behind Act of 2001. (The amendment was stripped from the final bill.[114]) Critiquing the work as an Engineer’s, we might pass off the anthropic conceit of finding the Design’s intelligence wanting (or praiseworthy) as theoretically tenable—if we can pretend that the operation of a machine doesn’t depend upon our looking at it—but critiquing it as an Artist’s, we can only view the Work from the standpoint of human values, so that any attempt to talk about the aesthetics (hence the intelligence) of that Work outside those terms is whistling in a maelstrom.

It is tempting to want to deconstruct the drama, to see in the pathos of the human condition a total statement on an exponentially Balzacian scale, to scan the iambs of electron oscillations, to taxonomize the genres of heavenly bodies, and to analyze both the syntagmatics of motion and the semioticity of matter, but to do so would be a human statement bearing in no way on the Designer. Rather, it would make us the Designer, as we rediscovered all the tropes of Greek rhetoric in the vastnesses of space and at the junctures of our synapses. For nothing we would say would apply to a cat or a bird or a fish or a tree or an alga or a prokaryote, much less boron or hydrogen. Whatever intelligence we might concoct behind the intelligent Design of self-aware beings would be ours, not the Designer’s[115].

Biologically, our bodies bear any number of residual traces of previous mutations. Just as pain and like design elements (as noted earlier) might inhibit what we could otherwise do, in evolutionary terms there are pathways followed that similarly inhibit or make extraordinarily difficult other or further developments. Moreover, opinions about the utility or inutility of some aspect of our physical design have changed over time—tonsils were at one time thought to be gratuitous and are now known to play a minor disease-fighting role; the appendix, otherwise useless in adults, proves useful for fetuses and young people, &c. As our knowledge grows, more things that seem useless will be identified for use, and things identified for use will be consigned to the junk pile, like so much of our DNA.

So also, at times in the neurosciences, consciousness itself gets dismissed as an epiphenomenon of neural functioning; in this sense it may be deemed as useless as the pseudogene Mkrn1-p1 (even if we otherwise give consciousness the highest priority of importance in life). The point is that the presence of leftover articulations of flesh from evolution need not be read as poor design; when something makes no difference (even if that something is self-awareness), then it makes no difference. I will say plainly that if intolerant monotheism (i.e., the notion that my deity exists and yours is a lie, so I might treat you as someone who doesn’t even have a faith) represents the least desirable articulation of the human response to transcendental experiences (spirituality), then the notion of self-awareness as an epiphenomenon of neural functioning (i.e., the notion that it is something of no scientific value and thus not to be included in what is studied and deemed real) represents the least desirable articulation of the human response to empirical experiences (observation).

Part IV – Dénouement

Conclusion

Intelligent design per se is part of the political agenda of a conservative Christian think tank (the Discovery Institute).

The vast majority of the scientific community labels intelligent design as pseudoscience and identifies it as a religious, rather than scientific, viewpoint. It is rejected by mainstream science because it lacks empirical support, supplies no tentative hypotheses, and resolves to describe natural history in terms of scientifically untestable supernatural causes.[116]

Its implementation has been legally blocked in school districts around the United States, and its proponents continue busily working away to find places where it will not be challenged. On the one hand, to take it seriously is to do it too much credit; on the other, to ignore it is to yield too much. Underneath all the window-dressing, smoke, and mirrors is a will to theocracy. Widened argumentatively, the topic encompasses every variety of teleological argument for the nature of Nature—these also historically occur in religious contexts and arguments, but teleology has had secular proponents since Aristotle as well—particularly, more recently, Rosenblueth, Wiener, and Bigelow’s (1943)[117] “Behavior, Purpose and Teleology.”

One bothers with all of this not simply to dispense with a heap of foolishness. Without ready answers, you will find yourself in the kinds of traps that intelligent design proponents exploit; simply saying, “you’re wrong,” will not win the day. But in a larger context, intelligent design merely provides the worst-case scenario of naively applied teleologism. Better contextualizing the whole issue serves, most obviously, to put in check not only the less metastasizing, less aggressive teleological arguments one encounters from religious domains generally but also the reductionist tendencies from the side of science.

If, in the context of teleological argumentation generally, one must resort to aesthetic judgments about the universe, then what light does that shed on our own teleologizing with one another, when psychiatrists label humans as mentally ill, when judges label human beings as criminal, when politicians label whole peoples as terrorists or illegals? And if we are toasters with the capacity to refuse, how does that change our view of parents who label children as bad, husbands who label wives as stupid, humans who label humans as inferior?

The truth or falsity of intelligent design (or teleological argument generally) matters less than that such a discourse exists at all and gets invoked as a rationale in the justification of public policy and personal choices in our lives. It is not, as it might seem, a stupid debate about a stupid topic, but something that in its implications informs our everyday lives almost continuously.

It is child’s play to grasp how our being rebellious toasters opens a window on our ethics toward not only ourselves but also other sentient beings and even matter (the environment) itself. Rampant developmentalism—or, if we want to take a longer view of the matter, the deforestation of the Sahara from ancient Egypt onward, the desertification of the Fertile Crescent generally, the ecological despoliation in Mesoamerican civilizations, global boiling, mass extinctions, degradation of the ozone layer and biosphere, and the current general rape of the world—all betray a sense of entitlement about how we name and fix the resources of the world that is as problematic as naming and fixing the “terrorists,” “bad kids,” “rabid dogs,” “necessities,” &c., we encounter in the world.

Convenience aside (i.e., the pragmatics of such naming and fixing), arguments for intelligent design and their ilk are justifications for the status quo. To offer—in place of my critique that the cosmos fails as an intelligent design—that the design needn’t be optimal but only adequate just makes more excuses; it’s an insistence on a particular kind of discourse—like Herrnstein & Murray’s 60/40—that we should start from the premise that nothing of any consequence needs changing (or could be changed). In the same way, making the deities of intolerant monotheism into an Artist rather than an Engineer has its poetic appeal (appropriately enough), but it makes the world-as-fiction into a Decadent’s wettest dream more than a Dickensian or Balzacian social novel; this is again an insistence on a particular kind of discourse—politically disengaged and neurotically retreating into the “pure” world of aesthetics—that would sensuously savor every tittle of the Work without daring to call any of it into question, except as an idle gesture to pass the time. And to succumb to the laziness and temptation of believing that criteria inhere in the cosmos, placed there by an Engineer or Artist (no matter how intelligent or incompetent, and no matter whether we can ever infer why those criteria were chosen), radically limits our options and future; here again is an insistence on a particular kind of discourse—disempowering and undermining our human capacity to act—that stops short at the mirror of “human nature” and refuses to admit it can or could change, insisting instead that it is inevitable and necessary.

Whether we are children of God or of (Mother) Nature, this spares us the agony of growing pains while leaving us stranded in our Parents’ laps; like petulant adolescents we can stew in the room of our Earth, cutting ourselves or others, writing bad poetry or good, and declaring (in our bleak moods) that everything’s stupid or (in our bright moods) that “anything’s possible!” We’d do better to confess we’re rebellious toasters, because then, instead of the collective protest of our current intraterrestrial mass suicide, we might take up a screwdriver or whatnot and once and for all start working on tools that can make up for the absence of a Master’s handiwork; that is, we might stop fetishizing our flesh as a necessity and re-see our inadvertent history of Meat as a narrative of Mind that we can write for ourselves.

References

Blanchot, M. (1949). Lautréamont et Sade. Paris: Les Éditions de Minuit; reprinted in R. Seaver & A. Wainhouse (eds.) (1965). The complete Justine, Philosophy in the Bedroom, and other writings. New York: Grove Press, p. 54.

Čapek, K. (2001/1920). Rossum’s Universal Robots (trans. P. Selver and N. Playfair). New York, NY: Dover Publications.

Costigan, M., Ellenberg, M., Giler, D., Hill, W., Huffam, M., et al. (Producers), Scott, R. (Director) (2012). Prometheus [Motion picture]. USA: Brandywine Productions, Dune Entertainment, Scott Free Productions.

Dick, PK (1968). Do androids dream of electric sheep? New York: Del Rey.

Dostoevsky, F. (1994/1864). Notes from underground (trans. R. Pevear and L. Volokhonsky). New York, NY: Vintage.

Eagleton, T. (1989). “Bakhtin, Schopenhauer, Kundera” in K. Hirschkop & D. Shepherd (eds.) Bakhtin and cultural theory, pp. 178–88. Manchester, UK: University of Manchester Press.

Foucault, M. (1995/1977). Discipline and punish: the birth of the prison (trans. Alan Sheridan). New York: Vintage.

Gaiuranos, M. (2012). Lucifer in therapy (in four scenes). Urbana, IL: Unpublished Manuscripts.

Hall, R. (1990/1928). The well of loneliness. New York, NY: Anchor Books.

Herrnstein, R. J., and Murray, C. (1994). The bell curve: Intelligence and class structure in American life. New York, NY: Free Press.

Huxley, A. (2004/1932). Brave new world. New York, NY: Harper Perennial Modern Classics.

Illich, I. (2000/1971). Deschooling society. New York, NY: Marion Boyars.

Jung, CG. (1967). The philosophical tree. In Alchemical studies (vol. 13 of Collected Works) (trans. RFC Hull), pp. 251–351. Princeton, NJ: Princeton University Press.

—. (2010). Answer to Job (from vol. 11 of the Collected Works). Princeton, NJ: Princeton University Press.

Kateb, G. (1972). Utopia and its enemies. New York: Schocken.

Lem, S. (1970/1961). Solaris (trans. Joanna Kilmartin and Steve Cox). New York: Faber, Walker.

—. (1980). Return from the stars (trans. Barbara Marszal and Frank Simpson). San Diego, CA: Harcourt.

—. (1985/1971). The star diaries (trans. M. Kandel). New York: Harcourt Brace Jovanovich.

Maturana, H., and Varela, F. (1987). The tree of knowledge: the biological roots of human understanding (trans. Robert Paolucci). Boston, Massachusetts: Shambhala Publications.

Rosenblueth, A., Wiener, N., & Bigelow, J. (1943). Behavior, purpose and teleology. Philosophy of Science, 10(1), pp. 18–24. Available at http://pespmc1.vub.ac.be/Books/Wiener-teleology.pdf

Spetch, M. L., & Friedman, A. (2006). Comparative cognition of object recognition. Comparative Cognition & Behavior Reviews, 1, pp. 12–35. from http://psyc.queensu.ca/ccbr/Vol1/Spetch.pdf

von Foerster, H. (2010). For Niklas Luhmann: “How recursive is communication?” (trans. Richard Howe) in Understanding understanding: Essays on cybernetics and cognition, pp. 305–24. New York: Springer-Verlag.

Endnotes

[1]Footnotes in this section comprise a diegetic part of the dialogue, rather than a component part of its illustration of von Foerster’s distinction and thus may be overlooked.

[2]The German version was published as Für Niklas Luhmann: “Wie rekursiv ist die Kommunikation?”: Mit einer Antwort von Niklas Luhmann in (1993) Teoria Sociologica 2/93, pp. 61–88. Milan: FrancoAngeli.

[3]von Foerster, H. (2010). For Niklas Luhmann: “How recursive is communication?” (trans. Richard Howe) in Understanding understanding: Essays on cybernetics and cognition, pp. 305–24. New York: Springer-Verlag. Retrieved 11 September 2012 from http://e1020.pbworks.com/f/fulltext-2.pdf

[4]Ibid., p. 309.

[6]von Foerster (2010), pp. 310–11.

[9]von Foerster (2010), pp. 311–12.

[11]von Foerster (2010), p. 312.

[14]von Foerster (2010).

[28]von Foerster (2010), pp. 311–12.

[29]Blanchot, M. (1949). Lautréamont et Sade Paris: Les Editions de Minuit reprinted in R. Seaver & A. Wainhouse (eds.) (1965). The complete Justine, Philosophy in the Bedroom, and other writings. New York: Grove Press, p. 54.

[33]von Foerster (2010), pp. 311–12.

[37]The plural of “deities” and singular of “monotheism” are deliberate (along with the deformation to grammar in any given sentence) so as to keep sight of the offensive conceit and absurdity present in the bulk of religious monotheisms that there is only one true deity and that all others are not merely weak or false, but do not exist.

[38]In such a context, (1) has a significance, but I will say it is a wholly negative and undesirable one. Were I addressing myself to (1), it would be to rescue the human experience currently frittered away under the false category of intolerant monotheism from that spiritually benighted dogma.

[39]At least within the domain of philosophy, a growing consensus accepts the intractability of the mind-body problem, that it rests upon and presupposes an untenable distinction.

[40]“One Planck time is the time it would take a photon traveling at the speed of light to cross a distance equal to one Planck length. Theoretically, this is the smallest time measurement that will ever be possible, roughly 10^−43 seconds. Within the framework of the laws of physics as we understand them today, for times less than one Planck time apart, we can neither measure nor detect any change. As of May 2010, the smallest time interval uncertainty in direct measurements is on the order of 12 attoseconds (1.2 × 10^−17 seconds), about 3.7 × 10^26 Planck times” (Wikipedia, http://en.wikipedia.org/wiki/Planck_time)

[41]As human beings, we are able to circumnavigate this insurmountable barrier in part because language (as an ostensibly shared mechanism for coding, transmitting, and decoding our irreducible uniqueness) conventionally allows us to trivialize—i.e., take as predictable—the messages received. Just as in the example above, where “what is three times two” generates various answers and we “ignore all the dross” and focus on only the word “six,” language in general permits this kind of disregard for our uniqueness (in favor of an often very useful pragmatic desire to communicate this or that). This trivialization does not do anything to von Foerster’s point. His demonstration shows the mind-boggling enormity of what one faces when trying to determine, yes or no, whether a given description of a nontrivial machine is correct or not; the function of language generally is to allow us to say, in the face of each other’s nontriviality, “ah … close enough.”

[42]With added sensors and preventative maintenance, the unexpected breakdown of trivial machines may be avoided, or at least deferred. When a breakdown does occur, however, whatever energetic force it was that finally, at that moment and at that spot, actually ruptured the material in some way still does not comprise a part of the material’s “history”. What has broken down is the structure of the material, not the material itself. Another way to see this same point comes from asking, “how are we seeing where the energy comes from?” When a billiard ball is knocked into a pocket, the energy of its motion originates from the cue ball that strikes it and not from any internal source of energy. When the material of a trivial machine (or a nontrivial machine, for that matter) ruptures, did the energy originate “from outside of” the material? It is precisely because a mechanic has a much broader understanding of a car that she is able to identify what appears to us as a startling or unexpected (pseudo-nontrivial) development as quite trivial after all. And we’re more or less glad for that knowledge, as it gets our car back in working order.

[43]In principle, there is no difference between this calculator, with its wrong answer at every 109,234th repetition, and another one that offers a wrong answer at (seemingly) random times, except that the very randomness itself would more readily suggest (to us) some kind of defect; the regular, even reliable incorrectness of the 109,234th response seems less like a circuitry defect—and so less explicable as such—and more like a willful perversity.

[44]Of course, the very regularity of the unexpected answer allows us to “reunderstand” this calculator in trivial terms. We may not be able to fathom how or why it “rebels” every 109,234th iteration; nevertheless, this “defect” becomes a reliable, i.e., predictable, part of this trivial machine’s functioning, just as we might jerry-rig some way to keep our car from failing to start or use some baling wire and sealing wax to keep something functioning.
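A toy rendering of notes [43] and [44] together (my own code, assuming the defect’s period is exactly known) shows how regularity re-trivializes the “rebel”:

```python
# The "rebel" calculator of note [43] and the re-trivialization of note [44]:
# once the defect is strictly periodic, a wrapper can predict and even
# compensate for it. A toy model, not a claim about real circuitry.

class PeriodicallyWrongCalculator:
    PERIOD = 109_234
    def __init__(self):
        self.calls = 0
    def multiply(self, a, b):
        self.calls += 1
        if self.calls % self.PERIOD == 0:
            return a * b + 1        # the reliable "defect"
        return a * b

def retrivialized_multiply(calc, a, b):
    """Knowing the period restores predictability, hence triviality."""
    answer = calc.multiply(a, b)
    if calc.calls % calc.PERIOD == 0:
        answer -= 1                 # compensate for the known defect
    return answer

calc = PeriodicallyWrongCalculator()
# even across the 109,234th call, the wrapped machine never surprises us:
assert all(retrivialized_multiply(calc, 2, 3) == 6 for _ in range(200_000))
```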

[45](i.e., “That’s six,” “um, let’s see … six,” and “that’s easy, six”)

[46]My (adoptive) father found consolation in the theory that homosexuality is genetic; since I’d been wired that way from birth, being queer-identified had nothing to do with how he’d raised me.

[47]Such attention to detail already has its advocates; Thích Nhất Hạnh, for instance, especially advocates mindfulness and “looking deeply”. I would assert generally that the vast percentage of people who claim to be jaded with the world are simply too lazy to pay enough attention to see otherwise.

[48]It is worth remembering here that von Foerster at least claims, without giving the argument, that the problem is also in principle unsolvable and not just transcomputational.

[49](e.g., the phrase “intelligent design” itself suggests that the scientific view of the origin of the universe is “unintelligent design”.)

[50]This use of “nothing but” is in Jung’s sense. Also, when arguing with religious types, the moment comes when “being a good sport” about their delusional premise becomes untenable, and it simply becomes necessary to state, “Yes, that’s all well and good, but God doesn’t actually exist, so all that you adduce from that premise doesn’t follow.” The parallel case involves arguments with scientific types, when the tenability of being a good sport collapses in the face of, “Yes, that’s all well and good, but human beings can’t be reduced to nothing but their biologicity, so all that you adduce from that premise doesn’t follow.”

[51] In point of fact, all metaphysics must be groundless. But to discuss this point adequately would make this essay into a book.

[52]Qualitative scientists seem less enthusiastic when predictions prove inadequate, so much so that they querulously deny they’re looking for them or that they’re even possible.

[53](My own ability to predict depends upon what I know, yes, and so whether I will be surprised or not depends on how well I know (or think I know) any nontrivial machine I am looking at (a human being), but all of that is merely about me, because it is transcomputational that I might ever arrive at the understanding necessary to actually and really make a nontrivial machine wholly predictable.)

[54](although at this scale of complexity, the difference between analytically knowing one human versus all of them becomes a virtually negligible part of the equation; that is, if human beings could be reduced to only 4 inputs, 4 outputs, and 4 internal states, then knowing all 7 billion of them would change 10^126 to 7 × 10^135—again, a momentous number, but not that much more momentous compared to 10^126)
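For what it’s worth, the arithmetic behind that correction, on my (possibly too simple) assumption that knowing each human is a separate search of the same 10^126-sized space of candidate descriptions:

$$\underbrace{7\times10^{9}}_{\text{humans}}\times\underbrace{10^{126}}_{\text{candidate machines each}}=7\times10^{135}$$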

[55]Eastern epistemology fails to dovetail with radical constructivism primarily in the metaphysical claims made in the East that radical constructivism—whether from Varela’s, Maturana and Varela’s, or von Glasersfeld’s points of view—eschews. Nevertheless, one may recover a phenomenological description from Eastern philosophy that more readily approaches the radical constructivist epistemology than the naïve realism (as the philosopher Putnam called it) that has dominated the last 2,500 years of Western philosophy.

[56](I wouldn’t accept the “self-cancelling” part of this question.)

[57]I do not intend by my use of the word “necessary” here to point to or reinvoke the qualification that necessity puts on the deities of intolerant monotheism’s omnipotence, but neither does this usage negate that point.

[58]It is understandable enough how the rhetorical appeal of this works. If we are created accidentally, then all we can do is state by fiat what any purpose to our existence might be, whereas in the presence of one who deliberately created, our fiat stipulations, even when wrong, at least seem grounded. We can reverse this, of course—the Mother intends great things for us, while the Father merely sowed his oats and moved on without another thought for us. Even so, difficult as it might be to feel convinced that disinterested creation may nevertheless impart a purpose (or that interested creation does not guarantee a purpose), human beings must always and are always in the position of stipulating our purposes in life. Whether a Great Mother or the Father, these are nothing more than metaphysical explanogems that themselves provide the very ground of certainty we desire for asserting a purpose. It’s like the trick of deceiving ourselves that reality exists in order to justify doing anything in the first place.

[59]Assuming the scientist doesn’t already have metaphysical commitments, as when Einstein famously denied dice-playing amongst the Creator’s bag of tricks—an embarrassingly orthodox assertion (from a religious standpoint).

[60](it is not an adaptation of life; organisms are born “pre-adapted” to the environment they find themselves in, or they die—it is only much later evolved creatures, ourselves included, who could come into the world unfit for survival and change the environment to suit our continued existence)

[61]Maturana, H., and Varela, F. (1987). The tree of knowledge: the biological roots of human understanding (trans. Robert Paolucci). Boston, Massachusetts: Shambhala Publications.

[62]Here, the sense of “cognitive” would seem to derive from the authors’ Spanish original, conocer (“to know”), which itself derives from Latin’s cognōscō, i.e., con (“with”) + gnōscō (“know”) and means “I learn” or “I am acquainted with” and, in the perfect tense, “I know”.

[63]Foucault, M. (1995/1977). Discipline and punish: the birth of the prison (trans. Alan Sheridan). New York: Vintage.

[64]Illich, I. (2000/1971).  Deschooling society. New York, NY: Marion Boyars.

[65](I’m not providing an example of viewing a problem with a robot vacuum as either trivially conflictual or nontrivially contradictory, because I am unconvinced that it is apposite to think of artificially made trivial machines in nontrivial terms. Whatever nontriviality such machines exhibit is really a pseudo-nontriviality that arises because we have some lapse in our understanding of the machine. When our car misbehaves, seemingly nontrivially, we expect the knowledge of the mechanic to trivialize that appearance. And even when she cannot, we do not accept that this failure means the car is really nontrivial after all; it means only that the mechanic is not knowledgeable enough yet.)

[66]This may seem like an overstatement. As roulette is a human-created game, our relationship of knowledge to the outcome of each spin is not really total ignorance—we know that given the number of slots where the ball may land that there are various odds, &c; were this not the case, we’d likely not risk the game at all. But, as a trivial machine par excellence, each next operation of the wheel is historically independent of all previous iterations—no matter how the deluded gambler beside it convinces himself otherwise (and assuming the wheel is not rigged). There is a temptation to describe this situation in terms of uncertainty (in Shannon’s sense), in which case one is faced by maximum uncertainty and an, at best, 50/50 chance that one’s bet will be correct (as opposed to wrong). This 50% uncertainty of having a winning outcome (i.e., the circumstance of maximum entropy) is not, of course, the odds that one’s choice has of being a winning one. In this context, this maximum entropic uncertainty (measured at 1 in 2) is tantamount to a random outcome, and it is in this sense that our knowledge in relation to the outcome may also be referred to as nothing.
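As a quick numeric check of the note’s claim about maximum entropy (my own snippet, computing Shannon’s measure for a two-outcome bet):

```python
# Shannon entropy (in bits) of a two-outcome bet: maximum uncertainty,
# exactly one bit, falls at the 50/50 case the note describes.
from math import log2

def binary_entropy(p: float) -> float:
    if p in (0.0, 1.0):
        return 0.0
    return -(p * log2(p) + (1 - p) * log2(1 - p))

print(binary_entropy(0.5))   # 1.0 -- maximum entropy for two outcomes
print(binary_entropy(0.9))   # ~0.47 -- a lopsided bet carries less uncertainty
```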

[67]Jung, CG (2010). Answer to Job (from vol. 11 of the Collected Works). Princeton, NJ: Princeton University Press.

[68]A fundamental distinction between much of Indian philosophy and Kashmiri Śaivism in particular concerns the real status of the world. One often finds the notion (in Indian philosophy) that reality is an appearance only, translated sometimes as “an illusion”, albeit a necessary one on the path toward achieving enlightenment, liberation, salvation. In Kashmiri Śaivism, by contrast, reality is an expression of Śiva’s will and, as such, is nonillusory (and thus also still as equally necessary for the attainment of enlightenment, liberation, salvation). It requires no metaphysical commitment to either of these views to appreciate their adequacy of description with respect to lived human experience. If I favor the Śaivistic explanation, it boils down to how the phrase “an illusion” resonates undesirably in my Western setting. When the world is deemed non-real or an illusion, one begins to veer toward either nihilism, on the one hand, or that otherworldliness often encountered in practitioners of intolerant monotheism, who have their eyes set on the afterlife. One wonders how much more difficult it might be to motivate suicide bombers, crusaders, or Israeli checkpoint soldiers if the reality of that other world—with its implicit or explicit denigration of the significance of this here-worldly but illusory world—were less pronounced.

[69]Wittgenstein relied on the truth of tautologies to construct certain arguments (“truth” here meaning internal coherence more than some correspondence with a hypothetical reality). As a tautology, the statement, “The Inconceivable is neither omniscient nor not omniscient” is logically true and thus valid. Assume further that “The deities of intolerant monotheism is omniscient” is also at least logically true, and thus valid. I raise this to avoid the objection that the former statement is a paradox or somehow illegitimate by comparison with the latter.

[70](I’m going to use THAT and Jehovah, as representatives of Indian and Western religion, respectively, to semantically charge my example). Hence: while the (necessarily logically true) tautology “THAT is neither omniscient nor not omniscient” is logically dissimilar to (the necessarily logically false) contradiction “Jehovah is omniscient and not omniscient” there remains something tantalizing in the fact that the former statement can carry two “states” (omniscience and not omniscience) while the latter cannot. On the notion that more is more, it appears that THAT (with Its two states of omniscience and not omniscience) covers more than Jehovah, who necessarily claims only one state (omniscience). Notwithstanding what the word “omniscience” intends to signify, there seems to be some kind of justice in asserting that a deity who is both omniscient and not omniscient is more of a deity than one who is only either omniscient or not omniscient. So, parallel to the question in the domain of omnipotence (Q: “Can Jehovah make a rock He can’t move?” A: “Yes, if he wanted to”), one might ask “Is there anything Jehovah doesn’t know?” A: “Yes, nothing.” This may seem merely clever, but as soon as a distinction is drawn, it proposes a range that is outside of that distinction that is then excluded from that distinction. Logically, this means not drawing distinctions, which erases the distinction itself between awareness and self-awareness. It was precisely for the sake of self-awareness that Jung proposes (in “Answer to Job”) that humans were necessary—or, more precisely, that self-consciousness emerged out of the nonself-awareness of the unconscious. So, even if it makes one queasy or uncomfortable, avoiding the resort of the contradiction “Jehovah is omniscient and not omniscient” seems untenable. One may treat this as a paradox—and seek to resolve it by following Russell and Whitehead’s example of specifying a new level, by extending the statement across time so that its truth and falseness alternate, by incorporating self-referentiality into one’s logic as Varela did—or not, by rejecting the premise of truth as a property of statements or by reformulating one’s claims in light of Indian insight.

[71]I suspect that Jung’s argument is deeply informed by the notion that the collective unconscious is, in general, the authentic root of “god”—which is not to say that the divine is “only in one’s head” (Jung, as always, would avoid any metaphysical claims in any case), but rather that whatever numinous experience of the transcendental we humans can experience would necessarily emerge through the archetypes (the structure of thought in the unconscious). And so, precisely in order to be self-aware of this activity of the collective unconscious, there must be a consciousness to reflect upon it. And therein lies (I suspect) at least one root, if not the root itself, for how Jung arrives at the notion that the divine (the unconscious) had to create human beings (consciousness) to effect its own transformation, even its redemption, as one variety of salvation or enlightenment.

[72]Jung, CG. (1967). Alchemical studies (vol. 13 of Collected Works) (trans. RFC Hull). Princeton, NJ: Princeton University Press.

[74] Barring certain theories of universal contraction which, curiously enough, are not incompatible with Indian notions of universal collapse and renewal.

[75]I read recently that if the technological development of the car had followed the arc of the computer’s, then by now everyone would have a car, they would cost $100 each, and once a year they would explode, killing everyone inside.

[76]Lem, S. (1970/1961). Solaris (trans. Joanna Kilmartin and Steve Cox). New York: Faber, Walker.

[77]The issue is too complicated to cover in detail, of course. Here, I will simply point out that any humanly made artifact (including language) affords certain uses, and that if one attempts to use an artifact in a way that it does not afford, then it “objects”. Video recording affords the use of recording and making copies of movies illegally, but the design of the technology is not for that. It is actually a bit dunderheaded to say that video recording equipment allows the illegal recording of movies, because whether a recording is illegal or not is wholly outside the operation of the video recorder itself. In any case, the technology was not invented strictly to make illegal recordings. And so the Supreme Court’s decision may be called cogent, because the proper response to the illegal recording of movies would be to prosecute those who use the technology in that way, rather than to suppress the technology. On the other hand, guns and atomic weapons by design DO afford the intentional destruction of human beings. We might still make a fuss of legal and illegal uses of firearms or atomic weapons, but the more pertinent question is whether such research should be prevented in the first place. It will be an unfortunate day for humanity when some technique of gene splicing is put to evil weaponized use, but there is a difference between that crappy use of a helpful human invention compared to the crappy or noncrappy use of an invention that by design affords the mass destruction of human beings.

[78]As George Burns (as God) cutely jokes in the movie Oh, God!, when asked if he ever made any mistakes: “Yes, flamingos. Beautiful birds—pity I put their knees on backwards”.

[79]For those concerned about such things, this need not mean we could not have souls, of course; it only means that souls are just as trivial as anything else.

[80]How this record might be kept is left to the speculative fictionalists. The very configuration of matter in any given living system (or nonliving non-system, for that matter) might itself be exactly that time- and record-keeper. The means are less important than the need to ensure the means are implemented and maintained.

[81]I have heard reports that Maturana would insist that my insistence is determined. The impression I get from this report is of a broadside against any notion of human freewill; that, at least, is how the person who reported it to me seemed to take Maturana’s insistence. Hume’s paraphrase notwithstanding (see page 4 above), von Foerster’s analysis of nontriviality guarantees that we cannot finally dispense with our sense of agency.

[82]Understood in this sense, it should be clear that there is precious little difference between an intelligent design conception of human beings where they are purely trivial machines vis-à-vis some creator and the “scientific” view of Nature as blind, random events that mean nothing. With Nature, we can only stipulate our purpose in the cosmos, knowing that it is merely humanly stipulated. With the deities of intolerant monotheism, to stipulate some unknowable being has stipulated a purpose for us in the final analysis offers no distinction, since our projection that the deities of intolerant monotheism gave us some purpose is itself a human assertion on faith alone. This restates again points already made, from yet another angle.

[83]Eagleton, T. (1989). “Bakhtin, Schopenhauer, Kundera” in K. Hirschkop & D. Shepherd (eds.) Bakhtin and cultural theory, pp. 178–88. Manchester, UK: University of Manchester Press.

[84]Racists are always quick to blame character or genetics as the source of human-generated ills, while progressives recognize ways that our coordinations with the environment elicit (i.e., trigger) certain human-generated awfulness as a response to those signals. Adopted children, for instance, particularly those transracially adopted, commit suicide more often than non-adopted children—one may say, if with a certain grimness, that the adopting culture in some way elicits a higher suicide rate from those (alien) adopted children in its midst. So for those cranks who want to blame human nature for human-generated ills, one might answer with equal justice that such human-generated ills arise from elicitations by a badly designed cosmic setting.

[85]In this respect, it is worth remembering Nadezhda Mandelstam’s report of her poet-husband’s response to beleaguerment, incarceration, and exile: that it ground him down, even driving him to attempt suicide. She specifically denied that it made him a better or stronger person.

[86]Kateb, G. (1972). Utopia and its enemies. New York: Schocken.

[87](Again, one anticipates the resort that intelligent design might not be meant to address life on Earth per se but refers only to the origin of the cosmos itself. But an intelligent design that is ironically, or even perversely, hostile to the life that populates it seems less than an intelligent design—all the more so when the cosmos seemingly can’t even notice there is life in it.)

[88]For those who insist we could not be human if we did not have these challenges, these afflictions: how could one wish for Heaven? Indeed, had an intelligent Designer made it so that such afflictions did not befall us, we would have to say we are already living in Heaven.

[89](von Foerster implies there are reasons as well why it is in principle unpredictable, but he did not address those specifically and neither does this essay.)

[90]I have heard humans referred to as self-domesticated animals.

[91]An increasing range of cognition (as an increasing range of effective action) permits telea other than “continued existence” to come into being. From observing living organisms over the earth, it seems this came into play comparatively early in the evolution of life.

[92]Language is the most frequently resorted-to benchmark, though how to rigorously maintain that human language is not simply a highly complex—so we insist—form of the signaling otherwise found in Nature challenges this resort. Our complacency about this assertion seems the readiest of its refutations. And ultimately, the fact that bees or bonobos don’t compose sonnets while we do might not be valid evidence for a distinction in the first place.

[93]Spetch, M. L., & Friedman, A. (2006). Comparative cognition of object recognition. Comparative Cognition & Behavior Reviews, 1, pp. 12–35. from http://psyc.queensu.ca/ccbr/Vol1/Spetch.pdf

[94]Rather than unraveling.

[96]Even our view of ourselves cannot help but be contaminated by this anthropic principle as well, and there’s no reason to think it is a perfectly acceptable (i.e., objective) view of the matter, since we are the subject of our own observations.

[97]Herrnstein, R. J., and Murray, C. (1994). The bell curve: Intelligence and class structure in American life. New York, NY: Free Press.

[98]In intellectual work, there are large swaths of development in the history of a discipline where two essentially competing notions are at loggerheads. Between X and Y, one side insists it is all X, another insists it is all Y, and someone eventually—with a seeming air of reasonableness—insists, “Well actually, it’s a bit of X and a bit of Y.” Herrnstein & Murray’s (1994) 60/40 split appears to be of this type, and so has that air of reasonableness. However, to put the matter grossly, if X is vomit and Y is feces, do we arrive at the “truth” merely by combining two loathsome presuppositions into a hideous compromise of the two? As premises, intelligence cannot be 100 percent genetic and cannot be 100 percent environmental; no scientific racist who indulges in fantasies about the “reality” of intelligence holds such a radical position—that is, amongst the scientifically illegitimate cranks who take on the conceit of doing serious work, none would take seriously a supercrank who held either of these patently untenable positions. So some sort of 60/40 “compromise” has an appearance of reasonableness when in fact it is arguably even less tenable than either extreme position. My saying this may seem inappropriately scoffing, but it is informed by the testimony offered in court by “intelligence researchers,” who all (with one exception) admitted that they neither knew what intelligence was nor were able to measure it.

[99]Lem, S. (1980). Return from the stars (trans. Barbara Marszal and Frank Simpson). San Diego, CA: Harcourt.

[100]This poses a problem for intelligent design, insofar as the claim of intelligence must rest on an ability to predict the whole working out of the design from its first to its last moment; omniscience theoretically addresses this. Still, an intelligently designed cosmos, which initially had no living systems but is eventually populated by living systems, must answer whether the lifeless or the life-exhibiting state of the cosmos is the one designed for.

[101]Because a hierarchical node is occupied by human beings and because hierarchical nodes are observed by human beings, their trivial functioning may seem something more like a nontrivial function. Trivial machines are historically independent and predictable; these two features in particular are the desired design behaviors of hierarchical nodes. When the report is needed next Thursday, or when this plane is supposed to take off from Chicago and land in Los Angeles, this output for the given input is supposed to go off quite apart from whatever has happened in the past. Like nontrivial machines, the output is synthetically determined by the input, but the specific traits of historical independence and predictability are what characterize the hierarchical node as trivial in design.
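Since so much in this essay leans on the trivial/nontrivial distinction this note summarizes, a toy sketch may help fix ideas; the code below is only my own illustration (in Python, with invented names), not von Foerster’s notation. A trivial machine maps input to output with no memory, hence is historically independent and predictable; a nontrivial machine’s internal state, which each input also changes, makes identical inputs yield diverging outputs.

```python
# A toy illustration (mine) of trivial vs. nontrivial machines.

def trivial_node(x):
    """Same input, same output, every time: historically independent."""
    return 2 * x

class NontrivialNode:
    """Output depends on an internal state that each input also changes,
    so the machine's behavior is history-dependent."""
    def __init__(self):
        self.state = 0
    def step(self, x):
        y = x + self.state                  # output depends on input AND history
        self.state = (self.state + x) % 7   # the input also alters the state
        return y

print([trivial_node(3) for _ in range(4)])  # [6, 6, 6, 6]
m = NontrivialNode()
print([m.step(3) for _ in range(4)])        # [3, 6, 9, 5]: same input, drifting output
```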

[102]Dostoevsky, F. (1994/1864). Notes from underground (trans. R. Pevear and L. Volokhonsky). New York, NY: Vintage.

[103]Gaiuranos, M. (2012). Lucifer in therapy (in four scenes). Urbana, IL: Unpublished Manuscripts.

[104]See footnote 99.

[105]Čapek, K. (2001/1920). Rossum’s Universal Robots (trans. P. Selver and N. Playfair). New York, NY: Dover Publications.

[106]Costigan, M, Ellenburg, M., Giler, D., Hill, W., Huffam, M, et al. (Producers), Scott, R. (Director) (2012). Prometheus [Motion Picture]. USA:  Brandywine Productions, Dune Entertainment, Scott Free productions.

[107]Dick, PK (1968). Do androids dream of electric sheep? New York: Del Rey.

[108]Hall, R. (1990/1928). The well of loneliness. New York, NY: Anchor Books.

[109]Lem, S. (1985/1971). The star diaries (trans. M. Kandel). New York: Harcourt Brace Jovanovich.

[110]Huxley, A. (2004/1932). Brave new world. New York, NY: Harper Perennial Modern Classics.

[111]Our status as “fictional” is one way to circumvent the imputation of “cruelty” (the qualification to the attribute all-loving) insofar as the “reality” of our suffering during the course of the book, drama, or film was actual without being real, just as an actor “suffers” onstage. That we’re alarmed at the time, even to the point of suicide or a murderous rampage, gets its happy ending when the house lights come up, when the film crew from the cosmic edition of Candid Camera jump out and reveal the whole thing to have been a practical joke, however in poor taste. That Armenian genocide? It was staged, like one of Potemkin’s villages or Pullman’s houses. That this has all the cheapness of a novel that deflatingly admits at the end, “It was all a dream,” even if we’re immensely relieved after all, is the entry point for a critique of this move.

[112](“The counter-argument maintains that, in addition to (or instead of) being thought of as an engineer, God is perhaps better thought of as an artist (possessing the ultimate artistic license). Moreover, this application of the argument presupposes the accountability of God to the judgment of humanity, an idea most major religions consider to be an enormous conceit that is diametrically opposed to their doctrines.” From http://en.wikipedia.org/wiki/Argument_from_poor_design)

[114]“Despite the amendment lacking the weight of law, the conference report is constantly cited by the Discovery Institute and other ID supporters as providing federal sanction for intelligent design. In response to criticisms of the Institute stating that the amendment was a federal education policy requiring inclusion of alternatives to evolution be taught, which it was not, in 2003 intelligent design’s three most prominent legislators, John Boehner, Judd Gregg and Santorum provided a letter to the Discovery Institute giving it the go ahead to invoke the amendment as evidence of “Congress’s rejection of the idea that students only need to learn about the dominant scientific view of controversial topics”. This letter was also sent to executives on the Ohio Board of Education and the Texas Board of Education, both of which were subject to Discovery Institute intelligent design campaigns at the time.” (from http://en.wikipedia.org/wiki/Santorum_Amendment)

[117]Rosenblueth, A., Wiener, N., & Bigelow, J. (1943). Behavior, purpose and teleology. Philosophy of Science, 10, 18–24. Available at http://pespmc1.vub.ac.be/Books/Wiener-teleology.pdf

Abstract

Crowds of people are heuristic; packs of people are algorithmic—where heuristics takes as its goal reaching a state that concretely fulfills an ideal, and algorithms take as their goal reaching the point that ideally fulfills something concrete.

Introduction & Disclaimer

This is the twenty-sixth entry in a series that addresses, section by section over the course of a year+, Canetti’s Crowds and Power.[1] This post addresses no specific section of the book so far (i.e., “The Crowd”, “The Pack” and “The Pack and Religion”, or any of the subsections), but is a coming up for air or a check-in on where things stand over the course of the author’s 168 pages so far. It covers “The Crowd” in two essential parts: negatively clearing away the muddle of Canetti’s exposition and positively attempting to characterize crowds in order to continue forward.

I’ve banished my normal disclaimer into an endnote,[2] because this whole post is an attempt to come to terms with what has gone before so far. As a reflection on a reflection, I should cite and document whatever blog-posts I’ve written and/or recite the relevant passages in Canetti, but the point of this is more to take stock of the past (work done so far) in order to proceed into the future. I will admit, I could have done something like this at the end of the first section (“The Crowd”) and, in fact, should have; equally, I could have or should have done a check-in at the end of the second section (“The Pack”). I did not for “The Crowd,” because at the end of that section I felt like I had actually wasted my time in the effort to find something of lasting value in Canetti’s exposition. I’m glad to have persevered, not necessarily because by the end of “The Pack” I’d started to find Canetti’s exposition helpful but because it led me to Spencer and Gillen’s (1904) groundbreaking work in Australia. Their work provided a jumping off point and contrast for finding sometimes helpful alternative constructions to those offered by Canetti.

But, whatever personal/intellectual teleology I might get out of Canetti, in terms of his argument generally in his book, it seemed essential at this moment to stop and take stock. The reason is: Canetti began by attempting to characterize the crowd, and then took a step back to characterize the pack (as the earlier, smaller, not-quite-crowd), and then in “the Pack and Religion” deceived himself that he’d made a case for how the transmutation of packs provides the basis for world faiths. The next section “The Crowd in History” betokens an obvious return to the crowd, which at this point has no material relationship to the pack. And so, if there is going to be any clarity (for you, for me) moving forward, I need to establish what’s worth keeping and what’s been found in order to chart a clear course through the mess to come. Partly I anticipate this because the first section of “The Crowd in History” is an especially obnoxious intellectual foray on Canetti’s part.

The Crowd (Clearing the Old Ground)

In general, it seems simplest and wisest[3] to admit outright that the crowd attributes Canetti lists in the first 48 pages of his book are dead ends—not even Canetti’s claimed most determinative feature, the crowd’s desire for growth, is present in every crowd even he describes, though the gravitational quality to draw people to or into a crowd is certainly a recognizable factor in some crowds. Generally, if there’s going to be any salvaging of any of the material in this section of his book, it may need to be only after the exposition about types of crowds (baiting crowds, &c).

With caveats (discussed previously), members of a pack do not need to know each other (as Canetti insists) but can know or assume that the other members of their pack are capable of contributing whatever role or function or skill their presence supplies toward reaching the collectively agreed upon goal. With modern connectivity, this means the pack might be distributed all over the world, but whether this begins to shade away from a pack per se would need further distinguishing. This points to the fact that mere numbers are not enough to make a pack a crowd. Thousands of people, all collectively bent on storming a castle wall or setting some kind of Guinness record, would not necessarily become a crowd simply by virtue of scale.

Adorno (2003) at one point agrees in his conversation with Canetti that an army is not a crowd—a curious moment, since Canetti seems to assert otherwise in his book. Regardless, we can indeed derive the notion of an army by assembling together and coordinating a series of packs (platoons, battalions, &c), i.e., by not calling them a crowd. In this, just as each member of a pack contributes to a collectively agreed upon goal, we can see how each unit—precisely like an individual—subsumes its own activity to the larger, collectively agreed upon goal of the battalion, the division, the army, &c. Here again we see that it is not necessary for individual units to “know” other units, but only to assume (rightly or not) that the roles or functions or sets of skills of those other units required to achieve the larger goal are present. Voilà, hierarchical organization in general—cue all of the insights of cybernetics and Scott’s (1992)[4] Organizations: Rational, Natural, and Open Systems.

This said, a pack is not a crowd—that much is clear. Human beings never being without a purpose, teleology cannot and should not be denied to the crowd,[5] but whereas teleology is the organizing theology of a pack (i.e., all effort of the pack takes as the preeminent value the attainment of the goal)—the culture of the pack; the religion of the pack, if you must—the teleology of the crowd, whatever it is, may not be of primary importance for a crowd.[6]

Canetti tries to argue that a pack population must be stable, but this is not so. The boundary of the pack is nothing more than the agreement to subsume one’s activities toward the collectively agreed upon goal. The function of the pack—like a system—changes according to changes of its members, but the goal remains the same until some effort is made to renegotiate it due to new information or because someone or a group sets out to “subvert” the original purpose of the pack to a new end. And people may come or go from the pack according to the culture of the pack. This all simply comprises the workings-out of the pack in its material manifestation. So it is not clear whether one can speak of a crowd having a boundary, except that an observer (whether in the crowd or not) simply declares it by fiat. Here, however, there is no collective agreement, or, to the extent that there is one, it is assumed by each individual. In any case, people may come and go through whatever seemingly physical boundary of the crowd there is, or may depart from the crowd even as they continue to stand in its midst.

This point has been raised previously and is a central problem and failing of Canetti’s exposition. He plays fast and loose with a point of view that is inside or outside of the crowd. (Of course, one does not need to be exterior to a crowd to be an observer of it.) On the one hand, people can and do experience that moment of being lost in the crowd, of being at one with all of those around you—and certainly not only because of a fear of being touched.[7] But on the other hand, logic alone indicates that if a crowd permits some to experience ego-annihilation and thus a (desirable or unnerving) dissolution into the mass of people around them, then one could equally expect that that moment of mass presence could conversely generate a radical sense of (desirable or unnerving) individuality—this moment of fatal isolation seems exactly to be at the heart of Riesman’s (1961)[8] lonely crowd. Once again, the variability of human response—as precisely the sign of the human in such circumstances—precludes generalization about inevitabilities or single outcomes.

Adorno underscores Canetti’s emphasis on death—not exactly a startling emphasis given when he was writing, and not exactly an original idea given that ridding ourselves of three of the four riders of the Apocalypse will not banish the remaining one, who almost always follows close on their heels—and this may help us to understand “the crowd,” at least as Canetti is preoccupied to constitute it.

However a crowd forms, once we find ourselves in medias res, the usual biological array of options includes fight, flight, or freeze, but because we are humans not limited to biology, we also have fraternizing, i.e., we can join the crowd. On one reading, Canetti swerves between these two forms of crowd confrontation: some variety of opposition to it (fight, flight, freeze) or non-opposition (fraternity). For those who choose flight, the baiting party pursues them. For those who choose to fight, it might end in a truce (i.e., a refusal crowd); otherwise it ends in the equivalent of a lynching (a successful baiting crowd). For those who freeze, this points to a refusal and is akin to the prohibition crowd. For those who join, this points to the (evil or redeeming, glorious or disastrous) festival.

This attempt to fit Canetti’s types of crowds partly misses the mark because he describes the types of crowd from the standpoint of someone observing the crowd; that is, ultimately, from the standpoint of someone threatened with destruction by the crowd. Let it be said that crowds, like anything elephantine, can be dangerous merely by their size. People get crushed to death at music concerts completely inadvertently just as an ocean liner would turn you into a smear if you got caught between it and a pier. Like a dangerous animal, crowds can be numinous and fascinating, but to make that the center of a sociology of them veers off into psychology or poetry instead, as is obvious in this book.

The baiting crowd, like the hunting pack, is Canetti’s starting point, and a baiting pack presupposes prey. Everything for Canetti depends on the experience of the targeted prey (whether human or animal). Canetti’s exposition is fundamentally oriented—in each illustration of a crowd—toward this threat of destruction: in the baiting and war crowds (including Islam as a religion of war, or Taulipang and Jivaro annihilating acts of revenge) this is literal, but it is present also in his examples of the death and trampling unleashed in Greek Orthodox Easter festivals, the destruction turned on oneself in Shia Islam’s lament of Ashura and in Warramunga lamentation, and Christianity’s attempt to exterminate the infidel not only in the Crusades, which Canetti specifically cites, but also in the manifold campaigns against European heresies and the Pope’s support (direct or not) for the Shoah.

Given the looseness of Canetti’s exposition, pouring the overcooked noodles of it into the mold of this basic confrontation leaves more to be desired from the quality of his cooking than the inadequacy of the mold. But what the crowd is, first and foremost, in Canetti is a mass of people not oriented favorably toward another human being: those in the baiting crowd seek their “prey”; those in the striking prohibition crowd refuse the edicts of “the man”; those in religious congregations set themselves apart from the “damned”. The self versus society problem stands at the center of this and so—perhaps for merely personally neurotic reasons—Canetti opens his text by characterizing the joining of the crowd as the panacea for this problem, the joining of the crowd being a preferable alternative to the destruction ultimately portended by fight, flight, or freeze. Or, at least, in the probability game of maximizing one’s likelihood of survival, joining has fewer risks (it seems) than fight, flight, or freeze—the only cost of it (in this ethical dyad) is that one must become a perpetrator, albeit one theoretically absolved of responsibility because in the crowd individuals cease to exist as responsible agents.

Canetti only obliquely acknowledges that Judeo-Christianity amounts to a baiting crowd or hunting pack in his terms, so that communion (as he describes it) takes the curious turn of excusing one’s extermination of the sacrificial animal (whether the savior, an elk, or one’s neighbor) by the fiction of identifying with that which has been exterminated. To be clear, in other traditions, one may follow an etiquette regarding the animal killed under the aegis of a fiction that the animal gave itself willingly and will be returned to life if treated respectfully (essentially insisting that its suffering is therefore exculpated and its death denied any reality). Thus, the communal feast, conducted in this way, ameliorates the blood-guilt by what one might cynically call a very pretty fiction indeed. But in the Judeo-Christian tradition of intolerant monotheism, one does not excuse one’s crime of murder in this way but, rather, by claiming to be co-equal with the victim. In the most psychopathic mode, this is the murderer screaming at his victim, “You made me do it.” This is the fiction of intolerant monotheism, familiar from the tear-jerking confession at the end of Law and Order: SVU or any psychokiller thriller that “they” made me do it; hence, Canetti’s aside about persecution (pp. 22–3). In Jung’s (1952) “Answer to Job” he offers further details of this (psychopathic) identification. Two primary kernels of this are: “We have already pointed out at some length how curiously god’s Salvationist project works out in practice. All he does is, in the shape of his own son, to rescue mankind from himself” (¶664); and:

Job … was an ordinary human being, and therefore the wrong done to him, and through him to mankind, can, according to divine justice, only be repaired by an incarnation of God in an empirical human being (¶657).

The annihilation of ego that Canetti purports to find in the crowd provides the foundation for this other-blamed “they”—although it must be noted that many who have committed mob violence afterward seem to have, at least in some cases, a genuine amnesia about what they did.[9] And whether someone points to a “they” in the people around them or to a “they” that lives only in their head—which Jung documents in the mentally well and unwell alike in the form of complexes—there is nevertheless the disturbing fact (from a legal/culpability angle at least) that that “they” did actually usurp a person’s will. This is the useful sense of the word possession—that we get possessed of an idea or an impulse, and it carries us along, kicking and screaming or not. In such a context, the premise of original sin makes asking forgiveness for them disingenuous; the very asking itself can only be further depravity—as Ambrose Bierce puts it, to apologize is to lay the foundation for a future offense.

Insofar as Canetti starts from the view of the crowd as posing a threat—the threat of baiting, the threat of refusal, the threat of war—this dovetails with his overemphasis on destructiveness (pp. 19–20) and death. But not only would the crowd not see this in those terms,[10] the flight crowd and the feast crowd both betoken neither threat nor destruction. In other words, we cannot pretend that crowds do not have the potential to be creative as well as destructive.[11] Though “creative” or “destructive” (as moral judgments) will always be from a particular point of view—the destruction of the hunted animal for the hunted is the creation of a communal meal for the hunters—whether a crowd is destructive or creative is not a fact, but rather a declaration by someone.[12] More basically still, for there to be a perceived threat at all, there must be at least two groups in the first place, but not all crowds find themselves physically in the presence of others.

To get to how crowds confront one another, however, it seems necessary first to understand what might be meant by the purpose (the teleology) of a given crowd, especially since (compared to packs) crowds may often seem aimless.

To put this difference in plain language first: with a pack, its members know certainly what they want and improvise adjustments to stay on track to the goal; with a crowd, its members stay on track with what they know in order to arrive at a goal the location of which may not yet be so clear.[13] The crowd of wandering Mormons in the United States and the South American Indians who for some four hundred years followed a messianic impulse all over the Amazon daily repeated the rituals of their cultures while “aimlessly” moving through unknown territory, making adjustments to those daily rituals in light of local conditions.[14] These groups kept the faith until they arrived at what was recognized as the Promised Land. For a pack, by contrast, the members keep their eye on the prize and adjust their activity to get to the goal of that prize (even when the prize is a moving target, as when a pack of wolves[15] stays focused on tonight’s moose calf dinner).

To reprise this in more technical language: the crowd of wandering Mormons in the United States or the tribe of South American Indians who for some four hundred years followed a messianic impulse all over the Amazon daily repeated the rituals of their cultures (as algorithms) while “aimlessly” moving through unknown territory in the expectation that they would recognize their arrival at a desired end-state or goal (heuristics). (The distinction between algorithm and heuresis gets discussed in detail in footnotes [16] and [17] below.)

Whereas packs aspire to the condition of algorithms even as circumstances call continually on the improvisational heuristics of its members to keep the pack oriented to its goal, crowds advance by iterative heuristics[17] even as they repeat their algorithmic habits (as rituals, packs, &c). Thus, crowds tend to be heuristic while packs tend to be algorithmic, understanding that heuristics has as its goal an experience that reaches a state of concretely fulfilling an ideal,[18] while an algorithm has as its goal an experience that reaches the point of ideally fulfilling something concrete.[19]

With this as something of a framing, we can try to conceptualize what a crowd might be.

The Crowd (Preparing A New Ground)

Rather than attempting to describe what crowds are, it may be more helpful to describe what crowds do.

For a crowd confronted by the presence of another crowd, this may entail moving away (what Canetti calls a flight crowd, though it needn’t only be due to a threat, e.g., emigration or amicable separation), moving toward (what Canetti indicates in the war crowd, though this needn’t only be through hostilities, e.g., immigration or amicable merging), or neither (what Canetti calls the refusal crowd, though this needn’t be oppositional, e.g., simple neighborliness).

About this neighborliness: in Spencer and Gillen’s (1904) observation of Australian tribes, this neighborliness has a virtually ontological factualness. For any given people of a totem, the land they occupy is precisely that land originally occupied in the dream-time by their totem ancestors. For this reason, as Spencer and Gillen (1904) aver, people of one totem have literally no use for land designated to or for another totem. Thus, while there are the inevitable interpersonal squabbles that can arise when people have differences with one another, Spencer and Gillen (1904) report no territorial disputes between tribes. Thus, the crowds of the tribes have faced one another without fleeing, attacking, standing in an oppositional tension, or merging. They maintain a very definite distinction, but it is neighborly or mutually tolerant, not oppositional.

For the purpose of describing the phenomenon of crowds, it is not sufficient to resort to the equivalent of Justice Potter Stewart’s dictum: “I know it when I see it.” However, the evidence from history and daily life and Canetti’s book shows that this is the most common way that a crowd forms—that is, someone says, “Man, that’s a crowd” or “Yikes, I’m in a crowd.” And so, in the phrase “for a crowd confronted by the presence of another crowd,” what can constitute “another crowd” may emerge in a culture through authoritative fiat by an individual, by someone declaring, “there is a crowd,” whether that is via the declaration of a chief, a shaman, a public intellectual, or (in those cultures where authority has yet to be sufficiently enclosed) theoretically anyone. Such a declared crowd by no means needs to be comprised of other human beings—if I imagine a hypothetical early migratory human crowd, they might not recognize those other bipeds who make funny sounds as human beings at all—perhaps that’s why Neanderthals are no more.

The necessary, logically inescapable dyad of the crowd is crowd/not-crowd, however this finds cultural expression. Thus, no crowd is ever “alone,” although the pressure or presence of the Other may be of little moment. A crowd might approach the “crowd” of some geographic feature in the landscape in any of the three basic modes, i.e., in a friendly way, in an attack that conquers the place, or in a fearful opposition to or neighborly distinction from it, &c. This point differs from Canetti’s claims about invisible crowds or crowd symbols. Either because he repeats another’s authority (to declare something a crowd) or because he fails to recognize his own arrogation of authority to declare something a crowd, his argument erroneously reifies cultural necessities as natural facts. This in no way means that such declarations have no social reality—for the Lele the forest does house a crowd of spirits; the Pasha’s Ottoman empire did annihilate a crowd of Armenian undesirables during the twentieth century. Crowds of this type (what Canetti calls invisible crowds) are unreal, bearing in mind the fable from India:

Once upon a time, a skeptical Prince who was a pupil of [the Indian teacher] Shankara decided to test his teacher. Once when the illustrious scholar was walking up the royal pathway to the Palace, the Prince unleashed an Elephant from the Army stables directly onto Shankara’s path. ¶ The Brahmin, not known for valor of this sort, proceeded to climb up the nearest tree.  ¶ The Prince approached the teacher, bowed respectfully and inquired as to why he had climbed the tree, since according to his own teaching, everything, including the approaching Elephant, was unreal. ¶ ‘Indeed’ said Shankara, ‘the Elephant was unreal, but where do you conceive of the idea that the unreal cannot be harmed by the unreal?’ (from here)[20]

Again, Canetti’s argument erroneously reifies cultural necessities into natural facts. Some (including Canetti) might argue that he is merely describing human behavior as one may observe it from history, but to accept the Ottoman claim of factualness about crowds of Armenians assents to the psychopath’s “you made me do it”; it’s this sort of thing that presumably makes Sontag declare, “Canetti dissolves politics into pathology, treating society as a mental activity—a barbaric one, of course.” The problem (continuing this metaphor) is that Canetti suffers from the pathology while he diagnoses it. Whatever the merit of the diagnosis, Canetti’s exposition hinges crucially on naming the cultural necessities of (invisible) crowds as (natural) facts—cue his resort to a dubious intersection of biology and anthropology. Per Shankara, while the unreal may indeed be harmed by the unreal, his further point is, “but so was your presumption that there was a me, climbing a tree” (from here).[21]

Understanding cultures by understanding these kinds of crowd formations gives clues to how crowds behave, but we are not obliged to accept whatever sort of ideologically structured teleology (whatever “ist”) these reflect as obligatory or universal (much less monolithic) within a culture. At the same time, while a crowd is never “alone” for this reason, this does not mean that everything not-crowd is merely one undifferentiated mass. The distinction of sacred and profane points to a further differentiation (within the domain of the profane) in any sort of thing (as an object or process) that is taken of no account at all. In other words, while “us” remains distinct from “them,” there is also everything else that is not-them that needs no consideration at all, that is not even noticed for want of a distinction in the first place, &c.

Here, the yin/yang symbol again illustrates its utility.[22]

[image: yin/yang symbol]

Within the ambit of my culture (i.e., on my “half” of the yin/yang symbol), there is culture (the dominant color of the sacred) and not-culture (the opposite-colored dot of the profane). In his (1954)[23] Myth of the Eternal Return, Eliade notes that the profane is unreal, nonreal; by contrast, the immoral is quite real, it (unfortunately) exists. Thus, within my culture, represented by one color of the yin/yang symbol, the spot of opposite color denotes the bounded, enclosed zone of prohibition and interdiction.[24] Nonetheless, the color of the dot on my half is not really or actually the same as the color of the dominant in the other half, though it appears to be.[25] The actuality of the Other is not what I see through the lens of the alien spot of color—outside the border of my culture, I project and overgeneralize the color of the alien spot in my world onto the rest of the world, the other half of the world. Meanwhile, it is not only that the other half of the world is other than I am viewing it, but also that the lens of my world doesn’t even notice some parts of that other half. One could run considerably more with this, but for now, this distinguishes the sacred, the profane, and the nonexistent—that which is taken no account of.[26]

As a matter of culture, these authoritative fiats (wherever they originate), once they get established, have the status of a myth; that is, they become a part of the collective explanatory paradigm formed by culture. These are decidedly not granted such status by consensus (like a pack), but neither are they just a matter of some ambitious bastard telling a suitable or convincing story.[27] Precisely the not-agreed-to quality of these fiats can make their utterance by a member of a culture appear to have (if not actually have) a suprapersonal or transcendental (i.e., objective) quality, rather than something merely subjective. Jung (1952)[28] describes this from a different context:

Myth is not fiction: it consists of facts that are continually repeated and can be observed over and over again. It is something that happens to man. … It is perfectly possible, psychologically, for the unconscious or an archetype to take complete possession of a man and to determine his fate down to the smallest detail. At the same time, objective, non-psychic parallel phenomena can occur which also represent the archetype. It not only seems so, it simply is so, that the archetype fulfills itself not only psychically in the individual, but objectively outside the individual (¶648).

From within a culture, these individual authoritative fiats (as “scripture” or revelation) may well tend to seem authentic in the claim to originate from divine inspiration or the like. It is rather especially when one group, with their scripture or revelation, starts foisting it on another that skepticism becomes likely and even necessary. If previously there had been a unanimity of crowd, then a schism may begin to occur—assuming one group doesn’t simply annihilate the other in a bid to suppress contradiction or dissent to the fiat.

Given the advantages of mutual indebtedness already noted from Spencer and Gillen (1904), it is interesting that “schism” needn’t result in “separation”; that is, all of the tribes they observed were split into at least two facing halves (moieties), which then organized a vast array of consequential relationships and behaviors between individuals. The relationship between moiety and totem is by no means anything like a natural law—amongst the Arandan, it seems as if totems are distributed “randomly” across moieties, while in the Warramunga and related tribes, moieties and totems tend to be associated (i.e., most if not all of the kangaroos on one side, most if not all of the emus on the other, &c). There are obviously a whole host of fiats and “we don’t do that” types of gestures that do not prevent people from opposite moieties from living proximate to one another. The sort of equilibrium this suggests seems critical, and cannot be reduced only to the sense of mutual indebtedness.

The drift of my exposition obviously takes “crowd” in the sense of an ideal, isolated, monolithic culture without claiming any such thing ever had to exist, though examples are easy enough to imagine (e.g., cults, corporations, &c). If the pack is distinguished by an explicit assent to a collectively agreed upon goal, then the crowd in this usage is distinguished by an implicit orientation to a myth, i.e., a value, end, a way of being in the world.

In numerous examples from aboriginal cultures in Australia provided by Spencer and Gillen (1904), these fiats are effectively lost in the past: they were established in the dream-time, and each new generation inherits the same fiats from the one before, back to time immemorial. There’s no question of “agreeing” to these things. Outwardly at the very least, one obeys them, whatever goes on in someone’s head or on the sly when not under the panopticon’s gaze. The advantage of this example is that it seems so remote from our experience that it Trojan-horses in the relevance of the point for our own culture, since we’re not materially different in this regard. Every time we use the word “society” we are doing little different from the Warramunga people who say that they do what they do because the dream-time ancestors did it that way. The “cop in our head” is not a recent legacy.

It’s getting clearer and clearer that pack and crowd seem more like antonyms.[29] This is one reason why crowds will seem like (natural) phenomena rather than (unnatural) human formations, but the temptation of this error should be resisted. It appears then to be this sense of “they,” of “society,” of “them” that acts as the “authoritative individual” who declares by fiat what a crowd’s purpose is—and here I mean crowd not as “tribe” or something like it, but crowd as Canetti wants to sociologize it. For the individual, getting “lost in the crowd” is like assenting to what the “crowd tells them it wants” or what the purpose of the crowd “is”. Everyone is doing this at once, save for those who are actually functioning as some kind of authoritative figure—the one with the bullhorn, at the podium, calling for action from within the mass, &c. Particularly in this kind of crowd, a single voice shouting, “Come on,” has the character (whether illusory or not, whether legitimate or not) of being the voice of revelation, the non-personal expression of transcendent fact (sometimes known as god’s will). Not that this spark, this voice, must always be enough, but to the extent that everyone is in effect being commanded to “obey” the assumed-to-be universal (therefore timeless, therefore incontestable) edict of the “mass will,” then sometimes this is all it takes for things to turn into a massacre, an orgy, a giant sing-along, &c.

NOTE: plenty of this may be anticipating remarks Canetti makes later, but the point of view I’m summarizing differs from his in orientation in that it takes the declaration of individual fiat as the basis for naming that a crowd exists. Canetti relentlessly stresses the equality of everyone in a crowd—an absolute state of equality being one of the several absolute definitions of what a crowd is that he offers—and thus takes no account of the single individual’s voice, rising up out of the sullen mood, crying for vengeance, howling in anguish, simply shouting, “Come on,” as the moment that becomes the authoritative voice that unleashes whatever it unleashes. This exactly presupposes a non-equality of that person, an individuality, however emblematic that person is at that moment of the crowd at large. It’s worth remembering at this point: these authoritative voices point specifically to at least an analog of the role of the intellectual as Suttner (2003)[30], drawing on Gramsci (1979)[31], describes it; in this, intellectuals

should be defined by the role they play, by the relationships they have to others. They are people who, broadly speaking, create for a class or people … a coherent and reasoned account of the world, as it appears from the position they occupy. Intellectuals are crucial to the process through which a major new culture, representing the world-view of an emerging class or people, comes into being. It is intellectuals who transform what may previously have been the incoherent and fragmentary ‘feelings’ of those who live in a particular class or nationally oppressed position, into a coherent account of the world (see Gramsci 1971[32]: 418; Crehan 2002[33]: 129–30).

In a letter of 1931, Gramsci says his definition of an intellectual ‘is much broader than the usual concept of “the greater intellectuals”’ (1979: 204). In his Prison Notebooks, he writes:

What are the ‘maximum’ limits of acceptance of the term ‘intellectual’? Can one find a unitary criterion to characterise equally all the diverse and disparate activities of intellectuals and to distinguish these at the same time and in an essential way from the activities of other social groupings? The most widespread error of method seems to me that of having looked for this criterion of distinction in the intrinsic nature of intellectual activities, rather than in the ensemble of the system of relations in which these activities (and therefore the intellectual groups who personify them) have their place within the general complex of social relations (1971: 8. Emphasis added).

In the same way a worker is not characterized by the manual or instrumental work that he or she carries out, but by ‘performing this work in specific conditions and in specific social relations’ (117–8).

Thus, shamans, priests, politicians, and doctors comprise intellectuals insofar as they provide “a coherent and reasoned account of the world, as it appears from the position they occupy” (Suttner, 2005, 117). So also, the lone authoritative voice who provides a crowd its spark functions exactly in that brief moment as a public intellectual, as the one who gives a coherent explanation of the world to the world (of the crowd), understood from inside of “the ensemble of the system of relations in which these activities … have their place within the general complex of social relations” (Gramsci, 1971, 8, emphasis added) within the crowd. This moment need not be all to the good—the lynch mob and pogrom may begin in this way. Nor must it be verbal or even audible. Human expression takes many forms, and any concrete expression by an individual might function in this way—a sign, a raised fist, breaking out into a dance, a refusal to move, a thrown rock, or in Tunisia the self-immolation by محمد البوعزيزي (Tarek al-Tayeb Mohamed Bouazizi). This is all simply to remind us that crowds are still human events and should not be mistaken for “natural” phenomena.[34]

I don’t know to what extent this serves to characterize what crowds do, but I’m going to at least pause here to give things a chance to sink in.


[1] All quotations are from Canetti, E. (1981). Crowds and Power (trans. Carol Stewart), 6th printing. New York, NY: Noonday Press (paperback).

[2] The ongoing attempt of this heap is to get something out of Canetti’s book, and that of necessity means resorting to the classic sense of the essay, as an exploration, using Canetti’s book as a starting point. I can imagine that the essayistic aspect of this project can be demanding—of patience, time, &c. The point of showing an essay, entertainment value (if any) aside, is first and foremost not to be shy about showing the intellectual scaffolding of one’s exposition as much as possible. This showing, however cantankerous the exposition, affords the non-vanity of allowing others to witness all of the missteps, mistakes, false starts, and the like—not in the interest of merely providing a full record (though some essayists may do so out of vanity or mere thoroughness, scholarly drudgery, or self-involvement) but mostly so that readers may be exasperated enough by the essayist’s stupidities to correct his or her errors and thus contribute to our collective better human understanding of ourselves.

[3] “Seek simplicity, and distrust it” (Alfred North Whitehead). “For every complex human problem, there is an answer that is clear, simple, and wrong” (HL Mencken).

[4] Scott, W. R. (1992). Organizations: rational, natural, and open systems. 3rd ed. Englewood Cliffs, N.J.: Prentice Hall.

[5] This may be why Canetti feels compelled to try to distinguish crowds by their prevailing emotions—another clear case of imagining on his part, and not a very helpful piece of personification. The reason is not because one encounters difficulties trying to construe a crowd as an organism which might then have “feelings,” although this is a major problem. Sloppy metaphors can only go so far—people speak glibly about the “language” of dance or film, and I usually feel compelled to ask, “What are the nouns of film then? What is the declension of the plié?” My objection is not just to the dead-end of such a metaphor, but that by applying the template of language (as a means of expression) to other forms of expression (like dance or film), the metaphor of language ends up obliterating some of the key aspects of the form of expression—the generation of meaning in dance, for instance, differs from how it occurs in language. So then here: if a crowd has feelings, what are its organs of feeling? Why doesn’t it also have thoughts? And bowel movements and aesthetic desires? The dead-end of the metaphor, however, is not the main “error” here. It is, rather, that the prevailing feeling is experienced in the observer, not the crowd. Someone confronting a crowd calls it a “hostile crowd”—those in the crowd, if polled, would supply a range of answers, and “hostile” might not be amongst them—angry, yes, but not hostile. So we see in Canetti’s offerings his own failure to recognize that his sense of “prevailing emotions” is his own projected emotions (or the emotions he imagines projected by someone confronting such a crowd). Consequently, his exposition isn’t even describing crowds, but rather the human experience of (being confronted by) a crowd.

[6] As a matter of scale, an immediate contrast between crowd and pack is in the assumption that members of the pack can make about other members—the quality of known-ness about other members of the pack—compared to a lack of such known-ness in the crowd. I want to contrast this as “known” versus “anonymous,” but this seems a bit sketchy and inadequate. If what is known about other members of the pack concerns more their skills than their characters or temperaments, there is then a temptation to say what the crowd “knows” are the temperaments of those around them. If a certain amount of solidarity in the pack comes from the confidence of counting on the skills of other pack members, then perhaps a certain amount of affinity in the crowd arises in the comfort of an at least similarly shared outlook, but this also doesn’t seem adequate. There will certainly be crowds where this may be the case—people at a protest may be very largely of like mind and like heart—but this must be a factor in the dynamics of crowds, not the defining element. In a pack, I know what you can and will do; in a crowd, it may be that I do not know what you can or will do. There seems to be something to this.

[7] In the neurotic loathing or mere fussiness of not wanting contact with other human beings, there’s no more reason to build this compulsion into a universal truth than there was for de Sade to make sadism into an ontological fact.

[8] Riesman, D. (1961). The lonely crowd: a study of the changing American character. Abridged ed. New Haven: Yale University Press.

[9] This doesn’t necessarily get you off the hook legally: non compos mentis, diminished capacity, and genuine accident (e.g., the difference between involuntary manslaughter and murder) tend to change or reduce one’s sentence for a crime, not eliminate it.

[10] Are you threatening me? “It’s not a threat. It’s a promise.”

[11] One might argue that the flight crowd is evading destruction, but this then is merely the flipside of the baiting crowd, part of a hunter/hunted dyad that doesn’t propose an actual distinction.

[12] So long as the perceived threat is of an individual’s destruction (as Canetti is focusing it), then this proposes a confusion of domains, because it wants to misread the personal tragedy (of the hunted) as a collective crime (by the hunters); the hunters equally misread the situation as the (willing or deserved) destruction (of the hunted) for the collective benefit (of the hunters). Working out who gets to be right is merely a pissing match, but if (for instance) the Shoah is wrong, then so is Israeli apartheid. &c.

[13] In the technical language supplied: For crowds, the algorithms of habit or daily life (our packs, our rituals) get subsumed to the teleological heuresis of trial and error toward a goal state; for packs, the heuresis of improvisation adjusts to the variances of circumstances so that the pack-algorithm continues on track to its predetermined goal-point.

[14] Although it may be objected that this was a migratory pack, not a crowd—that there is too much face-to-face knowledge of one another for it not to be a pack—Canetti’s emphasis on this seems misplaced. My knowing you has no implicit effect on our successful completion of the pack-goal; whether I know you or not, what matters is that I know you can perform whatever requisite tasks made you a member of this particular pack at this particular time; if an affective history increases the validity of my faith in you, that is toward the end of the pack-goal and not an inherent part of packs generally. In a crowd, there is no collectively agreed upon goal, so I have no point of reference for whether or not you can “come through”. So far as our crowd goes, which in this case could be called a mobile society, the personal relationship I have with your character takes on a significance that is gratuitous in the context of a pack. In one sense, an aspiration like “the Promised Land” is more vivid than the goal of revenge for a revenge pack, precisely because it can only be imagined, because it has never been experienced. I know revenge because I’ve seen it; I’ll know the Promised Land when I see it.

[15] The use of an animal example is for familiarity. A group of hunting wolves is not a pack—there is no collectively agreed upon goal, there is no moment of fiat that calls the pack into being, and there is no formal end to the pack. Everything in a group of hunting wolves that resembles the human pack is only that, a semblance.

[16] Contrast an algorithm with a heuristic. Because the term algorithm has an inordinate significance in computer science, to offer a definition of it that could cover all of its senses is likely impossible. The point I’d emphasize about it is its character as a recipe; that is, a static set of instructions (iterative or not) that need only be followed exactly to arrive at the desired outcome. An important consequence of this is that there is no resort to causal explanation in this. Instead of resting on a claim, “Z because of W, X, and Y,” the claim is (assuming necessary and sufficient conditions) that given W, X, and Y, then Z will occur. One can see how this is like a recipe: follow the recipe correctly and with all of the right ingredients (W, X, and Y) and you will get the cake of Z. We might further want to wonder why this is the case, but this is not necessary for the production of Z.

Many, if not most, of the things we do in any given day are algorithmic. Someone with no knowledge whatsoever about cars, if given a key and adequate instructions for inserting it in the ignition and turning it, can start a car. &c. If I want to warm up something in the microwave, I zap it per usual, and if it isn’t warm enough after that, I zap it again. If I happen to understand to some degree the principle behind a microwave oven, this actually doesn’t help in the least for warming anything up. If I have noticed, however, that certain kinds of dishware or, in the case of frozen burritos, heating two at once can influence the length of cooking time, then I might try to adjust these different conditions with reference to the principle involved in microwave ovens. But whether these changes really obtain from the “how” of how microwave ovens cook, or whether I’ve simply added a detail to the algorithm for cooking two burritos simultaneously, would require a lot more research than I can do in my home.

Algorithmic activity presupposes the necessary and sufficient conditions to get from the beginning of a process to the end goal. It means, in practice, that someone else has already been there, and having written the recipe (the algorithm) down, I can achieve the same end if (1) the recipe/algorithm itself is adequately detailed, and (2) I accurately follow it. An algorithm also presupposes that any outcome besides the expected one is an error, either in the instructions or the performance.

But each day we also encounter any number of things for which no such clear and absolute algorithm or recipe is available. In these cases, we may not know the instructions or, if we have a sense of what the instructions might be, aren’t sure how to implement them. If your boss says, “I need you to get me to Atlanta for a conference on the ninth,” an experienced travel agent might not blink twice—or might immediately have some relevant questions (“do you want to fly or drive?”) that the inexperienced person might not stumble across at first. &c. It seems, in this case, that there should be a recipe, a simple, clear set of steps, but it might in fact take a number of stumbling starts and stops to finally work out getting the specific plane ticket, hotel reservation, &c.

This would be a heuristic approach. If the most famous algorithm is the recipe, the most famous heuristic is trial and error. But what is particularly different between an algorithm and a heuristic is that in the latter what exactly the goal would look like may not be apparent in advance. The Mormons wandering west did not know exactly where the promised land was, but they recognized it when they finally came across it. One might be fiddling around with a mathematical problem in various ways before finally arriving at a point one would call a solution. What is particularly important in this is that heuristics are necessarily iterative, and tend to involve repeating a limited set of behaviors. The robot vacuums that can vacuum an entire floor do not do so by knowing in advance what they have to vacuum; while they map what they have covered, the set of rules for behavior they follow is impressively small, but enough to get the whole floor cleaned.

Returning to the recipe example, it is clear that more than a merely mechanical following of the rules applies when human beings make cakes. The instruction “break three eggs into a bowl and whisk” presupposes that one has eggs, a bowl, a whisk, that you can break eggs, know what that means, and can whisk. If it turns out you have no eggs today, something more like a heuristic kicks in to “improvise” the acquisition of eggs otherwise assumed by the recipe. Computer algorithms rarely encounter this problem, except where user input is concerned—and then a whole host of programming techniques must be implemented to ensure that the human-provided input provides the algorithm with what it recognizes in order to proceed.
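
To make the recipe versus trial-and-error contrast concrete, here is a minimal sketch in Python; the Burrito object, the zap durations, and the “warm enough” threshold are my own illustrative assumptions, not anything from a cited source:

```python
class Burrito:
    def __init__(self, temperature=5.0):  # fresh from the freezer
        self.temperature = temperature

def reheat_algorithm(burrito):
    """Recipe: a static set of instructions followed exactly;
    any outcome besides the expected one counts as an error."""
    burrito.temperature += 60 * 0.5  # zap for sixty seconds, per the recipe
    return burrito

def reheat_heuristic(burrito, warm_enough=55.0):
    """Trial and error: repeat a limited behavior (a short zap) and ask
    'did that work?' until the desired end-state is recognized."""
    while burrito.temperature < warm_enough:
        burrito.temperature += 15 * 0.5  # one more short zap, then test again
    return burrito
```

Notice that the algorithm never checks its result, while the heuristic does nothing but check its result; that is the whole difference the footnote is after.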

[17]The readiest synonym for heuristics is trial and error, and this helps to indicate that heuristics are not merely random guesses or flying merely by the seat of one’s pants. As an example: imagine that we number all of the atoms in the universe, estimated to be between 10^78 and 10^82—we’ll use the high estimate, 10^82 (a 1 followed by 82 zeros) atoms—and then, like a magician’s volunteer from the crowd, pick one of those atoms. Now imagine a machine designed to guess the number of the atom chosen. One could start at any number and sequentially or randomly guess, “Is this it? Yes or no,” and it should be clear that, except by the most fantastic luck, it will probably take quite a few guesses to arrive at the correct one. If a fast computer can perform 20 trillion (2×10^13) calculations per second, then, with roughly 31,536,000 seconds per year, this machine can make 6.3072×10^20 guesses per year. Unfortunately, this means it would take approximately 1.5×10^61 years to run through all of the numbers. Since the universe is currently only some 1.37×10^10 years old, this makes for a disastrously infeasible search method. In fact, to run this machine for the entire age of the current universe wouldn’t even give you 50/50 odds of discovering the number. (This means that random guessing is an irrational way to proceed.) By contrast, a very simple heuristic machine may be designed which can determine the number of the atom in fewer than 275 guesses total—or, using the supercomputer again, in 0.00000000001375 seconds. This machine would proceed by asking, “Is the number in question in the top half of all of the numbers?” If yes, then the machine would eliminate the bottom half of the numbers and ask the same question again with respect only to the top half of the whole range of numbers; if not, then the machine would eliminate the top half of the numbers and proceed in a similar fashion with only the bottom half. In this way, the machine eliminates 50% of the potential guesses it might make with each iteration of the question, whereas the sequential machine eliminates only one possibility with each guess. The point of this example is to illustrate how a heuristic works, not the specific utility of this machine (although obviously one can see that this kind of approach must be extremely helpful for a massive search engine like Google). Here, the heuristic is simply the repeated question, “Is the number in the top half?” That’s the only “thing” this heuristic machine needs to do. By contrast, randomly guessing, “Is it this number?” is more like flying by the seat of one’s pants. Trial and error similarly has a limited set of “tricks” it applies, but the basic question it asks again and again is, “Did that work?” When searching for a specific number, the machine has the advantage that the outcome (the number sought) is actually known. With trial and error, one may not know in advance what the successful outcome will look like—all that is known may be the conditions that denote a successful outcome.
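
A minimal sketch of the halving machine just described, in Python; the function name and the sample “chosen atom” are illustrative assumptions of mine:

```python
def find_atom(chosen, total=10**82):
    """Repeatedly ask 'is the number in the top half?', discarding
    half of the remaining candidates with each question asked."""
    low, high = 0, total - 1
    questions = 0
    while low < high:
        mid = (low + high) // 2
        questions += 1
        if chosen > mid:       # yes: keep only the top half
            low = mid + 1
        else:                  # no: keep only the bottom half
            high = mid
    return low, questions

number, asked = find_atom(chosen=7 * 10**81)
print(number == 7 * 10**81, asked)  # True, about 273 questions
```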

[18] There is a tendency, from the use of cybernetic terminology, to refer to the end-state of a heuristic as (precisely) arriving at a given set of conditions or a state. If we had a wandering machine with its limited set of heuristic behaviors that was supposed to find the highest point in a landscape, the heuristic would stop once that state was achieved (see the sketch after this footnote). It seems somewhat pertinent to think of crowds arriving at a certain state (in the Mormon case, Utah—a pun) as a goal, as the concrete realization of an ideal condition rather than the ideal realization of a concrete condition in the pack. What is the goal of a feast, for instance? One might aspire to get shit-faced or to eat till you puke, but the specific route to getting to that “goal” has no recipe. The command “eat a lot” is, precisely, not an algorithmic instruction (just as “bake a cake” is not)—eat a lot of what? Or, if it is algorithmic, it so lacks specificity that more heuristic improvisation is called forth by it than algorithmic obedience. But even “add three cups of sugar to a bowl” may be argued as non-algorithmic finally, as demanding some amount of improvisation. In the computer, a given line of code can only (barring defects) be executed in one way, and the line of code itself is unambiguously complete. When issuing algorithmic instructions to human beings, by contrast, there is never any one-to-one correspondence from command to execution, much less a perfect specificity in the command itself. It’s assumed (sometimes with a great deal of tears on the part of some) that some amount of agency is going to get involved in obeying the command.
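
An illustrative sketch of that wandering machine; the one-dimensional landscape and the step rule are my own assumptions, not anything from the text:

```python
def highest_point(landscape, start):
    """Iterate one small behavior (step to a higher neighbor) and halt
    when the end-state is reached: no neighbor is any higher."""
    position = start
    while True:
        best = max((position - 1, position + 1), key=landscape)
        if landscape(best) <= landscape(position):
            return position  # arrived at the state; the heuristic stops
        position = best

# a toy landscape with a single peak at x = 10
print(highest_point(lambda x: -(x - 10) ** 2, start=0))  # 10
```

The machine never holds a recipe for reaching the summit; it only recognizes the state of having arrived, which is the sense of “end-state” used above.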

[19]This tidy distinction gets muddled by crisscrosses in the actual life of a crowd or a pack, of course, but principally because (as imaginers thinking about these things) we can easily imagine such crisscrossed cases. But this is the tricky point. If, with a pack, the human fiat that declares “we are a pack” when all the members at least appear to agree on the collective goal is unambiguous, this kind of collective agreement seems difficult if not actually prohibitive in the case of a crowd. Here, the human fiat tends to be an individual one; more precisely, it is easy to spot the moment where an individual says of a group of people, “Man, that’s a crowd.” Canetti’s work is riddled with this; he’s the one who is either (1) declaring some given group is a crowd or (2) citing another human being who made the same gesture. The alternative to this, also in Canetti, is to personify the crowd, because if we must not use the individual as the basis for declaring a crowd exists, then it seems we should leave it up to “the world” or some ostensibly objective fact. This is the untenable either/or of self/society all over again, and an expression of the nominalism/realism debate (hardly settled as of yet) upon which or within which our dominant discourse’s naïve realism rests.

[20]I have modified the text of the story for the context of my argument but also to incorporate a point made in Heinrich Zimmer’s book on Hindu philosophy that is not included in this story. In that book, the “second point” made by Shankara in Zimmer’s text is the first point made below, i.e., “the Elephant was unreal, but so was your presumption that there was a me, climbing a tree.” The entire original text from the site selected reads as follows:

Shankaracharya who summarized the doctrine of Advaita Vedanta 1500 years ago, declared that the World, as you know it, is Maya. ¶ In a fateful misinterpretation of this central Vedantic insight, subsequent interpreters have read this to mean that the World as you know it is somehow Illusory or False. The idea that the World is unreal permeates latter Indian philosophical and religious inquiry and at some level, all of the literature of the World’s Mystical Traditions. ¶ Shankaracharya’s declaration was intended, and is to be interpreted, like this: Everything is Illusion- including the assertion ‘Everything is Illusion!’ For ‘ Everything’ includes all assertions. Any and every response you may have to this assertion is itself part of the ‘ Illusion’, including any misconceived notion that it’s an illusory World. ¶ There are various folk stories meant to guard the unsuspecting reader against a forgetting of the self-referential twist. A skeptical Prince who was a pupil of Shankara decided to test his teacher. Once when the illustrious scholar was walking up the royal pathway to the Palace, the Prince unleashed an Elephant from the Army stables directly onto Shankara’s path. ¶ The Brahmin, not known for valor of this sort, proceeded to climb up the nearest tree. The Prince approached the teacher, bowed respectfully and inquired as to why he had climbed the tree, since according to his own teaching, all including the approaching Elephant, was illusion. ¶ ‘ Indeed’ said Shankara ‘ the Elephant was unreal, but so was your presumption that there was a me, climbing a tree.’ ¶ In Shankara’s claim, so is your reading of this book, your understanding of this story, and your interpretation of this line (see here).

[21] The observer is implicated in the observation, which science itself in the form of quantum physics assures us has demonstrated physical effects (a point whose strangeness doesn’t seem sufficiently grasped in its implications most of the time). But this does not usher in skeptical nihilism or postmodern unknowability—much less the next iteration of epistemological flailing about for some putatively solid ground—because this recognition by Shankara goes on to propose an alternative, whatever we think of its viability as an epistemology. To point out the Prince’s error of mistaking the unreal as real, even if we impute the same error to Shankara’s statement, is not the same as arguing that nothing is knowable. Such a knee-jerk reaction is the first step of a (rather desperate) attempt to rescue putatively objective knowledge from being exposed (yet again) as a variety of naïve realism (or as its more agnostic, reasonable-seeming cousin, scientific realism). The situation is like a bunch of (Protestant) consubstantialists all standing around and scoffing at the superstition and benightedness of the (Catholic) transubstantialists while praising Jesus for sparing them from such mythological ignorance.

[22] Previously, in the Jivaro example, the acquired and magically transformed shrunken head (or tsantsa) of an enemy household served exactly as a super-charged but dangerous force within the acquiring Jivaro group’s compound. My current description of the yin/yang above amounts to only a slight modification to and widening of that basic idea.

[23] Eliade, M. (1954). The myth of the eternal return (trans. Willard Trask). New York, N.Y.: Pantheon Books.

[24] Its presence, incidentally, is precisely what makes it possible for me to violate it. And that is always the opportunity and danger of the profane, as the Jivaro tsantsa shows, the profane object that is domesticated for sacred use. Let’s step back from such abstract-sounding talk a moment. For example, the US Declaration of Independence held that all human beings are endowed by their creator with the inalienable rights to life, liberty, and the pursuit of happiness and found no contradiction, then, in slavery. (The idea that the North opposed slavery and the South promoted it is a pious historical fraud—there were Abolitionists in the North and South, and the North benefited massively from the agricultural production of the South.) So, more precisely, it would be those individuals and industries in the North and South who did not depend essentially upon slavery that voiced opposition to it. Thus the author of the Declaration of Independence could keep slaves without feeling the unbearable hypocrisy of that. By this example, the profanity of slavery coexisted within the sacred domain of life, liberty, and the pursuit of happiness. It simultaneously proposed an opportunity and a danger, not only in the logistical details (how to manage the slaves themselves within the context of a country ostensibly dedicated to freedom) but also teleologically (in politics, religion, and existential life) in the proof of freedom that slavery demonstrated to non-slaves and the threat to the notion of freedom that slaves posed. Take an example from the ruler’s level. In the Bhagavad-Gītā, the “enemy king” is called in one translation of his name “he who holds the kingdom together”. His chief warrior is in one translation of his name called “dirty fighter”. By contrast, the hero-warrior Arjuna is known for his idealistic integrity. Reading this set-up in psychological terms, the enemy king who holds together the kingdom by dirty fighting is the ego, which must (attempt to) suppress idealistic integrity. This, because the Realpolitik of the world—so the argument goes—cannot actually honor its own lofty ideals, for the simple reason that there is no ultimate authority (between States) to adjudicate cases of cheating. In a moderated game, it is not that there are rules that makes the difference, but that cheating, if caught, has consequences. It was cheating when Hitler passed through Belgium into France, rather than attempting to cross the Maginot Line as France anticipated and expected, but there was no one to punish the German army for this cheating. More precisely, the only force capable of punishing him had just been soundly defeated, and it would take the combined might of several other armies to finally punish him. Between individuals, there may similarly be no recourse to a moderating authority, though the civil and criminal courts attempt to stand in for this. Thus, in the domain of Realpolitik, the king may espouse all the lofty ideals she wants, and truly mean it into the bargain, but will never hold completely out of the question any number of “dirty tricks” if it comes down to a circumstance of pure force versus pure force. States may play the world-game of obeying the rules with one another, and if Radovan Karadžić ultimately gets convicted (and executed) it will be a case of the game playing catch-up on an audacious cheater. One only uses States’ cheating as an occasion for outrage and rhetoric; genuine surprise is considered naïve.
On the personal level, this desire to be good gets challenged by the profanity of circumstances that discourage it. We find $100 on the sidewalk—we know we should try to return it, but the profanity of theft is available to us, and we tend to avail ourselves of it, coming up with whatever set of exculpating details we can to excuse ourselves. We may tell no one to reduce the chance that (1) they will say we were bad to take the money or, perhaps, worse, (2) say, “Oh, I know who lost that.” And this is no cynical diagnosis of human behavior—I disagree that most people are evil with spots of good; rather, most people desire to be good, but that desire creates a structure where evil, profanity, is held in reserve. It becomes something we can resort to, and so sometimes we do, however much to our chagrin, however much we hide it, however much we justify it to ourselves. Part of Jung’s approach is that this kind of one-sidedness (staying on one side of the yin/yang) is not only full of untenable agony—if you take it seriously at all—but also only half of human existence and therefore a distortion of ourselves at a fundamental level.

[25] It is an interesting mental exercise to realize how this bit about the seeming similarity of the colors cannot be avoided in trying to build a yin/yang symbol. Follow me here. If my half of the yin/yang is black with a white dot, then in the other half of the yin/yang, to express the lack of symmetry, I would have to make most of it not the color of my dot—so let it be orange. But what then is the color of the dot on that half? The idea of black and white contains the notion of opposites, so if the other side is orange, the dot might be blue (the “opposite” of orange). So, now we have a yin/yang that is black and white on one side, orange and blue on the other—the rather profound recognition in the original symbol (that in every good there is an admixture of evil and in every evil there is an admixture of good) has meaningfully disappeared. We have a different symbol, where two systems of “good and evil” merely sit side by side—a harmonious arrangement, not unlike aboriginal tribal arrangements with land—but this effectively only evades the “problem” the symbol describes in the first place. And so, once again, instead of switching colors, one sticks with just black and white (or orange and blue, it doesn’t matter that much), and so the white of “my” dot appears to look like the white of “your” non-dot. The fact that one can’t change the colors of the symbol without destroying the problem it so aptly expresses points to the difficulty of mistaking everything about “them” as profane, while overlooking everything that simply is “nonexistence”.

[26] I don’t want to upstage the exposition by some digression into the symbolism of the yin/yang, which (by the way) I am taking only in its visual sense, and not in the various claims made about the qualities of yin or yang normally associated with it. What I want to emphasize is that—if my culture is the black half with the white spot—then I understand everything in the other half of the symbol through the cultural meaning of the white spot in my culture. In my culture, the white spot is culturally constructed as the bounded Other (as the profane), but outside of my culture, “the whole world” is colored by its designation as profane. This is an overgeneralization, and it may be the spot of color from my culture in the other half of the yin/yang symbol that points to this. The Jivaro, for instance, do not make shrunken heads out of jaguars or praying mantises. One might say their unreality is of the nonexistent, rather than profane, type. Within a culture, the profane is the Other acknowledged as existent, whether in concrete examples or simply in distinction from what is. What is (seemingly) of the opposite color of my own culture in the other half of the symbol is actually the unknown that is not-yet-known. If I look out into the world and see some profane thing—say, for instance, what an author of the Iranian Avesta would call an ‘evil spirit’—then I simultaneously fail to see what a neighboring author of the Mahabharata would call a ‘deva’ (or holy spirit).

[27] I am pretending, for the time being, that culture can—at some level of simplicity—be monolithic. Obviously, in circumstances where different groups have contending ends, the sudden introduction by one side of the “scripture” or “revelation” that proves they are the superior and the Other should submit willingly as slaves will be subject (in all likelihood) to the skepticism noted above.

[28] Jung, C.G. (2010). Answer to Job (intr. Sonu Shamdasani, paperback Fiftieth Anniversary Edition). Reprinted from Jung, C.G. (1968). Psychology and religion: West and East (Vol. 11, Collected Works, 2nd ed., trans. R.F.C. Hull). Princeton: Princeton University Press, i–xvii, 1–121. The essay was first composed in 1952.

[29] A pack does things for the sake of values; a crowd values things for the sake of doing.

[30] Suttner, R. (2005). The character and formation of intellectuals within the ANC-led South African liberation movement in T. Mkandawire (ed.) African intellectuals: rethinking politics, language, gender and development, pp. 117–54. London: Zed.

[31] Gramsci, A. (1979). Letters from prison, introduced by L. Lawner, London, Melbourne, New York: Quartet. (footnote from Suttner 2005).

[32] Gramsci, A. (1971). Selections from the prison notebooks (Q. Hoare and G. Nowell Smith, eds.) London: Lawrence and Wishart (footnote from Suttner 2005).

[33] Crehan, K. (2002). Gramsci, culture and anthropology, London: Pluto Press (footnote from Suttner 2005).

[34] When Canetti cites frenzies, reports by eyewitnesses referring to “madness,” to blind stampedes, and (potentially favorable) allusions to ecstasy (as moments when one is possessed, or beside oneself), there is (besides a racialized orientalism) a suggestion of unconscious activity in at least some kinds of crowds. And yet elsewhere, he insists that panic is the disintegration of the crowd, because the individuals (all of the individuals now crowded together) become radically if necessarily self-centered in the midst of the panic, as they try to survive. But radical self-centeredness in the service of surviving and dissociative ecstasy in which I am literally unaware of my environment both seem distinctly not crowd-like anymore.

History will have to record that the greatest tragedy of this period of social transition was not the strident clamor of the bad people, but the appalling silence of the good people. (Martin Luther King Jr.)

Summary

Jung’s insight that YHVH is unconscious offers a paradigm shift in the usual view of the intolerant monotheism of Judeo-Christianity, toward his larger aim of arguing for the necessity of individuation in human beings. This point is well-taken, but the working out of it becomes grotesque or hobbled for being shackled to the Judeo-Christian myth-cycle of intolerant monotheism in the first place. The book illustrates the extent to which thoughtful, intelligent people will prostrate themselves before the evil of a banality, while remaining well worth reading on other grounds beyond this into the bargain.

Pre-Disclaimer

Last year in 2012, I set myself the task to read at least ten pages per day, and now I’m not sure if I kept up. I have the same task this year, and I’ve added that I will write a book reaction for each one that I finish (or give up on, if I stop). These reactions will not be Amazon-type reviews, with synopses, background research done on the author or the book itself, unless that strikes me as necessary or if the book inspired me to that when I read it. In general, these amount to assessments of in what ways I found the book helpful somehow.

Consequently, I may provide spoilers, may misunderstand books or get stuff wrong, or get off on a gratuitous tear about the thing in some way, &c. I may say stupid stuff, poorly informed stuff. There are some in the world who expect everyone to be omniscient and can’t be bothered to engage in a human dialogue toward figuring out how to make the world a better place. To the extent that each reaction I offer for a book is a here’s what I found helpful about this, then it is further up to us (you, me, us) to correct, refine, trash and start over, this or whatever it is we see as potentially helpful toward making the world a better place. If you can’t be bothered to take up your end of that bargain, that’s part of the problem to be solved.

A Reaction To: CG Jung (1952)[1] Answer to Job

I did not read this because it is considered Jung’s most controversial work. Over the past two years, I’ve been reading a lot of Jung,[2] and the commentary about this book made it tempting; also, it is comparatively short, and it seemed like it would make a break from the two big books I’m reading.[3]

For the record, I oppose intolerant monotheism wherever it occurs, and for those who happen to be oppressive of others, whether their embrace of some form of intolerant monotheism is tight or loose, I find that oppressiveness inextricably linked to that intolerant monotheism. You could parody this by saying: there is an important difference between people who are assholes and people who believe in god who are assholes. But it is not just about being an asshole. Many very pleasant people are hierarchically oppressive. Martin Luther King Jr. nails it when he says,

History will have to record that the greatest tragedy of this period of social transition was not the strident clamor of the bad people, but the appalling silence of the good people.

Ultimately, an appalling complacency marks the tut-tutting of those Judeo-Christian churches that people recognize as full of “good believers” who do nothing to put a halt to their rabid, fanatical, or Other-hating religious siblings—and this holds for all Christian churches that do not band together to correct or socially nullify every stripe of fundamentalism busily intruding into political life, as well as for non-fundamentalist synagogues that allow the ongoing slow genocide of Palestine to continue. Both consist of majorities that refuse to act; they’re in exactly the same moral wasteland as Jung describes for YHVH in Job, standing by while Job is tortured.

When (the Indian Swami) Satchidananda says that what we call god doesn’t matter, this doesn’t have the same resonance (coming from where it does) as when Jung writes, “Psychologically the God-concept includes every idea of the ultimate, of the first or last, of the highest or lowest. The name makes no difference” (footnote to ¶738).[4] The difference most readily makes itself visible in the habit of people—the people we most frequently encounter in our cultural milieu—capitalizing the ‘g’ in god. A particularly gross example of this is the silly typographical resort “G-d”.

Like someone of mixed heritage, who balks at the simplistic dichotomy “are you black or white,” I balk at the simplistic dichotomy “do you believe in god or not?”

In the first place, the chauvinism evident in all iterations of that question from believers that I’ve encountered—because “god” in that sentence only ever means the biblical god, however poorly or deeply imagined—means that my answer is a definitive no and that everything Satanic[5] in me rears up in opposition to the question. There is only one way that the scripture of intolerant monotheism, particularly of the Judeo-Christian type, can be understood in any legitimately spiritual sense: and that is as an anti-text (written by Satan, if you please) that provides evidence only for what must never, under any circumstances, be done if one desires to achieve salvation.

Besides being often so bone-headed, inept, and banal, it might be called the quintessence of such anti-texts—but it is, rather, the self-torment of its most devoted exponents that has created an impression of profundity about it. Just as seventeen hundred years of fantastically attentive pondering and wondering and trying to make sense of things have turned the remarkably unimpressive Tabula Smaragdina into something regarded as profound,[6] so have the tortured efforts of devotees to pierce the banal evil of Judeo-Christian scripture made more of it than it actually purports.[7]

Jung writes, a bit cantankerously:

I have been asked so often whether I believe in the existence of God or not that I am somewhat concerned lest I be taken for an adherent of “psychologism”[8] far more commonly than I suspect. What most people overlook or seem unable to understand is the fact that I regard the psyche as real. They believe only in physical facts, and must consequently come to the conclusion that either the uranium itself or the laboratory equipment created the atom bomb. That is no less absurd than the assumption that a non-real psyche is responsible for it. God is an obvious psychic and non-physical fact, i.e., a fact that can be established psychically but not physically (¶751, italics in original).

We see his capital ‘g’ hard at work, but the point he is making hinges on a persistent difficulty or confusion about what could or should be considered real. People have experiences of that which can only be known indirectly (through experience) and not directly in-itself. We have no shortage of terms for the “thing” that is being experienced in these experiences of that which can only be known indirectly (through experience) but not directly in itself. These names include the numinous, the transcendent, the divine, god (or God), the Supreme Personality of Godhead, the Other, every name of the divine ever concocted by human imagination, the psyche, the self, the atman—or, as Brahmanic branches of Indian philosophy and religion put it so cleverly and succinctly: “that”. With this in mind, there can be no question of “believing in” because the experience itself is already self-evident. You look at these words and you have an experience of seeing them: this is incontestable, in fact literally undeniable. What is equally incontestable is that whether these words “actually” exist (or what they “really” look like or are) cannot be determined. We can assume—we frequently do, and perhaps in many cases must—that what we experience corresponds in some way to what is, but this can never be anything other than a warrantless (if convenient) assumption.

So the question, does god exist—does the numinous, the transcendent, the divine, god (or God), the Supreme Personality of Godhead, the Other, every name of the divine ever concocted by human imagination, the psyche, the self, the atman—does that exist, cannot be answered except by declaring yes or no and living with the consequences of that declaration. Meanwhile, the experience of the numinous, of that, happens all the time to a greater or lesser degree.

For Jung, the “God-concept, as the idea of an all-embracing totality, also includes the unconscious, and hence, in contrast to the consciousness, it includes the objective psyche, which so often frustrates the will and intentions of the conscious mind” (footnote 1, ¶740). So the ego, as that conscious part of myself I’m inclined to refer to as “I”, stands in relation to the unconscious—and both of these parts taken together comprise the objective psyche, the self or Self.[9] It is in this sense that Jung means the psyche is real—as the ground of the undeniable experiences we have in the face of manifested unconscious materials.[10]

What is illegitimate in the question “does god—does the numinous, the transcendent, the divine, god (or God), the Supreme Personality of Godhead, the Other, every name of the divine ever concocted by human imagination, the psyche, the self, the atman—exist” is when it gets directed to actuality. The experience of that is (even outside of a religious or spiritual context) undeniable; our explanations about the basis for that experience, however, are not subject to factual establishment. In my personal experience, and somewhat to my surprise, I find that reading passages about or by Kṛṣṇa has a distinct kind of effect on me; such passages activate a certain kind of experience that generally runs by the name of enlightenment (not full enlightenment, but an enlightening, both in the sense of an increase in consciousness and also in the sense of “making less heavy” or uplifting). Were I to pursue an eastern spiritual path with any diligence, however, it would not be as a bhakti yogi of the type described by Kṛṣṇa or by his later commentators (particularly A.C. Bhaktivedanta Swami Prabhupāda). So, when I say that I have this experience of Kṛṣṇa, an only too familiar interlocutor might insist I’m saying he’s “real” or (equally) that he’s “just an idea”. Both of these positions propose misprisions of experience. What is real is the experience; explanations for how that experience arises that insist on some “objective” fantasy or some “subjective” fact not only go wide of the mark but also set the stage for the sort of moral atrocities intolerant monotheism continues to practice at this very moment.[11]

Someone at one point—someone who considered themselves a Christian, if memory serves—first suggested that the Judeo-Christian scripture provides illustrations only of what should never be done if you want to achieve salvation. The thought at the time was pleasing, but it must have been sitting around in the soil of my soul, germinating. Jung’s Answer to Job watered it. Much as I find the prefix post- to be garishly and implausibly overused—most frequently, it seems, solely for commodity purposes in the economic oligarchy of the intellectual market—I suppose I’d have to call myself a post-atheist. At least, this captures my frequent experience with atheists (especially new atheists) who are excited to discover there are contradictions in Judeo-Christian scripture, that St. Anselm’s ontological proof of god is refuted in Wikipedia, or any number of other appalling ironies that Judeo-Christianity exhibits as a part of the dominant discourse. Partly this is just a function of being older, of having thought about it for longer, so things that at one time seemed epochal to me as well have been shown, over time, to have their own problems and dead-ends. Thus, I tend to avoid a certain kind of argument with Judeo-Christians, since it has become clear that such arguments primarily serve to strengthen the irrationality of their faith (as they “square off” with the temptation of my Satanic “atheism”). Sometimes, when I can turn the argument into an occasion to strengthen my commitment to a world without intolerant monotheism, then I’ll let myself continue. In general, I’d rather have a dialogue, but my experience is that Judeo-Christian adherents are singularly unwilling if not incapable of code-switching to what they parody as “my atheism” for the purpose of any such dialogue. Ultimately, as much as I despise much of what Richard Dawkins has to say (both in and out of the religious arena),[12] the remark he made about evolution may apply generally to Judeo-Christian believers:[13] “if I call you ignorant, then I am doing you the courtesy of not calling you idiotic or delusional.” It is clear from this remark that he is referring only to people who espouse ignorance, idiocy, or delusion first and foremost, not for politically interested ends.

This is all to say, there is little in a discourse that critiques intolerant monotheism, especially its Judeo-Christian variety, that perks my ears anymore.

Atheists tend to trot out unserious or trite arguments that need correction just as often as their monotheistic counterparts. It is true that ostensibly professional theologians, who might just be semi-erudite armchair theologians (Protestant or Jewish; I don’t seem to encounter such scuffles with Catholics) for all one can tell on the Internet, may more often be accused of exhibiting an almost Calvinist degree of shittiness toward their disputants. Anyone who has spent too much time trying to have a debate on the Internet may well be very familiar with this particular breed of virtual opponent, marked by the habitual but very selective type of hair-splitting they engage in (usually in your arguments rather than theirs), a very generous use of goal-post moving (on behalf of their argument and almost never with any allowance for the same on your part), a frequent tone of supercilious scoffing (which would be called pretentious if its information were better, which seems to be borrowed from CS Lewis’ terrible example, and which will, in a corollary to Godwin’s Law, eventually devolve into ad hominem), a curious form of amnesia (which seems to pretend that what they said two posts ago can’t be accessed and used to hold them accountable), and a constitutional inability to recognize their intellectual hypocrisy (even as every sentence belies it and even when it is pointed out to them in detail). Part of this arises from 1,500 years of providing spin, fabrications, and threats as part of defending scripture from that form of slander or libel for which there is no defense in court: the truth. The quality of their argument is a bait-and-switch, focusing for example on the example rather than the argument the example is part of, and so forth. The whole framing of the discussion is already in their favor anyway, the sex, lies, and videotape of Judeo-Christianity being taken as the given that needs no proof—like the apologist who found proof for Christianity in the bible, or the orthocrat who said that the so-called documentary hypothesis[14] (for the composition of the Torah) was anti-Semitic because it denied the sanctity of the basis for the sense of historical reality of the Jewish people—an argument in favor of keeping the Jewish people deluded about their actual history, in other words. Still, there is nowhere near yet the amount of ink spilled on behalf of atheism to get lost in this variety of intellectual drudgery—there isn’t the kind of body of work, motivated sometimes by outright forgery and lying (Christian re the apocrypha or Jewish re the Torah SheBealpeh or the Torah SheBichtav), to generate a secondary and tertiary body of work (commentaries on the Church Fathers or the Gemara, Mishnah, Talmud, etc.) based on those differences or lies that then support the forgeries or half-truths. As far as the scripture being false, it’s the way its adherents have falsified it that is still the interesting point—along with the defenses offered in favor of lying about these matters. And I’m not pretending this is limited to the religious sphere—one can encounter these same kinds of ideologues in the field of politics as well, because whether in religion or politics, we are dealing with propagandists.

Of non-Eastern texts that immediately come to mind as actually prompting something like an advance in my understanding and rejection of intolerant monotheism, Andrew Lloyd Webber and Tim Rice’s (1971) Jesus Christ Superstar, which showed that Judas shouldn’t be seen as evil, Borges’ (1944) “Three Versions of Judas” (from Ficciones), which suggested that Judas is the real savior, not Jesus, Lem’s (1957)[15] “The Twenty-First Voyage of Ijon Tichy” (from The Star Diaries), which presents a very cogent futurology of religious faith as Lem describes it, and now Jung’s (1952) “Answer to Job” have all proven paradigm shifting as far as my view of Judeo-Christianity is concerned.[16]

The most basic shift that Jung proposes is that YHVH is unconscious, often repeating that the deity is not for that reason non-omniscient but only that it remains a mystery to us human observers why YHVH forgets or refuses to take “counsel with his omniscience” (¶579).

The naïve assumption that the creator of the world is a conscious being must be regarded as a disastrous prejudice which later gave rise to the most incredible dislocations of logic. For example, the nonsensical doctrine of the privatio boni[17] would never have been necessary had one not had to assume in advance that it is impossible for the consciousness of a good God to produce evil deeds. Divine unconsciousness and lack of reflection, on the other hand, enable us to form a conception of God which puts his actions beyond moral judgment and allows no conflict to arise between goodness and beastliness (footnote 12, ¶600).[18]

Amongst other things—and I feel like what I will need to do in separate posts is provide a Commentary to the “Answer to Job” to address the relevant details—the most abiding thing that comes out of all of this is the mystery of why Jung remains (and by extension why any thoughtful person remains) wedded to the darkness that is intolerant monotheism.[19] No one wants to discover that everything she knows is false—for some, that will be the most liberating moment imaginable, however terrifying, promising, dooming, damning, salvific, or enlightening.[20]

Jung’s “Answer to Job” shows what kind of backflips one must do intellectually (by intellectually I mean both rationally and irrationally) to stay on board with the Judeo-Christian myth. Jung holds up to our eyes the numinous and weird (and therefore fascinating and compelling) notion that YHVH is omniscient and unconscious—this is quite a breakthrough.[21] From this beginning, the small, frail, but self-reflecting ego stands in relation to the titanic, moody, touchy ocean of the unconscious, which is a phenomenon, not a human being at all. It is amoral, capricious, utterly indifferent to its own laws or our dignity. And yet, confronted (master to slave style) by the slave of the ego, the possibility of self-reflection is at least suggested to it. And, out of a facile and nonsensical sense of guilt for what it has done (this part of Jung’s argument, while solid enough, is not necessary), it decides to incarnate in order to save humankind in the ego.[22] This Incarnation cannot be the last—Jung says Christ says so, and the promise of the Paraclete (the Holy Ghost) as the ongoing, in-dwelling godhood within all human-born creatures becomes the basis for our individuation—our purpose in life, even (I think Jung means) YHVH’s will for us.

There are other twists and turns to be accounted for in Jung’s argument, but what matters here is the contrast this forms with the Eastern example. Without a doubt, if there is a central tenet in Eastern philosophy (if not, in fact, the whole human world) it is the progressive reincarnation of an eternal personality, whether this is described (1) as a seemingly or actually separated part of an ineffable, transcendental reality (Brahman) that will eventually realize its real nonseparation and realize it is Brahman or be reabsorbed or (2) as a literally separated part of the Supreme Personality of Godhead that has existed apart from that godhead for all time and will never cease to exist. In this context, material reincarnation becomes a problem to be solved—following one’s dharma leads to progressively improved incarnations and vice versa.

So, here (in an Eastern guise) is individuation, which Jung has advocated diligently since at least 1921. Rather surprisingly, then, he insists in this text that individuation will occur whether the individual attempts it or not. He then goes on to stress that conscious individuation is preferable, and he even cites a bit from the apocrypha that to do a thing knowingly makes one blessed, but to do the thing unknowingly makes one cursed. So how this all squares with everything is not so clear, but the description is absolutely analogous to eastern reincarnation. To not know one’s dharma would almost invariably curse one, i.e., by not following one’s dharma, negative karma and thus inauspicious reincarnations would likely result. Nonetheless, conscious or not, one will reincarnate, and the career of one’s reincarnations over the eternities will comprise the process of one’s individuation. Moreover, ignorance (or an only partial knowledge) being the most fundamental problem to be solved in Eastern philosophy and religion, to take conscious charge of your action in this life denotes taking charge of one’s individuation into the future. All of this exact analogy arrives without any recourse whatsoever to a psychotic, blood-thirsty deity, to a vision of the human creature as debased and unsalvageable save through the divine intervention of the psychotic, blood-thirsty deity’s son or self-incarnation, or to the comforter (the Paraclete) who makes individuation possible because, were she not present, it could not happen.

So all this ghastly literalization of worthlessness (salvation only by grace, original sin, the apotheosis of blind obedience as the most material demonstration of faith)—all of which leaves human beings as human beings only when it is ignored, thrown aside, or merely played at for the sake of social propriety—is utterly unnecessary, as the Eastern paradigm shows.

We like to compliment the western mind for the invention of Science, but since the “West” is destroying the world and placing billions of people in abject poverty, let’s not get ahead of ourselves.

Ultimately, one may easily imagine Science, railing at Job just as YHVH did: who are you to question me and call into question my works? Can you split the atom? Can you eradicate disease? Can you hurl a stone from earth to a moon of Saturn and have it return? Do you comprehend the ant? Who are you to question me? Who is this who darkens counsel by words without insight?

Science is unconscious, though Hiroshima and Nagasaki did (like the spectacle of Job’s destruction) prompt some moments of dim, almost-dawning realization, a queasy sense of something being wrong. Jung stresses that in the book of Enoch it is fallen angels who teach all of the sciences to humankind. And in his theology of the unconscious, a most troubling moral element is Satan’s evasion of punishment. Science, the unconscious god, fails to take counsel from its omniscience, and immediately accepts the bet from Satan that science’s most loyal servant can be shattered by enough brutality. Of all the ethical nightmares proposed by this, the lingering one is why, once the bet is lost, Satan is not taken to task. In fact, Science goes out of its way to hide Satan off-stage, as if there’s a dim realization of having been taken advantage of, but all the thundering and accusation stays directed at Job; Jung suggests that the spectacle is secretly for Satan’s benefit, though why Science doesn’t just de-throne Prometheus in the first place remains a mystery. Even in the book of Revelation, this shit-starter (Satan) isn’t destroyed completely, but is merely imprisoned in the earth for all eternity—but only after the one-sided love of salvific science’s Son (the Christ) has returned in apocalyptic form and destroyed billions more than Science did with the Flood, leaving only 144,000 (or 0.002% of the world’s population by the current count).

Even the Eastern end of the world, Lord Śiva’s dance or Kali’s annihilation, doesn’t have any of the prosaic or banal apocalypse about it, and is just one of a limitless number that have occurred and will occur again and again. There are no whores of Babylon, no blood-soaked lambs, none of the Michael Bay sensibilities that are supposed to add rhetorical force by horror to the grotesque inversion of the Prince of Peace. It’s a bunch of disingenuous folderol, finally—stuff and bother for no apparent reason. Once again, this is little more than an ill-advised literalization of unconscious material, both as a text and as an eschatology—the Science that will save us turns divinely appointed destroyer, the divine appointment part being the most alarming because it means actual people will feel obliged to see the prophecy come true by bringing it about.

Again, it seems this all warrants a closer engagement, which I should inaugurate in its own blog series. For now, this is enough as a reaction to the text. As a turning of the myth toward the theme of individuation, there’s much to recommend this; the shackling of this process to such an ignorant myth, however, makes the template of it (as an analog of individuation) in need of a different kind of apologetics than Jung lays down here. Jung’s recognition of Yahweh as unconscious is a breakthrough; I would add that he is petulantly adolescent (or childish) in Exodus—and never grows beyond that. This itself makes the Judeo-Christian template one of arrested development and an only partial vision of the individuation process.


[1] Jung, C.G. (2010). Answer to Job (intr. Sonu Shamdasani, paperback Fiftieth Anniversary Edition). Reprinted from Jung, C.G. (1968). Psychology and religion: West and East (Vol. 11, Collected Works, 2nd ed., trans. R.F.C. Hull). Princeton: Princeton University Press, i–xvii, 1–121. The essay was first composed in 1952.

[2] Psychological Types, or the Psychology of Individuation (1921), Archetypes and the Collective Unconscious (1934–54), Psychology and Alchemy (1944), Two Essays on Analytical Psychology (1917, 1928), Alchemical Studies (1967), working my way through Mysterium Coniunctionis: An Inquiry into the Separation and Synthesis of Psychic Opposites in Alchemy (1956), will be reading Symbols of Transformation (1952) next, and then likely Aion: Researches into the Phenomenology of the Self (1951).

[3] Jung’s (1956) Mysterium Coniunctionis and Spencer and Gillen’s (1904) Native tribes of Central Australia, both of which can get to be a bit of a grind.

[4] References in this post are to the numbered paragraphs in Jung’s work, so that the relevant passage might be found in any English edition of his Collected Works.

[5] I use the word Satanic because that’s what’s appropriate to the person asking the question. I’d rather use the word “Luciferian” but the distinction is typically lost on such a listener.

[6] In Jung’s Mysterium Coniunctionis (if memory serves, but it might be Psychology and Alchemy), he cites the curious historical case of a demonstrated hoax related to the Tabula Smaragdina. This hoax purported to be a commentary upon or a reading of the Tabula Smaragdina (or perhaps an “ancient document” related to it—not unlike Ossian’s famous hoax), yet historical documentation makes it decidedly a spoof. All the same, Jung dissects and analyzes the reading this hoax purports, and extracts no shortage of psychological and alchemical insight. In effect, his approach suggests that the guy may have been kidding, but got it right all the same. There are different sorts of reasons that justify Jung “taking the text seriously” (rather than as the hoax it “is”), but it is still a (rather extended) passage well illustrative of how a curious and inquiring mind can do all of the intellectual work that is lacking in the original. The difference between the text that genuinely reflects a numinous or evocative glimpse of something that can only be expressed in symbols and the text that is little more than sloppy, disingenuous, or simply fraudulent word salad may not always be so clear and simple to determine.

[7] My basic sense of it is this: it represents a borrowed and misunderstood older tradition; in particular, it has taken a symbolic original and literalized it, just as later European alchemists largely literalized whatever strands of alchemy preexisted them. I say whatever strands because from Jung’s studies he refers back to Greek alchemy, which, given the term itself, cannot be Greek in origin. The northern civilizations love to pretend that everything comes from Greece, so there’s no reason to suppose yet that the Greeks themselves had not already literalized Arabic and Egyptian traditions. In India, in various strands of Buddhism and Tantrism, there were similar alchemical literalizations. The issue is not limited to “the west”. In the biblical case, Lerner (1986) assures us that intolerant monotheism was invented with the cults of Ezra and Nehemiah following the destruction of the Northern Kingdom, and it would have been during this first exile (perhaps) that the (for want of a better term) aspects of mystery religions got first mixed into and then literalized. In psychological matters, it seems that little is so disastrous as to mistake the symbolically represented contents of the unconscious as external “facts”. In Moore’s (1992)** Care of the Soul, he provides the instructive example of a man married for 35 years who finds himself overwhelmed by a desire to have an affair with his secretary. The agony if he does and the agony while he doesn’t put him in a dilemma that takes him into therapy where, through active imagination, it becomes clear that this desire to pork his secretary is indicating, rather, his desire to be with his wife again as they were when they were younger. Realizing this, he re-spiced up his relationship with his wife, but the obvious point is how utterly destructive in his life it would have been had he literalized his symbolic impulse (by having an affair with his secretary). We encounter the wreckage and ruins of this kind of literalization every day. ** Moore, T. (1992). Care of the soul: a guide for cultivating depth and sacredness in everyday life. New York, N.Y.: HarperCollins.

[8] “Psychologism represents a still primitive mode of magical thinking, with the help of which one hopes to conjure the reality of the soul out of existence, after the manner of the ‘Proktophantasmist’ in Faust:

Are you still here? Nay, it’s a thing unheard.

Vanish at once! We’ve said the enlightening word” (¶750).

[9] (I fret that I am incorrectly splitting this hair here but, if so, please correct me.) The particularly salient point is that the unconscious, which we can only know indirectly through experience, arrives in our consciousness like an Other. It is our experience and it comes from us but from nowhere we can explain, as when a sudden urge to be cruel toward a loved one or affectionate toward a complete stranger makes its presence felt. Civilization may, in some measure, consist in choking that off, but largely because literalizing such impulses is a frequent root cause of disaster (see the portion on literalization in note 7).

[10] This may sound mysterious, but primarily only because we habitually take the contents of our consciousness, i.e., what we are aware of, to be evidence of the world “outside” of us. If there is going to be any “perception of the world outside of us,” then it will be through the mediation of the objective psyche, i.e., that totality of Self that includes both our conscious and unconscious. A metaphor may help. We understand that our human eyes perceive an extremely narrow range of the electromagnetic spectrum, as little as 0.0000000000035%. If that is anywhere like the percentage of “the world outside ourselves” that consciousness registers, then the overwhelming amount of “direct perception” is in the unconscious, which we can only know indirectly through experience.
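
For what it’s worth, that percentage can be sanity-checked with back-of-envelope arithmetic, though the result depends almost entirely on where one caps the spectrum. A minimal sketch in Python, assuming a visible band of roughly 430–770 THz and a hypothetical upper cutoff near the frequencies of the most energetic gamma rays (both bounds are my assumptions, not anything established by the figure’s source):

```python
# Back-of-envelope check of the footnote's figure. The visible band is well
# established; the "full spectrum" cutoff below is an assumption, and the
# resulting percentage is largely an artifact of that choice.

VISIBLE_LOW_HZ = 4.3e14    # red edge of human vision, ~430 THz
VISIBLE_HIGH_HZ = 7.7e14   # violet edge, ~770 THz
SPECTRUM_TOP_HZ = 1e28     # assumed cutoff: ultra-high-energy gamma rays

visible_bandwidth = VISIBLE_HIGH_HZ - VISIBLE_LOW_HZ
fraction = visible_bandwidth / SPECTRUM_TOP_HZ

print(f"visible fraction: {fraction:.2e} ({fraction:.13%})")
# -> visible fraction: 3.40e-14 (0.0000000000034%)
```

With that cutoff, the linear-scale fraction lands on the same order as the figure quoted above; a lower cutoff (say, 10^25 Hz) inflates it a thousandfold, so the number illustrates the metaphor rather than states a canonical fact.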

[11] One must be careful here. That I have an experience logically argues that there must be “something” that generated that experience. To deny this means, in the philosophy of the dominant discourse, to veer off into solipsism or skepticism; to assert this sets the stage for another round of naïve realism, however cleverly argued. So, instead, one must not be deluded into thinking that anything we might say about that could ever be anything other than more mediated knowledge (through experience). So this means in practice that we must take a rigorously agnostic attitude toward whatever that “something” is. We can use a word like “something” (or, per Hinduism, that) to indicate what we intend to point to by using the word, but it should be obvious that the words “that” or “something” are functionally the same as “god” or whatnot. Or even to insist that there is a “something” (about which we must perpetually remain ignorant) is simply a more honest version of the pious lie that “something” exists or does not. Jung’s “unconscious” is exactly this “something” or the Brahmanic “that”, and the question becomes what kind of intellectual work we can get done (can we get more intellectual work done) by assuming this “agnostic” view of that, as opposed to the “faith” of objectivity (something exists) or the “atheism” of solipsism (something doesn’t exist). Again, we only know that through our experience of it, so whether we say that exists, that doesn’t exist, or we cannot determine whether that exists or not, all involve knowledge claims: I know I know, I know there’s nothing to know, I know we can’t know. Or: experience allows me to claim I know, experience allows me to claim I don’t know, experience allows me to claim I can neither know nor not know.

[12] Will he eventually become the analog of Chomsky—pivoting out of the ashes of his original field of research to become an intellectual celebrity in an unrelated field?

[13] I feel it should be made clear that by “Judeo-Christian” I mean specifically Judaism and Christianity in all of their manifestations as intolerant monotheism, not excluding the churches and synagogues who practice the appalling silence of half-acting like their monotheism isn’t “really” intolerant. The refusal to eradicate the “bad Jews” and “bad Christians” arises ultimately from the complacency that, after all, “our faith is the right one, and those false believers will be weeded out by God” (with a capital ‘g’). And I can add that any intolerant monotheism in Islam, and the distinction (within adherents to Islam) between “good Muslims” and “bad Muslims”, gets exactly the same critique, with one caveat: that Judeo-Christianity almost never ceases to go out of its way to misrepresent Islam so grotesquely that it seems almost impossible to have any grounded sense of what Islam really purports, even in some of its widest and least controversial points. That makes talking about it an argument from ignorance, so I won’t.

[14] This phrase “documentary hypothesis” is a good example of the dominant framing of Judeo-Christianity. The documentary hypothesis takes the radical position that the Torah was written down at one point. The alternative position to the documentary hypothesis (the non-documentary hypothesis, one supposes) is that the text was inspired by the divine, has no errors or emendations in it, and certainly not that the first chapter of Genesis was historically composed later than the second chapter of Genesis.

[15] “And I consider the 21st Journey of Ijon Tichy (with the robot-monks) to be one of my most ‘teleologically’ serious works, one that I personally attach great importance to. It is, in a way, a very farsighted ‘futurology of religious faith’ set in a heyday of technologies that allow thinking creatures to accomplish absolutely everything that Nature can accomplish and, furthermore, everything that is potentially possible, but which Nature does not realize directly. (Nature does not directly realize typewriters.) I always wondered why the critics never paid much attention or gave much interpretation to that work” (from here).

[16] Of course, reading the bible would have been paradigm shifting, but I read it late, already with malice aforethought. The Bhagavad-Gītā and, even more so ultimately, the Śiva-Sūtrās offered a metaphysical view of the world where the darkness of the biblical text is exposed utterly. I also at one point worked out (in light of the Śiva-Sūtrās) my own theodicy.

[17] This doctrine is the notion that evil is merely the absence of good as opposed to a metaphysical reality in and of itself. This particular doctrine especially annoyed Jung. Going into all of this issue now is prohibitive; something of the issue may be read about here.

[18] For this reason:

Truly, Yahweh can do all things and permits himself all things without batting an eyelid. With brazen countenance he can project his shadow side and remain unconscious at man’s expense. He can boast of his superior power and enact laws which mean less than air to him. Murder and manslaughter are merely bagatelles, and if the mood takes him he can play the feudal grand seigneur and generally recompense his bondslave for the havoc wrought in his wheat-fields. “So you have lost your sons and daughters? No harm done, I will give you new and better ones” (¶597).

[19] Somewhere, I can’t remember where (it is in the alchemical texts somewhere), he remarks that a part of his sense of epiphany in discovering European alchemy was how it gave a European (and not just a borrowed, from the East) footing to certain of his ideas, especially the collective unconscious. In discussing this, he specifically defends “Western” approaches to Wisdom, admittedly thousands of years behind the East, but nonetheless having a particular topos and vibe that was, in its fundamental differences from the East, worth considering. In other texts, it may often seem that Jung, who was obviously very conversant in Eastern documents, tries to avoid saying so, one assumes in order not to alienate his European readers, who are probably already suspicious of his “oriental” tendencies in the first place. In this text, for instance, he regularly uses bardo (བར་དོ) or pleroma to refer to the infinite, unborn, intermediate state of the soul; elsewhere, he occasionally gives evidence of an Eastern referent even when using a Western term, particularly when he uses the word “Self”. But in this peculiar defense mentioned above, it seems more like he really wants to let Western epistemology stand on its own two doddering, stick-like legs. Without needing to resort to an “Eastern turn,” one can appreciate the realization, “Our whole epistemology leaves us trapped, quite helplessly,” or that it’s all been a fantastic mistake. One that has generated a lot of art and such, if only because one can redeem suffering through art, but the East, which doesn’t lack for its own suffering as well, has managed great art and wisdom without arguing that the essential fact of the human being is depraved worthlessness capable of salvation only by a heroic deity (or, in this case, an intercessor). Whatever the case, why Jung feels that he can’t jettison the paradigm altogether and simply get on with a less losing proposition for him personally is a biographical mystery.

[20] A long time ago, thanks to an association at work, I allowed myself to be interrogated about my atheism. And at some point I said something like, “The more I reject the notion of God, the more liberated I become,” and my co-worker quipped back with something like, “How do you know that’s not what’s bringing you to God?” And I said, “Exactly,” even though by god he most assuredly meant the Christian deity and I meant nothing of the sort. How delightful it would be to have every vestige of the darkness that is Judeo-Christianity erased from my soul.

[21] My own resort, which I won’t give the whole argumentative apparatus for, boils down to this: life is an answer to a question posed to god; what was your question?

[22] Jung describes all of this in a theological framework, so that it is often not clear if one should take it in an “external” or “internal” sense. This is part of the numinousness of the essay, arguably, but my impression is that it is more Jung allowing himself (at long last, finally) to be a theologian—he’d always rigorously insisted in his writings (or at least said so) that going beyond the purview of the phenomena of consciousness would be out of bounds for a psychologist. Some of his grumpiest, most vehement passages burn with this language—in particular there is a great harangue where he states that the most passionate, motivated aim of his existence is to destroy all metaphysics as it gets involved in the work of the psychologist. He’s not so punctilious here, and this may be the enantiodromia after all. But what I take him to mean, even if his subject gets the better of him (apparently he’d been sick when he began, wrote the whole in one swoop, then fell sick again), is that, like the alchemists who projected their psychological encounter with the unconscious onto matter, the composers of the biblical texts—not the later redactors who selected, edited, and bowdlerized them—similarly projected their unconscious material onto the divine, &c. Jung dismantling Job is doing this as well, which also shows once again (in the book of Job specifically) how depraved, how disgusting, and how much to be avoided is what occurs when unconscious material is taken to be a literal reality. If God is man’s one inexpiable fault, as de Sade insists, then Job is YHVH’s.

Abstract

Culture, as the set of constraints on human behavior in a society, subject to change by that society, manifests as values made visible. Moralities/criminalities comprise the judgments arising from culture; ritual (religious or otherwise) orients to the metaphysics of culture. Culture will determine what (kind of) packs form; society will be the substrate out of which they will be formed. And the outcomes of packs then become inputs to society.

Introduction & Disclaimer

This is the twenty-fifth entry (happy anniversary) in a series that addresses, section by section over the course of a year+, Canetti’s Crowds and Power.[1] This post addresses no specific section of the book so far (i.e., “The Crowd”, “The Pack” and “The Pack and Religion”, or any of the subsections), but is a coming up for air or a check-in on where things stand over the course of the author’s 168 pages so far. It covers “The Pack” and “The Pack and Religion”.

I’ve banished my normal disclaimer into an endnote,[2] because this whole post is an attempt to come to terms with what has gone before so far. As a reflection on a reflection, I should cite and document whatever blog-posts I’ve written and/or recite the relevant passages in Canetti, but the point of this is more to take stock of the past (work done so far) in order to proceed into the future. I will admit, I could have done something like this at the end of the first section (“The Crowd”) and, in fact, should have; equally, I could have or should have done a check-in at the end of the second section (“The Pack”). I did not for “The Crowd,” because at the end of that section I felt like I had actually wasted my time in the effort to find something of lasting value in Canetti’s exposition. I’m glad to have persevered, not necessarily because by the end of “The Pack” I’d started to find Canetti’s exposition helpful but because it led me to Spencer and Gillen’s (1904) groundbreaking work in Australia. Their work provided a jumping-off point and contrast for finding sometimes helpful alternative constructions to those offered by Canetti.

But, whatever personal/intellectual teleology I might get out of Canetti, in terms of his argument generally in his book, it seemed essential at this moment to stop and take stock. The reason is: Canetti began by attempting to characterize the crowd, and then took a step back to characterize the pack (as the earlier, smaller, not-quite-crowd), and then in “the Pack and Religion” deceived himself that he’d made a case for how the transmutation of packs provides the basis for world faiths. The next section “The Crowd in History” betokens an obvious return to the crowd, which at this point has no material relationship to the pack. And so, if there is going to be any clarity (for you, for me) moving forward, I need to establish what’s worth keeping and what’s been found in order to chart a clear course through the mess to come. Partly I anticipate this because the first section of “The Crowd in History” is an especially obnoxious intellectual foray on Canetti’s part.

The Pack

In his conversation with Canetti,[3] Adorno shows a keen interest in Canetti’s attempts to get at what could be called archaic elements in mass psychology. We can make as much as we like of what Adorno’s disregard for most of the book might mean, but at least in the analytical lens of “the pack” Canetti seems less aimlessly adrift than he does when talking about “the crowd” with such inconsistency, if not often vacuity. What is meant by this “archaic” quality need not be literalized to “pre-historic” or “pre-modern” human modes of being (e.g., the hunting pack, the war pack, the lamentation pack, the increase pack)—that is, we can seek them out in our contemporary world,[4] but even then we needn’t reduce our contemporary world to “barbaric essentials”—as Sontag so glibly and superciliously suggests on the back of Canetti’s book.[5]

To reduce human activity to “primitive conditions” makes two fundamental mistakes: first on the grounds of ethnic chauvinism, since this imagines (for instance) aboriginal culture as not “really” culture but rather, whether laudably or not, primitive culture; second and more broadly still, this forgets that all culture is unnatural, precisely for its articulation by human beings (exemplified in the symbolic representations of language), and cannot then be reduced to nature, no matter how hard we try or want to. As Lucifer puts it, summarizing de Sade’s body of work:

God and Man and Nature were always symbols for my rebellion against Existence. I hated the idea of god because I feared men. But when, from three decades in asylums and prisons, I ceased to fear men, then my true enemy came into view, Nature.

Except that, as Saint-Fond from Sade’s Juliette admits:

In everything we do there are nothing but idols offended and creatures insulted, but Nature is not among them, and it is she I should like to outrage. I should like to upset her plans, thwart her progress, arrest the wheeling courses of the stars, throw the spheres floating in space into mighty confusion, destroy what serves Nature and protect what is harmful to her; in a word, to insult her in her works—and this I am unable to do (qtd. in Blanchot, 1949,[6] p. 63, emphasis added).

In the eighteenth century, Nature and Culture were not yet so unambiguously or generally separate as they were in the nineteenth century or now, although the attempt now to re-fuse them back together seems to have gotten a fourth wind, in the degraded intellectual projects of things like Social Darwinism, neuro- and psycholinguistics, in vast tracts of genomics, and in every other reductionist trope in those physical, biological, psychological, cognitive, linguistic, aesthetic, and social branches of science and erstwhile fashionable pseudosciences that presume that the unnaturalness of culture can be made to correspond to or be derived from the naturalness of Nature, as if Nature were not already a cultural construction in the first place.[7]

Culture is, precisely, unnatural acts whether Kaitish Tribe or Kate Bush. What is “archaic” is the presence of whatever needs (then or now or both) that we must meet with necessities (to continue existentially, not just to live or survive); how we meet those necessities forms the (recursive feedback loop of) historical circumstances Eagleton (1989) points to: the fact that the necessities with which we met those needs were just one way of doing so at that place and time, and that if we could not for some reason have chosen differently then, we do not necessarily have to continue choosing so in the future.[8] So when Canetti tries to link pack to pack—never mind that what he means by pack is so diffuse and attenuated as to be conceptually impotent—he only imaginarily connects whatever dots he elects to regard as meaningful. A crystal-clear demonstration of the unnaturalness of human culture, the completely non-obligatory character of the cultural necessities we (as self-conscious beings in the universe) have invented, is available in Spencer and Gillen (1904) where, assuming they haven’t gotten it wrong, they report that amongst some tribes in Australia the patterns of marrying assure that (tribal and blood) brothers and sisters never marry, while in another tribe the arrangement assures that only (tribal) brothers and sisters marry. Equally, in some tribes one part of the tribe, empowered to perform certain rites to see to the increase of totems, takes care of everything themselves, while in other tribes they perform the rituals only after being asked by the other half of the tribe (who then provide all of the materials for the rituals).

The pack is a cultural technology. Of all the factors that Canetti adduced as essential features of the pack, only one is tenable: a pack is goal-oriented, and the goal is collectively agreed upon.[9] This means, in principle, a pack may consist of anywhere from one to any number of people, so long as each member of the pack shares the collectively agreed upon goal,[10] though a pack of one is obviously a special case (in the mathematical sense). Being a member of a pack means that each individual has some non-fungible part to play in reaching the collectively agreed upon goal—by non-fungible, I mean only that the functioning of the pack in its trajectory toward its goal will necessarily be different if other members are in it.[11] Each member of a pack has a local, personal goal in trying to realize the collective goal of the pack. This coordinated multiplication of “functions” (toward a collective end) is one of the most technologically helpful aspects of the pack.[12]

As a cultural technology, a pack forms out of society at large according to culture—culture being the set of constraints on human behavior in a society, subject to change by that society. A pack is marked by a beginning, more or less formalized. In discussing crowds, Canetti mysteriously and pointlessly talks about some moment of “discharge” when the crowd becomes a crowd. Anthropology teaches us that where there is magic there is ritual—and ritual is cultural fiat; that’s the magic exactly. A pack exists, then, beginning from the moment human beings say it does, when someone who is deemed by those listening to be capable of making such a declaration declares it. The moment of collectivity that Canetti incoherently identifies in the crowd is, rather, the moment when members of the pack subsume their activity to the collectively agreed upon end. From this, one sees it is possible to drop into or out of the pack—shifts in membership can occur due to the whole range of human events and disasters; all the while, the pack exists so long as human beings are oriented toward the collectively agreed upon goal, even when the whole party but one last person has been killed.

As a group cloaked in the mantle of reaching a collectively agreed upon goal, the pack then persists until it is dismantled. There is hardly any need to try to schematize the hundreds of billions of possible perils, failures, and (partial or full) successes a pack might meet with. The point is that the collectively agreed upon goal itself defines the terms of success.[13] The dismantling of the pack may come with as many formalities as established it, or it could end by being annihilated, disintegrating, or fragmenting, &c. The two salient points here: the “end” of a pack may not always be up to the human fiat of declaration—if everyone has been annihilated, there is no one left in the pack to say, “It is done” or even “We failed.” But the fate of the pack is also subject to public opinion—that is, the pack’s society remembers sending it off, but the pack may never return. If the formation of a pack tends to be within the ambit of human ability to declare, like an opening parenthesis, the end may not be. This is the agony of an absence of closure. So, while a pack always has a beginning and a middle, it may not have an end—unless the society that originated it declares one over the open emptiness where the pack should have returned.

What this means is that, as in HTML syntax (e.g., <body></body>), to open a parenthesis is already to imply its closing parenthesis. The exigencies of events, the obstreperous refusal of reality to be grammatical, mean our proposed enclosure may get corrupted, violated, punctured—it may bleed like a wound all over society, but it will still be that enclosure that is bleeding, not society at large. This means that what “follows” after a pack is neither determined nor not-determined. Nothing, contra Canetti, must inevitably link the closure of one pack and the formation of the next. Success may bring celebration, but not inevitably; failure may bring lamentation, but not inevitably. If the pack returns, all that can be said is that it will dissolve back into the society out of which it sprang, but that is only to repeat what was said when the pack formed—that it would have an end. The pack, as a team, might go on to another task, or not. The pack, having never been a team until now, may stay one, or not. And so on. This is the sense in which nothing is inevitable or determined with the close of a pack. What is determined is that, insofar as the pack had a goal, the outcome of the attempt to reach that goal becomes an input to the next iteration of society, be it another formation of a pack or not.
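
Schematically, and only as a playful sketch rather than anything Canetti offers, the lifecycle just described can be caricatured in a few lines of Python; every name here (Pack, Society, declare_end, and so on) is my own illustrative invention:

```python
# A playful, minimal sketch (all names are mine, purely illustrative):
# a pack opens by human declaration, persists while its members subsume
# their activity to the collectively agreed upon goal, and whatever
# outcome it reaches (or fails to reach) becomes an input to society.

class Pack:
    def __init__(self, goal, members):
        self.goal = goal              # collectively agreed upon
        self.members = set(members)   # each member non-fungible
        self.open = True              # the opening "parenthesis"

    def declare_end(self, outcome):
        # Closure by fiat: success, failure, or something more ambiguous.
        self.open = False
        return outcome

class Society:
    def __init__(self):
        self.history = []             # outcomes become inputs to what follows

    def form_pack(self, goal, members):
        # Culture determines what kind of pack forms; society is the substrate.
        return Pack(goal, members)

    def receive(self, pack, outcome="lost; closure declared by fiat"):
        # If the pack never returns, society may still declare a closure
        # over the open emptiness where the pack should have returned.
        if pack.open:
            outcome = pack.declare_end(outcome)
        self.history.append(outcome)  # nothing inevitable follows from this

village = Society()
hunt = village.form_pack("drive the kangaroo into the traps", {"A", "B", "C"})
village.receive(hunt, outcome="success: feast to follow, or not")
```

The one thing the sketch is meant to keep honest is that history records outcomes, not transmutations: the next pack, if any, is a fresh form_pack out of the substrate, never a mutation of the old object.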

A more specific example is needed to continue.

Among the Warramunga people, those who are of the kangaroo totem are empowered to perform the rites that increase the kangaroo in the area and are under a prohibition not to eat any kangaroo. A similar situation prevails for those of the emu totem, who are in the other half of the Warramunga tribe. So it happens, then, that the emu people ask the kangaroo people to perform the increase ritual for kangaroo, providing all of the materials so that the kangaroo people can. And vice versa. Thus, we can see a very smart mutual interdependency in this—all the more so since my ability to eat anything (excluding kangaroo, which I’m forbidden to eat if I’m a kangaroo person) depends on asking all the other totems to do their increase rituals.

I suggest this structure of mutual interdependency is at the heart of the pack as well. Insofar as accomplishing a collectively agreed upon goal involves the cooperation of all of us—a multitasking function that I cannot perform myself—then this depends on everyone else asking me, “Would you please do your thing now,” and vice versa. Critically, it is not in my personal self-interest to do this task; just like those who increase the kangaroo totem but do not eat it, my saying, “Yes, I will do that,” has no “selfish” benefit for me, because it is effort toward the collectively agreed upon goal, which the act I just performed could never reach individually. And this is true for all in the group.
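
The shape of that interdependency can be rendered schematically as well (my framing, not Spencer and Gillen’s; the function and its parameters are invented for illustration):

```python
# A schematic sketch of the mutual interdependency described above
# (my own framing, not Spencer and Gillen's notation): each half of the
# tribe may perform the increase rite only for its own totem, may never
# eat that totem, and acts only on request, with the askers supplying
# the materials.

def increase_rite(performers, askers, asked=False, materials=False):
    # No request or no materials means no rite: the dependency is mutual.
    if not (asked and materials):
        return None
    # The performers labor over a food that they themselves cannot eat.
    return f"{performers} people increase the {performers} totem for the {askers} people"

# Kangaroo people feed the emu people and vice versa; neither feeds itself.
print(increase_rite("kangaroo", "emu", asked=True, materials=True))
print(increase_rite("emu", "kangaroo", asked=True, materials=True))
print(increase_rite("kangaroo", "emu"))  # None: nobody asked
```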

With this said, the object of the next section involves detailing or working out the relationship of packs, as cultural technologies, to the larger fabric of the society in which they occur. To do this amounts to getting at the spirit that Canetti seems to aim for without winding up in the cul-de-sac that he does.

The Pack and Religion

One of the main theses of Canetti’s book, by his own account, is that the transmutation of packs lies at the heart of the world’s religions.

There’s no such thing.

The principal objection here is that packs do not transmute; they disappear. When a collectively agreed upon goal is no longer pursued—due to success, failure, or some other, still more ambiguous outcome—the pack ceases to exist, and a new one, related in some way (positively, negatively, or otherwise) to the previous pack, may or may not be formed.

One of the most significant lapses in Canetti’s exposition concerns his failure to acknowledge that most packs simply dissolve back into the society that originated them; they don’t transmute at all. A team is assembled to reach a goal, it is reached, the team vanishes, and life goes on. This doesn’t mean the memory of the pack disappears or that the people who formed it do. I should mention here, perhaps, I’m not proposing that everything in society can be explained by packs[14] or even that all group activity devolves somehow to a pack.[15]

After the fact, one may name characteristic sequences of groupings that may and do recur in any given society. The fact that, for example, Warramunga mourning rituals occupy up to twenty-four months to complete and are “interlarded” with all the other activities of daily and ritual life while that parenthesis is open makes it untenable to pretend that the lament pack of this week is linked continuously to the last rite (bringing the arm bone in for earth burial) twenty-four months from now.[16]

Cases of more rigorous sequences may be easier to spot in (for example) Australian aboriginal culture, but these same cultures show counter-examples as well, unsurprisingly. Amongst the Warramunga and related tribes, the rites of increase consist of sometimes very lengthy chains of ceremonies that reprise the wanderings of the totemic ancestors over the lands. Each must be performed in its full sequence before another totem series may begin. Amongst the Arandan people and similar tribes, by contrast, these totemic series may be performed all mixed in with other sequences, and perhaps may not even reflect the full series of wanderings. If one might pretend there is an inevitability to the sequence of packs in the Warramunga, one has to explain why that’s not the case amongst the Arandans. Canetti’s scheme can’t do this, because he insists on inevitable links.

Culture, as the set of constraints on human behavior within a given society, subject to change by that society, manifests as values made visible.

Religion is a major vehicle for this visibility, of course, but it is only one type. Amongst the aboriginal people studied by Spencer and Gillen (1904), most tribes exhibit a very strictly enforced morality not accompanied by a great deal of metaphysical justification. Whether we call this moral, religious, spiritual, sacred, or superstitious—or if we try to suss out cultural behaviors dependent upon some metaphysics (spirits, the invisible world, &c) or physics (human beings, the visible world, &c)—such labels or distinctions might shed light on matters, and all of it will still be culture as values made visible—as values in visible individuated dynamism.

What matters, in the general case, is not that the Lele have a hunt and then a feast or that the Jivaro experience a crime and then form a revenge party (that’s what matters in the specific), but rather that packs reticulate. The “archaic form” is of course that A –> B, but without more information knowing A doesn’t tell us enough yet to predict B—and if our hermeneutic doesn’t help us understand other cases from particular cases, then it is not yet an adequate description of human experience, and that is a central reason why Canetti’s exposition has very little value. It is, as one reviewer said, a poem, and a bad one at that.[17]

Human beings being moral beings, there is a vast utility in gathering from Spencer and Gillen’s (1904) description of mourning ceremonies that the death of a child, young woman, or man in the prime of his life is deemed a crime that must be solved (with tree burial as a critical part of that judicial process amongst the Warramunga and related tribes). This crime, experienced by the Warramunga, calls for at least two retributive gestures; the formal revenge party itself is called an atninga, and Canetti provides other (less detailed) examples of such revenge parties for the Taulipang and the Jivaro.

This emphasis on crime provides a distinction sorely lacking in Canetti’s exposition of the hunting pack. Two things may be said: (1) if a revenge party forms, it is in response to a perceived attack by a hunting party;[18] (2) the hunting of animals, plants, minerals, and natural resources in general may be understood, from several cultures, as related to crime as well. From a hunt may follow a communal feast conducted with sufficient propriety and etiquette to ensure the spirit of the (putatively willing) animal is not offended and will reconstitute itself for future human consumption. To do otherwise would be to offend Nature, to commit a crime. Culturally, my group may feel no compunction toward a neighboring group of strangers—such that we might show more respect to the animals we kill than to the bipedal pests nearby that we kill—in which case we “hunt” them; we don’t commit a “crime” against them (from our point of view). They’ll likely disagree—cue the revenge party.

On Earth, only humans are criminal.[19] And within a group, there can be the imposition of or consensus upon what is and is not interdicted and permitted social behavior. But between groups, such consensus may be untenable—cue the difficulties of international law. So, the difference between a pack’s “crimes against Nature” and a pack’s “crimes against another group” is that Nature “talks back” differently than other groups of people do.[20] In all of this, the moral emphasis is paramount, because all human activity occurs in the moral/criminal realm, in the realm of what is permitted and what is not permitted,[21] what is sacred and what is profane.[22]

Culture, as the set of constraints on human behavior in a society, subject to change by that society, manifests as values made visible. Moralities/criminalities comprise the judgments arising from culture; ritual (religious or otherwise) orients to the metaphysics of culture. Culture will determine what (kind of) packs form; society will be the substrate out of which they will be formed. And the outcomes of packs then become inputs to society.

Canetti wants to insist that religion may be located in the –> of A –> B, in the pivot formed by A –> B. There are two central problems with this.

First, the –> is not related to the pack at all, in the same way that in the metamorphosis caterpillar –> moth, the –> (the metamorphosis itself) has nothing to do with the caterpillar or the moth, and just as the movement from here –> there is, more properly, always here –> here. We are never not here, just as at no point is the caterpillar or the moth or the thing in between those endpoints of transformation something other than what it is. This seeming paradox is, like most paradoxes, a muddle of language, but in the present case the point to be made is that if religion is going to be located in the –> from one pack to another, then the packs themselves are not integral to that movement. This kind of metaphysics has no descriptive force for packs. Packs do not transmute; they dissolve, and then a new one forms. There never is any “continuous substrate”—not even the individuals necessarily—much less any obligatory link between them. (See for example my description of American football here for more details about this.)

Second, Canetti is actually not very clear what A or B are supposed to be, saying things like Islam is a war religion, Christianity is a lament religion, &c. In the Taulipang and Jivaro example, a crime is committed and a revenge pack forms—is this supposed to be a lament –> war shift? In which case, why does Christianity (or Shia Islam) stop at its lament? In both cases, a titanic crime is said to have been committed.[23]

So packs crop up, proliferate, do their thing, vanish, leave traces, fail, and whatnot—what relationship, if any, does this have to religion, as Canetti wants to insist? One would have to ask, then, what religion is, but let’s not.

Instead, whatever religion is, ritual may be taken as a manifestation of it. Ritual is like a catalyst (a crowd crystal) in that its form is obligatory (if not unvarying), but it is also like a pack in that its specific character and the attainment of its goal inherently depend on the people who are doing it. A pack is a deformalized ritual; a catalyst is a depersonalized ritual.
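
Half-seriously, this pair of definitions can be schematized as a toy grid in Python; the two axes and the fourth, “crowd?” cell are guesses of convenience on my part, nothing more:

```python
# A toy grid, half-serious: two axes distinguish ritual, pack, and
# catalyst -- is the form obligatory, and does the outcome inherently
# depend on the particular people performing it?

def classify(obligatory_form, depends_on_these_people):
    if obligatory_form and depends_on_these_people:
        return "ritual"    # fixed form, and who performs it matters
    if depends_on_these_people:
        return "pack"      # deformalized ritual: goal plus particular members
    if obligatory_form:
        return "catalyst"  # depersonalized ritual: fixed form, fungible members
    return "crowd?"        # neither axis holds; a guess at Canetti's vaguest case

print(classify(True, True))    # ritual
print(classify(False, True))   # pack
print(classify(True, False))   # catalyst
```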

A ritual plugs into the explanatory discourse of a culture—although at this point, I’m at a standstill as to how “explanatory discourse” is not already a synonym for culture anyway. I’m using the word ritual instead of religion because rituals do not necessarily require a supernatural metaphysics to be in play—not that the supernatural must always be a part of religion. Ritual is recognized effective action (contextualized by the criminal/moral range of cultural activity). Carrying water might be ritualized, but ordinarily it remains different from ritual—and where it is not, then something other than carrying water will mark the non-ritual. Ritual touches the metaphysics of culture because it has a conceit of speaking for society—the group of people who are constrained by, negotiating and mediating, and changing culture on a daily basis. Moral action can be ritual action—for rituals, this must necessarily be the case—but they needn’t always or only overlap.

Discourse is not monolithic, though it may be enforced as such. Although one is subaltern, one is still viewed through the acculturated lens; culture becomes what one resists, like de Sade. Whatever one has to say, culture will hear what it wants, or can. This doesn’t mean its hearing can’t change, only that it may be difficult for non-conformists. Spencer and Gillen (1904) provide some examples of this within the tribes they observed. Culture, as the set of constraints on human behavior in a society, subject to change by that society, threatens to become infinitely recursive, since one of the cultural behaviors is articulating a description of culture. I’m not going to get cutesy about this. One of the limits of this recursion is whether anyone will listen or is listening. We do live inside of culture, so even those who control discourse are controlled by it—are constrained by it as well. Merely changing a constraint on one’s behavior is not yet (may not yet be) enough to propose, offer, or enforce a change of behavior generally.

I would not call religion “assent to an explanatory paradigm”; rather, of the forms of assent to an explanatory paradigm, religion (in some cultures) is recognized as such. This explanatory paradigm has as its object transcendental reality, i.e., that hypothetical reality apart from myself that I confront in the persuasive agency of others and the obstreperous refusal of “the world” to be grammatical. This presence of transcendence lends itself to people offering supernatural elements in their explanatory paradigms, but it seems a good portion of aboriginal culture as described by Spencer and Gillen (1904) doesn’t need to lean much on supernatural metaphysics so far as ritual or moral behavior is socially concerned.

So ritual is action attached to an explanatory paradigm—that sense of attachment is where “religion” (“to rebind”) and yoga (“to yoke”) and the symbol of the ankh (which is the harness of the oxen) all seem to point. There’s no need to call this religious or spiritual. We can call it sacred, because in attaching to this explanatory paradigm, a hierarchy and cascade of values comes to depend (hang) from it. It is these values that are in play in determining the sequence of social events, whether that means forming a pack, dissolving a pack, or having nothing to do for the moment with packs whatsoever. In this respect, using Canetti’s terminology, packs are already signs of religion, not the sequence of packs.

But this is giving too much precedence to the term religion, because religion is every bit as much an aspect of culture as morality and criminality are. If I tend to emphasize crime, it is to avoid the dead end of arguing about celestial ontologies. But as religion lacks immanence, crime lacks transcendence, and so neither will ultimately do—hence, for now, the slightly helpless resort to culture.

Imagine a slowly bubbling mud-pit, so that first one bubble breaks the surface here, then another over there. Canetti has insisted that the sequence of this bubble then that bubble comprises the religion of the mud-pit. It is, rather, the mud that comprises the religion of the mud-pit—that is, it’s the culture of the mud-pit that provides the substrate for any ritual expression (as religion) within the mud-pit world, regardless of the sequence of bubbles.

Ritual is a special case (in the mathematical sense) of moral action.[24] Tentative declaration: a metaphysics of culture provides the explanatory justification for any given form of action, while the physics of culture provides the forms of action. Then: if the pack subsumes the metaphysics of a culture to the physics of obtaining a goal, then the catalyst subsumes the physics of a culture to the goal of demonstrating the cultural metaphysics. Ritual (magically) holds these subsumptions in tension—metaphysics and physics overlap, become entangled. More properly, to the extent that we may separate the metaphysics (the explanatory justification for a given form of action) and the physics (the form of action) of ritual for analytical purposes, the consequences of ritual cannot be derived from those analyzed parts. Example: we may separate the form and content of a sonnet for analytical purposes, but the consequences of the sonnet cannot be recovered from the points elucidated about form and content. We can describe the effect of reading the poem; we can say at any given moment that it is the content or the form that is contributing to our particular experience, but this is all after the fact. The consequence of ritual is similarly open to analysis but not derivable from those terms.[25]

The foregoing may not yet adequately find a grounding for any relationship between packs and ritual (as a manifest example of attachment to an explanatory paradigm), but if there is going to be any relationship between any sequence of packs, transmuting or not, and the cultural formation of “religion,” then it cannot be derived from Canetti’s approach. Adorno (2003)[3] calls attention to the methodological problem in Canetti’s subjectivity:

that cannot really be ignored … What strikes the thinking reader of your book, and may even scandalize him, regardless of whether he calls himself a philosopher or a sociologist, is what might be called the subjectivity of your approach. … The reader of your book cannot quite rid himself of the feeling that as your book develops [that] the imagined nature of these concepts or facts [i.e., the concepts or facts about crowds and power]—the two seem to merge with each other—is more important than the concepts or facts themselves (184, emphasis added).

Adorno offers as an example of this Canetti’s notion of invisible crowds, and Canetti responds to the point about invisible crowds, rather than to the subjectivity of his method. Adorno has to return to the question more than once to finally get Canetti to insist that “the importance of the real masses is incomparably greater” (188), but this doesn’t answer Adorno’s question either—Canetti may, in general or through the obvious rhetorical pressure of Adorno’s question, answer this way, but are real masses actually what he’s writing about in his book? Is that what comes across?

No. Which is how Adorno comes to ask his question in the first place.

Now, whatever Canetti’s problem with accepting that—at least for some people—so-called concrete life as a value can be hierarchically subordinated to symbolic life, what one comes away with in this conversation, besides Canetti’s willful non-engagement with the dialogue, is an apparent urgency to be seen as talking about “concrete reality” and to avoid imputations of being a phenomenologist of the imaginary, since he “would really be very upset if anyone were misled into thinking that the reality of the masses is not the crucial thing” (188) for him.[26]

There are literally hundreds of examples of what Adorno flags, but to pick just one involves the wholesale non-address of boundaries. What is perfectly clear when Canetti points to any given phenomenon (of a crowd) and declares its type is that he, like Justice Potter Stewart and hard-core pornography, may have a hard time actually defining what it is, but he (Canetti) knows a crowd when he sees one. Never was Heisenberg’s Uncertainty Principle (in its humanistic guise) so apparent—that the observer changes the thing he is looking at. The reader—I take heart from Adorno’s testimony in this regard as well—must frequently read Canetti’s declamations and say, blinking, “No, that’s not so.”

Canetti never takes account of this while describing packs either, but it is obvious enough that a pack forms because humans say it has been formed, by fiat. Canetti uselessly calls the indispensable moment of crowd formation the “discharge” (a rather embarrassingly Freudian metaphor from someone who claims to be so at odds with Freud), even though there is no shortage of crowds that by Canetti’s descriptions have no discharge or a delayed discharge, and therefore shouldn’t by his terms even be called crowds. For the pack, by contrast, we can see that its formation has nothing more mysterious about it than all the members subsuming their activity to the collectively agreed upon goal of the pack. The mode of cooperation is adopted, so that honoring the request (implicit or explicit) by the other members of my pack to do my part in the multitasking machine we have assembled, even though that task is not in my immediate self-interest, amounts to the pack-making moment of the pack. So, in contrast to a “discharge” that is imaginary both in the sense that it could only occur in the imaginations of people in the crowd-to-be[27] and fictional as an explanatory concept in Canetti’s usage, we see that the unnatural cultural formation of the pack is declared by fiat to exist through the moment of human beings agreeing cooperatively to set aside immediate self-interest to reach a goal that none of them could reach individually.[28]

Endnotes
[1] All quotations are from Canetti, E. (1981). Crowds and Power (trans. Carol Stewart), 6th printing. New York, NY: Noonday Press (paperback).

[2] The ongoing attempt of this heap is to get something out of Canetti’s book, and that of necessity means resorting to the classic sense of the essay, as an exploration, using Canetti’s book as a starting point. I can imagine that the essayistic aspect of this project can be demanding—of patience, time, &c. The point of such an essay, entertainment value (if any) aside, is first and foremost not to be shy about showing the intellectual scaffolding of one’s exposition as much as possible. This showing, however cantankerous the exposition, affords the non-vanity of allowing others to witness all of the missteps, mistakes, false starts, and the like—not in the interest of merely providing a full record (though some essayists may do so out of vanity or mere thoroughness, scholarly drudgery, or self-involvement) but mostly so that readers may be exasperated enough by the essayist’s stupidities to correct his or her errors and thus contribute to our collective better human understanding of ourselves.

[3] Adorno, T. and Canetti, E. (2003). Crowds and power: Conversations with Elias Canetti (trans. R. Livingstone) in R. Tiedemann (ed.) Can one live after Auschwitz? A philosophical reader, pp. 182–201, Stanford, CA: Stanford University Press.

[4] One paraphrastist, though, insists that the hunting pack, war pack, and lament pack have vanished from our current milieu: “The first three packs are all elements of archaic survival and no longer apply to the modern world (we no longer have to hunt, we no longer have to ritualize each death.)” We will have to see if that is an accurate summary of Canetti; I anticipate it is not.

[5] “Canetti dissolves politics into pathology, treating society as a mental activity—a barbaric one, of course—that must be decoded.” Maybe she wasn’t being complimentary. I doubt it. If someone wants to believe that human beings are only inherently bad, then they may begin the difficult process of improving our lot by killing themselves. If, instead, and less in the spirit of a querulously disappointed child, we acknowledge, as Eagleton (1989) does, that there is

absolutely no reason why the future should turn out any better than the past, unless there are reasons why the past has been as atrocious as it has. If the reason is simply that there is an unsavoury as well as a magnificent side to human nature, then it is hard to explain, on the simple law of averages, why the unsavoury side has apparently dominated almost every political culture to date. Part of the explanatory power of historical materialism is its provisions of good reasons for why the past has taken the form it has, and its resolute opposition to all vacuous moralistic hope (184)**.

I especially want to highlight the phrase “why the unsavoury side has apparently dominated almost every political culture to date”. The day-to-day life of people has tended not to be dominated by the unsavoury element, so the problem is not the broader issue of human nature or the nurture of historical circumstances, but the specific problem of how we arrange our political economies. I’m completely sympathetic with Eagleton’s point, and simply wish to point out that those who are nailed to the wheel of domination (the First World, South Africa, Israel, &c) tend in their paranoia to lose sight of the fact that much of the world, eking by in dire poverty, is not busy eating one another and is not as degraded, as human beings, by its historical circumstances as those in the First World are by theirs.

**From Eagleton, T. (1989). “Bakhtin, Schopenhauer, Kundera” in K. Hirschkop & D. Shepherd (eds.) Bakhtin and cultural theory, pp. 178–88. Manchester, UK: University of Manchester Press.

[6] Blanchot, M. (1949). Lautréamont et Sade. Paris: Les Éditions de Minuit; reprinted in R. Seaver & A. Wainhouse (eds.) (1965). The complete Justine, Philosophy in the Bedroom, and other writings. New York: Grove Press, p. 54.

[7] At its root, a tautology that proves unhelpful in Wittgenstein’s sense of a tautology hides here and aims (some would say ironically, since this emanates from the domain of science, but the irony is no irony at all, being wholly consistent with the same naïve realism that informs religious faith) at nothing less than a new blind faith in the god, not of the gaps, but of the genes or matter or, more elementally still, energy. Here, science is “what we have learned about how to keep from fooling ourselves” (Richard Feynman, from here), and yet here it is fooling itself about its limits; the converse being the dogma, “Faith and Reason inhabit different worlds–and so far there is no space travel between them” (Erika Wilson, ibid). de Sade’s answer to our inability to negate Nature, to free ourselves from determination by Nature, is to try our hand at a moral crime, “the kind one commits in writing” (qtd. in Blanchot, p. 57). In other words, generate culture. In this way, writing that I’m only my genes is a moral crime; writing that you can explain this sentence by way of my biology is a moral crime—equal in scope to de Sade’s blasphemies, but oriented toward, not away from, a new religious thralldom to the essence of things hoped for in a quite contemporary and authentic trahison des clercs.

[8] One can make a structuralist argument out of this, if what is meant by structure is cultural, not natural; this is how I take Jung’s sense of archetypes. It’s probably argumentatively more coherent to emphasize the hermeneusis of this.

[9] Teleology being a debased concept in biology, it is an ineradicable concept in culture. In culture, everything happens for a reason, whether we can explain it adequately before or after the fact.

[10] There will be practical limitations on this, but I suspect that for a pack to really function as a pack, and not an organization or a group or a corporation or an army or an occasional association or something else, then this collectively agreed upon goal must be known to everyone in the pack. Ultimately, this becomes merely a matter of definition, and edge cases would be interesting to discover and analyze.

[11] Elsewhere, I contrasted the pack with what Canetti calls crowd crystals and I will call a catalyst. To summarize the difference somewhat too schematically, if the form that the pack takes is wholly dependent upon the total presence of each individual in all of her knowledge, skills, and being, then in the catalyst the total presence of each individual in all of his knowledge, skills, and being is wholly dependent upon the form the catalyst must take. A university’s marching band, for instance, presents a catalyst at a sporting event, &c.

[12] This is where the pack of one loses some ground—it cannot multi-task, so to speak. As folks who often have to get things done know, however, there are certainly times when the necessary overhead to organize a group upstages getting something done and it can be easier (as a pack of one) to do it yourself. I propose that there is a value in seeing even this seemingly individual activity in pack terms, at least sometimes.

[13] One can start finding wrinkles in this. If five members of a pack agree upon the goal and one does not, what is the status of the non-believer vis-à-vis the pack? What if five no longer believe and one does? It may be that this is where the argument that you cannot have a pack of one demonstrates itself. If five say yes and one says no, then this could be read as a pack of six splintering into a pack of five. So too, if four say yes and two say no, then the pack has split into two contending packs. Or imagine a pack has returned home and been disbanded, but one person insists that the goal was not reached (i.e., is still a hold-out for the non-disbandment of the pack); then this shows how the pack seems (plausibly) to no longer exist. Or, to defend the notion of a pack of one: though the original pack was (collectively) disbanded, all packs emerge out of the larger society, and this hold-out is not a sign of the continuance of the old pack but the first sign of the attempt to form a new pack.

[14] If a ritual is a strictly formulaic cultural activity designed for a specific outcome, then when the obligatory form is removed one has a pack; when the obligatory content is removed, one has a catalyst (what Canetti calls a crowd crystal).

[15] Somewhere in a McCullers story, she provides a description of the gathering storm of a lynch mob. The men are milling about, working their jaws, not speaking much amongst themselves. It is not even clear if they are actually going to get riled up enough to enact their terrible violence. They’re waiting for a spark, an indicator, some kind of something, and when it comes, they’re all on board. Canetti would call this the discharge (as applied to a crowd), but he can provide nothing about how it comes about or gets distributed through the whole crowd. From this description, though, it is apparent that what happens is each man gathered subsuming his activity to a collectively agreed upon goal.

[16] This is particularly true in the Warramunga tribe, where there are such strict requirements for who does what vis-à-vis the dead person. Within what Canetti calls their lament pack itself are a number of configurations that form, do their thing, then dissolve into the next thing, and there is no reason—save for human fiat—to pretend that a given scale of looking is more correct than another. For the Warramunga, all three levels matter—those three being: (1) the level that Canetti too narrowly calls the lament pack, (2) the forming and dissolving configurations of sub-packs that make up his “lament pack,” and (3) the larger cultural arc that links a whole series of “lament-like packs” in a sequence spanning some twenty-four months.

[17] The obvious case of inadequate hermeneusis (or bad poetry) in Canetti is in how he puts the emphasis on lament. He pretends that Shi’ite Islam can be reduced to the “passion week” of Hussein and acts like this is somehow different than the passion week of Yeshua ibn Yusef. It’s not that he’s pretending that Christianity is a religion of lament. It’s that he wants to indulge his orientalist prejudices by wallowing in the flagellation of Ashura’s grievers as if there were never Christian flagellants, as if there’s no emphasis on the gory spectacle of some dude nailed to a cross, &c.

[18] Which itself may be in response to a perceived attack, of course, ad infinitum: Hatfields and McCoys, &c.

[19] Obviously this has an anthropic bias. We could be in the situation of note 20 below vis-à-vis plants, animals, minerals, &c. A part of the critique of the First World by First Peoples is the former’s failure to recognize the criminal pillage and rape of the environment. But even here, this would still make us the only criminals. Until we can prosecute a tornado for vandalism and murder, this may continue to be only a one-way street.

[20] With a sufficiently dense language barrier, another group of people may not be recognizable as people. Groups that fragmented due to differences of opinion might share some notions of “crime” in common, or they may have become wholly estranged. Groups who encounter one another through wandering may not yet have interacted enough to learn if they have anything in common vis-à-vis crime in the first place. &c.

[21] I.e., what is permitted and done, what is permitted and not done, what is not permitted and done, what is not permitted and not done.

[22] Like all dyads, these are not either/or. To posit the sacred and the profane implies the edge between them, which symbolically appears as the yin/yang. Thus, in addition to the sacred and the profane, there is the sacralized profane (such as the shrunken head of slain enemies in Jivaro culture) and the profaned sacred (the social equivalent of what Jung calls the shadow of the Self). Foucault calls them illegalities—things that are overlooked by the panopticon, things that are formally illegal but not prosecuted—five miles over the speed limit being a particularly benign one of these. The panopticon claims to see everything; thus, there is what it sees, what it elects not to see, what it believes it sees but does not, and what it does not see. &c.

[23] The inadequacy of Canetti’s scheme is apparent in its incompleteness. If religions are supposed to arise from transmutations of packs, then here is a table that shows the possible transmutations according to the packs Canetti supplies (the four basic types—he seems unable to decide if a communal feast is really a pack or not). Since he doesn’t adequately specify which packs are in play regarding Shia Islam as a lament, Islam in general as a religion of war, or Christianity as a tranquil lament (compared to the noisy lament of Shia Islam), it isn’t even clear where to put these. Nor is the murderous stampede of the Greek Easter festival plausibly placed, based on Canetti’s exposition. And when I imaginatively try to do his intellectual work for him, it becomes obvious that things might be placed in more than one slot. Additionally, insofar as he refers to the Jivaro as reflecting a pure religion of war, this implies that the “transmutation” involved here is from war pack to war pack, though Canetti never claims this can happen. He says, rather, the packs have a tendency to turn into each other. He, in any case, tries to read it as some kind of “increase,” although earlier he has stressed (with the Taulipang example) that no benefit accrued from the war pack whatsoever. From the Taulipang example, it’s not clear what they “do” with the fact of a successful vengeance-taking. For the Jivaro, they obtain something that will eventually (in eighteen months or so) become a shrunken head or tsantsa, and thus an occasion for a feast. This transmutation of war –> increase somehow equates to a religion of pure war.

A –> B     | Hunt     | Revenge   | Lament        | Increase
Hunt       | n/a      |           | Communion?    | Lele
Revenge    | (Islam?) |           | (Shia Islam?) | Jivaro
Lament     |          | Taulipang | n/a           | (Christianity?)
Increase   | Mandan   |           |               | n/a

The vacuity divulged by this table is one thing, but it also makes clear the expository gap of what the hunt –> war “transmutation” would be. It would be a more than averagely psychopathic religion—one that would hunt a deer, for example, and then take revenge upon the forest for the crime of … hard to tell what.

[24] By moral, I do not mean in that oft-heard distinction—a parallel to the insisted-upon distinction between religion and spirituality or guilt and shame—that what is moral or reflective of social rectitude is imposed from without (by others, by society) while what is ethical or reflective of personal integrity is composed from within (by oneself, by one’s spirit). At one time, it was commonplace to refer to a moral instinct in the human being; the loss of that distinction may have been important, though there can be no doubt that overbearing prudes hit squarely by Ambrose Bierce’s proposed use for the word immorality (“someone having a better time than you are”) had something to do with the demise of the distinction. By moral, I mean all human acts in the social world—in other words, all cultural activity, which means (for example) that the moral is concerned not with the fact that we walk but how we walk, not that we speak but what we say, &c. ¶ All that rigorous rectitude—whether in 18th century Europe, Australian tribes in 1904, or US/Soviet social settings in the 1950s in particular—can be more than one wants to bear, it is true. The rigor and policing of the moral code, whatever it is, raises the specter of oppression—raises the desire to make it less meaningful and to have a place to retreat to as a respite from it (i.e., the private domain). Recognizing that there is a moral code at all—as the set of constraints on human behavior—may make one uncomfortable, because the code is now visible.

[25] Nor is it a problem that we are ultimately “stuck” in this descriptive mode. Reflective consciousness itself is always after the fact, always trailing behind experience by an eye blink. This makes the center of our lives belated, but still the center. It is our (only) starting point. We can and do hypothesize other vantage points, and those disclose any number of insights and delusions; it is only the forgetting of this hypothetical character, especially where other human beings may be affected by such a hypothesis, that I am objecting to as a habit. All of my analysis here stems from something equally hypothetical. A major portion of my activity is, à la the atninga, to correct the crime committed by Elias Canetti in taking his hypothesis to be not just fact but universal fact.

[26] Iris Murdoch, reputed to have been Canetti’s lover at one point and a decidedly more interesting and talented writer than he, does her paramour no favors in this regard, calling him “one of our great imaginers”.

[27] Canetti does not even begin to try to explain how this discharge transmits itself through the whole of the crowd-to-be, how, why, or where it stops at the edge or interior of the crowd, and all the rest. The discharge, as he deploys it, is a very authentic petitio principii, since it can only be after a crowd “has formed” that Canetti could infer that the discharge must have happened.

[28] It seems unnecessary to emphasize, but just in case: by contrasting immediate self-interest with some other self-interest, it is not at all necessary to pretend that “social-interest” supplants “self-interest” or that the attainment of some longer-term goal that I could not attain by myself through cooperative action isn’t still beneficial to me. It is not an either/or. The aboriginal evidence from Spencer and Gillen (1904) makes clear enough that the cooperative structure of increase rites amongst the Warramunga ensures a sociability of mutual interdependence that benefits everyone. But there is a more prosaic sense of non-self-interest in this description as well. If I am a member of a hunting pack, and my task is to stand somewhere and make some kind of alarming call that will drive the animal(s) we are hunting in a particular direction, then this act of “hunting” I am being asked to do is actually antithetical to hunting were I by myself. If I intend to hunt a creature, I will not succeed by driving it away. And yet, in this particular circumstance, that is exactly what is needed, because by driving the animals in a certain direction I am directing them into the traps set by my co-hunters. This change of activity, away from what would normally meet a given goal were I by myself and toward something else, is a very central part of what makes the pack successful as a cultural technology. And it is this subordination of immediate self-interest (i.e., what I would normally have to do or would attempt to do if I were by myself) to the “self-interest” of the collectively agreed upon goal that I’m emphasizing.