Concerning critique and criticism

In a recent thread over at the UFO Updates Facebook page, unwinding under a notice of Jacques Vallée’s and James Fox’s having recently appeared on the Joe Rogan podcast, I chimed in in harmony with one commenter who drew attention to the grave flaws of Vallée’s and Aubeck’s Wonders in the Sky, adding that Vallée’s and Harris’ Trinity: The Best Kept Secret was as bad if not worse. Another commenter took exception, remarking: “Why build up when it’s so much easier to tear down?”

I found this comment discomfortingly unfair. Anyone who takes the time to read what I have written about Trinity in particular and Vallée’s work in general will, if they read attentively and with a modicum of understanding, perceive I am at all points charitable, at all points straining to present Vallée’s views as accurately and fairly as I can and, only once I have presented them as I understand them and as strongly as I can, do I then venture to remark either their implications or their flaws in reasoning. I challenge anyone interested to find passages here at Skunkworks where I depart from this practice and to leave a comment specific to the substance of my discourse.

Some readers here have conflated criticism with critique, destructiveness with Destruktion (deconstruction). Criticism, at its most excessive, descends to fault-finding, unconcerned with grasping the criticized position accurately or at its most persuasive. Debunkery is an example of this kind of criticism, an approach against which I have, for example, defended Vallée’s writing. Nevertheless, when Vallée’s writing has been less than accurate, I have criticized it on those grounds, but far more respectfully than others. But much more often what I engage in here at the Skunkworks is critique: the probing of the presuppositions and implications of an author’s position. A most recent example of the latter is my essaying a particular blind spot in Vallée’s Control System Hypothesis, especially its implications when read in combination with some passages from Passport to Magonia. Whoever sees this argument as merely destructive or dismissive fails either to grasp the argument or to take it seriously.

And if there is anything at work here it’s my taking the authors I engage—Jacques Vallée, Diana Pasulka/Heath, Jeffrey Kripal, and George Hansen, among others (and, yes, even Avi Loeb…)—seriously. I challenge my critics, anyone who finds the thinking here at Skunkworks merely destructive, facilely dismissive, to find any other reader of these authors’ works as scrupulous or charitable. As I have observed, in some circles these authors can do no wrong; in others, no right. Neither stance does their conjectures, thinking, and labours justice. Only a painstaking, vigilant reading that is dialectical, i.e., one that discerns the truth of their positions while at the same time laying bare the conditions, limits, and troubling implications of that truth, can be said to do their work justice, to take it seriously.

As I responded to the commenter whose remark spurred this post: it’s not easy to tear down—something solidly built. Of course, such wanton demolition is not what we’re up to here at these Skunkworks…

Avi Loeb is the worst

Well, maybe not the worst, but pretty bad. At least his dumbfoundingly vapid clickbait articles for The Debrief serve to maintain if not swell the site’s advertising revenues.

That being said, as any more-or-less regular reader here will know, Loeb has become something of this site’s bête noire. Since he came into the public eye with his hypothesis concerning ‘Oumuamua, his speculations about extraterrestrial life, civilization, and technology have consistently embodied the ideological tendencies targeted here. Since his starting up The Galileo Project and becoming a regular contributor at The Debrief, his pronouncements have become all the more exasperating. However much I resolved to “whack-a-Loeb”, his latest contribution, which deigns to wax philosophical about ethics, has reached such a nadir of intellectual dereliction I’m persuaded (and hope!) this post will be my last on him.

Loeb titles his article “How Can We Guide Our Life?” (which right away sets the linguistically-attuned mind furiously scribbling questions…). Whatever exactly he might intend by this title, I take it his article begins by posing the question, roughly, of what one should live for. He rejects being concerned with one’s posthumous reputation. Asked for his “opinion about the true mark of human greatness”, his response is, in a word, humility. He then shifts his attention from the person to, presumably, the species: “How does humanity wish to be remembered on the cosmic scene?” Loeb’s answer is arguably the same, humility, but on a species-wide scale: humanity is best remembered as possessing an “unpresuming culture that sought knowledge-based [sic] on new evidence from interstellar space,” which, having discovered “that there is a smarter culture on the cosmic block” sought “to do better in the future relative to our cosmic neighbors than we did in the past.”

It doesn’t take too much close scrutiny to find the lapses in Loeb’s logic. In terms of how each of us should lead our lives, Loeb would, on the one hand, have us ignore our personal legacy. For himself, he seems to place little stock in posthumous reputation (he could “care less about what other people say”): those who follow are unlikely to have much insight into the whole truth of his life, nor are they necessarily going to be the most charitable commentators; memorials, such as paintings or statues, communicate “little about…guiding principles or the value of…accomplishments” (assuming that is their raison d’être…). Shifting to a cosmic perspective, even Einstein is put in his place, as “[m]ost likely, there were smarter scientists on habitable planets around other stars billions of years ago”. From this perspective, that of “the vast scale and splendor implicit in the cosmos” wherein “all humans die within ten billionths of cosmic history,” all individual accomplishments shrink to nothing. On the other hand, however, Loeb confesses he guides his life “so as to have an opportunity to press a button on extraterrestrial technological equipment,” to be the one to discover an unquestionable artifact of alien technology. As the one to make this discovery, Loeb would, by his own account, be the one to “force a sense of modesty and awe in all of us” as we discern our place on, at least, “the cosmic block.” Being the one to put us in our interstellar place, Loeb would imaginably take his place with the likes of Copernicus and Galileo, at least as far as human history is concerned, which would be quite the legacy, one to be proud of….

When it comes to how humanity might be “remembered on the cosmic scene,” deeper problems yawn. One might well ask: remembered by whom? Loeb leaves this question unasked and unanswered. Either Homo sapiens will be known by itself or by other forms of extraterrestrial intelligence, which, for Loeb, includes forms of Artificial Intelligence (AI). In the event of our extinction, then, imagines Loeb, “perhaps our technological kids, AI astronauts, will survive,” artifacts best designed to “carry the flame of consciousness” out into distant space and time, and, thereby, into the awareness and memories of other intelligent lifeforms. At the same time, however, Loeb advocates a humility, not only because, as he surmises, other, alien intelligences have exceeded, do exceed, and will exceed our own, but also because, from the point of view of “the vast scale and splendor implicit in the cosmos…all humans die within ten billionths of cosmic history,” extinguishing both their legacy and their narrow-minded, perverse self-importance. From this perspective, whether humankind is proud or humble, whether its traces are one day discovered or not, seems pointless. On the one hand, Loeb posits that humility should guide our life, while, on the other, he personally aspires to a historical greatness (guided by the aspiration to be the one who discovers an indisputable extraterrestrial technological artifact); he suggests humankind should cultivate some kind of memorialized humility or at least leave a more durable legacy in the form of its own technology dispersed throughout the stars, but the vast spatiotemporal scale of the cosmos swallows all such aspirations, reducing them to nothing. Either Loeb’s cosmic ethic must restrict itself to the human scale, which, for Loeb, is merely an arrogant, self-centred point-of-view, or it must view things from a cosmic perspective, which dissolves all possible value in its implacable vastness.

Loeb’s thinking is riddled with such ironies or contradictions. He constantly advocates against being narrow-minded and self-centred, but his entire worldview is oriented to just such a perverse self-regard. His “humanity” is hardly all the human cultures that have lived or do live, but that of the so-called “advanced societies” of what used to be termed “the First World”. This idolatry is evident in the old-fashioned sentiment that “human history advances” and in the technofetishistic fantasy of “our technological kids, AI astronauts” that can act as vessels for “the fire of consciousness.” This squinting focus on the technological is evident in the disdainful yawn he shares with “kids” (presumably students) who pass by the “statues and paintings of distinguished public figures” in University Hall at Harvard University. Paintings and statues, however, aren’t made to communicate the accomplishments of those honoured but to memorialize them because of their accomplishments. Loeb, here, betrays, as usual, an instrumental thinking, one that conceives of everything in terms of ends and means and efficiency (“A video message would [be] far more informative in conveying the authentic perspective of these people from our past”); i.e., Loeb’s stance in this regard, despite his philosophical airs, reveals him to be a rank philistine when it comes to matters of general culture. This narrowness is most egregious at its most unconscious. Loeb relates that he inscribed a “personal copy of [his] book Extraterrestrial to [his] new postdoc… just arrived at Harvard from the University of Cambridge in the UK” as follows: “although you arrived to the Americas well after they were discovered, you are here just in time for discovering extraterrestrial intelligence and its own new world.” The blithe indifference to the fact that Turtle Island was hardly “discovered”, least of all by his post-doc’s European forebears, and his obliquely referring to it as “the new world” betray an unconscious colonialism.

Taken together these stances reveal the direst irony of Loeb’s incessant invectives against narrow-mindedness and self-centredness. As I have never tired of pointing out, whenever Loeb posits older, more intelligent (or, at least, knowledgeable) extraterrestrials, he doesn’t decentre humankind but centres it all the more securely, taking Western instrumental reason to be characteristic of intelligence-as-such and Western, technoscientific society as the instantiation if not the very universal model of civilization. All these presuppositions, prejudices, and unreflected blindspots taken together coalesce into a black hole around which all Loeb’s conjectures about extraterrestrial intelligence, civilization, and technology orbit, a black hole into which I hereby consign all the man’s thoughts that touch on what concerns us here at the Skunkworks.

Whack-a-Loeb: the latest round…

The Debrief has inaugurated a new series of articles, “Our Cosmic Neighbourhood”, and given the first word to Avi Loeb (to whom The Debrief seems a while back to have given carte blanche…). I don’t know if this new series will be penned exclusively by Loeb, but his latest contribution leaves me inclined—if not urged by a sense of duty to maintain speculative sanity (let alone my own…)—to engage in a game of Whack-a-Loeb: what Dr. Loeb publishes at The Debrief I will smack down (as long as I have the patience: life is only so long…).

This missive is titled “Communicating with Extraterrestrials”. After a regrettably naive introductory gambit, about how “the war in Ukraine illustrates how difficult it is for earthlings to communicate with each other even when they share the same planet and communication devices”, Loeb, in characteristic SETI fashion, considers the possibilities that communication between earthlings and extraterrestrials might occur by means of either a technological artifact from perhaps a long-dead civilization or some electromagnetic signal. Regarding that first possibility, Loeb’s favoured, he speculates

A more advanced form of an indirect encounter with a messenger involves an AI system that is sufficiently intelligent to act autonomously based on the blueprint of its manufacturers. Since AI algorithms will be capable of addressing communication challenges among human cultures in the Multiverse, the same might hold in the actual Universe. In that case, we should be able to communicate at ease with a sufficiently advanced form of AI astronauts, because they would know how to map the content they wish to convey to our languages.

It is Loeb’s hope that this “encounter with extraterrestrial AI will be a teaching moment to humanity and lead to a more prosperous future for us all.”

There are, unsurprisingly, a number of problems with Loeb’s speculations, many of which find echo in his earlier Medium article, “Be Kind to Extraterrestrial Guests” I responded to here. In this earlier piece, Loeb seemed to overlook that communicating with an extraterrestrial life form or artifact thereof is, at least, a form of interspecies communication. Likewise, in this, his latest foray on the topic, he overestimates (as usual) the potential for AI (“AI algorithms will be capable of addressing communication challenges among human cultures in the [Metaverse]”) and underestimates the challenges of interspecies communication (“we should be able to communicate at ease with a sufficiently advanced form of AI astronauts, because they would know how to map the content they wish to convey to our languages”).

I don’t doubt the capacity for relatively functional AI translation, but such “translation” can happen only when the parties communicate in stereotypes, what literary theorists long ago termed “the already written” of speech’s inescapable “intertextuality”. That is, translation software does not interpret or understand what the interlocutors actually say, but draws on a vast data bank of the already-written to find the most probable equivalence for any given string. The mindlessness of this procedure can, as a direct consequence of how it works, result in laughable mistranslations.

The root of this problem was posited already in the late Eighteenth and early Nineteenth centuries by the founder of modern hermeneutics, Friedrich Schleiermacher, who observed that language could be characterized in at least two different ways, the grammatical and the technical. By the former, he meant the impersonal rules that underwrite the possibility of any linguistic utterance’s being well-formed and hence not nonsensical in the first place, precisely the rules linguistics can quantify and programmers exploit to develop translation and speech- or text-productive software. However, there is also a creative aspect to speech, the “technical”, that both exceeds the already-written, being novel, and that underwrites the possibility of understanding a novel utterance if not any utterance in general (as a hearer must always make an educated guess not only about exactly what words are spoken but also about how they might be intended). As contemporary philosopher Robert Brandom so eloquently puts it: “What matters about us morally, and so, ultimately, politically is…the capacity of each of us as discursive creatures to say things that no-one else has ever said, things furthermore that would never have been said if we did not say them. It is our capacity to transform the vocabularies in which we live and move and have our being”. The “vocabularies” (the “already said”) that any such creative speech act will exceed and transform will, by the same token, transcend the capacity of the translation AI. The challenge of just such creative language use is especially acute in the case of tone, e.g., irony, which operates precisely in a space shared by the grammatical and technical, i.e., one and the same expression is used to mean its opposite. In this instance, the semiotic model of language-as-pure-syntax (Schleiermacher’s “grammatical” aspect) meets a limit, as Paul de Man so famously demonstrated in the opening chapter of his Allegories of Reading (1979).

If a linguistic AI by its very nature runs up against limits imposed by the nature of language itself, then how much more so an AI produced by another species (as if the very idea were itself unproblematic…)? That is, at least translation software developed by a terrestrial programmer need only “translate” between human languages, but the problem of interspecies translation, as I sketch out in my earlier post on Loeb’s “xenophobic xenia”, is much more difficult (Diana Pasulka’s claims concerning the research of Iya Whiteley notwithstanding). I get the impression the very real challenges to this idea are not recognized by Loeb because of how he seems to conceive of language, writing, as he does, trusting that his imagined “AI astronauts” would be able “to communicate at ease…because they would know how to map the content they wish to convey to our languages.” Many assumptions are packed into this all-too-casual claim. Is it the case, in fact, that a language “maps a content”, for example? Arguably not, such an idea belonging to the conception of language prior to the advent of the science of language, philology, in the mid-Eighteenth century. Moreover, without already possessing a knowledge of a human language, how would that AI find the equivalences for what it wished to convey? Loeb seems to think that natural languages are systems of or for more or less unproblematic representation, when in fact languages are intimately bound up with their pragmatic use in what philosopher Ludwig Wittgenstein famously termed “forms of life.” For this reason, he once remarked, “If a lion could speak, we would not be able to understand him,” a contention all the more applicable to an extraterrestrial organism, let alone its artifact. All of this assumes, of course, that Loeb’s AI astronaut seeks out Homo sapiens rather than some other species of organism it encounters on earth…

There is no little irony in Loeb’s opening his article by telling his reader he “was recently invited to attend an interdisciplinary discussion with linguists and philosophers, coordinated by the Mind Brain Behavior Interfaculty Initiative at Harvard University [that] will revolve around the challenge of communicating with extraterrestrials as portrayed in the film Arrival.” I’m going to assume the film’s plot follows in its main lines that of the short story it is based on. What’s telling is that the film is not about communicating with extraterrestrials. The science-fiction scenario, as compelling as it is (imaginably why it’s adopted to orient the discussion Loeb has been invited to attend), is a literary device to present the plot’s theme, which is amor fati: the language of the aliens enables the protagonist to know in advance that the daughter she will bear will die in childhood; nevertheless, in full knowledge of the pain she will suffer, the protagonist chooses to affirm this fate. Moreover, in making language itself an important theme, the story fulfills a (post)modern imperative for fiction, that it be reflexive, i.e., that it present and probe its own materiality. In a very important way, taking the story at face value is to completely misunderstand it, a sore failing for those who would aspire to understand and communicate with an utterly alien Other.

Avi Loeb’s Artificial Intelligence

I don’t know Avi Loeb’s reputation among his fellow astrophysicists, but when he ventures into the field of more general culture the results are risible.

In Loeb’s latest article for The Debrief, if one weren’t to read too carefully, one might think Loeb is merely arguing that extraplanetary exploration is best accomplished by artificially-intelligent probes rather than human astronauts. However, closer inspection reveals his position is both bolder and more questionable.

He begins his article by posing the question: “Once humanity begins sending its assets to other planets, what should be our goal?”, which he answers as follows:

Fundamentally, there are two choices: A) Use artificial intelligence (AI) astronauts to plant seeds of scientific innovation in other locations, so that intelligence is duplicated and not at risk of extinction. B) Make numerous copies of what nature already produced on Earth.

Apart from the reservations the sensitive reader might have about the wording of the question itself (e.g., the ideological connotation of ‘assets’ and the too-easy equation of ‘humanity’ with some “we”), the logic of the “fundamental choices” is not at all clear. The AI “astronauts” (why the personification? By what warrant?) of A) are not a “goal” but a means to an end (goal), which is to ensure “intelligence is duplicated and not at risk of extinction” (a clause whose rhetoric we’ll return to), while B) is presented as an end (goal) in itself. The brows are already furrowing. Probes have been sent to other planets, moons, asteroids, and comets in the solar system, and human astronauts may well follow in the name of scientific exploration and ultimately the exploitation of these same bodies for raw materials or colonization, but this is not one of Loeb’s “goals” for sending “assets to other planets”. On the one hand, the status quo speaks against there being “fundamentally…two choices”; on the other, it is not at all clear prima facie what Loeb’s two “fundamental” choices are or how they relate to each other.

Choice A) does seem relatively clear: sending AI probes to other planets is a means of proliferating “intelligence” both numerically and spatially to better insulate it from extinction. However, is B) (presumably disseminating “copies of what nature already produced on Earth”) an end in itself or a means to an end, namely a biological analogue to the strategy proposed by A)? If it is an end in itself, who has proposed it? Science fiction has surely imagined human beings colonizing earth-like worlds and transplanting, as it were, earth plants and animals as part of that project. Closer to the realm of real technological speculation, geoengineers have proposed using earth organisms, single-celled and vegetable, to terraform a planet. Both these ideas, however, are means to the exploration and exploitation of offworld locations. Loeb must have something else in mind…

By “what nature already produced on Earth” is Loeb referring to human beings, so that B) is a version of A): by colonizing other planets humanity better ensures its long-term survival? If so, Loeb is thinking like Elon Musk, who has argued that a Mars colony would function precisely to preserve humanity in the event of its extinction on earth. On the one hand, Loeb does seem to equate “what nature already produced on Earth” with Homo sapiens, writing “We are emotionally attracted to B, because we are attached to ourselves”. However, on the other, he continues: “we like who we are and imagine that if we duplicate natural selection in an Earth-like environment something as special as us will result. Of course, natural selection holds no such promise.” Now, a migraine seems in the offing. Who imagines “that if we duplicate natural selection in an Earth-like environment something as special as us will result”? Not our aforementioned geoengineers: terraforming is proposed as a way to render a planet habitable for human beings, not as a way to evolve some humanoid species. Why exactly—to what end—the “asset” we should send to other planets should be “what nature already produced on Earth” remains unclear….

Whatever the precise significance of B) (an imprecision not inconsequential), Loeb is much clearer as to why he is “all in favor of option A for AI.” He submits that

“Choice A…promotes new systems that are more advanced and adaptable to very different environments. If evolution is supervised by AI systems with 3D printers, it could be more efficient at identifying optimal solutions to new challenges that were never encountered before,” and “[AI] has a higher likelihood of survival in the face of natural disasters, such as loss of planetary atmospheres, climate changes, meteorite impacts, evolution of the host star, nearby supernova explosions, or flares from supermassive black holes.”

I’m uncertain by what warrant exactly Loeb claims that any existing or realistically potential AI probe is “more advanced” than a human being or other terrestrial organism, however much such a probe might well be more adaptable (being able to function in an unearthly atmosphere, like that of Mars, for example). Setting aside for the moment what exactly he might mean by “evolution…supervised by AI systems with 3D printers”, Loeb is clearly fudging when he writes said AI “could be more efficient at identifying optimal solutions to new challenges that were never encountered before” (it could just as easily be stumped by these unencountered conditions) and that “[AI] has a higher likelihood of survival in the face of natural disasters, such as loss of planetary atmospheres, climate changes, meteorite impacts, evolution of the host star, nearby supernova explosions, or flares from supermassive black holes.” It’s believable a probe might very well survive the “loss of planetary atmospheres” or “climate changes” (depending upon their respective causes and concomitant conditions) but less believable that it would emerge unscathed from “nearby supernova explosions or flares from supermassive black holes.” That Loeb places such faith in the capacities of some imagined AI betrays a technofetishism that underwrites his whole approach to this and related topics…

However questionable his arguments in support of A), his reasons for dismissing B) are no less problematic. Even his rhetoric is flawed. Where ‘A’ stands for ‘AI’, ‘B’ stands for ‘Barbaric’, as he explains at length:

The second approach is suitably labeled B since it was adopted by barbarian cultures throughout human history. Its brute-force simplicity in making copies of existing systems could lead to dominance by numbers, but its main weakness is that it is vulnerable to new circumstances that previous systems cannot survive. For example, the dinosaurs were not smart enough to use telescopes capable of alerting them to the dangers of giant space rocks like Chicxulub. Also, the ideas offered by Ancient Greek philosophy survived longer than the Roman Empire despite the latter’s military might in conquering new territories.

Before addressing his Wikipedia-derived concept of the “barbarian”, let’s address some of this paragraph’s other claims. If we compare the staying-power of “brute-force simplicity” to complexity with reference to being “vulnerable to new circumstances”, is it the case that complex societies, for example, are more resilient? Arguably not. However much their sophistication might enable them to build killer-asteroid-detecting telescopes, the very complexity of such societies renders them less resilient to “new circumstances”. In all honesty, how does Loeb imagine the next hundred years will play out for the earth’s “advanced societies” in the face of endlessly unfolding and increasingly dire climatic changes (changes both of their own doing and well-known, understood, and, thereby, preventable), let alone the acutely-present threat of nuclear war (I write this in the first week of the Russian Federation’s invasion of Ukraine)? Of course, Loeb is contrasting the “brute simplicity” characteristic of the barbarian to the flexible sophistication of his imagined AI astronauts. What is less contentious is Loeb’s apparent ignorance of European intellectual history. Greek philosophy survived into the Renaissance, when its rare, extant texts were collected, owing, first, to its being preserved in Latin translation and spreading throughout the Roman Empire and, later, Christendom, and, second, to its being translated and commented on by Arabic philosophers, whose translations were then rendered back into Latin. That is to say, the Roman Empire, its conquest of Greece and assimilation of its culture, the Empire’s geographical extent, and its being the foundation for the Christianization of Europe, North Africa, and the Near East, was the material condition for the survival of Greek philosophy down into the present.

If we turn, then, to Loeb’s deployment of ‘barbaric’ even more flaws are revealed. Perhaps motivated by wanting to seem or be user-friendly, Loeb hyperlinks his article to the Wikipedia entry for the term in question, which begins

A barbarian (or savage) is someone who is perceived to be either uncivilized or primitive. The designation is usually applied as a generalization based on a popular stereotype; barbarians can be members of any nation judged by some to be less civilized or orderly (such as a tribal society) but may also be part of a certain “primitive” cultural group (such as nomads) or social class (such as bandits) both within and outside one’s own nation.

‘Barbarian’ is at root a derisory expression, derived from the Greek βάρβαρος, used (at least) for all non-Greek-speaking peoples, because their languages sounded (to Greek ears) like the barking of dogs (“Bar! Bar!”). Hence, even the Wikipedia entry Loeb so helpfully links makes clear in its first sentence that the term is used to denote “someone who is perceived to be either uncivilized or primitive” (my emphasis). Anyone familiar with the peoples the Greeks and Romans so disparaged (Egyptians, Persians, Medes, Celts, etc.) will know they were hardly “uncivilized”; moreover, anyone acquainted with anthropology since, e.g., Franz Boas and Margaret Mead, will know all talk of “primitive” peoples has, thankfully, been tossed in the trash can of history. Given the etymology of the term and the facts of human culture, it is surely eyebrow-raising that a man of Loeb’s education could write about “barbarian cultures throughout human history” (my emphasis) and their “brute-force simplicity” (my emphasis). Indeed, one could ask for no better counter to Loeb’s use of the term than the brute fact that the longest-lived, continuous culture on earth is that of the Australian Aborigine…

If ‘barbarian’ connotes a certain inarticulateness or linguistic clumsiness, then the rhetoric of Loeb’s article is arguably barbaric. Consider, again, Loeb’s two fundamental choices: “A) Use artificial intelligence (AI) astronauts to plant seeds of scientific innovation in other locations, so that intelligence is duplicated and not at risk of extinction. B) Make numerous copies of what nature already produced on Earth.” Notice the tacit (con)fusion of the vocabularies for what the ancient Greek philosophers distinguished as φύσις (physis, nature) and τέχνη (technē, art or craft): AI probes are (figuratively) “astronauts” that “plant seeds” of (artificial) “intelligence”; “nature” produces “copies” of organisms, like (in the next paragraph) “an industrial duplication line” or, imaginably, a 3D printer. Even Loeb’s joke about dinosaurs “not smart enough to use telescopes capable of alerting them to the dangers of giant space rocks”, in his discussion of barbaric and civilized cultures, blurs the line between nature (dinosaurs) and culture (barbarians) for comic effect, but simultaneously dehumanizes the barbarian.

These stylistic moves might be glossed as mere rhetorical ploys (as if Jacques Derrida hadn’t unmasked precisely the apparent innocence of such “mere rhetoric”…), but the same pattern is discernible in what seem more serious propositions. “AI systems could be viewed as our technological kids and a phase in our own Darwinian evolution, as they represent a form of adaption to new worlds beyond Earth,” writes Loeb. “Adopting survival tactics by AI systems in these alien environments might be essential for tailoring sustainable torches that carry our flame of consciousness there.” In all seriousness, a piece of AI can be thought our technological child no more than a chipping tool of our distant, evolutionary ancestors is a child of theirs. Loeb seems to believe that the “intelligence” of an AI is or might be of the same kind as that of human beings or other living organisms, and that this “intelligence” is somehow equivalent to consciousness. That an AI probe, as a technological artifact, is evidence of the instrumental reason necessary to invent and construct it goes without saying, but the identification of AI as (self)conscious intelligence is unwarranted de facto and arguably de jure. Loeb’s technofetishism finds its most extreme expression when he writes that our fundamental choice “is between taking pride in what nature manufactured […] over 4.5 billion years on Earth through unsupervised evolution and natural selection (B) or aspiring to a more intelligent form of supervised evolution elsewhere (A)” (my emphasis). Here, the hubris that inspires plots of unbridled engineering of life on earth finds its expression, the presumption that natural science, as the knowledge of and subsequent power over nature, is sufficient in itself to supervise and guide evolution in “a more intelligent form” than the “unsupervised” process that presumably gave rise to life on earth and shaped its development.

Surely space exploration (and its motivations) and the survival of life, human and otherwise, are serious matters demanding ever more urgently to be addressed, interrogated, and, in their own right, explored. But to rise to the challenge of the world “we” (who?) have made, no less surely do we need to draw on all the powers and inheritance of our species, putting that fetishization of the kind of thinking that underwrites both such speculations and the mortally-urgent dilemmas we have come to face in its proper place.

The Xenophobia of Avi Loeb’s Interstellar Xenia

Anyone struck by the recent announcement of Christopher Mellon’s and Luis Elizondo’s being appointed research affiliates to Avi Loeb’s Galileo Project may have been curious enough to visit the project’s website, where they may have been tempted to read an article linked there, “Be Kind to Extraterrestrial Guests”, by project head Loeb.

Loeb proposes that “we” (who are we? Homo sapiens? Americans? Harvard faculty?…) adopt the classical Greek custom of Xenia, the hospitality extended to strangers as typified in the Homeric epics, except in an expanded, “interstellar”, sense: “Interstellar Xenia implies that we should welcome autonomous visitors, even if they embody hardware with artificial and not natural intelligence, which arrive to our vicinity from far away.” Why? “Our technological civilization could benefit greatly from the knowledge it might garner from such encounters.”

A problem with Loeb’s proposal is evident, first, in the example of mundane hospitality he offers and the exosocial implication he draws from it:

On a recent breezy evening, I noticed an unfamiliar visitor standing in front of my home and asked for his identity. He explained that he used to live in my home half a century ago. I welcomed him to our backyard where he noted that his father buried their cat and placed a tombstone engraved with its name. We went there and found the tombstone….

If we find visitors, they might provide us with a new perspective about the history of our back yard. In so doing, they would bring a deeper meaning to our life within the keen historic friendship that we owe them in our shared space.

Loeb’s anecdote is likely chosen as much for its concreteness and emotional appeal as for whatever features it might be said to share with a hypothetical encounter with ET. That being said, the scenario presents the encounter between Homo sapiens and an extraterrestrial Other as one of immediate (i.e., unproblematic) mutual recognition (like that between Loeb and the “unfamiliar visitor”), which is both telling and fateful.

By what warrant does Loeb assume the unproblematic recognition of or by this Other? Aside from the obvious obstacle that, while Loeb and his visitor, or the stranger and his host in Bronze Age Greece, share the same culture, an interstellar visitor would not, consider the scenario depicted in the science-fiction film Europa Report. A team of astronauts is sent to explore the moon of Jupiter named in the film’s title, where it discovers under the ice a bioluminescent creature resembling an earthly squid or octopus. Does the creature use its bioluminescence to hunt or attract prey in the dark oceans under Europa’s ice, or, being “intelligent”, is it its means of communication? And, if the creature were “intelligent”, how would the human astronauts know, and how would the creature perceive in the astronauts their “intelligence”? Why would the astronauts, rather than, say, their capsule, even be the focus of the creature’s curiosity? Even so shopworn a science-fiction franchise as Star Trek (in Star Trek IV: The Voyage Home) envisioned a technologically-advanced, extraterrestrial species blindly indifferent to all human civilization on earth in the search for its own cetacean kind.

Even if we set aside the specific case of our encountering extraterrestrial intelligent life, the same problem persists. The Galileo Project’s first focus is the search for near-earth “extraterrestrial equipment”, whether a functioning artificially intelligent probe or a piece of detritus. In either case, we must be able to recognize the artifact as an artifact, precisely the point of contention around ‘Oumuamua: was it a natural object or an artificial one, as Loeb et al. argue? Again, science fiction has touched on just this challenge, as the ability to perceive a piece of alien technology as such is pivotal to the plot of Star Trek: The Motion Picture. (Loeb seems more a fan of Carl Sagan’s novel Contact or its film version). The problem becomes even more intractable if we take seriously speculations that the very structures of the cosmos or its laws may be artifacts.

So, whether our interstellar interloper be a piece of technology, “intelligent” or otherwise, or biological, we are a long way from the easy hospitality Loeb was able to offer his visitor, as we may not even know we are in the presence of a visitor, and that stranger may not recognize they are in the presence of a potential host. How is it, then, that Loeb overlooks these grave obstacles to mutual recognition in his advocating Interstellar Xenia? I propose that Loeb, like all those obsessed with, fascinated by, or otherwise inclined to indulge the idea of extraterrestrial, intelligent life, is on the lookout for an anthropomorphic “intelligence”, failing to recognize, at the same time, that encountering an exo-tic, extraterrestrial life form is an instance of interspecies communication.

One needn’t travel to an imagined Europa to discover the grave flaws in Loeb’s perspective. First, restricting “intelligence” to human intelligence in general, or to that teleological, problem-solving, technical intelligence, instrumental reason, in particular, is demonstrably perverse, de facto and de jure. One need only glance at the growing body of research into animal and plant intelligence to see that Homo sapiens already inhabits a planet teeming with intelligent, nonhuman life. Philosophical reflection on the concept of intelligence, too, dissolves the identification of intelligence with human, instrumental reason. Justin E. H. Smith makes this case in a manner both lively and readable, and I encourage interested parties to read it for themselves; here, I attempt to condense his case…. Smith explains

…the only idea we are in fact able to conjure of what intelligent beings elsewhere may be like is one that we extrapolate directly from our idea of our own intelligence. And what’s worse, in this case the scientists are generally no more sophisticated than the folk….

One obstacle to opening up our idea of what might count as intelligence to beings or systems that do not or cannot “pass our tests” is that, with this criterion abandoned, intelligence very quickly comes to look troublingly similar to adaptation, which in turn always seems to threaten tautology. That is, an intelligent arrangement of things would seem simply to be the one that best facilitates the continued existence of the thing in question; so, whatever exists is intelligent….

it may in fact be useful to construe intelligence in just this way: every existing life-form is equally intelligent, because equally well-adapted to the challenges the world throws its way. This sounds audacious, but the only other possible construal of intelligence I can see is the one that makes it out to be “similarity to us”…

Ubiquitous living systems on Earth, that is —plants, fungi, bacteria, and of course animals—, manifest essentially the same capacities of adaptation, of interweaving themselves into the natural environment in order to facilitate their continued existence, that in ourselves we are prepared to recognize as intelligence….

There is in sum no good reason to think that evolutionary “progress” must involve the production of artifices, whether in external tools or in representational art. In fact such productions might just as easily be seen as compensations for a given life form’s inadequacies in facing challenges its environment throws at it. An evolutionally “advanced” life form might well be the one that, being so well adapted, or so well blended into its environment, simply has no need of technology at all.

But such a life form will also be one that has no inclination to display its ability to ace our block-stacking tests or whatever other proxies of intelligence we strain to devise. Such life forms are, I contend, all around us, all the time. Once we convince ourselves this is the situation here on Earth, moreover, the presumption that our first encounter with non-terrestrial life forms will be an encounter with spaceship-steering technologists comes to appear as a risible caricature.

Both fact and reason, then, call into serious question the very intelligibility of Loeb’s imagined, hospitable meeting, for there are no grounds to decide just what organism, extraterrestrial or otherwise, would count as an Other for us to greet (and vice versa: on what grounds would Homo sapiens be picked out of all the other species on earth to be that Other’s Other?). It’s almost as if Loeb has taken his cue from mythology, not only that found in the epic accounts of xenia, but the Biblical Creation story, wherein Man is made in the image of God and given sovereignty over all other creatures, or the myth of Prometheus, who gifts humankind fire or inventive ingenuity. Such a metaphysical idea grants Homo sapiens a special characteristic (“intelligence”), which is then imagined to be possessed by other, similarly “ensouled” and gifted extraterrestrials we hope not merely to encounter but to meet.

This hope, however, is futile, as the only creature that meets the criteria we have set is ourselves. Were the problem grasped in its more thorough-going form, as one of interspecies communication, then we might turn our attention to all those other organisms with whom we share the earth and perhaps reflect on the nature and extent of the hospitality we extend to them and may perhaps be said to owe them. With this thought, the perversity of Loeb’s rationale for extending hospitality to “autonomous visitors, even if they embody hardware with artificial and not natural intelligence” is revealed: “Our technological civilization could benefit greatly from the knowledge it might garner from such encounters.” First, Loeb narrows down civilization to its technology (as if technology were somehow meaningfully abstractable from the society and culture that produce it); then he restricts the interaction to what we, the hosts, might gain (“knowledge”), twisting his central idea of xenia out of all resemblance to the Hellenic custom he invokes, which is characterized in the first instance by the generosity of the host.

Loeb’s vision here is, first, narcissistic (i.e., it sees intelligence only as human intelligence, which he in turn seems to restrict to technical ingenuity, at that) and, second, self-centredly grasping (in conceiving of xenia only in terms of what we, the hosts, have to gain from our guests, “knowledge”). The supreme irony of Loeb’s position is revealed by his insistence that the discovery of a technologically-advanced, extraterrestrial civilization would precipitate a “Copernican revolution” that would disabuse humankind of its delusion that it is the only “intelligent” (and, hence, the most intelligent) species in its galactic neighbourhood, inspiring it to adopt instead a “cosmic modesty”, when in fact Loeb has conceived human instrumental reason as “intelligence” itself, the archetypal standard by which any other organism is determined to be intelligent or not, i.e., his stance is fundamentally anthropocentric. The narcissism of this conception entails that we will only ever be able to greet and extend hospitality to ourselves. Loeb’s stranger is not strange enough….

Faster than a speeding light sail: a note on Avi Loeb’s thesis concerning the artificiality of ‘Oumuamua

In a recent discussion with a friend about Avi Loeb’s hypothesis that the object ‘Oumuamua displayed behaviours consistent with its being an artifact of nonhuman technology, namely a light sail, one problem with the consistency of his thesis struck me.

A root problem with Loeb’s thinking that I have noted at length here is the unproblematic spontaneity of the very idea of nonhuman, extraterrestrial technology of the kind Loeb proposes ‘Oumuamua might be. It’s precisely the way the idea seems unquestionable, even as a speculation, that I argue is a mark of its being ideological and calling for scrutiny. (Interested readers are encouraged to click on the ‘Avi Loeb’ tag to access previous posts on this topic).

However, aside from “merely” philosophical reflection on, if not critique of, Loeb’s thesis, one might propose a problem with its internal consistency. If we suppose ‘Oumuamua to be a light sail, then it must have originated, however long ago, from a relatively advanced extraterrestrial civilization. If said civilization were sufficiently sophisticated to imagine, design, and manufacture a light sail, is it not likely it had at the same time, if not earlier, developed some form of artificial communication employing the electromagnetic spectrum, e.g., radio? If so, then it seems arguable that signals from this civilization, travelling at the speed of light, would have reached earth long in advance of any subluminal light sail. In the same way, long before any light sail or other subluminal spacecraft from earth reaches another solar system, the EM emissions from our communications technology will have arrived there. Therefore, subject to a whole raft of assumptions, admittedly, the arrival of a light sail in our solar system suggests that signals, intentional or otherwise, from its home civilization would have alerted us to that civilization’s existence long before its spacecraft appeared.
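The relative-velocity point can be made concrete with a little arithmetic. The sketch below is a toy calculation, not a model of any actual object: the distance and cruising speed are hypothetical figures chosen only to illustrate how large the signal’s head start becomes at realistically subluminal speeds.

```python
# Toy calculation of the timing argument: an EM signal travels at c, so it
# covers a distance of d light-years in d years; a craft cruising at a
# fraction beta of light speed needs d / beta years for the same trip.

def lead_time_years(distance_ly: float, beta: float) -> float:
    """Years by which an EM signal precedes a craft travelling at beta * c."""
    signal_years = distance_ly        # light-speed signal: d years
    craft_years = distance_ly / beta  # subluminal craft: d / beta years
    return craft_years - signal_years

# Hypothetical source 100 light-years away; a craft at one ten-thousandth
# of light speed (roughly the order of 'Oumuamua's ~26 km/s excess velocity):
print(round(lead_time_years(100, 0.0001)))  # signal leads by ~999,900 years
```

Even under far more generous assumptions, a much faster sail or a much nearer source, the signal’s lead time dwarfs recorded human history, which is what lends the internal-consistency worry its force.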

Just a thought, and one I doubt is original to me. (Nor should the implications of this argument for the Extraterrestrial Hypothesis for the origin of UFOs/UAP be underestimated…).

“It is hard to see how any other outcome is possible”: The Platonism of S.E.T.I.

Regular visitors to these Skunkworks can imagine how our interest was piqued by the headline “Philosopher UFOlogist says humans are not ready to make contact”. The Skunkworks Research Library secured the (self-published) book in question, Adrian Rudnyk’s The Assessment: The Arrival of Extraterrestrials, and a brief notice of it might be forthcoming, but, here, I want to essay the more profound way that the Search for Extraterrestrial Intelligence (SETI) and speculations about intelligent, technologically-advanced extraterrestrial life are more “philosophical” than Rudnyk seems to perceive or than SETI and its collaborators would themselves probably be prepared to admit.

A driving thesis of the critical and creative work here is that the very idea of a technologically-advanced extraterrestrial civilization is ideological, i.e., the form of one society and culture of one species on earth is held up as paradigmatic and natural. So-called “advanced” society (that of the so-called “First World”) imagines itself to be, in Francis Fukuyama’s expression, “the end [the final goal] of history”. This assumption underwrites untroubled speculations about extraterrestrial life, intelligence, and culture: if some life evolves “intelligence” (like that displayed by technologically-advanced terrestrial societies), then that intelligence will likewise develop technologies along lines analogous to the development of earthly technologies, such that it makes sense to speak of these extraterrestrial technologies as being less or more advanced than those possessed by Homo sapiens at a given time. The homogeneity of such development is even thought sufficient to be able to speak intelligibly about technologies hundreds, thousands, and even millions upon millions of years “more advanced”…

A most recent example of this kind of “thinking” is that of Avi Loeb. Loeb is best known among SETI and UFO enthusiasts for proposing and arguing that the first known interstellar object to visit our solar system, 1I/2017 U1 (‘Oumuamua), was in fact an alien artifact, a “technological relic.” Such astroarchaeological artifacts would be valuable to find, study, and reverse engineer, Loeb argues, because “it might be a way of short-cutting into our future because it would take us many years to develop the same technology, so there are lots of benefits that I can imagine for humanity from just finding technological relics in space.” I’ve addressed Loeb’s views here before, both specifically and more generally. Aside from these criticisms, in light of Arik Kershenbaum’s The Zoologist’s Guide to the Galaxy: What Animals of Earth Reveal About Aliens—and Ourselves, I’m prompted to add another, addressed to Loeb’s self-confessed love of philosophy.

Kershenbaum’s book by and large is more level-headed than Loeb’s recent Extraterrestrial: The First Sign of Intelligent Life Beyond Earth, extrapolating, as it does, what we know about the evolution of life on earth to potential life forms on other planets. Such an exercise does not fall prey to ideological blindness the way that Loeb et al. do, as it assumes only that the laws of physics, chemistry, and biochemistry (and, by extension, evolution) hold throughout the galaxy if not known universe. However, when pushed, Kershenbaum can’t help but fall into the same trap as all those who take the idea of technological, extraterrestrial civilizations “seriously”. In a recent interview with the author, Kermit Pattison relates

Kershenbaum predicts that some aliens will exhibit social cooperation, technology and language… He even posits that aliens will share the quality we hold most dear: intelligence. “We all want to believe in intelligent aliens,” he writes. “It seems inevitable that they will, in fact, exist.”

That such a scenario “seems inevitable” reveals that Kershenbaum and those who think like him are no longer engaged in scientific but metaphysical speculation. Indeed, the idea of this inevitability is arguably grounded in Plato’s theory of Forms, which precedes even the term ‘metaphysics’.

Plato’s theory of Forms or Ideas is arguably as much an invention of Plato’s interpreters as of the author of the dialogues himself. That being said, one can all too quickly summarize the theory in its received form as follows:

The world that appears to our senses is in some way defective and filled with error, but there is a more real and perfect realm, populated by entities (called “forms” or “ideas”) that are eternal, changeless, and in some sense paradigmatic for the structure and character of the world presented to our senses.

If we think of these Forms as designs or plans, the temporal connotations of these words suggest just how Kershenbaum’s prediction about extraterrestrial intelligence flowering in technology is in a sense Platonic. It’s as if life were possessed of a potential to develop what we know as STEM (science, technology, engineering, and mathematics) that it might actualize to a greater or lesser degree. Some organisms (e.g., Homo sapiens) fulfill this potential, others (slime mold?) do not, while others “inevitably” actualize it even more than we have. It’s the inevitability of the idea, that we are sure to encounter technologically-advanced extraterrestrial civilizations, that essentializes it. It’s part of the essence (Form, Idea…) of life that it has the potential to develop “intelligence” and subsequently “technology”. Homo sapiens is merely an instantiation of the actualization of this essential potential.

The fetishistic character of this idea that Western civilization is somehow a cosmic norm is revealed all the more starkly when we reflect that the “intelligence” operative in STEM (instrumental, calculative reason) and the technology it produces (and, no less, is, in a sense, produced by) is hardly even the norm among human beings, let alone life on earth. The narrowing down of rationality to technical problem solving is a perversity peculiar to a particular society, very restricted in space and time, ironically one whose own science undercuts and overturns this blinkered, proud self-regard; at the same time, this very science is itself hardly a universal, potential aspect of culture, being but one, and a very new one, among the many no less functional “systems of knowledge” that have enabled groups of Homo sapiens to survive and flourish.

However much Kershenbaum, Loeb, and others might protest, that our science is governed by often all-too-unconscious metaphysical assumptions is well-known to philosophers, among them, surely, Diana W. Pasulka. In her American Cosmic, she invokes Martin Heidegger’s notion of technology in the course of her argument that technology, and that represented by the UFO, has taken on a religious aura in recent history. Heidegger is well-known for (among other things) articulating what he called “the History of Being”, i.e., a particular trajectory of the basic question of ontology, “What is ‘being’?”, from Plato and Aristotle, who first explicitly posed the question, down to himself, who poses and recasts it again as “fundamental ontology” in Being and Time. What the history of Being uncovers is that the question received a definitive answer among the ancient Greeks, one that held sway until Heidegger’s resuscitation of the question and “destruction” of the history of ontology to free the inquiry from its sedimented, guiding assumptions. Plato and Aristotle posited that “being is presence”, an answer to the question that was passed down to Christian and Medieval civilization, and inherited as an unspoken presupposition of what became the natural sciences.

Aside from whether one accepts Heidegger’s history of Being, it is surely ironic that, on the one hand, Kershenbaum invokes the precariously chance-ridden process of evolution to imagine life on other worlds, while remaining somehow blind to the even more aleatoric process that leads to any given culture’s having ended up where it is, while, on the other, Loeb would argue that humankind should be humble because it is not unique! What greater hubris is there than to project one’s own peculiar society as somehow characteristic of life in the cosmos? In this regard, Kershenbaum and Loeb not only unknowingly take up inherited Platonic notions but arguably also, in a parody of the Ptolemaic universe, place this latest, if not last, moment of Western civilization at the centre of, if not the universe, then its workings, an instance of a norm no less universal than the speed of light.

“…news affirming the existence of the Ufos is welcome…”

Of recent developments in the ufological sphere, two stand out to me: the release of a huge cache of CIA documents on UFOs and the prepublication promotion of astronomer Avi Loeb’s new book on ‘Oumuamua and related matters. I was moved to address Loeb’s recent claims (you can hear him interviewed by Ryan Sprague here and hear him speak on the topic last spring here), but, since I have addressed the essential drift of Loeb’s speculations, however curtly, and I’m loath to tax the patience of my readers or my own intellectual energies by rehearsing the driving thesis here at Skunkworks yet again, I want to probe a not unrelated matter, an ingredient of the ufological mix since the earliest days of the modern era.

This post’s title is taken from a longer passage from Carl Jung’s ufological classic Flying Saucers: A Modern Myth of Things Seen in the Skies. In the preface, Jung observes:

In 1954, I wrote an article in the Swiss weekly, Die Weltwoche, in which I expressed myself in a sceptical way, though I spoke with due respect of the serious opinion of a relatively large number of air specialists who believe in the reality of Ufos…. In 1958 this interview was suddenly discovered by the world press and the ‘news’ spread like wildfire from the far West round the Earth to the far East, but—alas—in distorted form. I was quoted as a saucer-believer. I issued a statement to the United Press and gave a true version of my opinion, but this time the wire went dead:  nobody, so far as I know, took any notice of it, except one German newspaper.

The moral of this story is rather interesting. As the behaviour of the press is sort of a Gallup test with reference to world opinion, one must draw the conclusion that news affirming the existence of the Ufos is welcome, but that scepticism seems to be undesirable. To believe that Ufos are real suits the general opinion, whereas disbelief is to be discouraged.

Loeb’s recent experience harmonizes with Jung’s. Loeb recounts, around the 22:00 mark in his interview with Sprague, that when he and his collaborator published their paper arguing for the possible artificial origins of ‘Oumuamua, they experienced a “most surprising thing”: despite not having arranged for any publicity for their paper, it provoked “a huge, viral response from the media…”

There are, of course, myriad reasons for the media phenomenon experienced by both Jung and Loeb. An important aspect of their shared historical horizon, however, suggests the ready, public fascination for the idea of extraterrestrial, technologically-advanced civilizations springs from an urgent source. Jung, famously, however correctly, argued that flying saucers’ appearing in the skies just at the moment the Iron Curtain came down had to do precisely with the new, mortal threat of atomic war: from his psychological perspective, flying saucers were collective, visionary mandalas, whose circular shape made whole, at least to the visionary imagination, what humankind had split asunder in fact. Though we live now after the Cold War, the cognoscenti are quick to remind us the threat of nuclear war remains, a threat joined now by increasingly acute environmental degradation and global warming. There’s a grim synchronicity in Loeb’s book’s appearing hot on the heels of the publication of a widely-publicized paper in the journal Frontiers in Conservation Science titled “Underestimating the Challenges of Avoiding a Ghastly Future.”

Just how do such anxieties arguably underwrite the desire to discover other “advanced” societies? Jung was right, I think, in seeing the appearance of “flying saucers from outer space” as compensating for the worries of his day. Rather than affirming the phenomenon’s dovetailing into his theory of archetypes, however, I would argue that the very idea of UFOs’ being from an advanced, technological civilization, an interpretation put forward spontaneously by the popular, scientific, and military understanding, is a response to the growing concern over the future of the earth’s so-called advanced societies. Such evidence of extraterrestrial intelligence seems to confirm that technology (as we know it) and the kind of intelligence that gives rise to it are not the result of a local, accidental coupling of natural history (evolution) and cultural change (history proper) but that of more universal regularities, echoing, perhaps, however faintly, those cosmically universal natural laws that govern physics and chemistry. That such intelligence and civilizations spring up throughout the stars suggests, furthermore, that they all share the same developmental vector, from the primitive to the advanced, and that, if such regularities hold, then, just as our visitors are more advanced than we are, we, too, like them, might negotiate the mortal threats that face our own civilization, enabling us to reach their heights of knowledge and technological prowess. That we might learn just such lessons from the extraterrestrial civilizations we might contact has been one explicit argument for the Search for Extraterrestrial Intelligence (SETI). The very idea, then, of a technologically-advanced civilization embodies a faith that technology can solve the problems technology produces, a faith whose creed might be said to reword Heidegger’s final, grave pronouncement that “Only a god can save us”, replacing ‘god’ with ‘technology’.
What’s as remarkable as it is unremarked is how this tenet of faith is shared equally by relatively mainstream figures, such as Loeb, Diana W. Pasulka, and SETI researchers, and more outré folk, such as Jason Reza Jorjani, Steven Greer, and Raël/Claude Vorilhon.

Conversely, discovering the traces of extraterrestrial civilizations that have failed to meet the challenges ours faces could prove no less significant, as Loeb himself has proposed: “…we may learn something in the process. We may learn to better behave with each other, not to initiate a nuclear war, or to monitor our planet and make sure that it’s habitable for as long as we can make it habitable.” Aside from the weakness of this speculation, the idea of such failed civilizations is based on the same assumptions as the idea of successful ones, thereby revealing them to be equally ideological (positing a given social order as natural). Imagine all we ever were to discover were extraterrestrial societies that had succumbed to war, environmental destruction, or some other form of self-annihilation. Technological development would then seem to entail its own end. Indeed, that this might very well be the case has been proposed as one explanation for “The Great Silence”, why we have yet to encounter other, extraterrestrial civilizations. We might still cling to the hope that humankind might prove the exception, that it might learn from all these other failures (à la Loeb), or we might adopt a pessimistic fatalism, doing our best despite being convinced we are ultimately doomed. In either case, an advanced technological society modelled after one form of society on earth is projected as unalterable, inescapable, and universal. The pessimistic conception of technological advancement, a blinkered reification of a moment in human cultural history, arguably expresses from a technoscientific angle the sentiment of Fredric Jameson’s famous observation: “It’s easier to imagine the end of the world than the end of capitalism.”

The consequences of this technofetishism are manifold. However much technology is not essentially bound up with capitalism, it is the case that technology as we know it developed under capitalism as a means to increase profit by eliminating labour, a development that has only picked up steam, as it were, with the drive to automation in our present moment. When this march of progress is imagined to be as natural as the precession of the equinoxes, it is uncoupled from the social (class) relations that determine it, reifying the status quo. In this way, popular or uncritical speculations about technologically advanced extraterrestrial societies are arguably politically reactionary. But they are culturally and spiritually impoverishing, too. This failure, willed or otherwise, to grasp our own worldview as contingent legitimates if not drives the liquidation of human cultural difference and of the natural world. Identifying intelligence with one kind of human intelligence, instrumental reason, and narrowing cultural change to technological development within the lines drawn by the self-regarding histories of the “advanced” societies, we murderously reduce the wild variety of intelligence (human and nonhuman alike) and of past, present, and, most importantly, potentially future societies to a dreary “eternal recurrence of the same,” a world not unlike those “imagined” by the Star Trek and Star Wars franchises, wherein the supposedly unimaginable variety of life in the cosmos is reduced to that of a food court.

There are no repeats in space

Avi Loeb, the Harvard professor of astronomy, is at it again. Professor Loeb is most famous of late for his conjectures that the interstellar object ‘Oumuamua might be an alien spaceship. Most recently, remarks he made at The Humans to Mars Summit (14-16 May 2019) concerning the value of the search for extraterrestrial intelligence (SETI) have stirred some interest.

I haven’t had the time to fast forward through the three days’ live streaming to find Professor Loeb’s talk, but the idea of his that caught the attention of at least two journalists (here and here) is that discovering extraterrestrial civilizations that have self-destructed, as ours threatens to do, might help us learn to avoid their fatal mistakes: “The idea is we may learn something in the process. We may learn to better behave with each other, not to initiate a nuclear war, or to monitor our planet and make sure that it’s habitable for as long as we can make it habitable.”

Where to begin?…

In the best of all possible worlds, Loeb and I would have an intellectual cage match on this subject. I have consistently (and with increasing impatience, admittedly) taken to task the assumptions that underwrite Loeb’s views and SETI in general, on the grounds that they are anthropocentric in identifying “intelligence” with human intelligence (an identification with fatal consequences for all those other intelligent life forms with which we share the earth) and, worse, that they reify one civilization’s vector of technical development, namely that of “the West”, as being natural to all imaginable anthropomorphically intelligent life. The Enlightenment is sometimes taken to task for unconsciously restricting the human to white, ruling-class males; SETI’s assumptions seem equally, if not more, perverse.

But Loeb’s statement quoted above reveals the vacuity of his thesis. We don’t need to discover another civilization that ended itself through war, nuclear or otherwise, or by fouling its own nest. We already understand that we need to avoid even a “limited” nuclear war, and we already monitor the habitability of our planet, with increasing scrutiny and anxiety. The only virtue of this aspect of xenoarchaeology would be to discover a civilization that succumbed to an internal threat of which we are unaware. But even letting SETI’s frankly ideological assumptions off the hook, even such a discovery would be empty, since each civilization is determined at every moment by a set of conditions that are in every instance radically local (historical).

My argument here cuts too against those who believe we can learn from history. Such thinking makes of human societies a kind of natural phenomenon subject to transtemporal laws. But human societies are not “natural” in the way the behaviour of the electron is natural; they are historical and, as such, admit of being not known but only understood within the context of a constellation of temporally local and ephemeral determinants. In a word, and to say too much too quickly, human societies operate within the realm of freedom, not (natural) necessity. This is not to say human beings in the aggregate escape or otherwise stand above nature, but only that it is illegitimate to seek to know them the same way we seek knowledge of nonhuman nature.

Nor am I arguing ultimately against the curiosity that drives SETI. What I am relentlessly and mercilessly critical of are the zombie ideas that make of the human being, and our present iteration of civilization, exemplars of all imaginable intelligence throughout the universe.