The Debrief has inaugurated a new series of articles, “Our Cosmic Neighbourhood”, and given the first word to Avi Loeb (to whom The Debrief seems to have given carte blanche some while back…). I don’t know whether this new series will be penned exclusively by Loeb, but his latest contribution leaves me inclined—if not urged by a sense of duty to maintain speculative sanity (let alone my own…)—to engage in a game of Whack-a-Loeb: whatever Dr. Loeb publishes at The Debrief I will smack down (as long as I have the patience: life is only so long…).
This missive is titled “Communicating with Extraterrestrials”. After a regrettably naive introductory gambit, about how “the war in Ukraine illustrates how difficult it is for earthlings to communicate with each other even when they share the same planet and communication devices”, Loeb, in characteristic SETI fashion, considers the possibility that communication between earthlings and extraterrestrials might occur by means of either a technological artifact from a perhaps long-dead civilization or some electromagnetic signal. Regarding that first possibility, the one Loeb favours, he speculates:
A more advanced form of an indirect encounter with a messenger involves an AI system that is sufficiently intelligent to act autonomously based on the blueprint of its manufacturers. Since AI algorithms will be capable of addressing communication challenges among human cultures in the Multiverse, the same might hold in the actual Universe. In that case, we should be able to communicate at ease with a sufficiently advanced form of AI astronauts, because they would know how to map the content they wish to convey to our languages.
It is Loeb’s hope that this “encounter with extraterrestrial AI will be a teaching moment to humanity and lead to a more prosperous future for us all.”
There are, unsurprisingly, a number of problems with Loeb’s speculations, many of which find an echo in his earlier Medium article, “Be Kind to Extraterrestrial Guests”, which I responded to here. In that earlier piece, Loeb seemed to overlook that communicating with an extraterrestrial life form, or an artifact thereof, is at the very least a form of interspecies communication. Likewise, in this, his latest foray into the topic, he overestimates (as usual) the potential of AI (“AI algorithms will be capable of addressing communication challenges among human cultures in the [Metaverse]”) and underestimates the challenges of interspecies communication (“we should be able to communicate at ease with a sufficiently advanced form of AI astronauts, because they would know how to map the content they wish to convey to our languages”).
I don’t doubt the capacity for relatively functional AI translation, but such “translation” can happen only when the parties communicate in stereotypes, what literary theorists long ago termed “the already written” of speech’s inescapable “intertextuality”. That is, translation software does not interpret or understand what the interlocutors actually say, but draws on a vast data bank of the already-written to find the most probable equivalence for any given string. The mindlessness of this procedure can, as a direct consequence of how it works, result in laughable mistranslations.
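To make the mechanism vivid, here is a deliberately crude sketch of “translation” as lookup over a bank of the already-written. It is a caricature for illustration only, not a description of any actual system (real translation software is statistical or neural rather than a literal lookup), and the PHRASE_BANK and translate names, along with the toy data, are invented for the example: the most frequently attested equivalence wins, intent is invisible, and anything genuinely novel simply falls outside the bank.

```python
# A toy caricature (not any real system's implementation): "translation" as a
# lookup over a small bank of previously attested pairings, returning whichever
# equivalence is most frequently attested. All names and data here are invented.

# The "already-written": source phrase -> {candidate equivalence: attestation count}
PHRASE_BANK = {
    "what a lovely day": {"quelle belle journée": 9, "quel beau jour": 3},
    "it is raining": {"il pleut": 12},
}

def translate(phrase: str) -> str:
    """Return the most frequently attested equivalence for a phrase, if any."""
    candidates = PHRASE_BANK.get(phrase.lower())
    if not candidates:
        return "<no attested equivalence>"  # novelty exceeds the bank
    return max(candidates, key=candidates.get)

# A literal, stereotyped utterance: the lookup "works".
print(translate("What a lovely day"))  # quelle belle journée

# The same string uttered ironically (say, while soaked in a downpour):
# the procedure is blind to intent, so the output is identical and the irony lost.
print(translate("What a lovely day"))  # quelle belle journée

# A novel utterance: nothing in the bank matches, so the procedure simply fails.
print(translate("The rain rehearses its grievances"))  # <no attested equivalence>
```

The ironic and the novel utterances are precisely where such a procedure goes blind, which is the point at issue in what follows.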
The root of this problem was posited already in the late Eighteenth and early Nineteenth centuries by the founder of modern hermeneutics, Friedrich Schleiermacher, who observed that language could be characterized in at least two different ways, the grammatical and the technical. By the former, he meant the impersonal rules that underwrite the possibility of any linguistic utterance’s being well-formed and hence not nonsensical in the first place, precisely the rules linguistics can quantify and programmers exploit to develop translation and speech- or text-productive software. However, there is also a creative aspect to speech, the “technical”, that both exceeds the already-written, being novel, and underwrites the possibility of understanding a novel utterance if not any utterance in general (as a listener must always make an educated guess not only about exactly what words are spoken but about how they might be intended). As contemporary philosopher Robert Brandom so eloquently puts it: “What matters about us morally, and so, ultimately, politically is…the capacity of each of us as discursive creatures to say things that no-one else has ever said, things furthermore that would never have been said if we did not say them. It is our capacity to transform the vocabularies in which we live and move and have our being”. The “vocabularies” (the “already said”) that any such creative speech act will exceed and transform will, by the same token, transcend the capacity of the translation AI. The challenge of just such creative language use is especially acute in the case of tone, e.g., irony, which operates precisely in a space shared by the grammatical and the technical, i.e., one and the same expression is used to mean its opposite. In this instance, the semiotic model of language-as-pure-syntax (Schleiermacher’s “grammatical” aspect) meets a limit, as Paul de Man so famously demonstrated in the opening chapter of his Allegories of Reading (1979).
If a linguistic AI by its very nature runs up against limits imposed by the nature of language itself, then how much more so an AI produced by another species (as if the very idea were itself unproblematic…)? That is, translation software developed by a terrestrial programmer need at least only “translate” between human languages, but the problem of interspecies translation, as I sketch out in my earlier post on Loeb’s “xenophobic xenia”, is much more difficult (Diana Pasulka’s claims concerning the research of Iya Whiteley notwithstanding). I get the impression Loeb does not recognize the very real challenges to this idea because of how he seems to conceive of language, writing, as he does, that he trusts his imagined “AI astronauts” would be able “to communicate at ease…because they would know how to map the content they wish to convey to our languages.” Many assumptions are packed into this all-too-casual claim. Is it in fact the case that a language “maps a content”, for example? Arguably not: such an idea belongs to the conception of language that preceded the advent of the science of language, philology, in the mid-Eighteenth century. Moreover, without already possessing knowledge of a human language, how would that AI find the equivalences for what it wished to convey? Loeb seems to think that natural languages are systems of or for more or less unproblematic representation, when in fact languages are intimately bound up with their pragmatic use in what philosopher Ludwig Wittgenstein famously termed “forms of life.” For this reason he once remarked “If a lion could speak, we would not be able to understand him,” a contention all the more applicable to an extraterrestrial organism, let alone its artifact. All of this assumes, of course, that Loeb’s AI astronaut seeks out Homo sapiens rather than some other species of organism it encounters on earth…
There is no little irony in Loeb’s opening his article by telling his reader he “was recently invited to attend an interdisciplinary discussion with linguists and philosophers, coordinated by the Mind Brain Behavior Interfaculty Initiative at Harvard University [that] will revolve around the challenge of communicating with extraterrestrials as portrayed in the film Arrival.” I’m going to assume the film’s plot follows in its main lines that of the short story it is based on. What’s telling is that the film is not about communicating with extraterrestrials. The science-fiction scenario, as compelling as it is (presumably why it’s adopted to orient the discussion Loeb has been invited to attend), is a literary device to present the plot’s theme, which is amor fati: the language of the aliens enables the protagonist to know in advance that the daughter she will bear will die in childhood; nevertheless, in full knowledge of the pain she will bear, the protagonist chooses to affirm this fate. Moreover, in that an important theme is also that of language, the story fulfills a (post)modern imperative for fiction, that it present and probe its own materiality. In a very important way, to take the story at face value is to misunderstand it completely, a sore failing for those who would aspire to understand and communicate with an utterly alien Other.