I don’t know Avi Loeb’s reputation among his fellow astrophysicists, but when he ventures into the field of more general culture the results are risible.
In Loeb’s latest article for The Debrief, a hasty reader might conclude that Loeb is merely arguing that extraplanetary exploration is best accomplished by artificially intelligent probes rather than human astronauts. Closer inspection, however, reveals that his position is both bolder and more questionable.
He begins his article by posing the question: “Once humanity begins sending its assets to other planets, what should be our goal?”, which he answers as follows:
Fundamentally, there are two choices: A) Use artificial intelligence (AI) astronauts to plant seeds of scientific innovation in other locations, so that intelligence is duplicated and not at risk of extinction. B) Make numerous copies of what nature already produced on Earth.
Apart from the reservations the sensitive reader might have about the wording of the question itself (e.g., the ideological connotation of ‘assets’ and the too-easy equation of ‘humanity’ with some “we”), the logic of the “fundamental choices” is not at all clear. The AI “astronauts” (why the personification? By what warrant?) of A) are not a “goal” but a means to an end (goal), which is to ensure “intelligence is duplicated and not at risk of extinction” (a clause whose rhetoric we’ll return to), while B) is presented as an end (goal) in itself. The brows are already furrowing. Probes have been sent to other planets, moons, asteroids, and comets in the solar system, and human astronauts may well follow in the name of scientific exploration and, ultimately, the exploitation of these same bodies for raw materials or colonization, but this is not one of Loeb’s “goals” for sending “assets to other planets”. On the one hand, the status quo speaks against there being “fundamentally…two choices”; on the other, it is not at all clear prima facie what Loeb’s two “fundamental” choices are or how they relate to each other.
Choice A) does seem relatively clear: sending AI probes to other planets is a means of proliferating “intelligence” both numerically and spatially to better insulate it from extinction. However, is B) (presumably disseminating “copies of what nature already produced on Earth”) an end in itself or a means to an end, namely a biological analogue to the strategy proposed by A)? If it is an end in itself, who has proposed it? Science fiction has surely imagined human beings colonizing earth-like worlds and transplanting, as it were, earth plants and animals as part of that project. Closer to the realm of real technological speculation, geoengineers have proposed using earth organisms, single-celled and vegetable, to terraform a planet. Both these ideas, however, are means to the exploration and exploitation of offworld locations. Loeb must have something else in mind…
By “what nature already produced on Earth” is Loeb referring to human beings, so that B) is a version of A): by colonizing other planets humanity better ensures its long-term survival? If so, Loeb is thinking like Elon Musk, who has argued that a Mars colony would function precisely to preserve humanity in the event of its extinction on earth. On the one hand, Loeb does seem to equate “what nature already produced on Earth” with Homo sapiens, writing “We are emotionally attracted to B, because we are attached to ourselves”. However, on the other, he continues: “we like who we are and imagine that if we duplicate natural selection in an Earth-like environment something as special as us will result. Of course, natural selection holds no such promise.” Now, a migraine seems in the offing. Who imagines “that if we duplicate natural selection in an Earth-like environment something as special as us will result”? Not our aforementioned geoengineers: terraforming is proposed as a way to render a planet habitable for human beings, not as a way to evolve some humanoid species. Why exactly—to what end—the “asset” we should send to other planets should be “what nature already produced on Earth” remains unclear….
Whatever the precise significance of B) (an imprecision not inconsequential), Loeb is much clearer as to why he is “all in favor of option A for AI.” He submits that
“Choice A…promotes new systems that are more advanced and adaptable to very different environments. If evolution is supervised by AI systems with 3D printers, it could be more efficient at identifying optimal solutions to new challenges that were never encountered before,” and “[AI] has a higher likelihood of survival in the face of natural disasters, such as loss of planetary atmospheres, climate changes, meteorite impacts, evolution of the host star, nearby supernova explosions, or flares from supermassive black holes.”
I’m uncertain by what warrant exactly Loeb claims that any existing or realistically potential AI probe is “more advanced” than a human being or other terrestrial organism, however much such a probe might well be more adaptable (being able to function in an unearthly atmosphere, like that of Mars, for example). Setting aside for the moment what exactly he might mean by “evolution…supervised by AI systems with 3D printers”, Loeb is clearly fudging when he writes said AI “could be more efficient at identifying optimal solutions to new challenges that were never encountered before” (it could just as easily be stumped by these unencountered conditions) and that “[AI] has a higher likelihood of survival in the face of natural disasters, such as loss of planetary atmospheres, climate changes, meteorite impacts, evolution of the host star, nearby supernova explosions, or flares from supermassive black holes.” It’s believable a probe might very well survive the “loss of planetary atmospheres” or “climate changes” (depending upon their respective causes and concomitant conditions), but less believable that it would emerge unscathed from “nearby supernova explosions or flares from supermassive black holes.” That Loeb places such faith in the capacities of some imagined AI betrays a technofetishism that underwrites his whole approach to this and related topics…
However questionable his arguments in support of A), his reasons for dismissing B) are no less problematic. Even his rhetoric is flawed. Where ‘A’ stands for ‘AI’, ‘B’ stands for ‘Barbaric’, as he explains at length:
The second approach is suitably labeled B since it was adopted by barbarian cultures throughout human history. Its brute-force simplicity in making copies of existing systems could lead to dominance by numbers, but its main weakness is that it is vulnerable to new circumstances that previous systems cannot survive. For example, the dinosaurs were not smart enough to use telescopes capable of alerting them to the dangers of giant space rocks like Chicxulub. Also, the ideas offered by Ancient Greek philosophy survived longer than the Roman Empire despite the latter’s military might in conquering new territories.
Before addressing his Wikipedia-derived concept of the “barbarian”, let’s address some of this paragraph’s other claims. If we compare the staying-power of “brute-force simplicity” to that of complexity with reference to being “vulnerable to new circumstances”, is it the case that complex societies, for example, are more resilient? Arguably not. However much their sophistication might enable them to build killer-asteroid-detecting telescopes, the very complexity of such societies renders them less resilient to “new circumstances”. In all honesty, how does Loeb imagine the next hundred years playing out for the earth’s “advanced societies” in the face of endlessly unfolding and increasingly dire climatic changes (changes of their own making, and ones well known, well understood, and thereby preventable), let alone the acutely present threat of nuclear war (I write this in the first week of the Russian Federation’s invasion of Ukraine)? Of course, Loeb is contrasting the “brute simplicity” characteristic of the barbarian to the flexible sophistication of his imagined AI astronauts. What is less contentious is Loeb’s apparent ignorance of European intellectual history. Greek philosophy survived into the Renaissance, when its rare extant texts were collected, because, first, it had been preserved in Latin translation and so spread throughout the Roman Empire and, later, Christendom, and, second, it had been translated and commented on by Arabic philosophers, whose versions were then rendered back into Latin. That is to say, the Roman Empire (its conquest of Greece and assimilation of Greek culture, its geographical extent, and its serving as the foundation for the Christianization of Europe, North Africa, and the Near East) was the material condition for the survival of Greek philosophy down to the present.
If we turn, then, to Loeb’s deployment of ‘barbaric’, even more flaws are revealed. Perhaps wanting to be (or to seem) user-friendly, Loeb hyperlinks the term in his article to its Wikipedia entry, which begins
A barbarian (or savage) is someone who is perceived to be either uncivilized or primitive. The designation is usually applied as a generalization based on a popular stereotype; barbarians can be members of any nation judged by some to be less civilized or orderly (such as a tribal society) but may also be part of a certain “primitive” cultural group (such as nomads) or social class (such as bandits) both within and outside one’s own nation.
‘Barbarian’ is at root a derisory expression, derived from the Greek βάρβαρος, used (at least) for all non-Greek-speaking peoples because their languages sounded (to Greek ears) like the barking of dogs (“Bar! Bar!”). Hence, even the Wikipedia entry Loeb so helpfully links makes clear in its first sentence that the term is used to denote “someone who is perceived to be either uncivilized or primitive” (my emphasis). Anyone familiar with the peoples the Greeks and Romans so disparaged (Egyptians, Persians, Medes, Celts, etc.) will know they were hardly “uncivilized”; moreover, anyone acquainted with anthropology since, e.g., Franz Boas and Margaret Mead, will know all talk of “primitive” peoples has, thankfully, been tossed in the trash can of history. Given the etymology of the term and the facts of human culture, it is surely eyebrow-raising that a man of Loeb’s education could write about “barbarian cultures throughout human history” (my emphasis) and their “brute-force simplicity” (my emphasis). Indeed, one could ask for no better counter to Loeb’s use of the term than the brute fact that the longest-lived continuous culture on earth is that of the Australian Aborigines…
If ‘barbarian’ connotes a certain inarticulateness or linguistic clumsiness, then the rhetoric of Loeb’s article is arguably barbaric. Consider, again, Loeb’s two fundamental choices: “A) Use artificial intelligence (AI) astronauts to plant seeds of scientific innovation in other locations, so that intelligence is duplicated and not at risk of extinction. B) Make numerous copies of what nature already produced on Earth.” Notice the tacit (con)fusion of the vocabularies for what the ancient Greek philosophers distinguished as φύσις (physis, nature) and τέχνη (technē, art or craft): AI probes are (figuratively) “astronauts” that “plant seeds” of (artificial) “intelligence”; “nature” produces “copies” of organisms, like (in the next paragraph) “an industrial duplication line” or, imaginably, a 3D printer. Even Loeb’s joke about dinosaurs “not smart enough to use telescopes capable of alerting them to the dangers of giant space rocks”, in his discussion of barbaric and civilized cultures, blurs the line between nature (dinosaurs) and culture (barbarians) for comic effect, but simultaneously dehumanizes the barbarian.
These stylistic moves might be glossed as mere rhetorical ploys (as if Jacques Derrida hadn’t unmasked precisely the apparent innocence of such “mere rhetoric”…), but the same pattern is discernible in what seem more serious propositions. “AI systems could be viewed as our technological kids and a phase in our own Darwinian evolution, as they represent a form of adaption to new worlds beyond Earth,” writes Loeb. “Adopting survival tactics by AI systems in these alien environments might be essential for tailoring sustainable torches that carry our flame of consciousness there.” In all seriousness, an AI can no more be thought our technological child than a chipped stone tool of our distant evolutionary ancestors can be thought a child of theirs. Loeb seems to believe that the “intelligence” of an AI is or might be of the same kind as that of human beings or other living organisms, and that this “intelligence” is somehow equivalent to consciousness. That an AI probe, as a technological artifact, is evidence of the instrumental reason necessary to invent and construct it goes without saying, but the identification of AI as (self)conscious intelligence is unwarranted de facto and arguably de jure. Loeb’s technofetishism finds its extremest expression when he writes that our fundamental choice “is between taking pride in what nature manufactured […] over 4.5 billion years on Earth through unsupervised evolution and natural selection (B) or aspiring to a more intelligent form of supervised evolution elsewhere (A)” (my emphasis). Here, the hubris that inspires plots of unbridled engineering of life on earth finds its expression: the presumption that natural science, as the knowledge of and subsequent power over nature, is sufficient in itself to supervise and guide evolution in “a more intelligent form” than the “unsupervised” process that presumably gave rise to life on earth and shaped its development.
Surely space exploration (and its motivations) and the survival of life, human and otherwise, are serious matters demanding ever more urgently to be addressed, interrogated, and, in their own right, explored. But to rise to the challenge of the world “we” (who?) have made, no less surely do we need to draw on all the powers and inheritance of our species, putting in its proper place the fetishized thinking that underwrites both such speculations and the mortally urgent dilemmas we have come to face.