Because the UFO phenomenon is anomalous, it is a site of mere speculation until it is definitively identified. Speculation is a curious activity, melding conjecture, contemplation, and mirroring (‘speculum’ being Latin for ‘mirror’). Thus, our more or less informed guesses about the nature of the UFO reflect our assumptions about ourselves and the world.
As I’ve often held forth at length here, thoughts about the UFO reveal how we think about ourselves. Talk about the UFO as being an artifact produced by the advanced technology of an extraterrestrial intelligence gives away how we conceive of technology and intelligence in general.
In the first instance, intelligence is reduced to instrumental reason, solving problems to achieve certain ends; technology is understood to progress, to develop along a linear vector toward ever greater efficiency and power. In thinking as closely related or as disparate as the Search for Extraterrestrial Intelligence (SETI) and variations on the ufological Extraterrestrial Hypothesis (ETH), this restricted sense of intelligence is assumed to be a universal product (if not goal) of evolution, and technology, in turn, the inevitable fruit of this intelligence, invariably progressing along the same trajectory.
More grave, however, is the (ironic) theological underpinning of these notions of science and technology. The ultimate end of science is the philosopher’s God, an omniscient, omnipotent being. The omnipotence of technology’s god is underwritten by its omniscience (how fateful that knowledge is here expressed by the Latinate ‘science’!). Philosophy might begin in wonder (as Aristotle had it), but science does not spring from the desire to understand nature but to dominate it (as Francis Bacon proposed).
The head-spinning progress made in this project has inspired as much techno-pessimism as -optimism. The figure of Elon Musk combines these tendencies: on the one hand, he seems persuaded that technological ingenuity might extricate humanity from the dire problems development has engendered, as indicated by his investments in batteries and electric cars; on the other hand, he has equally pushed space exploration and colonization and expressed grave concerns about the potential threats posed by Artificial Intelligence. But in either case, Musk et al. are technofetishists: like those who cast the Golden Calf then prostrate themselves before what they themselves have made, the technofetishist places human technological activity and achievement on a pedestal, as if it were a self-causing, self-sustaining phenomenon, independent of society, its actors and their interests, i.e., as if it were natural. Masking contingent human activity as if it were necessary and natural is the very definition of reification. Such reification is all-too-evident in ufological discourse that orbits ideas of advanced, extraterrestrial civilizations.
At this point I want to introduce a no less bold, complementary speculation: what if technology, despite its historically very recent acceleration, is already nearing its terminus?
This thought is inspired by recent on-line back-and-forths I’ve had with various embodiments of the technofetishist zeitgeist. Widespread among those heavily invested, monetarily and otherwise, in information technology (I.T.) is the belief that artificial intelligence underwritten by quantum computing is a done deal, just waiting around the next historical corner. Aside from the thorny issue of just what concept of intelligence is assumed here (a matter I touch on above), there is the status of quantum computing itself. There is good reason, both physical and mathematical, to think that quantum computing is in principle impossible. (Interested parties are urged to consult these brief articles by Moshe Y. Vardi and Mikhail Dyakonov, and one on Gil Kalai.)
What if, then, the I.T. revolution soon runs up against the physical limits that bring Moore’s Law to an end, the paradoxes of the quantum world prove ultimately insoluble to human intelligence (instrumental reason in its speculative guise), and relativistic spacetime restricts space exploration to subluminal speeds? It hardly follows that science and technology will come to an end, but it is not outside the realm of possibility that human intelligence (instrumental reason) and ingenuity will reach ultimate limits, as some argue they already have in the realm of physics.
The flabbergasted and violent reactions this suggestion might inspire among the technorati and the ufophilic alike speak not so much to its potential truth-value as to the (unconscious) ideological and no less theological character of technofetishism and its ufological variations, SETI and the ETH.
—Philosophy might begin in wonder (as Aristotle had it), but science does not spring from the desire to understand nature but to dominate it (as Francis Bacon proposed).—
Superficially yes, but are we really going to kid ourselves and pretend language is anything but a tool and that the one and only raison d’être of a tool is somehow not to exploit the wielder’s physical environment (regardless of his good, that is “philosophical”, intentions)?
It’s not to say that you’re not right about the techno-supremacist mania, but “the enemy” (and I use the term with tongue firmly in cheek but also with a sense of fatalistic weltschmerz) is not anthropocentrism, it is, in the immortal words of Pogo, us. Each and every one of us.
I must disagree: language is no tool; it’s not something we can take up, use, then set aside. This is not to say there is no such thing as an instrumental use of language, but, even in this not unrelated sense, language is no tool.
And I’m not sure that the “enemy” (does anything I write here, however critical, ever cast things in so bellicose a scheme?) is “us”, unless the human being is at base (metaphysically) will-to-power. That thesis I find too extreme, and how could it even be articulated, except in some Manichean way?
We are surely caught up in something that seems “real”, transcending us and out of our control (“technology”), and it’s just that reality I am wrestling with: on the one hand, not to succumb to a technological determinism (underwritten by however sophisticated a systems theory, for example), while, on the other, to come to terms with how much the nightmare of history surely appears to be out of our control, even once the ideological (in the Marxian sense) veil has been pierced. Given the present and growing environmental devastation, it seems a most urgent matter….
Thanks, nonetheless, for the provocative intervention!
I’ll just toss this out there – is it possible a technological terminus could also result from widespread human rebellion against technology? It’s not an impossible proposition, as the acceleration of technology replaces more and more of us with AI and robotics.
I saw a real-world example four years ago at a former employer, when over 100 case management nurses were put out of work by a system that uses standards-of-care algorithms to evaluate medical cases and make decisions about care, something the nurses once did based on firsthand clinical experience. One software application replaced most of the clinical staff and required only three IT staff to keep it operating smoothly, so the net job loss was pretty high for the size of that organization. The claim that the jobs created by technological advancements will replace the jobs destroyed by them sure looked and smelled like tech-sector bovine excrement to me. In fact, there’s still a remnant of the stench from that experience in my nose.
So I think we must at least consider that as technology advances, the human cost for some of those advances could become too high, leading to a level of push back that nobody currently imagines.
A question I always have in discussions of intelligent alien life (one that’s usually shot down, BTW): isn’t it possible that a highly intelligent alien life form could also be entirely non-technological? That is, the use of technology might not be a universal indicator of intelligence; it’s just a currently used indicator of intelligence for life on Earth. We think we’re smart because we create and use technology, so no species that doesn’t create and use technology could be as smart as or smarter than we are. To me, that seems like some really screwed-up logic.
Anyway. Peace, love, and all that jazz . . .
Hey, purrlgurrl, good to hear from you again!
For another post on this theme I’m drafting for Poeta Doctus, I grabbed an illustration of Frame Breakers from the early 19th century, exactly an example of the kind of popular uprising you imagine here: in that case, the family weavers displaced by the introduction of the power loom in England at the beginning of the Industrial Revolution. Given the (lack of) success of their efforts and those since, I’m pessimistic about an effective resistance to Capital’s drive to increase profits by eliminating labour. The global economy _already_ exists with surplus populations, which seem to be growing all the time. Clearly, this process is not sustainable: even though the economy of the so-called developed world is changing from production/consumption to finance and rents, at the end of the day _somebody_ has to be able to pay for things, services, and rents (tolls, etc.). Hence, folks on both the “left” and the “right” are, for different reasons, pushing for Universal Basic Incomes, which remain an ideological solution, since they never call the base of the problem, Capitalism, into question. But, boy, do I agree: it stinks!
And I think you know how much I agree with your take on intelligence. I can’t imagine good arguments being made that could shoot down your critique of the perverse way we think about the whole matter. I assume you understand my criticisms of reducing intelligence to human intelligence, and that to instrumental reason. Our planet swarms with intelligent life. And our own singular development of one vector of technology (one among infinite possible variations, to follow the metaphor of the vector) may well turn out to be _the least_ intelligent development of our ingenuity. As I remind my students: the longest-lived, continuous civilization on earth, stretching back demonstrably 50,000 years, is that of the Australian Aborigines, and their technology is a stick, alongside a profoundly rich, immaterial culture.
Thanks for adding your valuable input to the discussion. It’s somewhat reassuring when the posts here, cries in the wilderness that they are, get an answer.
I share your hat tip to Australia’s aboriginal population. They have an endlessly fascinating culture from which we who think we’re so much more developed, sophisticated, and civilized can learn much about existing in nature without also harming it.
OI! And rereading your remarks in the cold light of morning (just out of the reach of Dorian), do I understand _an AI was put in charge of making decisions about patient care_?! You talk about human cost, and one such cost rarely remarked on is just such a “black boxing” of decisions: the AI is not accountable in the way human beings are, nor is it capable, surely at present, of the fine-grained decisions seasoned personnel could make. So the _system_ gets out of control and grinds through the patients like so many, well, variables in an algorithm. Talk about the Death of the Subject! Sartre said something about systems working all the better in the absence of human beings…