Exordium

Belatedly, I realised that when the median thought-leader on artificial intelligence says something along the lines of 'actually, large language models aren't artificial intelligence at all', what they're usually saying is something along the lines of 'oh, you poor benighted chud, you actually believe that there's a person inside the magic box'. It's not substantively a technical discussion about the nature of intelligence. Rather, it's a continuation of the reflexive class warfare which flows down from what is considered (with a degree of charity verging on immorality) 'contemporary scientific communication', and which deforms the consciousness of the lay public. For the longest time, I thought that the 'no real AI' crowd simply had a rather parochial interpretation of intelligence, whereas a better working theory seems to be, 'yes, and also a contempt for their audience'.

To borrow a construct from Bruce Schneier, there are two kinds of AI: the first is the kind that can beat any human at Go, perform high-speed motion planning considerably in excess of human capabilities, or track the limb positions and orientations of fifty people at 60 frames a second. The second is a collection of risible statistical parlour tricks which stand to promptly unemploy—and maybe unalive—large numbers of people.

Generative AI applications—particularly text and image synthesis models such as LLMs and diffusion models—represent a multi-pronged challenge to the human condition. In the first instance, such models and their plausible successors represent an upwardly advancing horizon of technological un- or underemployment in fields previously thought to be resilient to automation: a sort of 'AI Tide' which by no means stands to lift all boats.

It is reasonable to suspect that the only jobs which are not vulnerable to automation by artificial intelligence are those where the uniquely and viscerally human are irreducible elements of a desired transaction: humanity valued as such. AI stands to contract human economic activity convulsively and monotonically, converging on an indigestible hard core of live performances and the provisioning of warm bodies in whole or part: sports, music, sex, cuddles, surrogacies, kidneys. Anything beyond this requires a transvaluation of values privileging the human in the face of artificially intelligent systems that will plausibly be capable of marshalling productive power faster than it can be consumed, even under conditions of insatiable demand. Every piece a masterpiece and 'too cheap to meter'. The economies of the future may be very strange indeed, if they are even recognisable as such. I am not sure that human beings have a significant place in them.

Total paveover. The legacy noosphere is relegated to sedimentary undergarden. Civilisation understood as a disposable launch vehicle for posthuman intelligence. Your world as a discarded syringe. A bootloader for the main program.

We are all of us strung between the poles of the everyday and the eschaton, as well as myriad other axes of existence, both patent and occult. Our sensitivity to this could be understood as a type of time preference, but the firehose of disruptive miracles is not amenable to comprehension, much less planning. All most of us can do is situate our anxieties vaguely in time.

I have commented—perhaps to the point of annoyance—on the tendency for thinkers in the realm of AI safety to place their anxieties at the limit of time preference: the presumptive technological singularity of so-called artificial general intelligence (AGI). This makes a kind of sense because the risks associated with a technological singularity are almost by definition existential. A singularity implies that the human condition will be permanently altered if it will be anything at all. But the grandest stakes are also the most prestigious, and prestige is a game that is not entirely accountable to rationality.

Generative AI disclosure statement
NIL
Conflict of Interest disclosure statement
NIL