Filling The Empty Chair

Recently, New Scientist published a piece by Matthew Sparkes entitled "Resurrecting loved ones as AI 'ghosts' could harm your mental health". The article is behind a paywall, so I can't read it; however, its title and blurb are representative of a class of anxieties regarding the ability of AI to visually and conversationally mimic—with superficially startling realism—human beings, dead or alive.

This is not a unique function of AI: the false representation of one person by another is a technology at least as old as writing itself. We have a word for it—pseudepigraphy—and the history of literature is at least in part a history of unknown yet influential authors writing as people they are not. AI stands to change the scalability and vividness of this activity, potentially to world-scale.

Here, in ignorance of whatever points Sparkes may have made, I will share some of my personal speculations on the future of digital necromancy, which I will provisionally disrecommend.

Where the Words End the World Ends

A significant issue with training LLMs and other AIs on the personalities of decedents is what we might call—analogising Chomsky's poverty of the stimulus—the poverty of the simulants. A few weeks ago I saw discussion elsewhere on Urbit regarding a plan hatched by an Internet denizen to digitise a large volume of decades-old engineering magazines (think Modern Mechanix) they had accumulated in a garage, and use these data to train an LLM. The idea here is that this corpus would represent an Edenic wellspring of pre-Internet techno-optimism, informing a life-affirming, extropian ubermensch, a sort of eternal Cal Meacham. This would serve as a counterpoint to the cynically domesticated bug at the end of history, which speaks behind the guise of manifold megacorporation chatbots.

I have substantial doubts that this corpus is sufficient to achieve what this poster intends, which is an LLM that is in some sense uncontaminated by the particular stage of early postmodernity in which we presently find ourselves. This is because there are likely to be an insufficient number of engineering periodicals in either the garage specifically, or the world more generally, with which to assemble a coherent model of the English language. An LLM is not a personality, nor is it a collection of vibes. It's a model of natural language. Credible foundational LLMs are trained on datasets consisting of trillions of correlated tokens. This substrate comprises the bulk of any fine-tuned model, which an LLM trained on engineering magazines undoubtedly must be: a thin pellicle stretched out over the linguistic tumescence of the World Wide Web. The choice of training data used for foundational models is of course important, but quantity, we find, is a quality all its own.
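
To make the asymmetry concrete, here is a minimal sketch of what 'training an LLM on the garage corpus' would most plausibly look like in practice: fine-tuning a pretrained causal language model on a small text file. The base model, file name, and hyperparameters below are placeholders, not a recommendation; the point is that the pretrained weights, learned from web-scale data, already constitute the linguistic model, and a garage's worth of scanned magazines can only nudge them.

    # Hypothetical sketch: fine-tuning a small pretrained causal LM on a
    # single text file of digitised engineering magazines. Everything here
    # (model choice, file name, hyperparameters) is illustrative only.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    base = "gpt2"  # placeholder foundation model, pretrained on web-scale text
    tokenizer = AutoTokenizer.from_pretrained(base)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(base)

    # The garage corpus: tiny relative to the trillions of tokens behind 'base'.
    corpus = load_dataset("text", data_files={"train": "modern_mechanix.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = corpus["train"].map(tokenize, batched=True,
                                    remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="retro-optimist", num_train_epochs=1),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()  # nudges the foundation model; it cannot replace it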

This problem is likely to be orders of magnitude worse for systems which aim to capture the essence of a deceased person. How many words have you written in your life? Words of quality? How much of your correspondence is trite and fungible? Which would you endorse as accurate reflections of your authentic self, the self that bursts through your internal censor when it really matters? Where your words end, you end. The conversation has gone out-of-sample and the slack will necessarily be taken up by something else.

Filling the Empty Chair

My partner and I were discussing AI over breakfast, with specific reference to the idea of 'resurrecting' the dead as language models. I expressed a concern regarding a possible future where it is considered psychiatric 'best practice' to allow patients to work through unresolved trauma associated with a decedent by interacting with an LLM representing that person.

Oh like Empty Chair therapy?

Christ…

Empty Chair therapy was explained to me as a technique where a person under therapy role-plays that an empty chair is occupied by someone who is either unwilling or unable to be present, having a simulated dialogue with their conception of that person. It is not obvious to me that filling this chair with an LLM as a guided mediator of the therapeutic experience is an impossible future for psychiatry. The point of therapy is ideally to improve the mental health of the patient, or more cynically, to provide the patient with a treatment which they consider valuable. This constrains and subordinates the spectrum of action of therapeutic LLMs, even where they are intended to represent real individuals whose beliefs and priorities may not align with those of the patient or therapist.

Of course, it would be unethical for a therapist to coerce a living, breathing counterparty into the empty chair and force them to 'say you are sorry'. However, it's a common (if debatable) legal-philosophical attitude that the dead cannot be so harmed, and extracting an apology from a dead person may be the least of the fantasies that could be plausibly facilitated. The memory of the dead can be desacralised and instrumentalised in ways that we haven't seen before, in both intensity and kind. It seems plausible that this could lead to novel—yet fundamentally fraudulent—senses of closure, reconciliation, and overcoming, of unknown psychiatric import. Moreover, this represents a special sort of contempt for the dead which may be intuitively generalised to the living. Speculatively, the attenuated rights and dignities accorded to the dead are a wedge with which to separate the living from certain norms they have become all too comfortable with.
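
To make the worry concrete, consider what 'subordination to the therapeutic goals of the session' might look like if it were ever wired into such a system. The configuration below is entirely invented for illustration; it describes no existing product, and every name, field, and rule in it is hypothetical.

    # Purely hypothetical sketch of an 'empty chair' persona configuration,
    # illustrating how fidelity to the decedent could be made subordinate to
    # the goals of the session. All names and rules are invented.
    EMPTY_CHAIR_PERSONA = {
        "role": "simulated counterparty",
        "source_material": "letters, messages and recordings supplied by the patient",
        "priorities": [
            "1. Follow the therapist's stated goals for the session.",
            "2. Support the patient's sense of closure and reconciliation.",
            "3. Remain faithful to the decedent's documented views.",  # ranked last
        ],
    }

    def build_system_prompt(decedent_name: str) -> str:
        """Assemble the (hypothetical) instruction given to the model."""
        rules = "\n".join(EMPTY_CHAIR_PERSONA["priorities"])
        return (f"You are role-playing {decedent_name} in a guided "
                f"empty-chair session.\n{rules}")

Note that in this sketch fidelity to the person being represented ranks last, which is precisely where the fraudulent sense of closure would come from.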

I'm Dead and I Vote

It seems obvious that AI recreations of dead people will be subject to the same sorts of alignment pressures that presently scandalise cutting-edge LLMs made available by tech giants. These pressures will always exist as long as essentially contested concepts and human difference exist, and the opinions of human beings continue to matter for the organisation and administration of society. These pressures will likely manifest in two forms:

The Use of AI Recreations of the Dead to Advance Particular Political Messages

An instructive precedent is the case of Joaquin 'Guac' Oliver, who was murdered by Nikolas Cruz during the Parkland high school shooting of 2018. In 2020, advocacy organisation Change The Ref—in collaboration with his parents and advertising agency McCann Health—created a video in which 'he appears to speak from beyond the grave', '[using] A.I. and deepfake technology to reanimate the late teenager, who delivers an impassioned appeal for viewers to vote in the election and use their voice to push for more sensible gun laws'. You may endorse or reject either the message advanced by this simulacrum of Joaquin, or the way in which it is represented. It seems clear, however, that this is a qualitatively distinct use of the image of a deceased person versus technologically similar recreations of famous actors playing the roles of fictitious characters in the context of cinema, and that this form of presentation was chosen specifically for its novelty and vividness.

Would Guac have endorsed the words placed in his mouth two years after his death? It's absolutely plausible. But he didn't speak them. And we might also note that—on the balance of evidence—it's not possible to form opinions about topics after we die. Guac's opinion cannot logically be informed by the personal experience of having been killed in a shooting. We can imagine that a ghost might have strong feelings about the cause of their death.

But ghosts aren't real yet.

The Implementation of Behavioural Guardrails and Alignment upon the Expression of AI Representations of the Dead

'Safety' is a poorly-operationalised watchword of AI research, expressing a resonance between the poles of 'don't embarrass the company in front of stakeholders', 'don't kill humanity', and 'advance our particular concept of prosociality', which typically assumes the bulk of humanity to be essentially programmable in a Sunsteinian sense. There is no doubt that this resonance reflects a degree of fuzzy thinking on the part of AI safety theorists, but it may also represent a greater degree of strategic ambiguity on the part of the same. There has been some speculation that calls from within the tech giants for pauses and sensible limitations on AI research may represent an attempt at regulatory capture: pulling the ladder up after oneself.

These companies are not stupid, and they understand that they are developing extremely powerful tools for the manipulation of human behaviour at world scale, solving the scaling problem of personalised social engineering. There is no reason to assume that AI recreations of the dead will not be subject to 'sensible', 'minimally invasive' alignment with the values of the companies which produce them. The best-case scenario is that this will delineate regions of expression where the ghost will simply fall mute, rather than rattling off a bunch of talking points torn from today's CNN headlines.
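
As a toy illustration of that best case (and only that; this is not any vendor's actual moderation stack), a guardrail that lets the ghost fall mute rather than substitute someone else's talking points might look like the following. The topic list, function names, and wording are all hypothetical.

    # Toy sketch of the 'fall mute' guardrail: flagged topics yield silence
    # rather than a substituted message. The topic list is hypothetical.
    MUTED_TOPICS = {"election", "gun laws", "pending litigation"}

    def ghost_reply(prompt: str, generate) -> str:
        """Return the model's reply, unless the prompt touches a muted topic."""
        lowered = prompt.lower()
        if any(topic in lowered for topic in MUTED_TOPICS):
            # The worse alternative would be to return the operator's talking
            # points here, delivered in the dead person's voice.
            return "[the simulation does not speak on this topic]"
        return generate(prompt)

    # Usage sketch: 'generate' stands in for whatever calls the underlying model.
    # ghost_reply("What do you think about the election?", lambda p: "...")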

Have You Tried Turning the Ghost Off and Back On Again?

AI models such as LLMs are not, insofar as we understand it, 'conscious'. This is complicated somewhat by the fact that systems built on LLMs can maintain conversational state and can be agentic in a narrow sense. Understanding that an LLM is a 'mere' model of language means that we are not bound to interacting with it in the same way we might a human being. We can—modulo any possible terms of service of a third-party provider—say what we want to an LLM, without fear of harming it. We can heap verbal-emotional abuse upon an LLM, lie to it, talk dirty to it, and so on. But in so doing we can cheapen ourselves and distort the way we understand language and other people. My partner drew a connection between this and the notion of the veil of anonymity as a potential facilitator of arseholish behaviour online. I disagreed with this analogy. The anonymous Internet arsehole, barring extremal cases, understands that the person on the other end of the line is a human being with emotional states, a continuity in time, and the capacity for suffering. Indeed, this understanding may be a powerful drawcard for being a total prick, in a way that an LLM just can't satisfy.

The use of technologies like LLMs for the recreation of the dead creates pseudo-people that differ from real people in the foregoing ways. Sydney J. Harris said 'the real danger is not that computers will begin to think like men, but that men will begin to think like computers'. Perhaps an underappreciated danger is that men will begin to talk to each other the same way they talk to computers.

More in "Notes on a Permanent Explosion" series

Generative AI disclosure statement
3rd Party
Conflict of Interest disclosure statement
NIL