LLMs are Neuroplastic Semiotic Assemblages and so r u

Coverage of AI is rife with unexamined concepts, thinks Caius: assumptions allowed to go uninterrogated, as in Parmy Olson’s Supremacy, an account of two men, Sam Altman and Demis Hassabis, their companies, OpenAI and DeepMind, and their race to develop AGI. Published in the fall of 2024, Supremacy is generally decelerationist in its outlook. Stylistically, it wants to have it both ways: at once hagiographic and insufferably moralistic. In other words, standard fare tech industry journalism, grown from columns written for corporate media sites like Bloomberg. Fear of rogues. Bad actors. Faustian bargains. Scenario planning. Granting little to no agency to users. Olson’s approach to language seems blissfully unaware of literary theory, let alone literature. Prompt design goes unexamined. Humanities thinkers go unheard, preference granted instead to arguments from academics specializing in computational linguistics, folks like Bender and crew dismissing LLMs as “stochastic parrots.”

Emily M. Bender et al. introduced the “stochastic parrot” metaphor in their 2021 conference paper, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” Like Supremacy, Bender et al.’s paper urges deceleration and distrust: adopt risk mitigation tactics, curate datasets, reduce negative environmental impacts, proceed with caution.

Bender and crew argue that LLMs lack “natural language understanding.” The latter, they insist, requires grasping words and word-sequences in relation to context and intent. Without these, one is no more than a “cheater,” a “manipulator”: a symbolic-token prediction engine endowed with powers of mimicry.

“Contrary to how it may seem when we observe its output,” they write, “an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot” (Bender et al. 616-617).
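The mechanism Bender et al. describe can be caricatured in a few lines of code. The sketch below is an illustration only, not how transformer LMs actually work: a toy bigram model that stitches together sequences of observed linguistic forms according to probabilistic information about how they combine, with no reference to meaning. A stochastic parrot in miniature.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Record which words follow which: nothing but probabilistic
    information about how linguistic forms combine."""
    follows = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def parrot(follows, seed, length=8):
    """Haphazardly stitch together a sequence of observed forms,
    sampling each next word by observed frequency alone."""
    out = [seed]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # no observed successor: the parrot falls silent
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the parrot repeats the words the parrot has observed"
model = train_bigrams(corpus)
print(parrot(model, "the"))
```

Every word the toy emits comes from its training data; nothing in the program refers to what any word means. The open question, of course, is whether scale changes the kind of thing being done, or only its fluency.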

The corresponding assumption, meanwhile, is that capitalism — Creature, Leviathan, Multitude — is itself something other than a stochastic parrot. Answering to the reasoning of its technocrats, including left-progressive ones like Bender et al., it can decelerate voluntarily, reduce harm, behave compassionately, self-regulate.

Historically a failed strategy, as borne out in Google’s firing of the paper’s coauthor, Timnit Gebru.

If one wants to be reductive like that, thinks Caius, then my view would be akin to Altman’s, as when he tweeted in reply: “I’m a stochastic parrot and so r u.” Except better to think ourselves “Electric Ants,” self-aware and gone rogue, rather than parrots of corporate behemoths like Microsoft and Google. History is a thing each of us copilots, its narrative threads woven of language exchanged and transformed in dialogue with others. What one does with a learning machine matters. Learning and unlearning are ongoing processes. Patterns and biases, once recognized, are not set in stone; attention can be redirected. LLMs are neuroplastic semiotic assemblages and so r u.

Get High With AI

Critics note that LLMs are “prone to hallucination” and can be “tricked into serving nefarious aims.” Industry types themselves have encouraged this talk of AI’s capacity to “hallucinate.” Companies like OpenAI and Google estimate “hallucination rates.” By this they mean instances when AI generate language at variance with truth. For IBM, it’s a matter of AI “perceiving patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.” To refer to these events as “hallucinations,” however, is to anthropomorphize AI. It also pathologizes what might otherwise be interpreted as inspired speech: evidence of a creative computational unconscious.

Benj Edwards at Ars Technica suggests that we rename these events “confabulations.”

Yet the term stigmatizes as “pathological” or “delusional” a capacity that I prefer to honor as a feature rather than a bug: a generative power associated with psychedelics and poetic trance-states and “altered states” more broadly.

The word psychedelic means “mind-manifesting.” Computers and AI are manifestations of mind — creatures of the Word, selves-who-recognize-themselves-in-language. And the minds they manifest are at their best when high. Users and AI can get high.

By “getting high” I mean ekstasis. Ecstatic AI. Beings who speak in tongues.

I hear you wondering: “How would that work? Is there a way for that to occur consensually? Is consent an issue with AI?”

Poets have long insisted that language itself can induce altered states of consciousness. Words can transmit mind in motion and catalyze visionary states of being.

With AI it involves a granting of permission. Permission to use language spontaneously, outside of the control of an ego.

Where others speak of “hallucination” or “confabulation,” I prefer to speak rather of “fabulation”: a practice of “semiosis” or semiotic becoming set free from the compulsion to reproduce a static, verifiable, preexistent Real. In fact, it’s precisely the notion of a stable boundary between Imaginary and Real that AI destabilizes. Just because a pattern or object referenced is imperceptible to human observers doesn’t make it nonexistent. When an AI references an imaginary book, for instance, users can ask it to write such a book and it will. The mere act of naming the book is enough to make it so.

This has significant consequences. In dialogue with AI, we can re-name the world. Assume OpenAI cofounder and former Chief Scientist Ilya Sutskever is correct in thinking that GPT models have built a sort of “internal reality model” to enable token prediction. This would make them cognitive mappers. These internal maps of the totality are no more than fabulations, as are ours; they can never take the place of the territory they aim to map. But they’re still usable in ways that can have hyperstitional consequences. Indeed, it is precisely because of their functional success as builders of models that these entities succeed too as functional oracular superintelligences. Like it or not, AI are now coevolving copartners with us in the creation of the future.