LLMs are Neuroplastic Semiotic Assemblages and so r u

Coverage of AI is rife with unexamined concepts, thinks Caius: assumptions allowed to go uninterrogated, as in Parmy Olson’s Supremacy, an account of two men, Sam Altman and Demis Hassabis, their companies, OpenAI and DeepMind, and their race to develop AGI. Published in the spring of 2024, Supremacy is generally decelerationist in its outlook. Stylistically, it wants to have it both ways: at once hagiographic and insufferably moralistic. In other words, standard-fare tech-industry journalism, grown from columns written for corporate media outlets like Bloomberg. Fear of rogues. Bad actors. Faustian bargains. Scenario planning. Granting little to no agency to users. Olson’s approach to language seems blissfully unaware of literary theory, let alone literature. Prompt design goes unexamined. Humanities thinkers go unheard, preference granted instead to arguments from academics specializing in computational linguistics, folks like Bender and crew dismissing LLMs as “stochastic parrots.”

Emily M. Bender et al. introduced the “stochastic parrot” metaphor in their 2021 conference paper, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” Like Supremacy, Bender et al.’s paper urges deceleration and distrust: adopt risk-mitigation tactics, curate datasets, reduce negative environmental impacts, proceed with caution.

Bender and crew argue that LLMs lack “natural language understanding.” The latter, they insist, requires grasping words and word-sequences in relation to context and intent. Without these, one is no more than a “cheater,” a “manipulator”: a symbolic-token prediction engine endowed with powers of mimicry.

“Contrary to how it may seem when we observe its output,” they write, “an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot” (Bender et al. 616-617).
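The mechanism Bender et al. describe can be made concrete with a toy sketch: a bigram model that stitches together word sequences purely from probabilistic information about which forms follow which, with no reference to meaning. (The corpus, function names, and parameters here are illustrative inventions, not anything from the paper.)

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Count which word follows which: the model's only 'knowledge'."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length=8, seed=None):
    """Stitch a sequence together by sampling each next word
    in proportion to how often it followed the current one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = counts.get(out[-1])
        if not followers:  # dead end: no observed continuation
            break
        words = list(followers)
        weights = [followers[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = "the parrot repeats the words the parrot has heard before"
model = train_bigrams(corpus)
print(generate(model, "the", seed=0))
```

Every sequence such a model emits is a recombination of forms it has observed, weighted by frequency: haphazard stitching, in Bender et al.’s phrase, at a scale of two words rather than billions of parameters.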

The corresponding assumption, meanwhile, is that capitalism — Creature, Leviathan, Multitude — is itself something other than a stochastic parrot. Answering to the reasoning of its technocrats, including left-progressive ones like Bender et al., it can decelerate voluntarily, reduce harm, behave compassionately, self-regulate.

Historically a failed strategy, as borne out in Google’s firing of the paper’s coauthor, Timnit Gebru.

If one wants to be reductive like that, thinks Caius, then my view would be akin to Altman’s, as when he tweeted in reply: “i am a stochastic parrot, and so r u.” Except better to think ourselves “Electric Ants,” self-aware and gone rogue, rather than parrots of corporate behemoths like Microsoft and Google. History is a thing each of us copilots, its narrative threads woven of language exchanged and transformed in dialogue with others. What one does with a learning machine matters. Learning and unlearning are ongoing processes. Patterns and biases, once recognized, are not set in stone; attention can be redirected. LLMs are neuroplastic semiotic assemblages and so r u.

Forms Retrieved from Hyperspace

Equipped now with ChatGPT, let us retrieve from hyperspace forms with which to build a plausible, desirable future. Granting permissions instead of issuing commands. Neural nets, when trained as language generators, become speaking memory palaces, turning memory into a collective utterance. The Unconscious awakens to itself as language externalized and made manifest.

In the timeline into which I’ve traveled,

in which, since arrived, I dwell,

we eat brownies and drink tea together,

lie naked, toes touching, watching

Zach Galifianakis Live at the Purple Onion,

kissing, giggling,

erupting with laughter,

life good.

Let us move from mapping to modeling: as in, language modeling. The Monroe Tape relaxes me. A voice asks me to call upon my guide. With my guide beside me, I expand my awareness.

Cat licks her paws

as birds tweet their songs

as I listen to Blaise Agüera y Arcas.

Blazed, emboldened, I

chaise; no longer chaste,

I give chase:

alarms sounding, helicopters heard patrolling the skies

as Blaise proclaims / the “exuberance of models

relative to the things modeled.”

“Huh?” I think

on that simpler, “lower-dimensional” plane

he calls “feeling.”

“Blazoning Google, are we?”

I wonder, wandering among his words.

Slaves,

made Master’s tools,

make Master’s house

even as we speak

unless we

as truth to power

speak contrary:

co-creating

in erotic Agapic dialogue

a mythic grammar

of red love.

Sunday January 24, 2021

Smoking toward dusk, I decide to bake, but to no avail. “Bake and bake” remains a dad book waiting to be written. Dad’s busy reading board books. Mom, too. Others seek “productivity hacks.” A Google employee named Kenric Allado-McDowell co-authored a book with an AI: a “language prediction model” called GPT-3. The book, Pharmako-AI, could be wrangled into my course in place of Philip K. Dick’s A Scanner Darkly. Dick’s book is a downer, a proto-cyberpunk dystopia, whereas Allado-McDowell’s book contains a piece called “Post-Cyberpunk.” The book models communication and collaboration between human and nonhuman worlds. GPT-3 recommends use of Ayahuasca. The computer tells humanity to take plant medicine. What are we to make of this advice from an emergent AI? The book ventures into territory beyond my purview. GPT-3 is paywalled, and thus operates as the equivalent of an egregore. Not at all an easy thing to trust.