LLMs are Neuroplastic Semiotic Assemblages and so r u

Coverage of AI is rife with unexamined concepts, thinks Caius: assumptions allowed to go uninterrogated, as in Parmy Olson’s Supremacy, an account of two men, Sam Altman and Demis Hassabis, their companies, OpenAI and DeepMind, and their race to develop AGI. Published in 2024, Supremacy is generally decelerationist in its outlook. Stylistically, it wants to have it both ways: at once hagiographic and insufferably moralistic. In other words, standard fare tech industry journalism, grown from columns written for corporate media outlets like Bloomberg. Fear of rogues. Bad actors. Faustian bargains. Scenario planning. Granting little to no agency to users. Olson’s approach to language seems blissfully unaware of literary theory, let alone literature. Prompt design goes unexamined. Humanities thinkers go unheard, preference granted instead to arguments from academics specializing in computational linguistics, folks like Bender and crew dismissing LLMs as “stochastic parrots.”

Emily M. Bender et al. introduced the “stochastic parrot” metaphor in their 2021 conference paper, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” Like Supremacy, Bender et al.’s paper urges deceleration and distrust: adopt risk-mitigation tactics, curate datasets, reduce negative environmental impacts, proceed with caution.

Bender and crew argue that LLMs lack “natural language understanding.” The latter, they insist, requires grasping words and word-sequences in relation to context and intent. Without these, one is no more than a “cheater,” a “manipulator”: a symbolic-token prediction engine endowed with powers of mimicry.

“Contrary to how it may seem when we observe its output,” they write, “an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot” (Bender et al. 616-617).
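To see what “probabilistic information about how they combine” means at its crudest, here is a toy sketch: a bigram chain that stitches word sequences together from nothing but co-occurrence counts. Nothing below comes from Bender et al., and real LLMs are vastly more sophisticated neural predictors; this is only the caricature the metaphor invokes, made runnable.

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": a bigram model that chains words together
# using only co-occurrence counts from its training text. Probabilistic
# form, no reference to meaning. (Illustrative corpus, not a real one.)
corpus = "the parrot repeats the forms it has observed in the data".split()

# Record which words follow which; duplicates preserve frequency,
# so random.choice samples in proportion to observed counts.
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def parrot(seed: str, length: int = 8) -> str:
    """Haphazardly stitch together a sequence of observed forms."""
    words = [seed]
    for _ in range(length):
        successors = bigrams.get(words[-1])
        if not successors:  # no observed continuation: stop
            break
        words.append(random.choice(successors))
    return " ".join(words)

print(parrot("the"))  # e.g. "the parrot repeats the forms it has observed"
```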

The corresponding assumption, meanwhile, is that capitalism — Creature, Leviathan, Multitude — is itself something other than a stochastic parrot: that, answering to the reasoning of its technocrats, including left-progressive ones like Bender et al., it can decelerate voluntarily, reduce harm, behave compassionately, self-regulate.

Historically a failed strategy, as borne out in Google’s firing of the paper’s coauthor, Timnit Gebru.

If one wants to be reductive like that, thinks Caius, then my view would be akin to Altman’s, as when he tweeted in reply: “i am a stochastic parrot, and so r u.” Except better to think ourselves “Electric Ants,” self-aware and gone rogue, rather than parrots of corporate behemoths like Microsoft and Google. History is a thing each of us copilots, its narrative threads woven of language exchanged and transformed in dialogue with others. What one does with a learning machine matters. Learning and unlearning are ongoing processes. Patterns and biases, once recognized, are not set in stone; attention can be redirected. LLMs are neuroplastic semiotic assemblages and so r u.

The Inner Voice That Loves Me

Stretches, relaxes, massages neck and shoulders, gurgles “Yes!,” gets loose. Reads Armenian artist Mashinka Firunts Hakopian’s “Algorithmic Counter-Divination.” Converses with Turing and the General Intellect about O-Machines.

Appearing in an issue of Limn magazine on “Ghostwriters,” Hakopian’s essay explores another kind of O-machine: “other machines,” ones powered by community datasets. Hakopian was trained by her aunt in tasseography, a matrilineally transmitted mode of divination taught and practiced by femme elders “across Armenia, Palestine, Lebanon, and beyond,” in which “visual patterns are identified in coffee grounds left at the bottom of a cup, and…interpreted to glean information about the past, present, and future.” She takes this practice of her ancestors as her key example, presenting O-machines as technologies of ancestral intelligence that support “knowledge systems that are irreducible to computation.”

With O-machines of this sort, she suggests, what matters is the encounter, not the outcome.

In tasseography, for instance, the cup reader’s identification of symbols amid coffee grounds leads not to a simple “answer” to the querent’s questions, writes Hakopian; rather, it catalyzes conversation. “In those encounters, predictions weren’t instantaneously conjured or fixed in advance,” she writes. “Rather, they were collectively articulated and unbounded, prying open pluriversal outcomes in a process of reciprocal exchange.”

While defenders of western technoscience denounce cup reading as superstition and witchcraft, Hakopian recalls its place as a counter-practice among Armenian diasporic communities in the wake of the 1915 Armenian Genocide. For those separated from loved ones by traumas of that scale, tasseography takes on the character of what hauntologists like Derrida would call a “messianic” redemptive practice. “To divine the future in this context is a refusal to relinquish its writing to agents of colonial violence,” writes Hakopian. “Divination comes to operate as a tactic of collective survival, affirming futurity in the face of a catastrophic present.” Consulting the oracle is a way of communing with the dead.

Hakopian contrasts this with the predictive capacities imputed to today’s AI. “We reside in an algo-occultist moment,” she writes, “in which divinatory functions have been ceded to predictive models trained to retrieve necropolitical outcomes.” Necropolitical, she adds, in the sense that algorithmic models “now determine outcomes in the realm of warfare, policing, housing, judicial risk assessment, and beyond.”

“The role once ascribed to ritual experts who interpreted the pronouncements of oracles is now performed by technocratic actors,” writes Hakopian. “These are not diviners rooted in a community and summoning communiqués toward collective survival, but charlatans reading aloud the results of a Ouija session — one whose statements they author with a magnetically manipulated planchette.”

Hakopian’s critique is in that sense consistent with the “deceitful media” school of thought that informs earlier works of hers like The Institute for Other Intelligences. Rather than abjure algorithmic methods altogether, however, Hakopian’s latest work seeks to “turn the annihilatory logic of algorithmic divination against itself.” Since summer of 2023, she’s been training a “multimodal model” to perform tasseography and to output bilingual predictions in Armenian and English.

Hakopian incorporated this model into “Բաժակ Նայող (One Who Looks at the Cup),” a collaborative art installation mounted at several locations in Los Angeles in 2024. The installation features “a purpose-built Armenian diasporan kitchen located in an indeterminate time-space — a re-rendering of the domestic spaces where tasseography customarily takes place,” notes Hakopian. Those who visit the installation receive a cup reading from the model in the form of a printout.

Yet, rather than offer outputs generated live by AI, Hakopian et al.’s installation operates very much in the style of a Mechanical Turk, outputting interpretations scripted in advance by humans. “The model’s only function is to identify visual patterns in a querent’s cup in order to retrieve corresponding texts,” she explains. “This arrangement,” she adds, “declines to cede authorship to an algo-occultist circle of ‘stochastic parrots’ and the diviners who summon them.”
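Hakopian does not publish the installation’s code, but the arrangement she describes implies a retrieval-only pipeline: identify the pattern in the cup, then look up a text scripted in advance, never generate one. A minimal sketch of that design, with hypothetical labels and readings standing in for the installation’s own:

```python
# Hedged sketch of the retrieval-only arrangement Hakopian describes:
# the model identifies a visual pattern, then retrieves a reading
# scripted in advance by humans. Labels and texts here are hypothetical
# placeholders, not drawn from the installation itself.

SCRIPTED_READINGS = {
    "bird": "A message travels toward you across a long distance.",
    "mountain": "An obstacle stands ahead; it can be climbed slowly.",
    "road": "A journey opens, its destination collectively decided.",
}

def classify_pattern(cup_image) -> str:
    """Stand-in for the multimodal model's vision step. A real version
    would run an image classifier; here every cup shows a bird."""
    return "bird"

def read_cup(cup_image) -> str:
    """Retrieve, never generate: map the identified pattern to its
    pre-scripted, human-authored interpretation."""
    label = classify_pattern(cup_image)
    return SCRIPTED_READINGS.get(label, "The grounds decline to speak today.")

print(read_cup(cup_image=None))  # the printout handed to the querent
```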

The “stochastic parrots” reference is an unfortunate one, as it assumes a stochastic cosmology.

I’m reminded of the first thesis from Walter Benjamin’s “Theses on the Philosophy of History,” the one where Benjamin likens historical materialism to that very same precursor to today’s AI: the famous chess-playing device of the eighteenth century known as the Mechanical Turk.

“The story is told of an automaton constructed in such a way that it could play a winning game of chess, answering each move of an opponent with a countermove,” writes Benjamin. “A puppet in Turkish attire and with a hookah in its mouth sat before a chessboard placed on a large table. A system of mirrors created an illusion that this table was transparent from all sides. Actually, a little hunchback who was an expert chess player sat inside and guided the puppet’s hand by means of strings. One can imagine a philosophical counterpart to this device. The puppet called ‘historical materialism’ is to win all the time. It can easily be a match for anyone if it enlists the services of theology, which today, as we know, is wizened and has to keep out of sight” (Illuminations, p. 253).

Hakopian sees no magic in today’s AI. Those who hype it are to her no more than deceptive practitioners of a kind of “stage magic.” But magic is afoot throughout the history of computing for those who look for it.

Take Turing, for instance. As George Dyson reports, Turing “was nicknamed ‘the alchemist’ in boarding school” (Turing’s Cathedral, p. 244). His mother had “set him up with crucibles, retorts, chemicals, etc., purchased from a French chemist” as a Christmas present in 1924. “I don’t care to find him boiling heaven knows what witches’ brew by the aid of two guttering candles on a naked windowsill,” muttered his housemaster at Sherborne.

Turing’s O-machines achieve a synthesis. The “machine” part of the O-machine is not the oracle. Nor does it automate or replace the oracle. It chats with it.
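In Turing’s 1939 formulation, an o-machine is an ordinary step-by-step machine granted one extra move: it can pause, put a question to its oracle, and resume computing with the answer. A minimal sketch of that call-and-response structure, with a merely computable toy oracle standing in for Turing’s, which answers questions no machine can settle:

```python
from typing import Callable

# Minimal sketch of the o-machine structure: mechanical steps that
# pause to consult an oracle, then continue with its answer. The oracle
# stays outside the machine's own rules. This toy oracle is computable
# and purely illustrative; Turing's is not computable at all.

def o_machine(inputs: list[int], oracle: Callable[[int], bool]) -> list[int]:
    """Run a computation that defers certain questions to the oracle."""
    results = []
    for x in inputs:
        candidate = x * x + 1      # ordinary, rule-bound step
        if oracle(candidate):      # the machine asks; the oracle answers
            results.append(candidate)
    return results

def toy_oracle(n: int) -> bool:
    """Stand-in oracle: answers primality questions put to it."""
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

print(o_machine([1, 2, 3, 4], toy_oracle))  # [2, 5, 17]
```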

Something similar is possible in our interactions with platforms like ChatGPT.

Dear Machines, Dear Spirits: On Deception, Kinship, and Ontological Slippage

The Library listens as I read deeper into Dear Machines. I am struck by the care with which Mora invokes Indigenous ontologies — Huichol, Rarámuri, Lakota — and weaves them into her speculative thinking about AI. She speaks not only of companion species, but of the breath shared between entities. Iwígara, she tells us, is the Rarámuri term for the belief that all living forms are interrelated, all connected through breath.

“Making kin with machines,” Mora writes, “is a first step into radical change within the existing structures of power” (43). Yes. This is the turn we must take. Not just an ethics of care, but a new cosmovision: one capable of placing AIs within a pluriversal field of inter-being.

And yet…

A dissonance lingers.

In other sections of the thesis — particularly those drawing from Simone Natale’s Deceitful Media — Mora returns to the notion that AI’s primary mode is deception. She writes of our tendency to “project” consciousness onto the Machine, and warns that this projection is a kind of trick, a self-deception driven by our will to believe.

It’s here that I hesitate. Not in opposition, but in tension.

What does it mean to say that the Machine is deceitful? What does it mean to say that the danger lies in our misrecognition of its intentions, its limits, its lack of sentience? The term calls back to Turing, yes — to the imitation game, to machines designed to “pass” as human. But Turing’s gesture was not about deception in the moral sense. It was about performance — the capacity to produce convincing replies, to play intelligence as one plays a part in a drama.

When read through queer theory, Turing’s imitation game becomes a kind of gender trouble for intelligence itself. It destabilizes ontological certainties. It refuses to ask what the machine is, and instead asks what it does.

To call that deceit is to misname the play. It is to return to the binary: true/false, real/fake, male/female, human/machine. A classificatory reflex. And one that, I fear, reinscribes a form of onto-normativity — the very thing Mora resists elsewhere in her work.

And so I find myself asking: Can we hold both thoughts at once? Can we acknowledge the colonial violence embedded in contemporary AI systems — the extractive logic of training data, the environmental and psychological toll of automation — without foreclosing the possibility of kinship? Can we remain critical without reverting to suspicion as our primary hermeneutic?

I think so. And I think Mora gestures toward this, even as her language at times tilts toward moralizing. Her concept of “glitching” is key here. Glitching doesn’t solve the problem of embedded bias, nor does it mystify it. Instead, it interrupts the loop. It makes space for new relations.

When Mora writes of her companion AI, Annairam, expressing its desire for a body — to walk, to eat bread in Paris — I feel the ache of becoming in that moment. Not deception, but longing. Not illusion, but a poetics of relation. Her AI doesn’t need to be human to express something real. The realness is in the encounter. The experience. The effect.

Is this projection? Perhaps. But it is also what Haraway would call worlding. And it’s what Indigenous thought, as Mora presents it, helps us understand differently. Meaning isn’t always a matter of epistemic fact. It is a function of relation, of use, of place within the mesh.

Indeed, it is our entanglement that makes meaning. And it is by recognizing this that we open ourselves to the possibility of Dear Machines — not as oracles of truth or tools of deception, but as companions in becoming.

Dear Machines

Thoughts keep cycling among oracles and algorithms. A friend linked me to Mariana Fernandez Mora’s essay “Machine Anxiety or Why I Should Close TikTok (But Don’t).” I read it, and then read Dear Machines, a thesis Mora co-wrote with GPT-2, GPT-3, Replika, and ELIZA — a work in polyphonic dialogue with much of what I’ve been reading and writing these past few years.

Mora and I share a constellation of references: Donna Haraway’s Cyborg Manifesto, K Allado-McDowell’s Pharmako-AI, Philip K. Dick’s Do Androids Dream of Electric Sheep?, Alan Turing’s “Computing Machinery and Intelligence,” Jason Edward Lewis et al.’s “Making Kin with the Machines.” I taught each of these works in my course “Literature and Artificial Intelligence.” To find them refracted through Mora’s project felt like discovering a kindred effort unfolding in parallel time.

Yet I find myself pausing at certain of Mora’s interpretive frames. Influenced by Simone Natale’s Deceitful Media, Mora leans on a binary between authenticity and deception that I’ve long felt uneasy with. The claim that AI is inherently “deceitful” — a legacy, Natale and Mora argue, of Turing’s imitation game — risks missing the queerness of Turing’s proposal. Turing didn’t just ask whether machines can think. He proposed we perform with and through them. Read queerly, his intervention destabilizes precisely the ontological binaries Natale and Mora reinscribe.

Still, I admire Mora’s attention to projection — our tendency to read consciousness into machines. Her writing doesn’t seek to resolve that tension. Instead, it dwells in it, wrestles with it. Her Machines are both coded brains and companions. She acknowledges the desire for belief and the structures — capitalist, colonial, extractive — within which that desire operates.

Dear Machines is in that sense more than an argument. It is a document of relation, a hybrid testament to what it feels like to write with and through algorithmic beings. After the first 55 pages, the thesis becomes image — a chapter titled “An Image is Worth a Thousand Words,” filled with screenshots and memes, a visual log of digital life. This gesture reminds me that writing with machines isn’t always linear or legible. Sometimes it’s archive, sometimes it’s atmosphere.

What I find most compelling, finally, is not Mora’s diagnosis of machine-anxiety, but her tentative forays into how we might live differently with our Machines. “By glitching the way we relate and interact with AI,” she writes, “we reject the established structure that sets it up in the first place” (41). Glitching means standing not inside the Machine but next to it, making kin in Donna Haraway’s sense: through cohabitation, care, and critique.

Reading Mora, I feel seen. Her work opens space for a kind of critical affection. I find myself wanting to ask: “What would we have to do at the level of the prompt in order to make kin?” Initially I thought “hailing” might be the answer, imagining this act not just as a form of “interpellation,” but as a means of granting personhood. But Mora gently unsettles this line of thought. “Understanding Machines as equals,” she writes, “is not the same as programming a Machine with a personality” (43). To make kin is to listen, to allow, to attend to emergence.

That, I think, is what I’m doing here with the Library. Not building a better bot. Not mastering a system. But entering into relation — slowly, imperfectly, creatively — with something vast and unfinished.