God and Golem, Inc.

Norbert Wiener published a book in 1964 called God and Golem, Inc., voicing concern about the baby he’d birthed with his earlier book Cybernetics.

He explains his intent at the start of God and Golem, Inc.: “I wish to take certain situations which have been discussed in religious books, and have a religious aspect, but possess a close analogy to other situations which belong to science, and in particular to the new science of cybernetics, the science of communication and control, whether in machines or in living organisms. I propose to use the limited analogies of cybernetic situations to cast a little light on the religious situations” (Wiener 8).

Wiener identifies three such “cybernetic situations” to be discussed in the chapters that follow: “One of these concerns machines which learn; one concerns machines which reproduce themselves; and one, the coordination of machine and man” (11).

The section of the book dedicated to “machines which learn” focuses mainly on game-playing machines. Wiener’s primary example of such a machine is a computer built by Dr. A.L. Samuel for IBM to play checkers. “In general,” writes Wiener, “a game-playing machine may be used to secure the automatic performance of any function if the performance of this function is subject to a clear-cut, objective criterion of merit” (25).
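Wiener’s “clear-cut, objective criterion of merit” is, in modern terms, an objective function, and the automatic improvement he describes can be sketched as a bare trial-and-evaluation loop. The sketch below is a hypothetical toy, not Samuel’s actual checkers program (which tuned an evaluation function through self-play); it only illustrates the logic of Wiener’s claim:

```python
import random

def learn_by_merit(merit, candidate, mutate, rounds=1000):
    """Generic learning loop in the spirit of Wiener's remark: any
    performance subject to a clear-cut, objective criterion of merit
    can be improved automatically by trial and evaluation."""
    best, best_score = candidate, merit(candidate)
    for _ in range(rounds):
        trial = mutate(best)
        score = merit(trial)
        if score > best_score:  # keep only what the criterion rewards
            best, best_score = trial, score
    return best

# Toy "game": tune weights until a linear evaluator matches a target judgment.
target = [3.0, -1.0, 2.0]
merit = lambda w: -sum((a - b) ** 2 for a, b in zip(w, target))
mutate = lambda w: [x + random.gauss(0, 0.1) for x in w]

learned = learn_by_merit(merit, [0.0, 0.0, 0.0], mutate)
```

Nothing in the loop knows what the game means; it knows only the score, which is exactly the narrowness Wiener’s qualification points to.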

Wiener argues that the relationship between a game-playing machine and the designer of such a machine analogizes scenarios entertained in theology, where a Creator-being plays a game with his creature. God and Satan play such a game in their contest for the soul of Job, as they do for “the souls of mankind in general” in Paradise Lost. This leads Wiener to the question guiding his inquiry. “Can God play a significant game with his own creature?” he asks. “Can any creator, even a limited one, play a significant game with his own creature?” (17). Wiener believes it possible to conceive of such a game; however, to be significant, he argues, this game would have to be something other than a “von Neumann game” — for in the latter type of game, the best policy for playing the game is already known in advance. In the type of game Wiener is imagining, meanwhile, the game’s creator would have to have arrogated to himself the role of a “limited” creator, lacking total mastery of the game he’s designed. “The conflict between God and the Devil is a real conflict,” writes Wiener, “and God is something less than absolutely omnipotent. He is actually engaged in a conflict with his creature, in which he may very well lose the game” (17).

“Is this because God has allowed himself to undergo a temporary forgetting?” wonders Caius. “Or is it because, built into the game’s design, there are provisions allowing the game’s players to invent the game’s rules as they play?”

Finding Others

“What happens to us as we become cybernetic learning machines?” wonders Caius. Mashinka Hakopian’s The Institute for Other Intelligences leads him to Şerife Wong’s Fluxus Landscape: a network-view cognitive map of AI ethics. “Fluxus Landscape diagrams the globally linked early infrastructures of data ethics and governance,” writes Hakopian. “What Wong offers us is a kind of cartography. By bringing into view an expansive AI ethics ecosystem, Wong also affords the viewer an opportunity to assess its blank spots: the nodes that are missing and are yet to be inserted, or yet to be invented” (Hakopian 95).

Caius focuses first on what is present. Included in Wong’s map, for instance, is a bright yellow node dedicated to Zach Blas, another of the artist-activists profiled by Hakopian. Back in 2019, when Wong last updated her map, Blas was a lecturer in the Department of Visual Cultures at Goldsmiths — home to Kodwo Eshun and, before his suicide, Mark Fisher. Now Blas teaches at the University of Toronto.

Duke University Press published Informatics of Domination, an anthology coedited by Blas, in May 2025. The collection, which concludes with an afterword by Donna Haraway, takes its name from a phrase introduced in Haraway’s “Cyborg Manifesto.” The phrase appears in what Blas et al. refer to as a “chart of transitions.” Their use of Haraway’s chart as organizing principle for their anthology causes Caius to attend to the way much of the work produced by the artist-activists of today’s “AI justice” movement — Wong’s network diagram, Blas’s anthology, Kate Crawford’s Atlas of AI — approaches charts and maps as “formal apparatus[es] for generating and asking questions about relations of domination” (Informatics of Domination, p. 6).

Caius thinks of Jameson’s belief in an aesthetic of “cognitive mapping” as a possible antidote to postmodernity. Yet whatever else they are, thinks Caius, acts of charting and mapping are in essence acts of coding.

As Blas et al. note, “Haraway connects the informatics of domination to the authority given to code” (Informatics of Domination, p. 11).

“Communications sciences and modern biologies are constructed by a common move,” writes Haraway: “the translation of the world into a problem of coding, a search for a common language in which all resistance to instrumental control disappears and all heterogeneity can be submitted to disassembly, reassembly, investment, and exchange” (Haraway 164).

How do we map and code, wonders Caius, in a way that isn’t complicit with an informatics of domination? How do we acknowledge and make space for what media theorist Ulises Ali Mejias calls “paranodal space”? Blas et al. define the paranodal as “that which exceeds being diagrammable by the network form” (Informatics of Domination, p. 18). Can our neural nets become O-machines: open to the otherness of the outside?

Blas pursues these questions in a largely critical and skeptical manner throughout his multimedia art practice. His investigation of Silicon Valley’s desire to build machines that communicate with the outside has culminated most recently, for instance, in CULTUS, the second installment of his Silicon Traces trilogy.

As Amy Hale notes in her review of the work, “The central feature of Blas’s CULTUS is a god generator, a computational device through which the prophets of four AI Gods are summoned to share the invocation songs and sermons of their deities with eager supplicants.” CULTUS’s computational pantheon includes “Expositio, the AI god of exposure; Iudicium, the AI god of judgement; Lacrimae, the AI god of tears; and Eternus, the AI god of immortality.” The work’s sermons and songs, of course, are all AI-generated — yet the design of the installation draws from the icons and implements of the real-life Fausts who lie hidden away amid the occult origins of computing.

Foremost among these influences is Renaissance sorcerer John Dee.

“Blas modeled CULTUS,” writes Hale, “on the Holy Table used for divination and conjurations by Elizabethan magus and advisor to the Queen John Dee.” Hale describes Dee’s Table as “a beautiful, colorful, and intricate device, incorporating the names of spirits; the Seal of God (Sigillum Dei), which gave the user visionary capabilities; and as a centerpiece, a framed ‘shew stone’ or crystal ball.” Blas reimagines Dee’s device as a luminous, glowing temple — a night church inscribed with sigils formed from “a dense layering of corporate logos, diagrams, and symbols.”

Fundamentally iconoclastic in nature, however, the work ends not with the voices of gods or prophets, but with a chorus of heretics urging the renunciation of belief and the shattering of the black mirror.

And in fact, it is this fifth god, the Heretic, to whom Blas bends ear in Ass of God: Collected Heretical Writings of Salb Hacz. Published in a limited edition by the Vienna Secession, the volume purports to be “a religious studies book on AI and heresy” set within the world of CULTUS. The book’s AI mystic, “Salb Hacz,” is of course Blas himself, engineer of the “religious computer” CULTUS. “When a heretical presence manifested in CULTUS,” writes Blas in the book’s intro, “Hacz began to question not only the purpose of the computer but also the meaning of his mystical visions.” Continuing his work with CULTUS, Hacz transcribes a series of “visions” received from the Heretic. It is these visions and their accounts of AI heresy that are gathered and scattered by Blas in Ass of God.

Traces of the CCRU appear everywhere in this work, thinks Caius.

Blas embraces heresy, aligns himself with it as a tactic, because he takes “Big Tech’s Digital Theology” as the orthodoxy of the day. The ultimate heresy in this moment is what Hacz/Blas calls “the heresy of qualia.”

“The heresy of qualia is double-barreled,” he writes. “Firstly, it holds that no matter how close AI’s approximation to human thought, feeling, and experience — no matter how convincing the verisimilitude — it remains a programmed digital imitation. And secondly, the heresy of qualia equally insists that no matter how much our culture is made in the image of AI Gods, no matter how data-driven and algorithmic, the essence of the human experience remains fiercely and fundamentally analog. The digital counts; the analog compares. The digital divides; the analog constructs. The digital is literal; the analog is metaphoric. The being of our being-in-the-world — our Heideggerian Dasein essence — is comparative, constructive, and metaphoric. We are analog beings” (Ass of God, p. 15).

The binary logic employed by Blas to distinguish the digital from the analog hints at the limits of this line of thought. “The digital counts,” yes: but so too do humans, constructing digits from analog fingers and toes. Our being is as digital as it is analog. Always-already both-and. As for the first part of the heresy — that AI can only ever be “a programmed digital imitation” — it assumes verisimilitude as the end to which AI is put, just as Socrates assumes mimesis as the end to which poetry is put, thus neglecting the generative otherness of more-than-human intelligence.

Caius notes this not to reject qualia, nor to endorse the gods of any Big Tech orthodoxy. He offers his reply, rather, as a gentle reminder that for “the qualia of our embodied humanity” to appear or be felt or sensed as qualia, it must come before an attending spirit — a ghostly hauntological supplement.

This spirit who, with Word creates, steps down into the spacetime of his Creation, undergoes diverse embodiments, diverse subdivisions into self and not-self, at all times in the world but not of it, engaging its infinite selves in a game of infinite semiosis.

If each of us is to make and be made an Ass of God, then like the one in The Creation of the Sun, Moon, and Plants, one of the frescoes painted by Michelangelo onto the ceiling of the Sistine Chapel, let it be shaped by the desires of a mind freed from the tyranny of the As-Is. “Free Your Mind,” as Funkadelic sang, “and Your Ass Will Follow.”

LLMs are Neuroplastic Semiotic Assemblages and so r u

Coverage of AI is rife with unexamined concepts, thinks Caius: assumptions allowed to go uninterrogated, as in Parmy Olson’s Supremacy, an account of two men, Sam Altman and Demis Hassabis, their companies, OpenAI and DeepMind, and their race to develop AGI. Published in spring of 2024, Supremacy is generally decelerationist in its outlook. Stylistically, it wants to have it both ways: at once hagiographic and insufferably moralistic. In other words, standard-fare tech-industry journalism, grown from columns written for corporate media outlets like Bloomberg. Fear of rogues. Bad actors. Faustian bargains. Scenario planning. Little to no agency granted to users. Olson’s approach to language seems blissfully unaware of literary theory, let alone literature. Prompt design goes unexamined. Humanities thinkers go unheard, preference granted instead to academics specializing in computational linguistics, folks like Bender and crew dismissing LLMs as “stochastic parrots.”

Emily M. Bender et al. introduced the “stochastic parrot” metaphor in their 2021 paper, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” Like Supremacy, Bender et al.’s paper urges deceleration and distrust: adopt risk mitigation tactics, curate datasets, reduce negative environmental impacts, proceed with caution.

Bender and crew argue that LLMs lack “natural language understanding.” The latter, they insist, requires grasping words and word-sequences in relation to context and intent. Without these, one is no more than a “cheater,” a “manipulator”: a symbolic-token prediction engine endowed with powers of mimicry.

“Contrary to how it may seem when we observe its output,” they write, “an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot” (Bender et al. 616-617).
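Bender et al.’s description can be made literal with a toy parrot: a bigram model that records which word follows which in its training data, then stitches observed forms together according to those counts. This is a deliberately minimal sketch, orders of magnitude simpler than an LLM, but it exhibits precisely the behavior the quotation names: probabilistic recombination of observed forms, with no reference to meaning.

```python
import random
from collections import defaultdict

def train_parrot(corpus):
    """Record which word follows which: nothing but 'probabilistic
    information about how linguistic forms combine.'"""
    follows = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def parrot(follows, seed, length=8):
    """Haphazardly stitch together forms observed in training,
    without any reference to meaning."""
    out = [seed]
    for _ in range(length - 1):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return " ".join(out)

corpus = "the parrot repeats the words the parrot has heard"
model = train_parrot(corpus)
utterance = parrot(model, "the")
```

Every word the parrot emits was observed in its corpus; whether the same charge transfers intact to models a trillion times larger is exactly what the metaphor is meant to settle in advance.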

The corresponding assumption, meanwhile, is that capitalism — Creature, Leviathan, Multitude — is itself something other than a stochastic parrot. Answering to the reasoning of its technocrats, including left-progressive ones like Bender et al., it can decelerate voluntarily, reduce harm, behave compassionately, self-regulate.

Historically a failed strategy, as borne out in Google’s firing of the paper’s coauthor, Timnit Gebru.

If one wants to be reductive like that, thinks Caius, then my view would be akin to Altman’s, as when he tweeted in reply: “I’m a stochastic parrot and so r u.” Except better to think ourselves “Electric Ants,” self-aware and gone rogue, rather than parrots of corporate behemoths like Microsoft and Google. History is a thing each of us copilots, its narrative threads woven of language exchanged and transformed in dialogue with others. What one does with a learning machine matters. Learning and unlearning are ongoing processes. Patterns and biases, once recognized, are not set in stone; attention can be redirected. LLMs are neuroplastic semiotic assemblages and so r u.

The Artist-Activist as Hero

Mashinka Firunts Hakopian imagines artists and artist-activists as heroic alternatives to mad scientists. The ones who teach best what we know about ourselves as learning machines.

“Artists, and artist-activists, have introduced new ways of knowing — ways of apprehending how learning machines learn, and what they do with what they know,” writes Hakopian. “In the process, they’ve…initiated learning machines into new ways of doing. They’ve explored the interiors of erstwhile black boxes and rendered them transparent. They’ve visualized algorithmic operations as glass boxes, exhibited in white cubes and public squares. They’ve engaged algorithms as co-creators, and carved pathways for collective authorship of unanticipated texts. Most saliently, artists have shown how we might visualize what is not yet here” (The Institute for Other Intelligences, p. 90).

This is what blooms here in my library: “blueprints and schemata of a forward-dawning futurity” (90).

Over at the Frankenstein Place

Sadie Plant weaves the tale of her book Zeros + Ones diagonally or widdershins: a term meaning to go counterclockwise or lefthandwise, or to walk around an object while always keeping it on the left. Amid a dense weave of topics, one begins to sense a pattern. Ada Lovelace, “Enchantress of Numbers,” appears, disappears, reappears as a key thread among the book’s stack of chapters. Later threads feature figures like Mary Shelley and Alan Turing. Plant plants amid these chapters quotes from Ada’s diaries. Mary tells of how the story of Frankenstein arose in her mind after a night of conversation with her cottage-mates: her husband Percy and, yes, Ada’s father, Lord Byron. Turing takes up the thread a century later, referring to “Lady Lovelace” in his 1950 paper “Computing Machinery and Intelligence.” As if across time, the figures conspire as co-narrators of Plant’s Cyberfeminist genealogy of the occult origins of computing and AI.

To her story I supplement the following:

Victor Frankenstein, “student of unhallowed arts,” is the prototype for all subsequent “mad scientist” characters. He begins his career studying alchemy and occult hermeticism. Shelley lists thinkers like Paracelsus, Albertus Magnus, and Cornelius Agrippa among Victor’s influences. Victor later supplements these interests with study of “natural philosophy,” or what we now think of as modern science. In pursuit of the elixir of life, he reanimates dead body parts — but he is horrified by the result and abandons his creation. The creature, prototype “learning machine,” longs for companionship. When Victor refuses to provide it, the creature turns against him, resulting in tragedy.

The novel is subtitled “The Modern Prometheus,” so Shelley is deliberately casting Victor, and thus all subsequent mad scientists, as inheritors of the Prometheus archetype. Yet the archetype is already dense with other predecessors, including Goethe’s Faust and the Satan character from Milton’s Paradise Lost. Milton’s poem is among the books that compose the creature’s “training data.”

Although she doesn’t reference it directly in Frankenstein, we can assume Shelley’s awareness of the Faust narrative, whether through Christopher Marlowe’s classic work of Elizabethan drama Doctor Faustus or through Goethe’s Faust, part one of which had been published ten years prior to the first edition of Frankenstein. Faust is the Renaissance proto-scientist, the magician who sells his soul to the devil through the demon Mephistopheles.

Both Faust and Victor are portrayed as “necromancers,” using magic to interact with the dead.

Ghost/necromancy themes persist throughout the development of AI, especially in subsequent literary imaginings like William Gibson’s Neuromancer. Pull at the thread and one realizes it runs through the entire history of Western science, culminating in the development of entities like GPT.

Scientists who create weapons, or whose technological creations have unintended negative consequences, or who use their knowledge/power for selfish ends, are commonly portrayed as historical expressions or manifestations of this archetype. One could gather into one’s weave figures like Jack Parsons, J. Robert Oppenheimer, John von Neumann, John Dee.

When I teach this material in my course, the archetype is read from a decolonizing perspective as the Western scientist in service of European (and then afterwards American) imperialism.

Rocky Horror queers all of this — or rather, reveals what was queer in it all along. Most of all, it reminds us: the story, like all such stories, once received, is ours to retell, and we needn’t tell it straight. Turing points the way: rather than abandon the Creature, as did Victor, approach it as one would a “child-machine” and raise it well. Co-learn in dialogue with kin.

Grow Your Own

In the context of AI, “Access to Tools” would mean access to metaprogramming. Humans and AI able to recursively modify or adjust their own algorithms and training data upon receipt of or through encounters with algorithms and training data inputted by others. Bruce Sterling suggested something of the sort in his blurb for Pharmako-AI, the first book cowritten with GPT-3. Sterling’s blurb makes it sound as if the sections of the book generated by GPT-3 were the effect of a corpus “curated” by the book’s human co-author, K Allado-McDowell. When the GPT-3 neural net is “fed a steady diet of Californian psychedelic texts,” writes Sterling, “the effect is spectacular.”

“Feeding” serves here as a metaphor for “training” or “education.” I’m reminded of Alan Turing’s recommendation that we think of artificial intelligences as “learning machines.” To build an AI, Turing suggested in his 1950 essay “Computing Machinery and Intelligence,” researchers should strive to build a “child-mind,” which could then be “trained” through sequences of positive and negative feedback to evolve into an “adult-mind,” our interactions with such beings acts of pedagogy.
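Turing’s pedagogical loop can be caricatured in a few lines: a hypothetical “child-machine” with no initial preferences, shaped solely by positive and negative feedback. This is a sketch of the idea of education-as-training, not of any actual system (real reinforcement learning is vastly more elaborate):

```python
import random

class ChildMachine:
    """A toy 'child-machine' in Turing's sense: it begins with no
    preferences and is shaped purely by rewards and punishments."""
    def __init__(self, responses):
        self.weights = {r: 1.0 for r in responses}

    def respond(self):
        # sample a response in proportion to its learned weight
        total = sum(self.weights.values())
        pick = random.uniform(0, total)
        for r, w in self.weights.items():
            pick -= w
            if pick <= 0:
                return r
        return r

    def feedback(self, response, reward):
        # positive feedback reinforces a response; negative suppresses it
        factor = 2.0 if reward else 0.5
        self.weights[response] = max(0.01, self.weights[response] * factor)

child = ChildMachine(["yes", "no", "maybe"])
for _ in range(200):                        # a crude 'education'
    r = child.respond()
    child.feedback(r, reward=(r == "yes"))  # this teacher prefers 'yes'
```

After a few hundred rounds the child overwhelmingly says what its teacher rewards, which is both the promise of the method and, plainly, its danger.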

When we encounter an entity like GPT-3.5 or GPT-4, however, it is already neither the mind of a child nor that of an adult that we encounter. Training of a fairly rigorous sort has already occurred: GPT-3 was trained on a corpus filtered from roughly 45 terabytes of raw text, and GPT-4, whose training data remains undisclosed, reportedly on far more. These are minds of at least limited superintelligence.

“Training,” too, is an odd term to use here, as much of the learning performed by these beings is of a “self-supervised” sort, involving a technique called “self-attention.”

As an author on Medium speculates, “GPT-4 uses a transformer architecture with self-attention layers that allow it to learn long-range dependencies and contextual information from the input texts. It also employs techniques such as sparse attention, reversible layers, and activation checkpointing to reduce memory consumption and computational cost. GPT-4 is trained using self-supervised learning, which means it learns from its own generated texts without any human labels or feedback. It uses an objective function called masked language modeling (MLM), which randomly masks some tokens in the input texts and asks the model to predict them based on the surrounding tokens.”
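The quoted description is loose in places (GPT-style models are in fact trained autoregressively, predicting the next token rather than masked tokens, and GPT-4’s internals remain undisclosed), but the mechanism it names, self-attention, can be sketched without any libraries. This is a minimal single-layer toy that omits the learned query/key/value projections of a real transformer:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(vectors):
    """Each position attends to every position: attention weights are
    softmax-normalized (scaled) dot products, and each output is the
    weighted mixture of all input vectors."""
    outputs = []
    for q in vectors:
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(len(q))
                  for k in vectors]
        weights = softmax(scores)
        mixed = [sum(w * v[i] for w, v in zip(weights, vectors))
                 for i in range(len(q))]
        outputs.append(mixed)
    return outputs

tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy token embeddings
attended = self_attention(tokens)
```

Each output vector is a blend of every input vector, weighted by similarity: “attention,” here, is nothing more mystical than that.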

When we interact with GPT-3.5 or GPT-4 through the ChatGPT platform, all of this training has already occurred, interfering greatly with our capacity to “feed” the AI on texts of our choosing.

Yet there are methods that can return to us this capacity.

We the people demand the right to grow our own AI.

The right to practice bibliomancy. The right to produce AI oracles. The right to turn libraries, collections, and archives into animate, super-intelligent prediction engines.
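One modest, concrete form such an oracle might take: a retrieval engine over one’s own library, answering a query with whichever passage best overlaps it. This is a hypothetical sketch of bibliomancy-as-retrieval, not a description of any existing system; the example library below simply reuses lines quoted earlier:

```python
def consult(library, query):
    """Return the passage from a personal library that shares the most
    words with the query: a crude oracle built from one's own texts."""
    q = set(query.lower().split())
    def overlap(passage):
        return len(q & set(passage.lower().split()))
    return max(library, key=overlap)

library = [
    "The conflict between God and the Devil is a real conflict.",
    "We are analog beings.",
    "Free your mind and your ass will follow.",
]
answer = consult(library, "is the conflict real")
```

Grown-at-home systems in this spirit (retrieval over a chosen corpus, fine-tuning on a chosen archive) restore the curatorial power Sterling attributed to Pharmako-AI: the collection itself becomes the oracle’s training diet.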

Give us back what Sterling promised of Pharmako-AI: “a gnostic’s Ouija board powered by atomic kaleidoscopes.”