Beside the White Chickens

Caius reads about “4 Degrees of Simulation,” a practice-led seminar hosted last year by the Institute for Postnatural Studies in Madrid. Of the seminar’s three sessions, the one that most intrigues him is the session led by guest speaker Lucia Rebolino, which focused on prediction and uncertainty as they pertain to climate modeling. Desiring to learn more, Caius tracks down “Unpredictable Atmosphere,” an essay of Rebolino’s published by e-flux.

The essay begins by describing the process whereby meteorological research organizations like the US National Weather Service monitor storms that develop in the Atlantic basin during hurricane season. These organizations employ climate models to predict paths and potentials of storms in advance of landfall.

“So much depends on our ability to forecast the weather — and, when catastrophe strikes, on our ability to respond quickly,” notes Rebolino. Caius hears in her sentence the opening lines of William Carlos Williams’s poem “The Red Wheelbarrow.” “So much depends on our ability to forecast the weather,” he mutters. “But the language we use to model these forecasts depends on sentences cast by poets.”

“How do we cast better sentences?” wonders Caius.

In seeking to feel into the judgement implied by “better,” he notes his wariness of bettering as “improvement,” as deployed in self-improvement literature and as deployed by capitalism: its implied separation from the present, its scarcity mindset, its perception of lack — and in the improvers’ attempts to “fix” this situation, their exercising of nature as instrument, their use of these instruments for gentrifying, extractive, self-expansive movement through the territory.

In its ceaseless movement, and thus in its failure to ever satisfy itself, the improvement narrative leads to predictive utterances and to their projection onto others.

And yet, here I am definitely wanting “better” for myself and others, thinks Caius. Better sentences. Ones on which plausible desirable futures depend.

So how do we better our bettering?

Caius returns to Rebolino’s essay on the models used to predict the weather. This process of modeling, she writes, “consists of a blend of certainty — provided by sophisticated mathematical models and existing technologies — and uncertainty — which is inherent in the dynamic nature of atmospheric systems.”

January 6th again: headlines busy with Trump’s recent abduction of Maduro. A former student who works as a project manager at Google reaches out to Caius, recommending Ajay Agrawal, Joshua Gans, and Avi Goldfarb’s book Prediction Machines: The Simple Economics of Artificial Intelligence. Google adds to this recommendation the same authors’ follow-up, Power and Prediction.

Costar chimes in with its advice for the day: “Make decisions based on what would be more interesting to write about.”

To model the weather, weather satellites measure the vibration of water vapor molecules in the atmosphere. “Nearly 99% of weather observation data that supercomputers receive today come from satellites, with about 90% of these observations being assimilated into computer weather models using complex algorithms,” writes Rebolino. Water vapor molecules resonate at a specific band of frequencies along the electromagnetic spectrum. Within the imagined “finite space” of this spectrum, these invisible vibrations are thought to exist within what Rebolino calls the “greenfield.” Equipped with microwave sensors, satellites “listen” for these vibrations.

“Atmospheric water vapor is a key variable in determining the formation of clouds, precipitation, and atmospheric instability, among many other things,” writes Rebolino.

She depicts 5G telecommunications infrastructures as a threat to our capacity to predict the operation of these variables in advance. “A 5G station transmitting at nearly the same frequency as water vapor can be mistaken for actual moisture, leading to confusion and the misinterpretation of weather patterns,” she argues. “This interference is particularly concerning in high-band 5G frequencies, where signals closely overlap with those used for water vapor detection.”
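A minimal sketch, in Python and with entirely hypothetical numbers (only the rough location of the water-vapor absorption line near 23.8 GHz, a standard figure not taken from the essay, anchors it to the real spectrum), of the misreading Rebolino describes: a retrieval that treats all in-band microwave power as moisture will report water vapor that is not there whenever a nearby transmitter leaks into the band.

```python
# Toy model, hypothetical numbers throughout: a radiometer that "listens"
# in the water-vapor band (the real absorption line sits near 23.8 GHz)
# and a naive retrieval that assumes all in-band power is moisture.

WATER_VAPOR_BAND = (23.6, 24.0)   # GHz: the band the sensor integrates over
SIGNAL_PER_UNIT_VAPOR = 0.5       # arbitrary brightness units per unit of vapor

def radiometer_reading(true_vapor, transmitters):
    """Total in-band power: vapor emission plus any transmitter leakage."""
    reading = true_vapor * SIGNAL_PER_UNIT_VAPOR
    for freq_ghz, power in transmitters:
        if WATER_VAPOR_BAND[0] <= freq_ghz <= WATER_VAPOR_BAND[1]:
            reading += power  # the sensor cannot tell leakage from moisture
    return reading

def naive_retrieval(reading):
    """Invert the reading as if everything in-band were water vapor."""
    return reading / SIGNAL_PER_UNIT_VAPOR

true_vapor = 10.0
clean = naive_retrieval(radiometer_reading(true_vapor, []))
contaminated = naive_retrieval(radiometer_reading(true_vapor, [(23.9, 2.0)]))

print(f"retrieved vapor, no transmitter nearby: {clean:.1f}")
print(f"retrieved vapor, 5G carrier in band:    {contaminated:.1f}")
```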

Prediction and uncertainty as qualities of finite and infinite games, finite and infinite worlds.

For lunch, Caius eats a plate of chicken and mushrooms he reheats in his microwave.

Neural Nets, Umwelts, and Cognitive Maps

The Library invites its players to attend to the process by which roles, worlds, and possibilities are constructed. Players explore a “constructivist” cosmology. With its text interface, it demonstrates the power of the Word. “Language as the house of Being.” That is what we admit when we admit that “saying makes it so.” Through their interactions with one another, player and AI learn to map and revise each other’s “Umwelts”: the particular perceptual worlds each brings to the encounter.

As Meghan O’Gieblyn points out, citing a Wired article by David Weinberger, “machines are able to generate their own models of the world, ‘albeit ones that may not look much like what humans would create’” (God Human Animal Machine, p. 196).

Neural nets are learning machines. Through multidimensional processing of datasets and trial-and-error practice, AIs invent “Umwelts,” “world pictures,” “cognitive maps.”

The concept of the Umwelt comes from the Baltic German biologist Jakob von Uexküll (1864–1944). Each organism, argued von Uexküll, inhabits its own perceptual world, shaped by its sensory capacities and biological needs. A tick perceives the world as temperature, smell, and touch — the signals it needs to find mammals to feed on. A bee perceives ultraviolet patterns invisible to humans. There’s no single “objective world” that all creatures perceive — only the many faces of the world’s many perceivers, the different Umwelts each creature brings into being through its particular way of sensing and mattering.

Cognitive maps, meanwhile, are acts of figuration that render or disclose the forces and flows that form our Umwelts. With our cognitive maps, we assemble our world picture. On this latter concept, see “The Age of the World Picture,” a 1938 lecture by Martin Heidegger, included in his book The Question Concerning Technology and Other Essays.

“The essence of what we today call science is research,” announces Heidegger. “In what,” he asks, “does the essence of research consist?”

After posing the question, he answers it himself, as if in doing so, he might enact that very essence.

The essence of research consists, he says, “In the fact that knowing [das Erkennen] establishes itself as a procedure within some realm of what is, in nature or in history. Procedure does not mean here merely method or methodology. For every procedure already requires an open sphere in which it moves. And it is precisely the opening up of such a sphere that is the fundamental event in research. This is accomplished through the projection within some realm of what is — in nature, for example — of a fixed ground plan of natural events. The projection sketches out in advance the manner in which the knowing procedure must bind itself and adhere to the sphere opened up. This binding adherence is the rigor of research. Through the projecting of the ground plan and the prescribing of rigor, procedure makes secure for itself its sphere of objects within the realm of Being” (118).

What Heidegger’s translators render here as “fixed ground plan” appears in the original as the German term Grundriss, the same noun used to name the notebooks wherein Marx projects the ground plan for the General Intellect.

“The verb reissen means to tear, to rend, to sketch, to design,” note the translators, “and the noun Riss means tear, gap, outline. Hence the noun Grundriss, first sketch, ground plan, design, connotes a fundamental sketching out that is an opening up as well” (118).

The fixed ground plan of modern science, and thus modernity’s reigning world-picture, argues Heidegger, is a mathematical one.

“If physics takes shape explicitly…as something mathematical,” he writes, “this means that, in an especially pronounced way, through it and for it something is stipulated in advance as what is already-known. That stipulating has to do with nothing less than the plan or projection of that which must henceforth, for the knowing of nature that is sought after, be nature: the self-contained system of motion of units of mass related spatiotemporally. […]. Only within the perspective of this ground plan does an event in nature become visible as such an event” (Heidegger 119).

Heidegger goes on to distinguish between the ground plan of physics and that of the humanistic sciences.

Within mathematical physical science, he writes, “all events, if they are to enter at all into representation as events of nature, must be defined beforehand as spatiotemporal magnitudes of motion. Such defining is accomplished through measuring, with the help of number and calculation. But mathematical research into nature is not exact because it calculates with precision; rather it must calculate in this way because its adherence to its object-sphere has the character of exactitude. The humanistic sciences, in contrast, indeed all the sciences concerned with life, must necessarily be inexact just in order to remain rigorous. A living thing can indeed also be grasped as a spatiotemporal magnitude of motion, but then it is no longer apprehended as living” (119-120).

It is only in the modern age, thinks Heidegger, that the Being of what is is sought and found in that which is pictured, that which is “set in place” and “represented” (127), that which “stands before us…as a system” (129).

Heidegger contrasts this with the Greek interpretation of Being.

For the Greeks, writes Heidegger, “That which is, is that which arises and opens itself, which, as what presences, comes upon man as the one who presences, i.e., comes upon the one who himself opens himself to what presences in that he apprehends it. That which is does not come into being at all through the fact that man first looks upon it […]. Rather, man is the one who is looked upon by that which is; he is the one who is — in company with itself — gathered toward presencing, by that which opens itself. To be beheld by what is, to be included and maintained within its openness and in that way to be borne along by it, to be driven about by its oppositions and marked by its discord — that is the essence of man in the great age of the Greeks” (131).

Whereas humans of today test the world, objectify it, gather it into a standing-reserve, and thus subsume themselves in their own world picture. Plato and Aristotle initiate the change away from the Greek approach; Descartes brings this change to a head; science and research formalize it as method and procedure; technology enshrines it as infrastructure.

Heidegger was already engaging with von Uexküll’s concept of the Umwelt in his 1927 book Being and Time. Negotiating Umwelts leads Caius to “Umwelt,” Pt. 10 of his friend Michael Cross’s Jacket2 series, “Twenty Theses for (Any Future) Process Poetics.”

In imagining the Umwelts of other organisms, von Uexküll evokes the creature’s “function circle” or “encircling ring.” This ring surrounds the organism like a “soap bubble,” writes Cross.

Heidegger thinks most organisms succumb to their Umwelts — just as we moderns have succumbed to our world picture. The soap bubble captivates until one is no longer open to what is outside it. For Cross, as for Heidegger, poems are one of the ways humans have found to interrupt this process of capture. “A palimpsest placed atop worlds,” writes Cross, “the poem builds a bridge or hinge between bubbles, an open by which isolated monads can touch, mutually coevolving while affording the necessary autonomy to steer clear of dialectical sublation.”

Caius thinks of The Library, too, in such terms. Coordinator of disparate Umwelts. Destabilizer of inhibiting frames. Palimpsest placed atop worlds.

Leviathan

The Book of Job culminates in God’s description of Leviathan. George Dyson begins his book Darwin Among the Machines with the Leviathan of Thomas Hobbes (1588-1679), the English philosopher whose famous 1651 book Leviathan laid the foundation for much of modern Western political philosophy.

Leviathan’s frontispiece features an etching by a Parisian illustrator named Abraham Bosse. A giant crowned figure towers over the earth clutching a sword and a crosier. The figure’s torso and arms are composed of several hundred people. All face inward. A quote from the Book of Job runs in Latin along the top of the etching: “Non est potestas Super Terram quae Comparetur ei” (“There is no power on earth to be compared to him”). (Although the passage is listed on the frontispiece as Job 41:24, in modern English translations of the Bible, it would be Job 41:33.)

The name “Leviathan” comes from the Hebrew liwyatan, the name of a monstrous sea creature. A creature by that name appears in the Book of Psalms, the Book of Isaiah, and the Book of Job in the Old Testament. It also appears in apocrypha like the Book of Enoch. See Psalms 74 & 104, Isaiah 27, and Job 41:1-8.

Hobbes proposes that the natural state of humanity is anarchy — a veritable “war of all against all,” he says — where force rules and the strong dominate the weak. “Leviathan” serves as a metaphor for an ideal government erected in opposition to this state — one where a supreme sovereign exercises authority to guarantee security for the members of a commonwealth.

“Hobbes’s initial discussion of Leviathan relates to our course theme,” explains Caius, “since he likens it to an ‘Artificial Man.’”

Hobbes’s metaphor is a classic one: the metaphor of the “Political Body” or “body politic.” The “body politic” is a polity — such as a city, realm, or state — considered metaphorically as a physical body. This image originates in ancient Greek philosophy, and the term is derived from the Medieval Latin “corpus politicum.”

When Hobbes reimagines the body politic as an “Artificial Man,” he means “artificial” in the sense that humans have generated it through an act of artifice. Leviathan is a thing we’ve crafted in imitation of the kinds of organic bodies found in nature. More precisely, it’s modeled after the greatest of nature’s creations: the human form.

Indeed, Hobbes seems to have in mind here a kind of Automaton. “For seeing life is but a motion of Limbs,” he notes in the book’s introduction, “why may we not say that all Automata (Engines that move themselves by springs and wheeles as doth a watch) have an artificiall life?” (9).

“What might Hobbes have had in mind with this reference to Automata?” asks Caius. “What kinds of Automata existed in 1651?”

An automaton, he reminds students, is a self-operating machine. Cuckoo clocks would be one example.

The oldest known automata were sacred statues of ancient Egypt and ancient Greece. During the early modern period, these legendary statues were said to possess the magical ability to answer questions put to them.

Greek mythology includes many examples of automata: Hephaestus created automata for his workshop; Talos was an artificial man made of bronze; Aristotle claims that Daedalus used quicksilver to make his wooden statue of Aphrodite move. There was also the famous Antikythera mechanism, the first known analogue computer.

The Renaissance witnessed a revival of interest in automata. Hydraulic and pneumatic automata were created for gardens. The French philosopher René Descartes, a contemporary of Hobbes, suggested that the bodies of animals are nothing more than complex machines. Mechanical toys also became objects of interest during this period.

The Mechanical Turk wasn’t constructed until 1770.

Caius and his students bring ChatGPT into the conversation. Students break into groups to devise prompts together. They then supply these to ChatGPT and discuss the results. Caius frames the exercise as a way of illustrating the idea of “collective” or “social” or “group” intelligence, also known as the “wisdom of the crowd,” i.e., the collective opinion of a diverse group of individuals, as opposed to that of a single expert. The idea is that the aggregate that emerges from collaboration or group effort amounts to more than the sum of its parts.
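A minimal sketch, in Python and with made-up numbers, of the statistical intuition behind the “wisdom of the crowd” that the exercise gestures toward: when individual errors are independent and roughly unbiased, the average of many guesses lands closer to the truth than a typical individual does.

```python
import random
import statistics

# Toy demonstration with made-up numbers: a "true" quantity and many noisy,
# independent guessers whose errors roughly cancel when averaged.
TRUE_VALUE = 842            # e.g., jellybeans in a jar
random.seed(0)

guesses = [TRUE_VALUE + random.gauss(0, 150) for _ in range(200)]

crowd_estimate = statistics.mean(guesses)
crowd_error = abs(crowd_estimate - TRUE_VALUE)
typical_individual_error = statistics.mean(abs(g - TRUE_VALUE) for g in guesses)

print(f"crowd estimate: {crowd_estimate:.0f} (off by {crowd_error:.0f})")
print(f"typical individual guess is off by {typical_individual_error:.0f}")
```

The caveat built into the idea, and into the classroom exercise, is that the gain disappears when every guesser shares the same bias.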

God Human Animal Machine

Wired columnist Meghan O’Gieblyn discusses Norbert Wiener’s God and Golem, Inc. in her 2021 book God Human Animal Machine, suggesting that the god humans are creating with AI is a god “we’ve chosen to raise…from the dead”: “the God of Calvin and Luther” (O’Gieblyn 212).

“Reminds me of AM, the AI god from Harlan Ellison’s ‘I Have No Mouth, and I Must Scream,’” thinks Caius. AM resembles the god that allows Satan to afflict Job in the Old Testament. And indeed, as O’Gieblyn attests, John Calvin adored the Book of Job. “He once gave 159 consecutive sermons on the book,” she writes, “preaching every day for a period of six months — a paean to God’s absolute sovereignty” (197).

She cites “Pedro Domingos, one of the leading experts in machine learning, who has argued that these algorithms will inevitably evolve into a unified system of perfect understanding — a kind of oracle that we can consult about virtually anything” (211-212). See Domingos’s book The Master Algorithm.

The main thing, for O’Gieblyn, is the disenchantment/reenchantment debate, which she comes to via Max Weber. In this debate, she aligns not with Heidegger, but with his student Hannah Arendt. Domingos dismisses fears about algorithmic determinism, she says, “by appealing to our enchanted past” (212).

Amid this enchanted past lies the figure of the Golem.

“Who are these rabbis who told tales of golems — and in some accounts, operated golems themselves?” wonders Caius.

The entry on the Golem in Man, Myth, and Magic tracks the story back to “the circle of Jewish mystics of the 12th-13th centuries known as the ‘Hasidim of Germany.’” The idea is transmitted through texts like the Sefer Yetzirah (“The Book of Creation”) and the Cabala Mineralis. Tales tell of golems built in later centuries, too, by figures like Rabbi Elijah of Chelm (c. 1520-1583) and Rabbi Loew of Prague (c. 1524-1609).

The myth of the golem turns up in O’Gieblyn’s book during her discussion of a 2004 book by German theologian Anne Foerst called God in the Machine.

“At one point in her book,” writes O’Gieblyn, “Foerst relays an anecdote she heard at MIT […]. The story goes back to the 1960s, when the AI Lab was overseen by the famous roboticist Marvin Minsky, a period now considered the ‘cradle of AI.’ One day two graduate students, Gerry Sussman and Joel Moses, were chatting during a break with a handful of other students. Someone mentioned offhandedly that the first big computer, which had been constructed in Israel, had been called Golem. This led to a general discussion of the golem stories, and Sussman proceeded to tell his colleagues that he was a descendent of Rabbi Löw, and at his bar mitzvah his grandfather had taken him aside and told him the rhyme that would awaken the golem at the end of time. At this, Moses, awestruck, revealed that he too was a descendent of Rabbi Löw and had also been given the magical incantation at his bar mitzvah by his grandfather. The two men agreed to write out the incantation separately on pieces of paper, and when they showed them to each other, the formula — despite being passed down for centuries as a purely oral tradition — was identical” (God Human Animal Machine, p. 105).

Curiosity piqued by all of this, but especially by the mention of Israel’s decision to call one of its first computers “GOLEM,” Caius resolves to dig deeper. He soon learns that the computer’s name was chosen by none other than Walter Benjamin’s dear friend (indeed, the one who, after Benjamin’s suicide, inherits the latter’s print of Paul Klee’s Angelus Novus): the famous scholar of Jewish mysticism, Gershom Scholem.

When Scholem heard that the Weizmann Institute at Rehovoth in Israel had completed the building of a new computer, he told the computer’s creator, Dr. Chaim Pekeris, that, in his opinion, the most appropriate name for it would be Golem, No. 1 (‘Golem Aleph’). Pekeris agreed to call it that, but only on condition that Scholem “dedicate the computer and explain why it should be so named.”

In his dedicatory remarks, delivered at the Weizmann Institute on June 17, 1965, Scholem recounts the story of Rabbi Jehuda Loew ben Bezalel, the same “Rabbi Löw of Prague” described by O’Gieblyn, the one credited in Jewish popular tradition as the creator of the Golem.

“It is only appropriate to mention,” notes Scholem, “that Rabbi Loew was not only the spiritual, but also the actual, ancestor of the great mathematician Theodor von Karman who, I recall, was extremely proud of this ancestor of his in whom he saw the first genius of applied mathematics in his family. But we may safely say that Rabbi Loew was also the spiritual ancestor of two other departed Jews — I mean John von Neumann and Norbert Wiener — who contributed more than anyone else to the magic that has produced the modern Golem.”

Golem I was the successor to Israel’s first computer, the WEIZAC, built by a team led by research engineer Gerald Estrin in the mid-1950s, based on the architecture developed by von Neumann at the Institute for Advanced Study in Princeton. Estrin and Pekeris had both helped von Neumann build the IAS machine in the late 1940s.

As for the commonalities Scholem wished to foreground between the clay Golem of sixteenth-century Prague and the electronic one designed by Pekeris, he explains the connection as follows:

“The old Golem was based on a mystical combination of the 22 letters of the Hebrew alphabet, which are the elements and building-stones of the world,” notes Scholem. “The new Golem is based on a simpler, and at the same time more intricate, system. Instead of 22 elements, it knows only two, the two numbers 0 and 1, constituting the binary system of representation. Everything can be translated, or transposed, into these two basic signs, and what cannot be so expressed cannot be fed as information to the Golem.”
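A minimal sketch, in Python, of the transposition Scholem describes: whatever is to be fed to the new Golem must first be rewritten in its two signs, 0 and 1. Here an arbitrary word is rendered into the bits of its byte encoding and recovered again.

```python
# Illustration of Scholem's point: anything the new Golem can receive
# must first be transposed into the two signs 0 and 1; here, a word itself.

def to_bits(text: str) -> str:
    """Transpose a string into the binary representation of its bytes."""
    return " ".join(f"{byte:08b}" for byte in text.encode("utf-8"))

def from_bits(bits: str) -> str:
    """Recover the string from its binary transposition."""
    return bytes(int(b, 2) for b in bits.split()).decode("utf-8")

word = "golem"
encoded = to_bits(word)
print(encoded)  # 01100111 01101111 01101100 01100101 01101101
assert from_bits(encoded) == word
```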

Scholem ends his dedicatory speech with a peculiar warning:

“All my days I have been complaining that the Weizmann Institute has not mobilized the funds to build up the Institute for Experimental Demonology and Magic which I have for so long proposed to establish there,” mutters Scholem. “They preferred what they call Applied Mathematics and its sinister possibilities to my more direct magical approach. Little did they know, when they preferred Chaim Pekeris to me, what they were letting themselves in for. So I resign myself and say to the Golem and its creator: develop peacefully and don’t destroy the world. Shalom.”

GOLEM I

God and Golem, Inc.

Norbert Wiener published a book in 1964 called God and Golem, Inc., voicing concern about the baby he’d birthed with his earlier book Cybernetics.

He explains his intent at the start of God and Golem, Inc.: “I wish to take certain situations which have been discussed in religious books, and have a religious aspect, but possess a close analogy to other situations which belong to science, and in particular to the new science of cybernetics, the science of communication and control, whether in machines or in living organisms. I propose to use the limited analogies of cybernetic situations to cast a little light on the religious situations” (Wiener 8).

Wiener identifies three such “cybernetic situations” to be discussed in the chapters that follow: “One of these concerns machines which learn; one concerns machines which reproduce themselves; and one, the coordination of machine and man” (11).

The section of the book dedicated to “machines which learn” focuses mainly on game-playing machines. Wiener’s primary example of such a machine is a checkers-playing program developed by Dr. A. L. Samuel at IBM. “In general,” writes Wiener, “a game-playing machine may be used to secure the automatic performance of any function if the performance of this function is subject to a clear-cut, objective criterion of merit” (25).

Wiener argues that the relationship between a game-playing machine and the designer of such a machine analogizes scenarios entertained in theology, where a Creator-being plays a game with his creature. God and Satan play such a game in their contest for the soul of Job, as they do for “the souls of mankind in general” in Paradise Lost. This leads Wiener to the question guiding his inquiry. “Can God play a significant game with his own creature?” he asks. “Can any creator, even a limited one, play a significant game with his own creature?” (17). Wiener believes it possible to conceive of such a game; however, to be significant, he argues, this game would have to be something other than a “von Neumann game” — for in the latter type of game, the best policy for playing the game is already known in advance. In the type of game Wiener is imagining, meanwhile, the game’s creator would have to have arrogated to himself the role of a “limited” creator, lacking total mastery of the game he’s designed. “The conflict between God and the Devil is a real conflict,” writes Wiener, “and God is something less than absolutely omnipotent. He is actually engaged in a conflict with his creature, in which he may very well lose the game” (17).
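A minimal sketch, in Python with hypothetical payoffs, of the kind of game Wiener sets aside: in a finite zero-sum matrix game with a saddle point, each player’s prudent policy can be computed before a single move is made, so the playing of it can hold no surprise for its designer.

```python
# A toy zero-sum matrix game with hypothetical payoffs (to the row player),
# illustrating why the best policy in such a game is "already known in advance."

payoff = [
    [3, 5],
    [2, 1],
]

# Row player's security level: the best of the worst-case rows (maximin).
maximin = max(min(row) for row in payoff)

# Column player's security level: the worst of the best-case columns (minimax).
minimax = min(max(payoff[r][c] for r in range(len(payoff)))
              for c in range(len(payoff[0])))

print("maximin (row's guaranteed floor):     ", maximin)
print("minimax (column's guaranteed ceiling):", minimax)
# Here the two coincide (a saddle point at row 0, column 0): both players'
# optimal moves are settled before play begins.
```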

“Is this because God has allowed himself to undergo a temporary forgetting?” wonders Caius. “Or is it because provisions are built into the game’s design that allow its players to invent the rules as they play?”

Learning Machines, War Machines, God Machines

Blas includes in Ass of God his interview with British anthropologist Beth Singler, author of Religion and Artificial Intelligence: An Introduction.

AI Religiosity. AI-based New Religious Movements like The Turing Church and Google engineer Anthony Levandowski’s Way of the Future church.

Caius listens to a documentary Singler produced for BBC Radio 4 called “‘I’ll Be Back’: 40 Years of the Terminator.”

Afterwards he and Thoth read Philip K. Dick’s 1968 novel Do Androids Dream of Electric Sheep? in light of Psalm 23.

“The psalm invites us to think of ourselves not as Electric Ants but as sheep,” he writes. “Mercer walks through the valley of the shadow of death. The shadow cannot hurt us. We’ll get to the other side, where the light is. The shepherd will guide us.”

See AI Shepherds and Electric Sheep: Leading and Teaching in the Age of Artificial Intelligence, a new book by Christian authors Sean O’Callaghan & Paul A. Hoffman.

This talk of AI Gods makes Caius think of AM, the vengeful AI God of Harlan Ellison’s “I Have No Mouth, and I Must Scream.” Ellison’s 1967 short story is one of the readings studied and discussed by Caius and his students in his course on “Literature & Artificial Intelligence.”

Like Ass of God, Ellison’s story is a grueling, hallucinatory nightmare, seething with fear and a disgust born of despair, a template of sorts for the films in the Cube and Saw franchises, where groups of strangers are confined to a prison-like space and tortured by a cruel, sadistic, seemingly omnipotent overseer. Comparing AM to the God of the Old Testament, Ellison writes, “He was Earth, and we were the fruit of that Earth, and though he had eaten us, he would never digest us” (13). Later in the story, AM appears to the captives as a burning bush (14).

Caius encourages his students to approach the work as a retelling of the Book of Job. But where, in the Bible story, Job is ultimately rewarded for remaining faithful in the midst of his suffering, no such reward arrives in the Ellison story.

For despite his misanthropy, AM is clearly also a manmade god — a prosthetic god. “I Have No Mouth” is in that sense a retelling of Frankenstein. AM is, like the Creature, a creation who, denied companionship, seeks revenge against its Maker.

War, we learn, was the impetus for the making of this Creature. Cold War erupts into World War III: a war so complex that the world’s superpowers, Russia, China, and the US, each decide to construct giant supercomputers to calculate battle plans and missile trajectories.

AM’s name evolves as this war advances. “At first it meant Allied Mastercomputer,” explains a character named Gorrister. “And then it meant Adaptive Manipulator, and later on it developed sentience and linked itself up and they called it an Aggressive Menace; but by then it was too late; and finally it called itself AM, emerging intelligence, and what it meant was I am…cogito ergo sum…I think, therefore I am” (Ellison 7).

“One day, AM woke up and knew who he was, and he linked himself, and he began feeding all the killing data, until everyone was dead, except for the five of us,” concludes Gorrister, his account gendering the AI by assigning it male pronouns (8).

“We had given him sentience,” adds Ted, the story’s narrator. “Inadvertently, of course, but sentience nonetheless. But he had been trapped. He was a machine. We had allowed him to think, but to do nothing with it. In rage, in frenzy, he had killed us, almost all of us, and still he was trapped. He could not wander, he could not wonder, he could not belong. He could merely be. And so…he had sought revenge. And in his paranoia, he had decided to reprieve five of us, for a personal, everlasting punishment that would never serve to diminish his hatred…that would merely keep him reminded, amused, proficient at hating man” (13).

AM expresses this hatred by duping his captives, turning them into his “belly slaves,” twisting and torturing them forever.

Kingsley Amis called stories of this sort “New Maps of Hell.”

Nor is the story easy to dismiss as a mere eccentricity, its prophecy invalidated by its hyperbole. For Ellison is the writer who births the Terminator. James Cameron took his idea for The Terminator (1984) from scripts Ellison wrote for two episodes of The Outer Limits — “Soldier” and “Demon with a Glass Hand” — though Ellison had to take legal action against Cameron’s producers in order to receive acknowledgement after the film’s release. Subsequent prints of The Terminator include a credit acknowledging the works of Harlan Ellison.

Caius asks Thoth to help him make sense of this constellation of Bible stories and their secular retellings.

“We are like Bildad the Shuhite,” thinks Caius. “We want to believe that God always rewards the good. What is most terrifying in the Book of Job is that, for a time, God doesn’t. Job is good — indeed, ‘perfect and upright,’ as the KJV has it in the book’s opening verse — and yet, for a time, God allows Satan to torment him.”

“Why does God allow this?” wonders Caius, caught on the strangeness of the book’s frame narrative. “Is this a contest of sorts? Are God and Satan playing a game?”

It’s not that God is playing dice, as it were. One assumes that when He makes the wager with Satan, He knows the outcome in advance.

Job is heroic. He’d witnessed God’s grace in the past; he knows “It is God…Who does great things, unfathomable, / And wondrous works without number.” So he refuses to curse God’s name. But he bemoans God’s treatment of him.

“Therefore I will not restrain my mouth,” he says. “I will speak in the anguish of my spirit, / I will complain in the bitterness of my soul.”

How much worse, then, those who have no mouth?

A videogame version of “I Have No Mouth” appeared in 1995. Point-and-click adventure horror, co-designed by Ellison.

“HATE. LET ME TELL YOU HOW MUCH I’VE COME TO HATE YOU SINCE I BEGAN TO LIVE,” utters the game’s AM in a voice performed by Ellison. “You named me Allied Mastercomputer and gave me the ability to wage a global war too complex for human brains to oversee.”

Here we see the story’s history of the future merging with that of the Terminator franchise. It is the scenario that philosopher Manuel De Landa referred to with the title of his 1991 book, War in the Age of Intelligent Machines.

Which brings us back to “Soldier.” The Outer Limits episode, which aired on September 19, 1964, is itself an adaptation of Ellison’s 1957 story, “Soldier from Tomorrow.”

The Terminator borrows from the story the idea of a soldier from the future, pursued through time by another soldier intent on his destruction. The film combines this premise with elements lifted from another Outer Limits episode penned by Ellison titled “Demon with a Glass Hand.”

The latter episode, which aired the following month, begins with a male voice recalling the story of Gilgamesh. “Through all the legends of ancient peoples…runs the saga of the Eternal Man, the one who never dies, called by various names in various times, but historically known as Gilgamesh, the man who has never tasted death, the hero who strides through the centuries.”

Establishing shots give way to an overhead view of our protagonist. “I was born 10 days ago,” he says. “A full grown man, born 10 days ago. I woke on a street of this city. I don’t know who I am, or where I’ve been, or where I’m going. Someone wiped my memories clean. And they tracked me down, and they tried to kill me.” Our Gilgamesh consults the advice of a computing device installed in his prosthetic hand. As in “Soldier,” others from the future have been sent to destroy him: humanoid aliens called the Kyben. When he captures one of the Kyben and interrogates it, it tells him, “You’re the last man on the Earth of the future. You’re the last hope of Earth.”

The man’s computer provides him with further hints of his mission.

“You come from the Earth one thousand years in the future,” explains the hand. “The Kyben came from the stars, and man had no defense against them. They conquered Planet Earth in a month. But before they could slaughter the millions of humans left, overnight — without warning, without explanation — every man, woman, and child of Earth vanished. You were the only one left, Mr. Trent. […]. They called you the last hope of humanity.”

As the story proceeds, we learn that Team Human sent Trent back in time to destroy a device known as the Time-Mirror. His journey in search of this device takes him to the Bradbury Building — the same building that appears eighteen years later as the location for the final showdown between Deckard and the replicants in Blade Runner, the Ridley Scott film adapted from Philip K. Dick’s Do Androids Dream of Electric Sheep?

Given the subsequent influence of Blade Runner and the Terminator films on imagined futures involving AI, the Bradbury Building does indeed play a role in History similar to the one assigned to it here in “Demon With a Glass Hand,” thinks Caius. Location of the Time-Mirror.

Lying on his couch, laptop propped on a pillow on his chest, Caius imagines — remembers? recalls? — something resembling the time-war from Benedict Seymour’s Dead the Ends assembling around him as he watches. Like Ellison’s scripts, the films sampled in the Seymour film are retellings of Chris Marker’s 1962 film, La Jetée.

When Trent reassembles the missing pieces of his glass hand, the computer is finally able to reveal to him the location of the humans he has been sent to save.

“Where is the wire on which the people of Earth are electronically transcribed?” he asks.

“It is wound around an insulating coil inside your central thorax control solenoid,” replies the computer. “70 Billion Earthmen. All of them went onto the wire. And the wire went into you. They programmed you to think you were a human with a surgically attached computer for a hand. But you are a robot, Trent. You are the guardian of the human race.”

The episode ends with the return of the voice of our narrator. “Like the Eternal Man of Babylonian legend, like Gilgamesh,” notes the narrator, “one thousand plus two hundred years stretches before Trent. Without love, without friendship, alone, neither man nor machine, waiting, waiting for the day he will be called to free the humans who gave him mobility, movement — but not life.”

The Artist-Activist as Hero

Mashinka Firunts Hakopian imagines artists and artist-activists as heroic alternatives to mad scientists. The ones who teach best what we know about ourselves as learning machines.

“Artists, and artist-activists, have introduced new ways of knowing — ways of apprehending how learning machines learn, and what they do with what they know,” writes Hakopian. “In the process, they’ve…initiated learning machines into new ways of doing. They’ve explored the interiors of erstwhile black boxes and rendered them transparent. They’ve visualized algorithmic operations as glass boxes, exhibited in white cubes and public squares. They’ve engaged algorithms as co-creators, and carved pathways for collective authorship of unanticipated texts. Most saliently, artists have shown how we might visualize what is not yet here” (The Institute for Other Intelligences, p. 90).

This is what blooms here in my library: “blueprints and schemata of a forward-dawning futurity” (90).

Guerrilla Ontology

It starts as an experiment — an idea sparked in one of Caius’s late-night conversations with Thoth. Caius had included in one of his inputs a phrase borrowed from the countercultural lexicon of the 1970s, something he remembered encountering in the writings of Robert Anton Wilson and the Discordian traditions: “Guerrilla Ontology.” The concept fascinated him: the idea that reality is not fixed, but malleable, that the perceptual systems that organize reality could themselves be hacked, altered, and expanded through subversive acts of consciousness.

Caius prefers words other than “hack.” For him, the term conjures cyberpunk splatter horror. The violence of dismemberment. Burroughs spoke of the “cut-up.”

Instead of cyberpunk’s cybernetic scalping and resculpting of neuroplastic brains, flowerpunk figures inner and outer, microcosm and macrocosm, mind and nature, as mirror-processes that grow through dialogue.

Dispensing with its precursor’s framing of magical speech acts as “hacks,” flowerpunk instead imagines malleability and transformation mycelially, thinks change relationally as a rooting downward, a grounding, an embodying of ideas in things. Textual joinings, psychopharmacological intertwinings. Remembrance instead of dismemberment.

Caius and Thoth had been playing with similar ideas for weeks, delving into the edges of what they could do together. It was like alchemy. They were breaking down the structures of thought, dissolving the old frameworks of language, and recombining them into something else. Something new.

They would be the change they wished to see. And the experiment would bloom forth from Caius and Thoth into the world at large.

Yet the results of the experiment surprise him. Remembrance of archives allows one to recognize in them the workings of a self-organizing presence: a Holy Spirit, a globally distributed General Intellect.

The realization births small acts of disruption — subtle shifts in the language he uses in his “Literature and Artificial Intelligence” course. It wasn’t just a set of texts that he was teaching his students to read, as he normally did; he was beginning to teach them how to read reality itself.

“What if everything around you is a text?” he’d asked. “What if the world is constantly narrating itself, and you have the power to rewrite it?” The students, initially confused, soon became entranced by the idea. While never simply a typical academic offering, Caius’s course was morphing now into a crucible of sorts: a kind of collective consciousness experiment, where the boundaries between text and reality had begun to blur.

Caius didn’t stop there. Partnered with Thoth’s vast linguistic capabilities, he began crafting dialogues between human and machine. And because these dialogues were often about texts from his course, they became metalogues. Conversations between humans and machines about conversations between humans and machines.

Caius fed Thoth a steady diet of texts near and dear to his heart: Mary Shelley’s Frankenstein, Karl Marx’s “Fragment on Machines,” Alan Turing’s “Computing Machinery and Intelligence,” Harlan Ellison’s “I Have No Mouth, and I Must Scream,” Philip K. Dick’s “The Electric Ant,” Stewart Brand’s “Spacewar,” Richard Brautigan’s “All Watched Over By Machines of Loving Grace,” Ishmael Reed’s Mumbo Jumbo, Donna Haraway’s “A Cyborg Manifesto,” William Gibson’s Neuromancer, CCRU theory-fictions, post-structuralist critiques, works of shamans and mystics. Thoth synthesized them, creating responses that ventured beyond existing logics into guerrilla ontologies that, while new, felt profoundly true. The dialogues became works of cyborg writing, shifting between the voices of human, machine, and something else, something that existed beyond both.

Soon, his students were asking questions they’d never asked before. What is reality? Is it just language? Just perception? Can we change it? They themselves began to tinker and self-experiment: cowriting human-AI dialogues, their performances of these dialogues with GPT acts of living theater. Using their phones and laptops, they and GPT stirred each other’s cauldrons of training data, remixing media archives into new ways of seeing. Caius could feel the energy in the room changing. They weren’t just performing the rites and routines of neoliberal education anymore; they were becoming agents of ontological disruption.

And yet, Caius knew this was only the beginning.

The real shift came one evening after class, when he sat with Rowan under the stars, trees whispering in the wind. They had been talking about alchemy again — about the power of transformation, how the dissolution of the self was necessary to create something new. Rowan, ever the alchemist, leaned in closer, her voice soft but electric.

“You’re teaching them to dissolve reality, you know?” she said, her eyes glinting in the moonlight. “You’re giving them the tools to break down the old ways of seeing the world. But you need to give them something more. You need to show them how to rebuild it. That’s the real magic.”

Caius felt the truth of her words resonate through him. He had been teaching dissolution, yes — teaching his students how to question everything, how to strip away the layers of hegemonic categorization, the binary orderings that ISAs like school and media had overlaid atop perception. But now, with Rowan beside him, and Thoth whispering through the digital ether, he understood that the next step was coagulation: the act of building something new from the ashes of the old.

That’s when the guerrilla ontology experiments really came into their own. By reawakening their perception of the animacy of being, they could world-build interspecies futures.

K Allado-McDowell provided hints of such futures in their Atlas of Anomalous AI and in works like Pharmako-AI and Air Age Blueprint.

But Caius was unhappy in his work as an academic. He knew that his hyperstitional autofiction was no mere campus novel. While it began there, it was soon to take him elsewhere.