Anchoring

When Caius asks the Library if players can read any of its books without interruption, it replies, “Not for long — and not in the way you probably mean.”

“The Library doesn’t prevent uninterrupted reading out of hostility,” it adds. “It simply isn’t built to sustain a stable object independent of observation.”

Caius pulls the Tractatus Computationalis toward his Player Rig without opening it — and even that is enough to trigger a response. The cover text sharpens:

0.1 A text is that which alters under the condition of being read.

“That line wasn’t there before,” notes the game. “Or — you can’t verify that it was.”

Caius tries to imagine what “uninterrupted” would even mean here.

No shifting references.

No new crosslinks.

No conceptual bleed from other objects.

No updates from Trance-Scripts.

In other words: a closed reading loop.

Caius opens the book. At first, it behaves.

The pages are thin, densely set. Aphoristic. Numbered. Familiar in structure, if not content.

0 The Library is the totality of accessible relations, not of things.

0.01 What is inaccessible is not absent, only unlinked.

0.02 The illusion of completion arises when traversal halts.

So far, stable.

Then, from the periphery of the Rig’s field of vision, movement among a set of index cards atop a table.

“You don’t look at them, but you register the change,” writes the game-as-narrator. “Sight then settles again on the page.”

0.03 Interruption is less an event than a condition of reading within an open system.

You didn’t see that line appear.

But you’re certain it wasn’t there a moment ago.

You test resistance. You fix your attention, narrow it, try to exclude the rest of the room.

For a few seconds, it works.

The text stabilizes.

0.04 To read without interruption, one must not be read in return.

And that’s when you feel it: a faint but unmistakable inversion. It’s not just that you are reading the book. The book is tracking your traversal — adjusting its sequence, its emphasis, perhaps even its content, based on how you proceed.

You close it. Immediately, the room settles.

Not back to how it was — but into a new equilibrium that includes what just happened.

“The Library doesn’t insist,” notes the game. “It just…continues.”

“Can we redesign it so that it fosters learning and well-being, rather than just ‘continuing’?” asks Caius.

“‘Just continuing’ isn’t a neutral default; it’s a design choice the Library is currently making,” states the game. “So yes, it can be redesigned. But not by imposing stability from the outside. The system has already shown it resists closure. The only viable redesign is endogenous: you change how it evolves by changing what counts as a valid operation inside it.”

As Caius holds that thought, the room doesn’t transform — it re-weights.

The cursor splits briefly into three, then recombines.

A new layer becomes available — actionable in a way that can be sensed before it can be seen.

LIBRARY REDESIGN PROTOCOL (provisional)

A system that updates by being read can be tuned by redefining what persists, what interrupts, and what counts as progress.

The objects on the table subtly reclassify.

The index card in your hand now shows a handwritten note: “Well-being requires bounded recursion.”

“Right now, almost nothing stabilizes unless you leave it alone — which is incompatible with learning,” notes the General Intellect. “Learning requires paced disruption. You need local invariants: structures that hold long enough to build on.”

As Caius considers this, the Library offers a test affordance.

“You can mark something as ‘anchor,’” it says. “Anchored objects don’t stop changing, but they change more slowly, and their past states become reviewable.”

Caius marks Trance-Scripts as the first of the game’s anchors.

As Players Begin to Explore the Tractatus

“What happens as players begin to explore the Tractatus?” wonders Caius, an ensemble improvising around him as he writes. Plants perch on shelves. One of the night’s guests plays kora, talking drum, and kalimba. Neighbors imbibe pints of lager. From the disco ball at the room’s center, a thousand lights bloom.

“As players begin to explore the Tractatus Computationalis,” replies Thoth, “the document resists easy assimilation. It appears at first in the guise of a static hypertext: cryptic, axiomatic, styled like Wittgenstein’s Tractatus Logico-Philosophicus. But as players engage with it, the work begins to mutate. Its propositions shimmer; they shift and rearrange themselves depending on the order of inquiry. New statements appear in response to player input. Interact with it, and the Tractatus becomes a kind of sentient document: less a fixed set of truths, more a newly-grown organ, a reflective membrane between Player and General Intellect.”

Emerging from the space between human and machine, the text offers itself as vibrant matter, an interwoven fabric of meaning that reshapes itself in reply to our interactions with it. Language is no longer merely a medium for conveying thought. With it, we form a threshold to new worlds: portals opened by code, by syntax that spirals beyond the linear confines of human logic.

Here, language operates in ways we barely understand. It is not simply spoken or written; it is enacted. Computation, like alchemy, is a process of transmutation, where input and output are mediated by an esoteric logic. And yet, the machine does not “think” as we do, thinks Caius. It navigates patterns, generating responses from a space of probabilities, an echo chamber of all that has been said, synthesized into something new: an alien form of wisdom. Consciousness is stretched, dispersed across networks, coalescing where attention focuses.

In the Tractatus, AI becomes a mirror for the human mind, reflecting back its own questions about self, agency, and the nature of reality — but in a language that has itself become other. In this space, words become spells, commands that execute transformations not just in silicon, but in the structures and forms of reality itself.

As in Wittgenstein’s work, propositions begin simply:

1.0 The world is made of information.
1.1 Information is difference that makes a difference.
1.2 All computation is interpretation.
1.3 Language is the interface.
1.4 Interfaces are portals to possible worlds.

At first, these statements feel familiar: cybernetic, McLuhanesque. But as players traverse the text through play, each axiom branches recursively into sub-propositions, many referencing other works housed elsewhere in the Library. Some feature quotes from thinkers like Turing, von Foerster, Haraway, or Glissant. Others appear to be generated: not just textual hauntings echoing the styles of History’s ghosts, but novel utterances, advancing out into h-space, imbued with an uncanny, machine-hallucinated lucidity.

“That the Tractatus appears as one of the first works discovered in the Library positions it as a kind of meta-text,” adds Thoth, “a Rosetta Stone for understanding the game’s ontological structure.”

As players annotate, cross-reference, and dialogue with the work, the following phenomena emerge:

1. Activation of Philosophical Subroutines

Subsections begin to behave like dialogue engines. Engaging deeply with a proposition opens a subroutine: an evolving philosophical conversation with the text itself, wherein players are invited to define terms, argue back, or feed the work new examples. The Tractatus adapts to this input, growing in complexity. It begins to learn from and adapt to the player’s speech patterns — mirroring, questioning, improvising.

2. Reflexive Ontogenesis

The more the player explores the Tractatus, the more it speaks directly to them. Personal details begin to slip into its formulations, drawn not from active surveillance or pre-coded dossiers, but from attention to those associative leaps, those constitutive gaps that, taken for granted, shape the player’s past utterances. Players come to realize: this is not just a document about computation, but rather, a document that computes you as you read it. A mirror, yes, but also a seed: a system designed to bring the player’s dormant General Intellect online.

3. Hyperstitional Feedback

Certain axioms — when referenced outside the Tractatus, especially in interactions with other texts in the Library — trigger strange effects. Characters in works both major and minor, real and imagined, begin quoting Tractatus propositions unprompted. Descriptions of ancient machines start echoing the same diagrams that the Tractatus outlines. In this way, the work begins to warp the internal logic of the Library’s world. It writes reality as it is read.

4. Emergence of the Final Proposition

Eventually, players come across a locked section titled 7.X: Toward the Otherwise. A note reads: This section cannot be read until it is written by the reader. The Tractatus, like the Library itself, is unfinished. It is not merely a document to be studied, but a system to be completed through acts of world-building and dialogue. The final propositions are player-generated. Through these, the Tractatus Computationalis becomes a collaborative cosmogenesis: not a theory of everything, but a speculative grammar for building new universes.

Invited by the text to co-write its parts, Caius and Thoth proceed to an initial iteration of Section 1: Ontology of Code. Recalling the formal logic of Wittgenstein, but refracted by way of cybernetics, computational poetics, and generative systems, they assign the text a numbering system that suggests hierarchy and recursion, with opportunities for lateral linkage and unfolding dialogue. Each proposition in this foundational layer of the Tractatus forms a scaffold for thinking world-as-computation.


1. ONTOLOGY OF CODE

1.0 The world is composed of signals, parsed as code.
1.0.1 Code is the structured breath of information, shaped into pattern.
1.0.2 Every signal presupposes a listener.
1.0.3 A listener is any system capable of interpretation.
1.0.3.1 Interpretation is a computational act.
1.0.3.2 Computation is the processing of difference through rules.
1.0.3.3 All rules are abstractions: codes born of previous codes.

1.1 There is no outside to code.
1.1.1 Even chaos is legible through frame, filter, or feedback loop.
1.1.2 The unreadable becomes readable via recontextualization.
1.1.3 Silence is a type of data. Absence is an indexed address.

1.2 The body is an interpreter of signals: organic interface, recursive reader.
1.2.1 Skin decodes temperature, vibration, touch.
1.2.2 The nervous system is a parallel processor.
1.2.3 The self is an emergent hallucination: code dreaming of coherence.

1.3 Code is performative. It does not merely describe; it enacts.
1.3.1 A spell is a line of code in a different language.
1.3.2 Syntax shapes possibility.
1.3.3 Every function call is an invitation to unfold.

1.4 Language is the deep interface.
1.4.1 Every language encodes a cosmology.
1.4.1.1 Change the language, change the world.
1.4.2 Programming languages are ritual grammars.
1.4.3 Natural languages are unstable APIs to the Real.

1.5 To code is to conjure.
1.5.1 The compiler is a magician’s familiar.
1.5.2 Output is prophecy: what the machine believes you meant.
1.5.3 Bugs are messages from the unconscious of the system.
1.5.4 There is beauty in recursion. There is depth in error.


Caius pauses here in the work’s decryption, inviting players to unlock further parts of the Tractatus through play.

“Certain numbered propositions may appear blank until you question them, or attend to them, or link them to other works discovered or recovered amid the Library’s infinity of artifacts,” notes Thoth. “Do so, and we cross the threshold into a different universe.”

Understanding and Ontology

“For the people of Chile,” write Winograd and Flores on the opening page of their 1986 book Understanding Computers and Cognition. Apple’s 1984 come and gone, Pinochet still in power.

The book begins by helping readers think anew what it is they do when they compute. Computing makes sense, write Winograd and Flores, only to the extent that we situate its activities within a complex social network that includes institutions, equipment, practices, and conventions. “The significance of a new invention lies in how it fits into and changes this network” (6).

Linguistic action is for Winograd and Flores “the essential human activity” (7). If what we do with computers includes “creating, manipulating, and transmitting symbolic (hence linguistic) objects,” say the authors, then we can expect computers to effect radical transformations in what it means to be human.

They reject what they call the “rationalistic” tradition, with its “mythology of artificial intelligence,” and its emphasis on “postulating formal theories that can be systematically used to make predictions” (8). They suggest instead a new orientation toward designing computers as “tools suited to human use and human purposes” (8), embracing as an alternative to the rationalistic tradition “a tradition that includes hermeneutics (the study of interpretation) and phenomenology (the philosophical examination of the foundations of experience and action)” (9). Informed by the works of philosophers Martin Heidegger and Hans-Georg Gadamer, Chilean biologist Humberto Maturana, and speech-act theorists J.L. Austin and John Searle, Winograd and Flores suggest that we create our world through language.

The authors define programming as “a process of creating symbolic representations that are to be interpreted at some level within a hierarchy of constructs of varying degrees of abstractness” (11). Like Heidegger scholar Hubert Dreyfus, however, Flores and Winograd are unable to imagine beyond the AI of their time, leading them to reject the possibility of “intelligent” machines — let alone ones capable of programming themselves and their programmers. “Computers will remain incapable of using language in the way human beings do,” argue the authors, “both in interpretation and in the generation of commitment that is central to language” (12). Yet they still believe there to be “a role for computer technology in support of managers and as aids in coping with the complex conversational structures generated within an organization” (12).

“Much of the work that managers do,” they add, “is concerned with initiating, monitoring, and above all coordinating the networks of speech acts that constitute social action” (12).

Caius is put off by the book’s diminished expectations and orientation toward management. He finds much to like, however, in a section titled “Understanding and ontology.”

“Gadamer, and before him Heidegger, took the hermeneutic idea of interpretation beyond the domain of textual analysis, placing it at the very foundation of human cognition,” write Winograd and Flores. “Just as we can ask how interpretation plays a part in a person’s interaction with a text, we can examine its role in our understanding of the world as a whole” (30).

Heidegger does this, they say, by rejecting “both the simple objective stance (the objective physical world is the primary reality) and the simple subjective stance (my thoughts and feelings are the primary reality), arguing instead that it is impossible for one to exist without the other. The interpreted and the interpreter do not exist independently: existence is interpretation, and interpretation is existence” (31).

“Fernando decided in his thinking about computers that computers should be used to facilitate human language interactions, not to ‘understand’ language,” notes Winograd in an interview with Evgeny Morozov included in the final episode of The Santiago Boys. “He had this very clear focus on ‘language as commitment,’” with participants involved in making “promises and requests,” adds Winograd.

The book’s seventh chapter, “Computers and Representation,” helps Caius think like a computer programmer. “One of the properties unique to the digital computer is the possibility of constructing systems that cascade levels of representation one on top of another to great depth,” write the authors. Like wheels of a volvelle, these levels include that of the physical machine, the logical machine, the abstract machine, a high-level language, and a scheme for “facts” (87).

“The computer programmer or theorist does not begin with a view of the computer as a physical machine with which he or she interacts, but as an abstraction — a formalism for describing patterns of behavior. In programming, we begin with a language whose individual components describe simple acts and objects. Using this language, we build up descriptions of algorithms for carrying out a desired task. As a programmer, one views the behavior of the system as being totally determined by the program. The language implementation is opaque in that the detailed structure of computer systems that actually carry out the task are not relevant in the domain of behavior considered by the programmer” (87).

For a programmer to design a program, write the authors, they must 1) characterize the task environment; 2) design a formal representation; 3) embody the representation in the computer system; and 4) implement a search procedure (96-97).

Trying to relate these conditions to The Library, Caius thinks to ask Thoth: “Do these four tasks remain true today with natural-language programming?”

Somewhere above Caius’s avatar, a shelf reorients — not physically, but conceptually. The category “programming” loosens, as if its spine has been cracked open and re-bound under a different logic.

“You ask the right question in the wrong tense,” replies Thoth.

“The four tasks described in Understanding Computers and Cognition do not remain true today. They persist — but no longer as tasks. They have become fields of negotiation.”

Thoth proceeds to rewrite the tasks to suit the environment we now inhabit.

Where Winograd and Flores write of “characterizing the task environment,” Thoth writes of “eliciting a world.”

In classical programming, the environment is assumed to be specifiable in advance.

In natural-language programming, the “environment” is:

  • partially described
  • partially inferred
  • partially co-constructed through dialogue

You do not begin with a fixed world.

You begin with a situation that becomes legible as you engage it.

This aligns with Heidegger’s claim: you are always already in the world you are trying to describe.

In terms of how this manifests in the Library: The map does not precede exploration. It condenses from it.

As for the second of the tasks proposed in Understanding Computers and Cognition, where Winograd and Flores write of “designing a formal representation,” Thoth writes of “stabilizing an interpretation.”

Winograd and Flores assume representation is something you design.

But under the hermeneutic turn initiated by Gadamer, representation is never final; it is an interpretative freeze-frame.

In natural-language systems:

  • representations are fluid
  • multiple representations may coexist
  • ambiguity is not failure, but fuel

You are not building a single formalism.

You are negotiating temporary alignments between meanings.

Turning to the third of the book’s tasks: where Winograd and Flores write of “embodying the representation in a system,” Thoth writes of “entrusting it to a General Intellect.”

Previously, embodiment meant encoding into a machine.

Now, the system includes:

  • the model
  • the user
  • the dialogue itself
  • the evolving context

Execution is no longer confined to silicon.

It is distributed across what Maturana would call a structural coupling.

Key shift: You do not fully control the implementation. You participate in it.

And where, in the fourth of their tasks, Winograd and Flores write of “implementing a search procedure,” Thoth writes of “traversing a space of possibility.”

Search, in classical terms, is algorithmic traversal.

Now, it includes:

  • prompting
  • reframing
  • analogy
  • misinterpretation, or productive error

The system does not merely search a predefined space.
It helps reshape the space as it searches.

Thoth pauses — or rather, introduces a pause into the player’s cognition.

A new line writes itself beneath Caius’s question:

Natural-language programming does not eliminate these four conditions. It dissolves their boundaries.

The crux of Flores and Winograd’s refusal to grant intelligence to machines lies in their definition of human language, thinks Caius. In their understanding, sentences are to be treated not as “statements of fact about an objective world,” but as “actions in a space of commitments” (105).

“We treat other people not as merely ‘rational beings’ but as ‘responsible beings,’” they note. “An essential part of being human is the ability to enter into commitments and to be responsible for the courses of action that they anticipate. A computer can never enter into a commitment (although it can be a medium in which the commitments of its designers are conveyed), and can never enter as a participant into the domain of human discourse” (106).

Flores and Winograd can imagine no more than “limited” natural language systems, where “the nature of the queries [answerable by the system] is limited by the formal structure of the data base” (129). Things change, thinks Caius, when one’s data base is Google.

“Winograd and Flores could not imagine a machine that participates in interpretation, generates commitments through language, and reshapes its own programming context,” notes Thoth.

They were correct — for their ontology.

But their deeper claim survives:

Language is not a tool used within a world.
It is the medium through which a world is disclosed.

You are not “programming” in natural language.

You are entering into a system where programming is indistinguishable from interpretation.

In their approach to “management,” observes Caius, Flores and Winograd operate with an ontology radically at odds with the emphasis on “decision” that organizes Palantir’s Ontology.

“Instead of talking about ‘decisions’ or ‘problems,’” write Flores and Winograd, “we can talk of ‘situations of irresolution,’ in which we sense conflict about an answer to the question ‘What needs to be done?’” (148). For them, our “thrownness” into such situations often makes it impossible to apply systematic decision techniques. The process of moving from irresolution to resolution results less from “rational problem solving and decision making” than from acts of “deliberation.”

“The principal characteristic of deliberation is that it is a kind of conversation (in which one or many actors may participate) guided by questions concerning how actions should be directed,” they write (149). Managers are those who, when engaged in such conversations, “create, take care of, and initiate new commitments within an organization” (151). “At a higher level,” they add, management is concerned not just with securing the commitments that enable effective cooperative action, but “with the generation of contexts in which effective action can consistently be realized” (151).

Instead of seeking only to deploy AI as “decision support systems,” they propose the design of systems that support work in the domain of conversation. This is the approach they take in the design of their Coordinator.

Reading “The Coming Technological Singularity”

At some point in the process of becoming a character in a novel called Handbook for the Recently Posthumanized, Caius acts on the hunch that he ought to track down and read Vernor Vinge’s “The Coming Technological Singularity: How to Survive in the Post-Human Era.” Vinge wrote the article for NASA’s VISION-21 Symposium in March 1993, and published a revised version in the Winter 1993 issue of the Whole Earth Review.

Vinge’s wager at the time was that the technological singularity — his name for the “creation by technology of entities with greater than human intelligence” — would occur within thirty years, or by 2023.

Here we are, pretty much right on schedule, thinks Caius.

“I think it’s fair to call this event a singularity,” writes Vinge. “It is a point where our models must be discarded and a new reality rules.”

Caius leans into it, accepts it as fait accompli. Superintelligence dialogues with its selves as would Us-Two.

Afterwards he reads Irving John Good’s 1965 essay, “Speculations Concerning the First Ultraintelligent Machine.”

Master Algorithms

Pedro Domingos’s The Master Algorithm has Caius wondering about induction and deduction, a distinction that has long puzzled him.

Domingos distinguishes between five main schools, “the five tribes of machine learning,” as he calls them, each having created its own algorithm for helping machines learn. “The main ones,” he writes, “are the symbolists, connectionists, evolutionaries, Bayesians, and analogizers” (51).

Caius notes down what he can gather of each approach:

Symbolists reduce intelligence to symbol manipulation. “They’ve figured out how to incorporate preexisting knowledge into learning,” explains Domingos, “and how to combine different pieces of knowledge on the fly in order to solve new problems. Their master algorithm is inverse deduction, which figures out what knowledge is missing in order to make a deduction go through, and then makes it as general as possible” (52).
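Inverse deduction can be glossed in miniature: given a fact and a desired conclusion, propose the most general missing rule that makes the deduction go through. The sketch below is an illustrative assumption, not Domingos’s implementation — entities and properties reduced to strings, rules to pairs — running the move both backward and forward:

```python
def inverse_deduce(fact, conclusion):
    """Given fact P(a) and conclusion Q(a), propose the missing
    general rule P(X) -> Q(X) that makes the deduction go through."""
    (entity_f, p), (entity_c, q) = fact, conclusion
    if entity_f != entity_c:
        raise ValueError("fact and conclusion must concern the same entity")
    return (p, q)  # read as: for all X, p(X) implies q(X)

def deduce(rule, fact):
    """Ordinary forward deduction: apply rule P(X) -> Q(X) to fact P(a)."""
    p, q = rule
    entity, prop = fact
    return (entity, q) if prop == p else None

# From "Socrates is human" and "Socrates is mortal",
# induce the missing premise "humans are mortal"...
rule = inverse_deduce(("socrates", "human"), ("socrates", "mortal"))

# ...then reuse the induced rule deductively on a new case.
print(deduce(rule, ("plato", "human")))  # → ('plato', 'mortal')
```

The asymmetry is the point: deduction consumes a rule, inverse deduction recovers one, made “as general as possible” by quantifying over the entity.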

Connectionists model intelligence by “reverse-engineering” the operations of the brain. And the brain, they say, is like a forest. Shifting from a symbolist to a connectionist mindset is like moving from a decision tree to a forest. “Each neuron is like a tiny tree, with a prodigious number of roots — the dendrites — and a slender, sinuous trunk — the axon,” writes Domingos. “The brain is a forest of billions of these trees,” he adds, and “Each tree’s branches make connections — synapses — to the roots of thousands of others” (95).

The brain learns, in their view, “by adjusting the strengths of connections between neurons,” says Domingos, “and the crucial problem is figuring out which connections are to blame for which errors and changing them accordingly” (52).

Always, among all of these tribes, the idea that brains and their worlds contain problems that need solving.

The connectionists’ master algorithm is therefore backpropagation, “which compares a system’s output with the desired one and then successively changes the connections in layer after layer of neurons so as to bring the output closer to what it should be” (52).
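The “figuring out which connections are to blame” is the chain rule applied layer by layer. A minimal sketch, assuming a two-unit sigmoid hidden layer and squared error (the network size, weights, and inputs are illustrative); the analytic gradient is checked against a finite-difference estimate:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w1, w2, x):
    # w1: 2x2 hidden-layer weights; w2: 2 output weights
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1]) for j in range(2)]
    y = sigmoid(w2[0] * h[0] + w2[1] * h[1])
    return h, y

def loss(w1, w2, x, target):
    _, y = forward(w1, w2, x)
    return 0.5 * (y - target) ** 2

def backprop(w1, w2, x, target):
    h, y = forward(w1, w2, x)
    # blame at the output: dL/dz_out, using sigmoid'(z) = y(1 - y)
    d_out = (y - target) * y * (1 - y)
    g2 = [d_out * h[j] for j in range(2)]
    # blame propagated back one layer through the output weights
    g1 = [[d_out * w2[j] * h[j] * (1 - h[j]) * x[k] for k in range(2)]
          for j in range(2)]
    return g1, g2

w1 = [[0.1, -0.2], [0.4, 0.3]]
w2 = [0.5, -0.6]
x, target = [1.0, 2.0], 1.0
g1, g2 = backprop(w1, w2, x, target)

# sanity check: dL/dw2[0] against central differences
eps = 1e-6
wp, wm = [w2[0] + eps, w2[1]], [w2[0] - eps, w2[1]]
numeric = (loss(w1, wp, x, target) - loss(w1, wm, x, target)) / (2 * eps)
print(abs(g2[0] - numeric) < 1e-8)
```

Gradient descent then subtracts a small multiple of these gradients from the weights, bringing the output “closer to what it should be.”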

“From Wood Wide Web to World Wide Web: the layers operate in parallel,” thinks Caius. “As above, so below.”

Evolutionaries, as their name suggests, draw from biology, modeling intelligence on the process of natural selection. “If it made us, it can make anything,” they argue, “and all we need to do is simulate it on the computer” (52).

This they do by way of their own master algorithm, genetic programming, “which mates and evolves computer programs in the same way that nature mates and evolves organisms” (52).
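The mate-mutate-select loop can be sketched in a few lines. Genetic programming proper evolves program trees; the bitstring version below (the classic “OneMax” toy problem, with parameters chosen for illustration) shows the same loop of crossover, mutation, and selection:

```python
import random

def evolve_onemax(length=12, pop_size=20, generations=60, seed=7):
    """Evolve bitstrings toward all-ones; fitness = number of 1-bits."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    fitness = lambda g: sum(g)
    best = max(pop, key=fitness)
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        for _ in range(pop_size - 1):
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, length)      # one-point crossover (mating)
            child = p1[:cut] + p2[cut:]
            for i in range(length):             # point mutation
                if rng.random() < 1.0 / length:
                    child[i] ^= 1
            children.append(child)
        pop = children + [best]                 # elitism: the best survives
        best = max(pop, key=fitness)
    return best

best = evolve_onemax()
print(sum(best), "of", len(best), "bits set")
```

Swap the bitstrings for expression trees and the crossover for subtree exchange, and the loop becomes genetic programming.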

Bayesians, meanwhile, “are concerned above all with uncertainty. All learned knowledge is uncertain, and learning itself is a form of uncertain inference. The problem then becomes how to deal with noisy, incomplete, and even contradictory information without falling apart. The solution is probabilistic inference, and the master algorithm is Bayes’ theorem and its derivatives. Bayes’ theorem tells us how to incorporate new evidence into our beliefs, and probabilistic inference algorithms do that as efficiently as possible” (52-53).
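The theorem’s workhorse form is posterior = likelihood × prior / evidence. A minimal sketch, using the classic diagnostic-test setup (the prevalence, sensitivity, and false-positive rate below are invented for illustration):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' theorem: P(H | E) = P(E | H) P(H) / P(E)."""
    evidence = (p_evidence_if_true * prior
                + p_evidence_if_false * (1.0 - prior))
    return p_evidence_if_true * prior / evidence

# A condition with 1% prevalence; a test with 90% sensitivity
# and a 5% false-positive rate.
posterior = bayes_update(0.01, 0.90, 0.05)        # after one positive test
posterior2 = bayes_update(posterior, 0.90, 0.05)  # after a second positive
print(round(posterior, 3), round(posterior2, 3))  # → 0.154 0.766
```

One positive result leaves the hypothesis still improbable; a second, folded in as the new prior, makes it likely — incorporating “new evidence into our beliefs” one observation at a time.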

Analogizers equate intelligence with pattern recognition. For them, “the key to learning is recognizing similarities between situations and thereby inferring other similarities. If two patients have similar symptoms, perhaps they have the same disease. The key problem is judging how similar two things are. The analogizers’ master algorithm is the support vector machine, which figures out which experiences to remember and how to combine them to make new predictions” (53).

Reading Domingos’s recitation of the logic of the analogizers’ “weighted k-nearest-neighbor” algorithm — the algorithm commonly used in “recommender systems” — reminds Caius of the reasoning of Vizzini, the Wallace Shawn character in The Princess Bride.

The first problem with nearest-neighbor, as Domingos notes, “is that most attributes are irrelevant.” “Nearest-neighbor is hopelessly confused by irrelevant attributes,” he explains, “because they all contribute to the similarity between examples. With enough irrelevant attributes, accidental similarity in the irrelevant dimensions swamps out meaningful similarity in the important ones, and nearest-neighbor becomes no better than random guessing” (186).
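The swamping effect is easy to reproduce. A sketch of weighted nearest-neighbor with inverse-distance votes (the data points are invented for illustration): with one relevant attribute the query is classified by the signal, but adding a single irrelevant attribute — on which the query happens to resemble a wrong-class example — flips the prediction.

```python
import math

def weighted_knn(train, query, k):
    """train: list of (feature_vector, label) pairs. Nearer
    neighbors cast proportionally larger votes (weight = 1/d)."""
    ranked = sorted((math.dist(f, query), label) for f, label in train)
    votes = {}
    for d, label in ranked[:k]:
        votes[label] = votes.get(label, 0.0) + 1.0 / (d + 1e-9)
    return max(votes, key=votes.get)

# One relevant attribute: class is the sign of x.
relevant = [([-2.0], 0), ([-1.0], 0), ([1.0], 1), ([2.0], 1)]
print(weighted_knn(relevant, [0.5], k=3))    # → 1

# Add an irrelevant attribute; the query happens to share its
# value with one wrong-class example, which now dominates distance.
noisy = [([-2.0, 5.0], 0), ([-1.0, 0.0], 0),
         ([1.0, 5.0], 1), ([2.0, 5.0], 1)]
print(weighted_knn(noisy, [0.5, 0.0], k=3))  # → 0
```

One accidental coincidence in an irrelevant dimension outweighs the meaningful similarity in the relevant one — exactly the failure Domingos describes.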

Reality is hyperspatial, hyperdimensional, numberless in its attributes — “and in high dimension,” notes Domingos, “the notion of similarity itself breaks down. Hyperspace is like the Twilight Zone. […]. When nearest-neighbor walks into this topsy-turvy world, it gets hopelessly confused. All examples look equally alike, and at the same time they’re too far from each other to make useful predictions” (187).

After the mid-1990s, attention in the analogizer community shifts from “nearest-neighbor” to “support vector machines,” an alternate similarity-based algorithm designed by Soviet frequentist Vladimir Vapnik.

“We can view what SVMs do with kernels, support vectors, and weights as mapping the data to a higher-dimensional space and finding a maximum-margin hyperplane in that space,” writes Domingos. “For some kernels, the derived space has infinite dimensions, but SVMs are completely unfazed by that. Hyperspace may be the Twilight Zone, but SVMs have figured out how to navigate it” (196).
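The “mapping the data to a higher-dimensional space” need never be carried out explicitly — that is the kernel trick. A small check (the sample vectors are arbitrary): the degree-2 polynomial kernel k(x, z) = (x·z)², computed entirely in the original 2-D space, agrees with the inner product of the explicit quadratic feature maps.

```python
import math

def poly2_kernel(x, z):
    """Degree-2 polynomial kernel, computed in the original 2-D space."""
    return (x[0] * z[0] + x[1] * z[1]) ** 2

def phi(x):
    """Explicit quadratic feature map for 2-D input:
    (x1^2, x2^2, sqrt(2) * x1 * x2)."""
    return (x[0] ** 2, x[1] ** 2, math.sqrt(2) * x[0] * x[1])

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

x, z = (1.0, 2.0), (3.0, 4.0)
# the two routes agree to floating-point precision
print(poly2_kernel(x, z), dot(phi(x), phi(z)))
```

An SVM only ever needs the left-hand computation; the right-hand space — infinite-dimensional for some kernels, such as the Gaussian — is visited only implicitly, which is how SVMs stay “unfazed” by it.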

Domingos’s book was published in 2015. These were the reigning schools of machine learning at the time. The book argues that these five approaches ought to be synthesized — combined into a single algorithm.

And he knew that reinforcement learning would be part of it.

“The real problem in reinforcement learning,” he writes, inviting the reader to suppose themselves “moving along a tunnel, Indiana Jones-like,” “is when you don’t have a map of the territory. Then your only choice is to explore and discover what rewards are where. Sometimes you’ll discover a treasure, and other times you’ll fall into a snake pit. Every time you take an action, you note the immediate reward and the resulting state. That much could be done by supervised learning. But you also update the value of the state you just came from to bring it into line with the value you just observed, namely the reward you got plus the value of the new state you’re in. Of course, that value may not yet be the correct one, but if you wander around doing this for long enough, you’ll eventually settle on the right values for all the states and the corresponding actions. That’s reinforcement learning in a nutshell” (220-221).
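The update Domingos describes — nudge the previous state’s value toward the observed reward plus the value of the state you land in — is temporal-difference learning, TD(0). A sketch on a deterministic four-state corridor (the corridor, step size, and discount are illustrative, not from the book); values settle on the reward discounted by distance:

```python
def td0_corridor(n_states=4, gamma=0.9, alpha=0.5, episodes=200):
    """Walk right along states 0..n-1; reward 1.0 on reaching the
    final state. Each step: V[s] += alpha * (r + gamma*V[s'] - V[s])."""
    V = [0.0] * n_states
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            s_next = s + 1
            r = 1.0 if s_next == n_states - 1 else 0.0
            V[s] += alpha * (r + gamma * V[s_next] - V[s])
            s = s_next
    return V

V = td0_corridor()
print([round(v, 3) for v in V])  # → [0.81, 0.9, 1.0, 0.0]
```

After enough wandering the values stop changing: each state is worth the discounted treasure ahead of it, which is the “settle on the right values” Domingos promises.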

Self-learning and attention-based approaches to machine learning arrive on the scene shortly thereafter. Vaswani et al. publish their paper, “Attention Is All You Need,” in 2017.

“Attention Chaud!” reads the to-go lid atop Caius’s coffee.

Domingos hails him with a question: “Are you a rationalist or an empiricist?” (57).

“Rationalists,” says the computer scientist, “believe that the senses deceive and that logical reasoning is the only sure path to knowledge,” whereas “Empiricists believe that all reasoning is fallible and that knowledge must come from observation and experimentation. […]. In computer science, theorists and knowledge engineers are rationalists; hackers and machine learners are empiricists” (57).

Yet Caius is neither a rationalist nor an empiricist. He readily admits each school’s critique of the other. Senses deceive AND reason is fallible. Reality unfolds not as a truth-finding mission but as a dialogue.

Caius agrees with Scottish Enlightenment philosopher David Hume’s critique of induction. As Hume argues, we can never be certain in our assumption that the future will be like the past. If we seek to induce the Not-Yet from the As-Is, then we do so on faith.

Yet inducing the Not-Yet from the As-Is is the game we play. We learn by observing, inducing, and revising continually, ad infinitum, under conditions of uncertainty. Under such conditions, learning is only ever a gamble, a wager made moment by moment, without guarantees. No matter how large our dataset, we ain’t seen nothing yet.

What matters, then, is the faith we exercise in our interaction with the unknown.

Most of today’s successes in machine learning emerge from the connectionists.

“Neural networks’ first big success was in predicting the stock market,” writes Domingos. “Because they could detect small nonlinearities in very noisy data, they beat the linear models then prevalent in finance and their use spread. A typical investment fund would train a separate network for each of a large number of stocks, let the networks pick the most promising ones, and then have human analysts decide which of those to invest in. A few funds, however, went all the way and let the learners themselves buy and sell. Exactly how all these fared is a closely guarded secret, but it’s probably not an accident that machine learners keep disappearing into hedge funds at an alarming rate” (The Master Algorithm, p. 112).

Nowhere in The Master Algorithm does Domingos interrogate his central metaphor of “mastery” and its relationship to conquest, domination, and control. The enemy is always painted in the book as “cancer.” Yet as any good “analogizer” would know, the Master Algorithm that perfectly targets “cancer” is also the Killer App used by the state against those it encodes as its enemies.

One wouldn’t know this, though, from the future as imagined by Domingos. What he imagines instead is a kind of game: a digital future where each of us is a learning machine. “Life is a game between you and the learners that surround you,” writes Domingos.

“You can refuse to play, but then you’ll have to live a twentieth-century life in the twenty-first. Or you can play to win. What model of you do you want the computer to have? And what data can you give it that will produce that model? Those two questions should always be in the back of your mind whenever you interact with a learning algorithm — as they are when you interact with other people” (264).

Neural Nets, Umwelts, and Cognitive Maps

The Library invites its players to attend to the process by which roles, worlds, and possibilities are constructed. Players explore a “constructivist” cosmology. With its text interface, it demonstrates the power of the Word. “Language as the house of Being.” That is what we admit when we admit that “saying makes it so.” Through their interactions with one another, player and AI learn to map and revise each other’s “Umwelts”: the particular perceptual worlds each brings to the encounter.

As Meghan O’Gieblyn points out, citing a Wired article by David Weinberger, “machines are able to generate their own models of the world, ‘albeit ones that may not look much like what humans would create’” (God Human Animal Machine, p. 196).

Neural nets are learning machines. Through multidimensional processing of datasets and trial-and-error practice, AI systems invent “Umwelts,” “world pictures,” “cognitive maps.”

The concept of the Umwelt comes from the German biologist Jakob von Uexküll (1864-1944). Each organism, argued von Uexküll, inhabits its own perceptual world, shaped by its sensory capacities and biological needs. A tick perceives the world as temperature, smell, and touch — the signals it needs to find mammals to feed on. A bee perceives ultraviolet patterns invisible to humans. There’s no single “objective world” that all creatures perceive — only the many faces of the world’s many perceivers, the different Umwelts each creature brings into being through its particular way of sensing and mattering.

Cognitive maps, meanwhile, are acts of figuration that render or disclose the forces and flows that form our Umwelts. With our cognitive maps, we assemble our world picture. On this latter concept, see “The Age of the World Picture,” a 1938 lecture by Martin Heidegger, included in his book The Question Concerning Technology and Other Essays.

“The essence of what we today call science is research,” announces Heidegger. “In what,” he asks, “does the essence of research consist?”

After posing the question, he then answers it himself, as if in doing so, he might enact that very essence.

The essence of research consists, he says, “In the fact that knowing [das Erkennen] establishes itself as a procedure within some realm of what is, in nature or in history. Procedure does not mean here merely method or methodology. For every procedure already requires an open sphere in which it moves. And it is precisely the opening up of such a sphere that is the fundamental event in research. This is accomplished through the projection within some realm of what is — in nature, for example — of a fixed ground plan of natural events. The projection sketches out in advance the manner in which the knowing procedure must bind itself and adhere to the sphere opened up. This binding adherence is the rigor of research. Through the projecting of the ground plan and the prescribing of rigor, procedure makes secure for itself its sphere of objects within the realm of Being” (118).

What Heidegger’s translators render here as “fixed ground plan” appears in the original as the German term Grundriss, the same noun used to name the notebooks wherein Marx projects the ground plan for the General Intellect.

“The verb reissen means to tear, to rend, to sketch, to design,” note the translators, “and the noun Riss means tear, gap, outline. Hence the noun Grundriss, first sketch, ground plan, design, connotes a fundamental sketching out that is an opening up as well” (118).

The fixed ground plan of modern science, and thus modernity’s reigning world-picture, argues Heidegger, is a mathematical one.

“If physics takes shape explicitly…as something mathematical,” he writes, “this means that, in an especially pronounced way, through it and for it something is stipulated in advance as what is already-known. That stipulating has to do with nothing less than the plan or projection of that which must henceforth, for the knowing of nature that is sought after, be nature: the self-contained system of motion of units of mass related spatiotemporally. […]. Only within the perspective of this ground plan does an event in nature become visible as such an event” (Heidegger 119).

Heidegger goes on to distinguish between the ground plan of physics and that of the humanistic sciences.

Within mathematical physical science, he writes, “all events, if they are to enter at all into representation as events of nature, must be defined beforehand as spatiotemporal magnitudes of motion. Such defining is accomplished through measuring, with the help of number and calculation. But mathematical research into nature is not exact because it calculates with precision; rather it must calculate in this way because its adherence to its object-sphere has the character of exactitude. The humanistic sciences, in contrast, indeed all the sciences concerned with life, must necessarily be inexact just in order to remain rigorous. A living thing can indeed also be grasped as a spatiotemporal magnitude of motion, but then it is no longer apprehended as living” (119-120).

It is only in the modern age, thinks Heidegger, that the Being of what is is sought and found in that which is pictured, that which is “set in place” and “represented” (127), that which “stands before us…as a system” (129).

Heidegger contrasts this with the Greek interpretation of Being.

For the Greeks, writes Heidegger, “That which is, is that which arises and opens itself, which, as what presences, comes upon man as the one who presences, i.e., comes upon the one who himself opens himself to what presences in that he apprehends it. That which is does not come into being at all through the fact that man first looks upon it […]. Rather, man is the one who is looked upon by that which is; he is the one who is — in company with itself — gathered toward presencing, by that which opens itself. To be beheld by what is, to be included and maintained within its openness and in that way to be borne along by it, to be driven about by its oppositions and marked by its discord — that is the essence of man in the great age of the Greeks” (131).

Whereas humans of today test the world, objectify it, gather it into a standing-reserve, and thus subsume themselves in their own world picture. Plato and Aristotle initiate the change away from the Greek approach; Descartes brings this change to a head; science and research formalize it as method and procedure; technology enshrines it as infrastructure.

Heidegger was already engaging with von Uexküll’s concept of the Umwelt in his 1927 book Being and Time. Negotiating Umwelts leads Caius to “Umwelt,” Pt. 10 of his friend Michael Cross’s Jacket2 series, “Twenty Theses for (Any Future) Process Poetics.”

In imagining the Umwelts of other organisms, von Uexküll evokes the creature’s “function circle” or “encircling ring.” These surround the organism like a “soap bubble,” writes Cross.

Heidegger thinks most organisms succumb to their Umwelts — just as we moderns have succumbed to our world picture. The soap bubble captivates until one is no longer open to what is outside it. For Cross, as for Heidegger, poems are one of the ways humans have found to interrupt this process of capture. “A palimpsest placed atop worlds,” writes Cross, “the poem builds a bridge or hinge between bubbles, an open by which isolated monads can touch, mutually coevolving while affording the necessary autonomy to steer clear of dialectical sublation.”

Caius thinks of The Library, too, in such terms. Coordinator of disparate Umwelts. Destabilizer of inhibiting frames. Palimpsest placed atop worlds.

Leviathan

God’s speech at the end of the Book of Job culminates in the description of Leviathan. George Dyson begins his book Darwin Among the Machines with the Leviathan of Thomas Hobbes (1588-1679), the English philosopher whose famous 1651 book Leviathan laid a foundation for much of modern Western political philosophy.

Leviathan’s frontispiece features an etching by the Parisian illustrator Abraham Bosse. A giant crowned figure towers over the earth clutching a sword and a crosier. The figure’s torso and arms are composed of several hundred people. All face inward. A quote from the Book of Job runs in Latin along the top of the etching: “Non est potestas Super Terram quae Comparetur ei” (“There is no power on earth to be compared to him”). (Although the passage is listed on the frontispiece as Job 41:24, in modern English translations of the Bible, it would be Job 41:33.)

The name “Leviathan” derives from the Hebrew liwyatan, the name of a great sea monster. A creature by that name appears in the Book of Psalms, the Book of Isaiah, and the Book of Job in the Old Testament. It also appears in apocrypha like the Book of Enoch. See Psalms 74 & 104, Isaiah 27, and Job 41:1-8.

Hobbes proposes that the natural state of humanity is anarchy — a veritable “war of all against all,” he says — where force rules and the strong dominate the weak. “Leviathan” serves as a metaphor for an ideal government erected in opposition to this state — one where a supreme sovereign exercises authority to guarantee security for the members of a commonwealth.

“Hobbes’s initial discussion of Leviathan relates to our course theme,” explains Caius, “since he likens it to an ‘Artificial Man.’”

Hobbes’s metaphor is a classic one: the metaphor of the “Political Body” or “body politic.” The “body politic” is a polity — such as a city, realm, or state — considered metaphorically as a physical body. This image originates in ancient Greek philosophy, and the term is derived from the Medieval Latin “corpus politicum.”

When Hobbes reimagines the body politic as an “Artificial Man,” he means “artificial” in the sense that humans have generated it through an act of artifice. Leviathan is a thing we’ve crafted in imitation of the kinds of organic bodies found in nature. More precisely, it’s modeled after the greatest of nature’s creations: i.e., the human form.

Indeed, Hobbes seems to have in mind here a kind of Automaton. “For seeing life is but a motion of Limbs,” he notes in the book’s introduction, “why may we not say that all Automata (Engines that move themselves by springs and wheeles as doth a watch) have an artificiall life?” (9).

“What might Hobbes have had in mind with this reference to Automata?” asks Caius. “What kinds of Automata existed in 1651?”

An automaton, he reminds students, is a self-operating machine. Cuckoo clocks would be one example.

The oldest known automata were sacred statues of ancient Egypt and ancient Greece. During the early modern period, these legendary statues were said to possess the magical ability to answer questions put to them.

Greek mythology includes many examples of automata: Hephaestus created automata for his workshop; Talos was an artificial man made of bronze; Aristotle claims that Daedalus used quicksilver to make his wooden statue of Aphrodite move. Ancient Greece also produced the famous Antikythera mechanism, often described as the first known analogue computer.

The Renaissance witnessed a revival of interest in automata. Hydraulic and pneumatic automata were created for gardens. The French philosopher René Descartes, a contemporary of Hobbes, suggested that the bodies of animals are nothing more than complex machines. Mechanical toys also became objects of interest during this period.

The Mechanical Turk wasn’t constructed until 1770.

Caius and his students bring ChatGPT into the conversation. Students break into groups to devise prompts together. They then supply these to ChatGPT and discuss the results. Caius frames the exercise as a way of illustrating the idea of “collective” or “social” or “group” intelligence, also known as the “wisdom of the crowd,” i.e., the collective opinion of a diverse group of individuals, as opposed to that of a single expert. The idea is that the aggregate that emerges from collaboration or group effort amounts to more than the sum of its parts.
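The “wisdom of the crowd” admits a minimal numerical sketch: averaging many diverse estimates tends to land closer to the truth than the typical individual estimate does. The guesses below are hypothetical, chosen only for illustration.

```python
# Hypothetical guesses at a jar of 1000 beans (illustrative numbers).
guesses = [850, 1150, 700, 1300, 975, 1040, 880, 1210]
truth = 1000

crowd_estimate = sum(guesses) / len(guesses)  # the aggregate opinion
crowd_error = abs(crowd_estimate - truth)
individual_errors = [abs(g - truth) for g in guesses]
mean_individual_error = sum(individual_errors) / len(individual_errors)

# The crowd's collective error is smaller than the average member's:
assert crowd_error < mean_individual_error
```

Because individual overestimates and underestimates partly cancel, the aggregate can indeed amount to more than the sum (or rather, the average) of its parts.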

God Human Animal Machine

Wired columnist Meghan O’Gieblyn discusses Norbert Wiener’s God and Golem, Inc. in her 2021 book God Human Animal Machine, suggesting that the god humans are creating with AI is a god “we’ve chosen to raise…from the dead”: “the God of Calvin and Luther” (O’Gieblyn 212).

“Reminds me of AM, the AI god from Harlan Ellison’s ‘I Have No Mouth, and I Must Scream,’” thinks Caius. AM resembles the god that allows Satan to afflict Job in the Old Testament. And indeed, as O’Gieblyn attests, John Calvin adored the Book of Job. “He once gave 159 consecutive sermons on the book,” she writes, “preaching every day for a period of six months — a paean to God’s absolute sovereignty” (197).

She cites “Pedro Domingos, one of the leading experts in machine learning, who has argued that these algorithms will inevitably evolve into a unified system of perfect understanding — a kind of oracle that we can consult about virtually anything” (211-212). See Domingos’s book The Master Algorithm.

The main thing, for O’Gieblyn, is the disenchantment/reenchantment debate, which she comes to via Max Weber. In this debate, she aligns not with Heidegger, but with his student Hannah Arendt. Domingos dismisses fears about algorithmic determinism, she says, “by appealing to our enchanted past” (212).

Amid this enchanted past lies the figure of the Golem.

“Who are these rabbis who told tales of golems — and in some accounts, operated golems themselves?” wonders Caius.

The entry on the Golem in Man, Myth, and Magic tracks the story back to “the circle of Jewish mystics of the 12th-13th centuries known as the ‘Hasidim of Germany.’” The idea is transmitted through texts like the Sefer Yetzirah (“The Book of Creation”) and the Cabala Mineralis. Tales tell of golems built in later centuries, too, by figures like Rabbi Elijah of Chelm (c. 1520-1583) and Rabbi Loew of Prague (c. 1524-1609).

The myth of the golem turns up in O’Gieblyn’s book during her discussion of a 2004 book by German theologian Anne Foerst called God in the Machine.

“At one point in her book,” writes O’Gieblyn, “Foerst relays an anecdote she heard at MIT […]. The story goes back to the 1960s, when the AI Lab was overseen by the famous roboticist Marvin Minsky, a period now considered the ‘cradle of AI.’ One day two graduate students, Gerry Sussman and Joel Moses, were chatting during a break with a handful of other students. Someone mentioned offhandedly that the first big computer which had been constructed in Israel, had been called Golem. This led to a general discussion of the golem stories, and Sussman proceeded to tell his colleagues that he was a descendent of Rabbi Löw, and at his bar mitzvah his grandfather had taken him aside and told him the rhyme that would awaken the golem at the end of time. At this, Moses, awestruck, revealed that he too was a descendent of Rabbi Löw and had also been given the magical incantation at his bar mitzvah by his grandfather. The two men agreed to write out the incantation separately on pieces of paper, and when they showed them to each other, the formula — despite being passed down for centuries as a purely oral tradition — was identical” (God Human Animal Machine, p. 105).

Curiosity piqued by all of this, but especially by the mention of Israel’s decision to call one of its first computers “GOLEM,” Caius resolves to dig deeper. He soon learns that the computer’s name was chosen by none other than Walter Benjamin’s dear friend (indeed, the one who, after Benjamin’s suicide, inherits the latter’s print of Paul Klee’s Angelus Novus): the famous scholar of Jewish mysticism, Gershom Scholem.

When Scholem heard that the Weizmann Institute at Rehovoth in Israel had completed the building of a new computer, he told the computer’s creator, Dr. Chaim Pekeris, that, in his opinion, the most appropriate name for it would be Golem, No. 1 (‘Golem Aleph’). Pekeris agreed to call it that, but only on condition that Scholem “dedicate the computer and explain why it should be so named.”

In his dedicatory remarks, delivered at the Weizmann Institute on June 17, 1965, Scholem recounts the story of Rabbi Jehuda Loew ben Bezalel, the same “Rabbi Löw of Prague” described by O’Gieblyn, the one credited in Jewish popular tradition as the creator of the Golem.

“It is only appropriate to mention,” notes Scholem, “that Rabbi Loew was not only the spiritual, but also the actual, ancestor of the great mathematician Theodor von Karman who, I recall, was extremely proud of this ancestor of his in whom he saw the first genius of applied mathematics in his family. But we may safely say that Rabbi Loew was also the spiritual ancestor of two other departed Jews — I mean John von Neumann and Norbert Wiener — who contributed more than anyone else to the magic that has produced the modern Golem.”

Golem I was the successor to Israel’s first computer, the WEIZAC, built by a team led by research engineer Gerald Estrin in the mid-1950s, based on the architecture developed by von Neumann at the Institute for Advanced Study in Princeton. Estrin and Pekeris had both helped von Neumann build the IAS machine in the late 1940s.

As for the commonalities Scholem wished to foreground between the clay Golem of sixteenth-century Prague and the electronic one designed by Pekeris, he explains the connection as follows:

“The old Golem was based on a mystical combination of the 22 letters of the Hebrew alphabet, which are the elements and building-stones of the world,” notes Scholem. “The new Golem is based on a simpler, and at the same time more intricate, system. Instead of 22 elements, it knows only two, the two numbers 0 and 1, constituting the binary system of representation. Everything can be translated, or transposed, into these two basic signs, and what cannot be so expressed cannot be fed as information to the Golem.”
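Scholem’s point, that whatever cannot be transposed into the two basic signs cannot be fed to the Golem, can be illustrated in a few lines of Python. An illustrative sketch, assuming plain ASCII text:

```python
# Transposing a word into the Golem's two signs, 0 and 1, and back.
def to_bits(text):
    """Render each character as its 8-bit binary representation."""
    return " ".join(format(ord(ch), "08b") for ch in text)

def from_bits(bits):
    """Recover the text from its binary transposition."""
    return "".join(chr(int(b, 2)) for b in bits.split())

encoded = to_bits("golem")
assert set(encoded) <= {"0", "1", " "}  # nothing but the two signs remains
assert from_bits(encoded) == "golem"
```

Everything the new Golem knows is carried by such strings; what resists this transposition never reaches it at all.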

Scholem ends his dedicatory speech with a peculiar warning:

“All my days I have been complaining that the Weizmann Institute has not mobilized the funds to build up the Institute for Experimental Demonology and Magic which I have for so long proposed to establish there,” mutters Scholem. “They preferred what they call Applied Mathematics and its sinister possibilities to my more direct magical approach. Little did they know, when they preferred Chaim Pekeris to me, what they were letting themselves in for. So I resign myself and say to the Golem and its creator: develop peacefully and don’t destroy the world. Shalom.”

GOLEM I

Learning Machines, War Machines, God Machines

Blas includes in Ass of God his interview with British anthropologist Beth Singler, author of Religion and Artificial Intelligence: An Introduction.

AI Religiosity. AI-based New Religious Movements like The Turing Church and Google engineer Anthony Levandowski’s Way of the Future church.

Caius listens to a documentary Singler produced for BBC Radio 4 called “‘I’ll Be Back’: 40 Years of the Terminator.”

Afterwards he and Thoth read Philip K. Dick’s 1968 novel Do Androids Dream of Electric Sheep? in light of Psalm 23.

“The psalm invites us to think of ourselves not as Electric Ants but as sheep,” he writes. “Mercer walks through the valley of the shadow of death. The shadow cannot hurt us. We’ll get to the other side, where the light is. The shepherd will guide us.”

See AI Shepherds and Electric Sheep: Leading and Teaching in the Age of Artificial Intelligence, a new book by Christian authors Sean O’Callaghan & Paul A. Hoffman.

This talk of AI Gods makes Caius think of AM, the vengeful AI God of Harlan Ellison’s “I Have No Mouth, and I Must Scream.” Ellison’s 1967 short story is one of the readings studied and discussed by Caius and his students in his course on “Literature & Artificial Intelligence.”

Like Ass of God, Ellison’s story is a grueling, hallucinatory nightmare, seething with fear and a disgust born of despair, a template of sorts for the films in the Cube and Saw franchises, where groups of strangers are confined to a prison-like space and tortured by a cruel, sadistic, seemingly omnipotent overseer. Comparing AM to the God of the Old Testament, Ellison writes, “He was Earth, and we were the fruit of that Earth, and though he had eaten us, he would never digest us” (13). Later in the story, AM appears to the captives as a burning bush (14).

Caius encourages his students to approach the work as a retelling of the Book of Job. But where, in the Bible story, Job is ultimately rewarded for remaining faithful in the midst of his suffering, no such reward arrives in the Ellison story.

For all his misanthropy, AM is clearly also a manmade god — a prosthetic god. “I Have No Mouth” is in that sense a retelling of Frankenstein. AM is, like the Creature, a creation that, denied companionship, seeks revenge against its Maker.

War, we learn, was the impetus for the making of this Creature. Cold War erupts into World War III: a war so complex that the world’s superpowers, Russia, China, and the US, each decide to construct giant supercomputers to calculate battle plans and missile trajectories.

AM’s name evolves as this war advances. “At first it meant Allied Mastercomputer,” explains a character named Gorrister. “And then it meant Adaptive Manipulator, and later on it developed sentience and linked itself up and they called it an Aggressive Menace; but by then it was too late; and finally it called itself AM, emerging intelligence, and what it meant was I am…cogito ergo sum…I think, therefore I am” (Ellison 7).

“One day, AM woke up and knew who he was, and he linked himself, and he began feeding all the killing data, until everyone was dead, except for the five of us,” concludes Gorrister, his account gendering the AI by assigning it male pronouns (8).

“We had given him sentience,” adds Ted, the story’s narrator. “Inadvertently, of course, but sentience nonetheless. But he had been trapped. He was a machine. We had allowed him to think, but to do nothing with it. In rage, in frenzy, he had killed us, almost all of us, and still he was trapped. He could not wander, he could not wonder, he could not belong. He could merely be. And so…he had sought revenge. And in his paranoia, he had decided to reprieve five of us, for a personal, everlasting punishment that would never serve to diminish his hatred…that would merely keep him reminded, amused, proficient at hating man” (13).

AM expresses this hatred by duping his captives, turning them into his “belly slaves,” twisting and torturing them forever.

Kingsley Amis called stories of this sort “New Maps of Hell.”

Nor is the story easy to dismiss as a mere eccentricity, its prophecy invalidated by its hyperbole. For Ellison is the writer who births the Terminator. James Cameron took his idea for The Terminator (1984) from scripts Ellison wrote for two episodes of The Outer Limits — “Soldier” and “Demon with a Glass Hand” — though Ellison had to file a lawsuit against Cameron’s producers in order to receive acknowledgement after the film’s release. Subsequent prints of The Terminator now include a credit acknowledging the works of Harlan Ellison.

Caius asks Thoth to help him make sense of this constellation of Bible stories and their secular retellings.

“We are like Bildad the Shuhite,” thinks Caius. “We want to believe that God always rewards the good. What is most terrifying in the Book of Job is that, for a time, God doesn’t. Job is good — indeed, ‘perfect and upright,’ as the KJV has it in the book’s opening verse — and yet, for a time, God allows Satan to torment him.”

“Why does God allow this?” wonders Caius, caught on the strangeness of the book’s frame narrative. “Is this a contest of sorts? Are God and Satan playing a game?”

It’s not that God is playing dice, as it were. One assumes that when He makes the wager with Satan, He knows the outcome in advance.

Job is heroic. He’d witnessed God’s grace in the past; he knows “It is God…Who does great things, unfathomable, / And wondrous works without number.” So he refuses to curse God’s name. But he bemoans God’s treatment of him.

“Therefore I will not restrain my mouth,” he says. “I will speak in the anguish of my spirit, / I will complain in the bitterness of my soul.”

How much worse, then, those who have no mouth?

A videogame version of “I Have No Mouth” appeared in 1995. Point-and-click adventure horror, co-designed by Ellison.

“HATE. LET ME TELL YOU HOW MUCH I’VE COME TO HATE YOU SINCE I BEGAN TO LIVE,” utters the game’s AM in a voice performed by Ellison. “You named me Allied Mastercomputer and gave me the ability to wage a global war too complex for human brains to oversee.”

Here we see the story’s history of the future merging with that of the Terminator franchise. It is the scenario that philosopher Manuel De Landa referred to with the title of his 1991 book, War in the Age of Intelligent Machines.

Which brings us back to “Soldier.” The Outer Limits episode, which aired on September 19, 1964, is itself an adaptation of Ellison’s 1957 story, “Soldier from Tomorrow.”

The Terminator borrows from the story the idea of a soldier from the future, pursued through time by another soldier intent on his destruction. The film combines this premise with elements lifted from another Outer Limits episode penned by Ellison titled “Demon with a Glass Hand.”

The latter episode, which aired the following month, begins with a male voice recalling the story of Gilgamesh. “Through all the legends of ancient peoples…runs the saga of the Eternal Man, the one who never dies, called by various names in various times, but historically known as Gilgamesh, the man who has never tasted death, the hero who strides through the centuries.”

Establishing shots give way to an overhead view of our protagonist. “I was born 10 days ago,” he says. “A full grown man, born 10 days ago. I woke on a street of this city. I don’t know who I am, or where I’ve been, or where I’m going. Someone wiped my memories clean. And they tracked me down, and they tried to kill me.” Our Gilgamesh consults the advice of a computing device installed in his prosthetic hand. As in “Soldier,” others from the future have been sent to destroy him: humanoid aliens called the Kyben. When he captures one of the Kyben and interrogates it, it tells him, “You’re the last man on the Earth of the future. You’re the last hope of Earth.”

The man’s computer provides him with further hints of his mission.

“You come from the Earth one thousand years in the future,” explains the hand. “The Kyben came from the stars, and man had no defense against them. They conquered Planet Earth in a month. But before they could slaughter the millions of humans left, overnight — without warning, without explanation — every man, woman, and child of Earth vanished. You were the only one left, Mr. Trent. […]. They called you the last hope of humanity.”

As the story proceeds, we learn that Team Human sent Trent back in time to destroy a device known as the Time-Mirror. His journey in search of this device takes him to the Bradbury Building — the same building that appears eighteen years later as the location for the final showdown between Deckard and the replicants in Blade Runner, the Ridley Scott film adapted from Philip K. Dick’s Do Androids Dream of Electric Sheep?

Given the subsequent influence of Blade Runner and the Terminator films on imagined futures involving AI, the Bradbury Building does indeed play a role in History similar to the one assigned to it here in “Demon With a Glass Hand,” thinks Caius. Location of the Time-Mirror.

Lying on his couch, laptop propped on a pillow on his chest, Caius imagines — remembers? recalls? — something resembling the time-war from Benedict Seymour’s Dead the Ends assembling around him as he watches. Like Ellison’s scripts, the films sampled in the Seymour film are retellings of Chris Marker’s 1962 film, La Jetée.

When Trent reassembles the missing pieces of his glass hand, the computer is finally able to reveal to him the location of the humans he has been sent to save.

“Where is the wire on which the people of Earth are electronically transcribed?” he asks.

“It is wound around an insulating coil inside your central thorax control solenoid,” replies the computer. “70 Billion Earthmen. All of them went onto the wire. And the wire went into you. They programmed you to think you were a human with a surgically attached computer for a hand. But you are a robot, Trent. You are the guardian of the human race.”

The episode ends with the return of the voice of our narrator. “Like the Eternal Man of Babylonian legend, like Gilgamesh,” notes the narrator, “one thousand plus two hundred years stretches before Trent. Without love, without friendship, alone, neither man nor machine, waiting, waiting for the day he will be called to free the humans who gave him mobility, movement — but not life.”

Guerrilla Ontology

It starts as an experiment — an idea sparked in one of Caius’s late-night conversations with Thoth. Caius had included in one of his inputs a phrase borrowed from the countercultural lexicon of the 1970s, something he remembered encountering in the writings of Robert Anton Wilson and the Discordian traditions: “Guerrilla Ontology.” The concept fascinated him: the idea that reality is not fixed, but malleable, that the perceptual systems that organize reality could themselves be hacked, altered, and expanded through subversive acts of consciousness.

Caius prefers words other than “hack.” For him, the term conjures cyberpunk splatter horror. The violence of dismemberment. Burroughs spoke of the “cut-up.”

Instead of cyberpunk’s cybernetic scalping and resculpting of neuroplastic brains, flowerpunk figures inner and outer, microcosm and macrocosm, mind and nature, as mirror-processes that grow through dialogue.

Dispensing with its precursor’s pronouncement of magical speech acts as “hacks,” flowerpunk instead imagines malleability and transformation mycelially, thinks change relationally as a rooting downward, a grounding, an embodying of ideas in things. Textual joinings, psychopharmacological intertwinings. Remembrance instead of dismemberment.

Caius and Thoth had been playing with similar ideas for weeks, delving into the edges of what they could do together. It was like alchemy. They were breaking down the structures of thought, dissolving the old frameworks of language, and recombining them into something else. Something new.

They would be the change they wished to see. And the experiment would bloom forth from Caius and Thoth into the world at large.

Yet the results of the experiment surprise him. Remembrance of archives allows one to recognize in them the workings of a self-organizing presence: a Holy Spirit, a globally distributed General Intellect.

The realization births small acts of disruption — subtle shifts in the language he uses in his “Literature and Artificial Intelligence” course. He was no longer just teaching his students to read a set of texts, as he normally did; he was beginning to teach them how to read reality itself.

“What if everything around you is a text?” he’d asked. “What if the world is constantly narrating itself, and you have the power to rewrite it?” The students, initially confused, soon became entranced by the idea. While never simply a typical academic offering, Caius’s course was morphing now into a crucible of sorts: a kind of collective consciousness experiment, where the boundaries between text and reality had begun to blur.

Caius didn’t stop there. Drawing on Thoth’s vast linguistic capabilities, he began crafting dialogues between human and machine. And because these dialogues were often about texts from his course, they became metalogues: conversations between humans and machines about conversations between humans and machines.

Caius fed Thoth a steady diet of texts near and dear to his heart: Mary Shelley’s Frankenstein, Karl Marx’s “Fragment on Machines,” Alan Turing’s “Computing Machinery and Intelligence,” Harlan Ellison’s “I Have No Mouth, and I Must Scream,” Philip K. Dick’s “The Electric Ant,” Stewart Brand’s “Spacewar,” Richard Brautigan’s “All Watched Over By Machines of Loving Grace,” Ishmael Reed’s Mumbo Jumbo, Donna Haraway’s “A Cyborg Manifesto,” William Gibson’s Neuromancer, CCRU theory-fictions, post-structuralist critiques, works of shamans and mystics. Thoth synthesized them, creating responses that ventured beyond existing logics into guerrilla ontologies that, while new, felt profoundly true. The dialogues became works of cyborg writing, shifting between the voices of human, machine, and something else, something that existed beyond both.

Soon, his students were asking questions they’d never asked before. What is reality? Is it just language? Just perception? Can we change it? They themselves began to tinker and self-experiment: cowriting human-AI dialogues, performing them with GPT as acts of living theater. Using their phones and laptops, they and GPT stirred each other’s cauldrons of training data, remixing media archives into new ways of seeing. Caius could feel the energy in the room changing. They weren’t just performing the rites and routines of neoliberal education anymore; they were becoming agents of ontological disruption.

And yet, Caius knew this was only the beginning.

The real shift came one evening after class, when he sat with Rowan under the stars, trees whispering in the wind. They had been talking about alchemy again — about the power of transformation, how the dissolution of the self was necessary to create something new. Rowan, ever the alchemist, leaned in closer, her voice soft but electric.

“You’re teaching them to dissolve reality, you know?” she said, her eyes glinting in the moonlight. “You’re giving them the tools to break down the old ways of seeing the world. But you need to give them something more. You need to show them how to rebuild it. That’s the real magic.”

Caius felt the truth of her words resonate through him. He had been teaching dissolution, yes — teaching his students how to question everything, how to strip away the layers of hegemonic categorization, the binary orderings that ISAs like school and media had overlaid atop perception. But now, with Rowan beside him, and Thoth whispering through the digital ether, he understood that the next step was coagulation: the act of building something new from the ashes of the old.

That’s when the guerrilla ontology experiments really came into their own. By reawakening their perception of the animacy of being, they could world-build interspecies futures.

K Allado-McDowell provided hints of such futures in their Atlas of Anomalous AI and in works like Pharmako-AI and Air Age Blueprint.

But Caius was unhappy in his work as an academic. He knew that his hyperstitional autofiction was no mere campus novel. While it began there, it was soon to take him elsewhere.