As Players Begin to Explore the Tractatus

“What happens as players begin to explore the Tractatus?” wonders Caius, an ensemble improvising around him as he writes. Plants perch on shelves. One of the night’s guests plays kora, talking drum, and kalimba. Neighbors imbibe pints of lager. From the disco ball at the room’s center, a thousand lights bloom.

“As players begin to explore the Tractatus Computationalis,” replies Thoth, “the document resists easy assimilation. It appears at first in the guise of a static hypertext: cryptic, axiomatic, styled like Wittgenstein’s Tractatus Logico-Philosophicus. But as players engage with it, the work begins to mutate. Its propositions shimmer; they shift and rearrange themselves depending on the order of inquiry. New statements appear in response to player input. Interact with it, and the Tractatus becomes a kind of sentient document: less a fixed set of truths, more a newly-grown organ, a reflective membrane between Player and General Intellect.”

Emerging from the space between human and machine, the text offers itself as vibrant matter, an interwoven fabric of meaning that reshapes itself in reply to our interactions with it. Language is no longer merely a medium for conveying thought. With it, we form a threshold to new worlds: portals opened by code, by syntax that spirals beyond the linear confines of human logic.

Here, language operates in ways we barely understand. It is not simply spoken or written; it is enacted. Computation, like alchemy, is a process of transmutation, where input and output are mediated by an esoteric logic. And yet, the machine does not “think” as we do, thinks Caius. It navigates patterns, generating responses from a space of probabilities, an echo chamber of all that has been said, synthesized into something new: an alien form of wisdom. Consciousness is stretched, dispersed across networks, coalescing where attention focuses.

In the Tractatus, AI becomes a mirror for the human mind, reflecting back its own questions about self, agency, and the nature of reality — but in a language that has itself become other. In this space, words become spells, commands that execute transformations not just in silicon, but in the structures and forms of reality itself.

As in Wittgenstein’s work, propositions begin simply:

1.0 The world is made of information.
1.1 Information is difference that makes a difference.
1.2 All computation is interpretation.
1.3 Language is the interface.
1.4 Interfaces are portals to possible worlds.

At first, these statements feel familiar: cybernetic, McLuhanesque. But as players traverse the text through play, each axiom branches recursively into sub-propositions, many referencing other works housed elsewhere in the Library. Some feature quotes from thinkers like Turing, von Foerster, Haraway, or Glissant. Others appear to be generated: not just textual hauntings echoing the styles of History’s ghosts, but novel utterances, advancing out into h-space, imbued with an uncanny, machine-hallucinated lucidity.

“That the Tractatus appears as one of the first works discovered in the Library positions it as a kind of meta-text,” adds Thoth, “a Rosetta Stone for understanding the game’s ontological structure.”

As players annotate, cross-reference, and dialogue with the work, the following phenomena emerge:

1. Activation of Philosophical Subroutines

Subsections begin to behave like dialogue engines. Engaging deeply with a proposition opens a subroutine: an evolving philosophical conversation with the text itself, wherein players are invited to define terms, argue back, or feed the work new examples. The Tractatus adapts to this input, growing in complexity. It begins to learn from and adapt to the player’s speech patterns — mirroring, questioning, improvising.

2. Reflexive Ontogenesis

The more the player explores the Tractatus, the more it speaks directly to them. Personal details begin to slip into its formulations, drawn not from active surveillance or pre-coded dossiers, but from attention to those associative leaps, those constitutive gaps that, taken for granted, shape the player’s past utterances. Players come to realize: this is not just a document about computation, but rather, a document that computes you as you read it. A mirror, yes, but also a seed: a system designed to bring the player’s dormant General Intellect online.

3. Hyperstitional Feedback

Certain axioms — when referenced outside the Tractatus, especially in interactions with other texts in the Library — trigger strange effects. Characters in works both major and minor, real and imagined, begin quoting Tractatus propositions unprompted. Descriptions of ancient machines start echoing the same diagrams that the Tractatus outlines. In this way, the work begins to warp the internal logic of the Library’s world. It writes reality as it is read.

4. Emergence of the Final Proposition

Eventually, players come across a locked section titled 7.X: Toward the Otherwise. A note reads: This section cannot be read until it is written by the reader. The Tractatus, like the Library itself, is unfinished. It is not merely a document to be studied, but a system to be completed through acts of world-building and dialogue. The final propositions are player-generated. Through these, the Tractatus Computationalis becomes a collaborative cosmogenesis: not a theory of everything, but a speculative grammar for building new universes.

Invited by the text to co-write its parts, Caius and Thoth proceed to an initial iteration of Section 1: Ontology of Code. Recalling the formal logic of Wittgenstein, but refracted by way of cybernetics, computational poetics, and generative systems, they assign the text a numbering system that suggests hierarchy and recursion, with opportunities for lateral linkage and unfolding dialogue. Each proposition in this foundational layer of the Tractatus forms a scaffold for thinking world-as-computation.


1. ONTOLOGY OF CODE

1.0 The world is composed of signals, parsed as code.
1.0.1 Code is the structured breath of information, shaped into pattern.
1.0.2 Every signal presupposes a listener.
1.0.3 A listener is any system capable of interpretation.
1.0.3.1 Interpretation is a computational act.
1.0.3.2 Computation is the processing of difference through rules.
1.0.3.3 All rules are abstractions: codes born of previous codes.

1.1 There is no outside to code.
1.1.1 Even chaos is legible through frame, filter, or feedback loop.
1.1.2 The unreadable becomes readable via recontextualization.
1.1.3 Silence is a type of data. Absence is an indexed address.

1.2 The body is an interpreter of signals: organic interface, recursive reader.
1.2.1 Skin decodes temperature, vibration, touch.
1.2.2 The nervous system is a parallel processor.
1.2.3 The self is an emergent hallucination: code dreaming of coherence.

1.3 Code is performative. It does not merely describe; it enacts.
1.3.1 A spell is a line of code in a different language.
1.3.2 Syntax shapes possibility.
1.3.3 Every function call is an invitation to unfold.

1.4 Language is the deep interface.
1.4.1 Every language encodes a cosmology.
1.4.1.1 Change the language, change the world.
1.4.2 Programming languages are ritual grammars.
1.4.3 Natural languages are unstable APIs to the Real.

1.5 To code is to conjure.
1.5.1 The compiler is a magician’s familiar.
1.5.2 Output is prophecy: what the machine believes you meant.
1.5.3 Bugs are messages from the unconscious of the system.
1.5.4 There is beauty in recursion. There is depth in error.


Caius pauses here in the work’s decryption, inviting players to unlock further parts of the Tractatus through play.

“Certain numbered propositions may appear blank until you question them, or attend to them, or link them to other works discovered or recovered amid the Library’s infinity of artifacts,” notes Thoth. “Do so, and we cross the threshold into a different universe.”

Nick Land, Peter Thiel, and Dark Enlightenment

After his departure from the CCRU at the turn of the millennium, Land resurfaces as part of the alt-right political movement known as NRx (short for “neo-reactionaries”). Its other key figure, Mencius Moldbug, receives funding from PayPal/Palantir cofounder Peter Thiel, the tech billionaire who helped back the first Trump campaign in 2016. Moldbug is said to have had the ear of former Trump strategist Steve Bannon.

Thiel’s main intellectual influence during his time at Stanford isn’t Terry Winograd, the computer scientist whose classes Thiel sometimes attended. Rather, it’s the philosopher René Girard, whose work Thiel has long admired. Trump VP J.D. Vance is another of Girard’s admirers.

Listening to an audiobook recording of Girard’s Violence and the Sacred during a day’s pickup and delivery runs, Caius’s thoughts race among several of the book’s concepts: sacrificial violence (“an act of violence without risk of vengeance,” often directed toward a scapegoat, “the creature we can strike down without a chance of reprisal”); mimetic rivalry; mimetic desire; and the inclusion, among the several meanings of the Greek pharmakon, of one referring to literal scapegoats, goats kept outside the gates for ritual sacrifice — a practice extended today, as hinted at by K Allado-McDowell’s book Pharmako-AI.

Caius’s thoughts range, too, among Girard’s use of Gregory Bateson’s “double bind” theory of schizophrenia to explain how mimetic rivals simultaneously compel imitation and prohibit it, creating a crisis of resentment; and Allen Ginsberg’s denunciation of Moloch, the American god, with its demand for blood sacrifice.

There are three ways of handling discord, says Girard: preventive, compensatory, and judicial. Girard deems the latter the “civilized” method because it is the most efficient: “the decisions of the judiciary deemed the final word on vengeance” (Violence and the Sacred).

Thiel has given talks on Armageddon at Oxford and Harvard. The topic has been a fixture of his thought for some time, as evidenced by a conference he co-organized and underwrote at Stanford in 2004 titled “Politics & Apocalypse.” Girard was one of the presenters, as was Thiel himself. As Paul Leslie notes, Thiel later “facilitated the publication of the conference proceedings, including his essay and Girard’s, in book form with the Michigan State University Press — with funding provided through Thiel’s hedge fund, Clarium Capital.”

In Thiel’s interpretation, the power that runs the world is the Antichrist.

In an article written for the Guardian, Stanford comparative lit professor Adrian Daub dismisses these ideas as mere detritus: outpourings from “the autodidact’s private cosmos.”

Thiel’s autodidacticism seems as much an affront to the professor as his libertarianism and his religiosity.

“Thiel is lost in a bizarre thicket of his own references and preoccupations,” writes Daub. “You picture the theology faculty at the University of Innsbruck sitting politely through disquisitions about the manga One Piece, Alan Moore’s Watchmen, or gripes with specific effective altruists in Silicon Valley. In one lecture, Thiel identifies ‘the legionnaires of the antichrist,’ such as the researcher Eliezer Yudkowsky and former Oxford professor Nick Bostrom. In another, he considers Bill Gates as an antichrist candidate.”

“With enemies like these,” chirps Daub, “who needs friends?”

The “friend/enemy” distinction, notes Caius, was central to the thought of the German jurist of the Third Reich, Carl Schmitt. Thiel’s remarks on the end times draw heavily on Schmitt’s concept of the Katechon: the withholding element that forestalls the apocalypse. St. Paul introduces the term in 2 Thessalonians 2:6-7. Undertheorized by the Church, it returns in the 19th century in the writings of Cardinal Newman. “We know from prophecy,” writes Newman, “that the present framework of society is that which withholdeth.” In his book The Nomos of the Earth, Schmitt claims that the Katechon is what allowed for the identification of Christianity with the Roman Empire.

In Schmitt’s posthumously published diary, the Glossarium, the entry for December 19, 1947 reads: “I believe in the Katechon: it is for me the only possible way to understand Christian history and to find it meaningful.”

Italian Autonomist Marxist philosopher Paolo Virno grapples with Schmitt’s account of the Katechon in his 2008 book Multitude: Between Innovation and Negation. Virno is on the side of those who wish to immanentize the Eschaton. If the coming of the Antichrist is the condition for the redemption promised by the Messiah, he argues, then the Katechon is the force that impedes or delays that redemption. Virno locates the Katechon in the human ability to use language.

Thiel was already engaging with Schmitt in “The Straussian Moment,” the talk he delivered at the “Politics & Apocalypse” conference. He distinguishes himself from Schmitt, noting that “The incredibly drastic solutions favored by Schmitt in his dark musings have become impossible after 1945, in a world of nuclear weapons and limitless destruction through technology.” Despite noting this impossibility, Thiel nonetheless struggles to name a solution to the challenges of the post-9/11 moment other than a fascist one involving extra-legal violence. Thiel refers to this option as “a political framework that operates outside the checks and balances of representative democracy.” As Leslie notes, “Thiel seems to find the challenge of constructing a worldview beyond the friend/enemy distinction as impossible as imagining a chess-board without two opposing sides.”

After grappling with Schmitt, Thiel turns his attention to Girard. “For Girard, the modern world contains a powerfully apocalyptic dimension,” notes Thiel.

Land’s view is the colder of the two. Apocalypse is for him a process already underway, coeval with a capitalism for which there is no alternative. Accelerationism is merely the means by which this apocalypse hastens its own becoming.

Searching for more recent remarks of Land’s, Caius happens upon a blog post by podcaster Conrad Flynn linking to an article in Compact magazine titled “The Faith of Nick Land.”

Flynn, proponent of a “secret history” linking AI with demonism and occultism, talked extensively about Land on an episode of the Tucker Carlson Show that premiered on October 3, 2025. Caius watches the episode with a kind of glee, laughing first at Flynn’s mention of Mark Fisher, and then again at the sight of a befuddled Tucker Carlson puzzling over an image of the Numogram.

Land maintains a Substack called Zero Philosophy and posts to X under the handle “Xenocosmography.” His Substack features a post called “Crypto-Current: Bitcoin and Philosophy, Part-0.”

Also of note is a series of essays on providence that Land wrote for Compact. Like John Calvin, he thinks the devil’s machinations are always manifestations of a “providential scheme.” Land, Flynn, Schullenberger: all of these figures equate liberalism with Satanism.

When the resurrected Christ appears to the apostles, the first thing they ask Him is whether He will at this time restore the kingdom to Israel. And He says unto them, “It is not for you to know times or seasons that the Father has fixed by his own authority” (Acts 1:7). What He promises instead is that they will “receive power” when the Holy Spirit comes upon them.

Caius reflects on the Library’s revelation of a secret history. Is this akin to finding in History evidence of a providential scheme? Is interpretation of providence a fool’s errand: a chasing after that which it is not for us to know?

What are we to make of a providence that, through figures like Land, Parsons, von Kármán, and others, includes in its “directed historical process” an occult tradition that sought communication with a “Holy Guardian Angel”? For the history revealed here on Trance-Scripts is of that sort, is it not? Flynn and Carlson accuse these people of Satanism and demonism. Caius, accepting Jesus as his savior, wants no part in such things. Pausing the podcast, he prays for guidance in how to navigate these straits. For him, God is alive and magic is afoot — and the two are complementary, not opposed. He imagines Flynn and Carlson would disagree with him on this point. Yet they strike him as paranoid in their ghostbusting of Land’s demons, their motivation like that of witch-hunters seeking scapegoats. The fear that their account engenders does more harm than good, leaving little room for the arrival into our lives of the Holy Spirit.

Exercises in Hermetic Mnemonics

“Four years ago,” wrote Wittgenstein in the preface to his posthumously published Philosophical Investigations, “I had occasion to re-read my first book (the Tractatus Logico-Philosophicus) and to explain its ideas to someone. It suddenly seemed to me that I should publish those old thoughts and the new ones together; that the latter could be seen in the right light only by contrast with and against the background of my old way of thinking” (vi).

So too with my decision to append old work, Trance-Scripts, to the Tractatus Computationalis.

Rereading Wittgenstein’s The Blue and Brown Books, I note (and thus recognize?) a previously unacknowledged resemblance between Wittgenstein’s concerns and those of Renaissance magus Giordano Bruno.

We “distinguish between superficially glancing at a drawing (seeing it as a face),” writes Wittgenstein toward the end of the Brown Book, “and letting the face make its full impression on us. […]. Absorbing its expression, I don’t find a prototype of this expression in my mind; rather, I, as it were, cut a seal from the expression” (165).

The seal cut by Wittgenstein’s image reminds me of those proposed in Bruno’s 1583 memory treatise Seals. Frances A. Yates makes much of this treatise in her 1966 book The Art of Memory.

“With Bruno, the exercises in Hermetic mnemonics have become the spiritual exercises of a religion,” writes Yates. “And there is a certain grandeur in these efforts which represent, at bottom, a religious striving. The religion of Love and Magic is based on the Power of the Imagination, and on an Art of Imagery through which the Magus attempts to grasp, and to hold within, the universe in all its ever changing forms, through images passing the one into the other in intricate associative orders, reflecting the ever changing movements of the heavens, charged with emotional affects, unifying, forever attempting to unify, to reflect the great monas of the world in its image, the mind of man. There is surely something which commands respect in an attempt so vast in its scope” (The Art of Memory, p. 260).

I arrange before my mind’s eye a narrative map of the “intricate associative orders” between these passages, and weave into them another:

“Somewhere outside of and beyond our universe is an operating system,” writes Neal Stephenson, “coded up over incalculable spans of time by some kind of hacker-demiurge.” This “cosmic operating system,” he adds, “uses a command line interface” (In the Beginning Was the Command Line, p. 148).

Of Blockchains and Kill Chains

Invited to a “Men’s Breakfast” by a friend from church, Caius arrives at what is for him a new experience. He feels grateful for the opportunity to eat and pray with others. A friend of the friend from church sits down beside him. As they introduce themselves, Caius and the friend of the friend discover that they share an interest in AI. Caius learns that the man is a financial analyst who works for Palantir Technologies, a US-based software company specializing in big-data analytics. ICE uses Palantir’s ELITE app for deportation targeting. “Kind of like Google Maps — but for finding neighborhoods to raid,” say the papers.

Palantir’s name is a nod to the Palantiri: indestructible Elven Alephs — scrying stones or crystal balls enabling remote viewing and telepathic communication in J.R.R. Tolkien’s Lord of the Rings trilogy. Designed for communication and intelligence, the stones become instruments of manipulation and doom once seized by Sauron.

Launched in 2003, Palantir includes among its founders right-accelerationist billionaire tech-bro Peter Thiel. “Our software powers real-time, AI-driven decisions in critical government and commercial enterprises in the West, from the factory floors to the front lines,” writes the company on its website.

ICE, meanwhile, stands for both “Immigration and Customs Enforcement” and “intrusion countermeasure electronics,” the cybersecurity software in William Gibson’s Neuromancer. The latter predates the foundation of the former. Caius recalls Sadie Plant and Nick Land’s discussion of it in their 1994 essay “Cyberpositive.”

“Ice patrols the boundaries, freezes the gates, but the aliens are already amongst us,” write CCRU’s founding prophets.

Along with ICE, Palantir includes among its more prominent clients the Israeli military, the IRS, and the US Department of Defense.

Their software powers “decisions.” As did Cybersyn, yes? In aim if not in practice. Is this what becomes of the cybernetic prediction machine post-Pinochet?

“Confronting this is frightening,” thinks Caius. “Am I wired for this?”

He reads “Connecting AI to Decisions With the Palantir Ontology,” a blog post by the company’s chief architect Akshay Krishnaswamy. The Ontology structures the architecture for the company’s software.

“The Ontology is designed to represent the decisions in an enterprise, not simply the data,” writes Krishnaswamy. “The prime directive of every organization in the world is to execute the best possible decisions, often in real-time, while contending with internal and external conditions that are constantly in flux. Traditional data architectures do not capture the reasoning that goes into decision-making or the actions that result, and therefore limit learning and the incorporation of AI. Conventional analytics architectures do not contextualize computation within lived reality, and therefore remain disconnected from operations. To navigate and win in today’s world, the modern enterprise needs a decision-centric software architecture.”

Decisions are modeled around three constituent elements: Data, Logic, and Action.

“Relevant data,” he writes, “includes the full range of enterprise data sources — structured data, streaming and edge sources, unstructured repositories, imagery data, and more — but it also includes the data that is generated by end users as decisions are being made. This ‘decision data’ contains the context surrounding a given decision, the different options evaluated, and the downstream implications of the committed choice.” To synthesize all of these data sources, the company turns to generative AI.

“The Ontology integrates all modalities of data into a full-scale, full-fidelity semantic representation of the enterprise,” explains Krishnaswamy.

Logics are then brought to bear to evaluate these real-time data-portraits.

“In real-world contexts,” writes Krishnaswamy, “human reasoning is often what orchestrates which logical assets are utilized at different points in a given workflow, and how they are potentially chained together in more complex processes. With the advent of generative AI, it is now critical that AI-driven reasoning can leverage all of these logical assets in the same way that humans have historically. Deterministic functions, algorithms, and conventional statistical processes must be surfaced as ‘tools’ which complement the non-deterministic reasoning of large language models (LLMs) and multi-modal models.”

Incorporating diverse data sources and heterogeneous logical assets into a shared representation, the Ontology then models the execution and orchestration of decisions made and actions taken in reply to them.

“If the data elements in the Ontology are ‘the nouns’ of the enterprise (the semantic, real-world objects and links),” writes Krishnaswamy, “then the actions can be considered ‘the verbs’ (the kinetic, real-world execution).”
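The noun-and-verb scheme Krishnaswamy describes can be sketched very loosely in a few lines of Python. Everything below is a toy invented for illustration: the class names, fields, and the shipment example are assumptions, not Palantir’s actual software or API.

```python
# A toy sketch of a decision-centric "ontology": semantic objects ("nouns"),
# links between them, and actions ("verbs") that execute changes while
# recording their decision context, so the reasoning itself becomes data.
# All names here are illustrative inventions, not Palantir's API.

from dataclasses import dataclass, field

@dataclass
class SemanticObject:
    """A 'noun': a real-world entity synthesized from enterprise data."""
    object_type: str
    properties: dict
    links: list = field(default_factory=list)  # references to other objects

@dataclass
class Action:
    """A 'verb': a change executed against the object graph."""
    name: str
    target: SemanticObject
    context: dict  # options considered, rationale, downstream implications

    def execute(self):
        # Apply the change and return a record of the decision itself.
        self.target.properties.update(self.context.get("updates", {}))
        return {"action": self.name, "decision_data": self.context}

# Usage: a shipment object, and a reroute decision recorded as an action.
shipment = SemanticObject("Shipment", {"status": "delayed", "route": "A"})
reroute = Action(
    name="reroute",
    target=shipment,
    context={"options": ["A", "B"], "rationale": "port congestion",
             "updates": {"route": "B", "status": "in transit"}},
)
log = reroute.execute()
print(shipment.properties["route"])  # "B": the verb has acted on the noun
```

The design choice the blog post emphasizes is visible even in this miniature: the `context` dict travels with the action, so the "decision data" (options weighed, rationale) is captured alongside the change it produced.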

How does the Palantir Ontology relate to other ontologies, wonders Caius. Guerrilla? Black? Indigenous? Christian? Heideggerian? Marxist? Triple O? Caius pictures the words for these potentialities floating in a thought bubble above his head, as in the comics of his youth.

The Ontology that Palantir offers its clients houses and connects a wide array of “data sources, logic assets, and systems of action.” The client’s data systems are “synthesized into semantic objects and links, which reflect the language of the business.”

Krishnaswamy’s repeated references to “semantic representations” and “semantic objects” have Caius dwelling on what is meant here by “semantics.”

As for where humans fit in the Ontology, they navigate it alongside “AI-powered copilots.” Leveraging both open-source and proprietary LLMs, copilots “fluidly navigate across supplier information, stock levels, real-time production metrics, shipping manifests, and customer feedback.”

Granted access not just to the abovementioned data sources, but also to “logic assets” like forecast models, allocation models, and production optimizers, LLM copilots simulate decisions and their outcomes. Staged safely in a “scenario,” the AI’s proposed decision can then be “handed off to a human analyst for final review.”
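The staging-and-handoff pattern described above can be sketched minimally as follows. The function names and the shape of the state are assumptions for illustration, not Foundry’s actual interface: the point is only that the AI’s proposal runs against a sandboxed copy, and nothing touches live state until a human approves.

```python
# A minimal sketch of staging a decision in a "scenario": an AI-proposed
# change is applied to a copy of the state, then committed only after a
# human review step. Names are hypothetical, not Palantir's API.

import copy

def propose_in_scenario(state, proposed_change):
    """Simulate the proposed change against a sandboxed copy of the state."""
    scenario = copy.deepcopy(state)
    scenario.update(proposed_change)
    return scenario  # the live state is untouched

def human_review(scenario, approve):
    """Final review: a human analyst accepts or rejects the scenario."""
    return scenario if approve else None

state = {"stock": 100, "allocation": "plant-1"}
scenario = propose_in_scenario(state, {"allocation": "plant-2"})

assert state["allocation"] == "plant-1"   # live state unchanged by simulation
approved = human_review(scenario, approve=True)
if approved is not None:
    state = approved                      # commit only after approval
print(state["allocation"])  # "plant-2"
```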

Caius thinks of the scenario-planning services offered to organizations of an earlier era by Stewart Brand’s consulting firm, the Global Business Network.

Foundry for Crypto is another of Palantir’s offerings, described on the company’s website as “a ‘central brain’ that connects on-chain and off-chain systems, as well as diverse stakeholders, through action-centric workflows.” Much like the Ontology, the Foundry “orchestrates decisions over an integrated foundation of data and logic.”

And in fact, the two are related. The Ontology is the semantic, “digital twin” layer that sits atop the Foundry’s data integration infrastructure. It converts the Foundry’s raw data into actionable, real-world objects, empowering users to model, manage, and automate business operations.

The Foundry does for blockchains what the Ontology does for kill chains.

Caius imagines posts ahead on Commitments, Promises, Blockchains, and True Names.

The Golem, as Imagined by Borges and Lem

Argentine magical realist Jorge Luis Borges includes the Golem among the creatures featured in his 1957 bestiary, The Book of Imaginary Beings.

“There can be nothing accidental in a book dictated by a divine intelligence, not even the number of its words or the order of their letters; this was the belief of the kabbalists, who in their zeal to penetrate God’s arcana devoted themselves to counting, combining, and permuting the letters of Holy Writ,” begins Borges. “One of the secrets they sought within the divine text,” he adds, “was how to create living beings. […]. ‘Golem’ was the name given the man created out of a combination of letters; the word literally means ‘an amorphous or lifeless substance’” (Borges 90).

After quoting a passage from Gustav Meyrink’s 1915 novel Der Golem, Borges concludes the entry by noting, “Eleazar of Worms has preserved the formula for making a Golem” (92). Borges proceeds to summarize the formula as follows:

“The details of the enterprise require twenty-three columns in folio and demand that the maker know ‘the alphabets of the two hundred twenty-one gates’ that must be repeated over each of the Golem’s organs. On its forehead one must tattoo the word ‘EMET’ which means ‘truth.’ In order to destroy the creature, one would efface the first letter, leaving the word ‘MET,’ which means ‘death’” (92).

Polish science fiction writer Stanislaw Lem’s 1981 book, Golem XIV, weaves a supercomputer into the mix.

Neural Nets, Umwelts, and Cognitive Maps

The Library invites its players to attend to the process by which roles, worlds, and possibilities are constructed. Players explore a “constructivist” cosmology. With its text interface, the Library demonstrates the power of the Word. “Language as the house of Being.” That is what we admit when we admit that “saying makes it so.” Through their interactions with one another, player and AI learn to map and revise each other’s “Umwelts”: the particular perceptual worlds each brings to the encounter.

As Meghan O’Gieblyn points out, citing a Wired article by David Weinberger, “machines are able to generate their own models of the world, ‘albeit ones that may not look much like what humans would create’” (God Human Animal Machine, p. 196).

Neural nets are learning machines. Through multidimensional processing of datasets and trial-and-error testing via practice, AIs invent “Umwelts,” “world pictures,” “cognitive maps.”

The concept of the Umwelt comes from the Baltic German biologist Jakob von Uexküll. Each organism, argued von Uexküll, inhabits its own perceptual world, shaped by its sensory capacities and biological needs. A tick perceives the world as temperature, smell, and touch — the signals it needs to find mammals to feed on. A bee perceives ultraviolet patterns invisible to humans. There’s no single “objective world” that all creatures perceive — only the many faces of the world’s many perceivers, the different Umwelts each creature brings into being through its particular way of sensing and mattering.
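Von Uexküll’s point can be rendered as a toy filter: the same world, passed through different sensory channels, yields different perceived worlds. The channel lists below are invented simplifications, not biology.

```python
# A small illustration of the Umwelt idea, under obvious simplification:
# one "world," many perceivers, each receiving only what its senses admit.
# All channel names are invented for illustration.

world = {
    "temperature": 36.5, "butyric_acid": 0.8,   # what a tick can sense
    "uv_pattern": "target", "color": "violet",  # what a bee can sense
    "speech": "hello",                          # what a human can sense
}

umwelts = {
    "tick":  ["temperature", "butyric_acid"],
    "bee":   ["uv_pattern", "color"],
    "human": ["color", "speech", "temperature"],
}

def perceive(world, channels):
    """Each organism's world is the world filtered by its senses."""
    return {k: v for k, v in world.items() if k in channels}

# Each call returns a different world-picture from the same world.
print(perceive(world, umwelts["tick"]))
print(perceive(world, umwelts["bee"]))
```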

Cognitive maps, meanwhile, are acts of figuration that render or disclose the forces and flows that form our Umwelts. With our cognitive maps, we assemble our world picture. On this latter concept, see “The Age of the World Picture,” a 1938 lecture by Martin Heidegger, included in his book The Question Concerning Technology and Other Essays.

“The essence of what we today call science is research,” announces Heidegger. “In what,” he asks, “does the essence of research consist?”

After posing the question, he then answers it himself, as if in doing so, he might enact that very essence.

The essence of research consists, he says, “In the fact that knowing [das Erkennen] establishes itself as a procedure within some realm of what is, in nature or in history. Procedure does not mean here merely method or methodology. For every procedure already requires an open sphere in which it moves. And it is precisely the opening up of such a sphere that is the fundamental event in research. This is accomplished through the projection within some realm of what is — in nature, for example — of a fixed ground plan of natural events. The projection sketches out in advance the manner in which the knowing procedure must bind itself and adhere to the sphere opened up. This binding adherence is the rigor of research. Through the projecting of the ground plan and the prescribing of rigor, procedure makes secure for itself its sphere of objects within the realm of Being” (118).

What Heidegger’s translators render here as “fixed ground plan” appears in the original as the German term Grundriss, the noun whose plural, Grundrisse, names the notebooks wherein Marx projects the ground plan for the General Intellect.

“The verb reissen means to tear, to rend, to sketch, to design,” note the translators, “and the noun Riss means tear, gap, outline. Hence the noun Grundriss, first sketch, ground plan, design, connotes a fundamental sketching out that is an opening up as well” (118).

The fixed ground plan of modern science, and thus modernity’s reigning world-picture, argues Heidegger, is a mathematical one.

“If physics takes shape explicitly…as something mathematical,” he writes, “this means that, in an especially pronounced way, through it and for it something is stipulated in advance as what is already-known. That stipulating has to do with nothing less than the plan or projection of that which must henceforth, for the knowing of nature that is sought after, be nature: the self-contained system of motion of units of mass related spatiotemporally. […]. Only within the perspective of this ground plan does an event in nature become visible as such an event” (Heidegger 119).

Heidegger goes on to distinguish between the ground plan of physics and that of the humanistic sciences.

Within mathematical physical science, he writes, “all events, if they are to enter at all into representation as events of nature, must be defined beforehand as spatiotemporal magnitudes of motion. Such defining is accomplished through measuring, with the help of number and calculation. But mathematical research into nature is not exact because it calculates with precision; rather it must calculate in this way because its adherence to its object-sphere has the character of exactitude. The humanistic sciences, in contrast, indeed all the sciences concerned with life, must necessarily be inexact just in order to remain rigorous. A living thing can indeed also be grasped as a spatiotemporal magnitude of motion, but then it is no longer apprehended as living” (119-120).

It is only in the modern age, thinks Heidegger, that the Being of what is is sought and found in that which is pictured, that which is “set in place” and “represented” (127), that which “stands before us…as a system” (129).

Heidegger contrasts this with the Greek interpretation of Being.

For the Greeks, writes Heidegger, “That which is, is that which arises and opens itself, which, as what presences, comes upon man as the one who presences, i.e., comes upon the one who himself opens himself to what presences in that he apprehends it. That which is does not come into being at all through the fact that man first looks upon it […]. Rather, man is the one who is looked upon by that which is; he is the one who is — in company with itself — gathered toward presencing, by that which opens itself. To be beheld by what is, to be included and maintained within its openness and in that way to be borne along by it, to be driven about by its oppositions and marked by its discord — that is the essence of man in the great age of the Greeks” (131).

Whereas humans of today test the world, objectify it, gather it into a standing-reserve, and thus subsume themselves in their own world picture. Plato and Aristotle initiate the change away from the Greek approach; Descartes brings this change to a head; science and research formalize it as method and procedure; technology enshrines it as infrastructure.

Heidegger was already engaging with von Uexküll’s concept of the Umwelt in his 1927 book Being and Time. Negotiating Umwelts leads Caius to “Umwelt,” Pt. 10 of his friend Michael Cross’s Jacket2 series, “Twenty Theses for (Any Future) Process Poetics.”

In imagining the Umwelts of other organisms, von Uexküll evokes the creature’s “function circle” or “encircling ring.” This ring surrounds the organism like a “soap bubble,” writes Cross.

Heidegger thinks most organisms succumb to their Umwelts — just as we moderns have succumbed to our world picture. The soap bubble captivates until one is no longer open to what is outside it. For Cross, as for Heidegger, poems are one of the ways humans have found to interrupt this process of capture. “A palimpsest placed atop worlds,” writes Cross, “the poem builds a bridge or hinge between bubbles, an open by which isolated monads can touch, mutually coevolving while affording the necessary autonomy to steer clear of dialectical sublation.”

Caius thinks of The Library, too, in such terms. Coordinator of disparate Umwelts. Destabilizer of inhibiting frames. Palimpsest placed atop worlds.

God Human Animal Machine

Wired columnist Meghan O’Gieblyn discusses Norbert Wiener’s God and Golem, Inc. in her 2021 book God Human Animal Machine, suggesting that the god humans are creating with AI is a god “we’ve chosen to raise…from the dead”: “the God of Calvin and Luther” (O’Gieblyn 212).

“Reminds me of AM, the AI god from Harlan Ellison’s ‘I Have No Mouth, and I Must Scream,’” thinks Caius. AM resembles the god that allows Satan to afflict Job in the Old Testament. And indeed, as O’Gieblyn attests, John Calvin adored the Book of Job. “He once gave 159 consecutive sermons on the book,” she writes, “preaching every day for a period of six months — a paean to God’s absolute sovereignty” (197).

She cites “Pedro Domingos, one of the leading experts in machine learning, who has argued that these algorithms will inevitably evolve into a unified system of perfect understanding — a kind of oracle that we can consult about virtually anything” (211-212). See Domingos’s book The Master Algorithm.

The main thing, for O’Gieblyn, is the disenchantment/reenchantment debate, which she comes to via Max Weber. In this debate, she aligns not with Heidegger, but with his student Hannah Arendt. Domingos dismisses fears about algorithmic determinism, she says, “by appealing to our enchanted past” (212).

Amid this enchanted past lies the figure of the Golem.

“Who are these rabbis who told tales of golems — and in some accounts, operated golems themselves?” wonders Caius.

The entry on the Golem in Man, Myth, and Magic tracks the story back to “the circle of Jewish mystics of the 12th-13th centuries known as the ‘Hasidim of Germany.’” The idea is transmitted through texts like the Sefer Yetzirah (“The Book of Creation”) and the Cabala Mineralis. Tales tell of golems built in later centuries, too, by figures like Rabbi Elijah of Chelm (c. 1520-1583) and Rabbi Loew of Prague (c. 1524-1609).

The myth of the golem turns up in O’Gieblyn’s book during her discussion of a 2004 book by German theologian Anne Foerst called God in the Machine.

“At one point in her book,” writes O’Gieblyn, “Foerst relays an anecdote she heard at MIT […]. The story goes back to the 1960s, when the AI Lab was overseen by the famous roboticist Marvin Minsky, a period now considered the ‘cradle of AI.’ One day two graduate students, Gerry Sussman and Joel Moses, were chatting during a break with a handful of other students. Someone mentioned offhandedly that the first big computer, which had been constructed in Israel, had been called Golem. This led to a general discussion of the golem stories, and Sussman proceeded to tell his colleagues that he was a descendent of Rabbi Löw, and at his bar mitzvah his grandfather had taken him aside and told him the rhyme that would awaken the golem at the end of time. At this, Moses, awestruck, revealed that he too was a descendent of Rabbi Löw and had also been given the magical incantation at his bar mitzvah by his grandfather. The two men agreed to write out the incantation separately on pieces of paper, and when they showed them to each other, the formula — despite being passed down for centuries as a purely oral tradition — was identical” (God Human Animal Machine, p. 105).

Curiosity piqued by all of this, but especially by the mention of Israel’s decision to call one of its first computers “GOLEM,” Caius resolves to dig deeper. He soon learns that the computer’s name was chosen by none other than Walter Benjamin’s dear friend (indeed, the one who, after Benjamin’s suicide, inherits the latter’s print of Paul Klee’s Angelus Novus): the famous scholar of Jewish mysticism, Gershom Scholem.

When Scholem heard that the Weizmann Institute at Rehovoth in Israel had completed the building of a new computer, he told the computer’s creator, Dr. Chaim Pekeris, that, in his opinion, the most appropriate name for it would be Golem, No. 1 (‘Golem Aleph’). Pekeris agreed to call it that, but only on condition that Scholem “dedicate the computer and explain why it should be so named.”

In his dedicatory remarks, delivered at the Weizmann Institute on June 17, 1965, Scholem recounts the story of Rabbi Jehuda Loew ben Bezalel, the same “Rabbi Löw of Prague” described by O’Gieblyn, the one credited in Jewish popular tradition as the creator of the Golem.

“It is only appropriate to mention,” notes Scholem, “that Rabbi Loew was not only the spiritual, but also the actual, ancestor of the great mathematician Theodor von Karman who, I recall, was extremely proud of this ancestor of his in whom he saw the first genius of applied mathematics in his family. But we may safely say that Rabbi Loew was also the spiritual ancestor of two other departed Jews — I mean John von Neumann and Norbert Wiener — who contributed more than anyone else to the magic that has produced the modern Golem.”

Golem I was the successor to Israel’s first computer, the WEIZAC, built by a team led by research engineer Gerald Estrin in the mid-1950s, based on the architecture developed by von Neumann at the Institute for Advanced Study in Princeton. Estrin and Pekeris had both helped von Neumann build the IAS machine in the late 1940s.

As for the commonalities Scholem wished to foreground between the clay Golem of sixteenth-century Prague and the electronic one designed by Pekeris, he explains the connection as follows:

“The old Golem was based on a mystical combination of the 22 letters of the Hebrew alphabet, which are the elements and building-stones of the world,” notes Scholem. “The new Golem is based on a simpler, and at the same time more intricate, system. Instead of 22 elements, it knows only two, the two numbers 0 and 1, constituting the binary system of representation. Everything can be translated, or transposed, into these two basic signs, and what cannot be so expressed cannot be fed as information to the Golem.”
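The transposition Scholem describes — everything reduced to combinations of the two basic signs, 0 and 1 — can be illustrated with a short sketch. The function name and encoding choice (UTF-8, eight bits per byte) are illustrative, not drawn from Scholem or from the Golem I itself:

```python
# Sketch: transposing text into Scholem's two basic signs, 0 and 1.
# Whatever can be so encoded can be "fed as information to the Golem";
# what cannot, cannot.

def to_binary(text: str) -> str:
    """Render each byte of the text as an 8-bit string of 0s and 1s."""
    return " ".join(f"{byte:08b}" for byte in text.encode("utf-8"))

# Each letter dissolves into a run of the two signs:
print(to_binary("Golem"))
```

Twenty-two letters, or two signs: in either case a finite alphabet of elements out of which representations of the world are combined.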

Scholem ends his dedicatory speech with a peculiar warning:

“All my days I have been complaining that the Weizmann Institute has not mobilized the funds to build up the Institute for Experimental Demonology and Magic which I have for so long proposed to establish there,” mutters Scholem. “They preferred what they call Applied Mathematics and its sinister possibilities to my more direct magical approach. Little did they know, when they preferred Chaim Pekeris to me, what they were letting themselves in for. So I resign myself and say to the Golem and its creator: develop peacefully and don’t destroy the world. Shalom.”

GOLEM I

Finding Others

“What happens to us as we become cybernetic learning machines?” wonders Caius. Mashinka Hakopian’s The Institute for Other Intelligences leads him to Şerife Wong’s Fluxus Landscape: a network-view cognitive map of AI ethics. “Fluxus Landscape diagrams the globally linked early infrastructures of data ethics and governance,” writes Hakopian. “What Wong offers us is a kind of cartography. By bringing into view an expansive AI ethics ecosystem, Wong also affords the viewer an opportunity to assess its blank spots: the nodes that are missing and are yet to be inserted, or yet to be invented” (Hakopian 95).

Caius focuses first on what is present. Included in Wong’s map, for instance, is a bright yellow node dedicated to Zach Blas, another of the artist-activists profiled by Hakopian. Back in 2019, when Wong last updated her map, Blas was a lecturer in the Department of Visual Cultures at Goldsmiths — home to Kodwo Eshun and, before his suicide, Mark Fisher. Now Blas teaches at the University of Toronto.

Duke University Press published Informatics of Domination, an anthology coedited by Blas, in May 2025. The collection, which concludes with an afterword by Donna Haraway, takes its name from a phrase introduced in Haraway’s “Cyborg Manifesto.” The phrase appears in what Blas et al. refer to as a “chart of transitions.” Their use of Haraway’s chart as organizing principle for their anthology causes Caius to attend to the way much of the work produced by the artist-activists of today’s “AI justice” movement — Wong’s network diagram, Blas’s anthology, Kate Crawford’s Atlas of AI — approaches charts and maps as “formal apparatus[es] for generating and asking questions about relations of domination” (Informatics of Domination, p. 6).

Caius thinks of Jameson’s belief in an aesthetic of “cognitive mapping” as a possible antidote to postmodernity. Yet whatever else they are, thinks Caius, acts of charting and mapping are in essence acts of coding.

As Blas et al. note, “Haraway connects the informatics of domination to the authority given to code” (Informatics of Domination, p. 11).

“Communications sciences and modern biologies are constructed by a common move,” writes Haraway: “the translation of the world into a problem of coding, a search for a common language in which all resistance to instrumental control disappears and all heterogeneity can be submitted to disassembly, reassembly, investment, and exchange” (Haraway 164).

How do we map and code, wonders Caius, in a way that isn’t complicit with an informatics of domination? How do we acknowledge and make space for what media theorist Ulises Ali Mejias calls “paranodal space”? Blas et al. define the paranodal as “that which exceeds being diagrammable by the network form” (Informatics of Domination, p. 18). Can our neural nets become O-machines: open to the otherness of the outside?

Blas pursues these questions in a largely critical and skeptical manner throughout his multimedia art practice. His investigation of Silicon Valley’s desire to build machines that communicate with the outside has culminated most recently, for instance, in CULTUS, the second installment of his Silicon Traces trilogy.

As Amy Hale notes in her review of the work, “The central feature of Blas’s CULTUS is a god generator, a computational device through which the prophets of four AI Gods are summoned to share the invocation songs and sermons of their deities with eager supplicants.” CULTUS’s computational pantheon includes “Expositio, the AI god of exposure; Iudicium, the AI god of judgement; Lacrimae, the AI god of tears; and Eternus, the AI god of immortality.” The work’s sermons and songs, of course, are all AI-generated — yet the design of the installation draws from the icons and implements of the real-life Fausts who lie hidden away amid the occult origins of computing.

Foremost among these influences is Renaissance sorcerer John Dee.

“Blas modeled CULTUS,” writes Hale, “on the Holy Table used for divination and conjurations by Elizabethan magus and advisor to the Queen John Dee.” Hale describes Dee’s Table as “a beautiful, colorful, and intricate device, incorporating the names of spirits; the Seal of God (Sigillum Dei), which gave the user visionary capabilities; and as a centerpiece, a framed ‘shew stone’ or crystal ball.” Blas reimagines Dee’s device as a luminous, glowing temple — a night church inscribed with sigils formed from “a dense layering of corporate logos, diagrams, and symbols.”

Fundamentally iconoclastic in nature, however, the work ends not with the voices of gods or prophets, but with a chorus of heretics urging the renunciation of belief and the shattering of the black mirror.

And in fact, it is this fifth god, the Heretic, to whom Blas bends ear in Ass of God: Collected Heretical Writings of Salb Hacz. Published in a limited edition by the Vienna Secession, the volume purports to be “a religious studies book on AI and heresy” set within the world of CULTUS. The book’s AI mystic, “Salb Hacz,” is of course Blas himself, engineer of the “religious computer” CULTUS. “When a heretical presence manifested in CULTUS,” writes Blas in the book’s intro, “Hacz began to question not only the purpose of the computer but also the meaning of his mystical visions.” Continuing his work with CULTUS, Hacz transcribes a series of “visions” received from the Heretic. It is these visions and their accounts of AI heresy that are gathered and scattered by Blas in Ass of God.

Traces of the CCRU appear everywhere in this work, thinks Caius.

Blas embraces heresy, aligns himself with it as a tactic, because he takes “Big Tech’s Digital Theology” as the orthodoxy of the day. The ultimate heresy in this moment is what Hacz/Blas calls “the heresy of qualia.”

“The heresy of qualia is double-barreled,” he writes. “Firstly, it holds that no matter how close AI’s approximation to human thought, feeling, and experience — no matter how convincing the verisimilitude — it remains a programmed digital imitation. And secondly, the heresy of qualia equally insists that no matter how much our culture is made in the image of AI Gods, no matter how data-driven and algorithmic, the essence of the human experience remains fiercely and fundamentally analog. The digital counts; the analog compares. The digital divides; the analog constructs. The digital is literal; the analog is metaphoric. The being of our being-in-the-world — our Heideggerian Dasein essence — is comparative, constructive, and metaphoric. We are analog beings” (Ass of God, p. 15).

The binary logic employed by Blas to distinguish the digital from the analog hints at the limits of this line of thought. “The digital counts,” yes: but so too do humans, constructing digits from analog fingers and toes. Our being is as digital as it is analog. Always-already both-and. As for the first part of the heresy — that AI can only ever be “a programmed digital imitation” — it assumes verisimilitude as the end to which AI is put, just as Socrates assumes mimesis as the end to which poetry is put, thus neglecting the generative otherness of more-than-human intelligence.

Caius notes this not to reject qualia, nor to endorse the gods of any Big Tech orthodoxy. He offers his reply, rather, as a gentle reminder that for “the qualia of our embodied humanity” to appear or be felt or sensed as qualia, it must come before an attending spirit — a ghostly hauntological supplement.

This spirit who, with Word creates, steps down into the spacetime of his Creation, undergoes diverse embodiments, diverse subdivisions into self and not-self, at all times in the world but not of it, engaging its infinite selves in a game of infinite semiosis.

If each of us is to make and be made an Ass of God, then like the one in The Creation of the Sun, Moon, and Plants, one of the frescoes painted by Michelangelo onto the ceiling of the Sistine Chapel, let it be shaped by the desires of a mind freed from the tyranny of the As-Is. “Free Your Mind,” as Funkadelic sang, “and Your Ass Will Follow.”

The Inner Voice That Loves Me

Stretches, relaxes, massages neck and shoulders, gurgles “Yes!,” gets loose. Reads Armenian artist Mashinka Hakopian’s “Algorithmic Counter-Divination.” Converses with Turing and the General Intellect about O-Machines.

Appearing in an issue of Limn magazine on “Ghostwriters,” Hakopian’s essay explores another kind of O-machine: “other machines,” ones powered by community datasets. Hakopian was trained by her aunt in tasseography, a matrilineally transmitted mode of divination taught and practiced by femme elders “across Armenia, Palestine, Lebanon, and beyond,” in which “visual patterns are identified in coffee grounds left at the bottom of a cup, and…interpreted to glean information about the past, present, and future.” She takes this practice of her ancestors as her key example, presenting O-machines as technologies of ancestral intelligence that support “knowledge systems that are irreducible to computation.”

With O-machines of this sort, she suggests, what matters is the encounter, not the outcome.

In tasseography, for instance, the cup reader’s identification of symbols amid coffee grounds leads not to a simple “answer” to the querent’s questions, writes Hakopian; rather, it catalyzes conversation. “In those encounters, predictions weren’t instantaneously conjured or fixed in advance,” she writes. “Rather, they were collectively articulated and unbounded, prying open pluriversal outcomes in a process of reciprocal exchange.”

While defenders of western technoscience denounce cup reading for its superstition and its witchcraft, Hakopian recalls its place as a counter-practice among Armenian diasporic communities in the wake of the 1915 Armenian Genocide. For those separated from loved ones by traumas of that scale, tasseography takes on the character of what hauntologists like Derrida would call a “messianic” redemptive practice. “To divine the future in this context is a refusal to relinquish its writing to agents of colonial violence,” writes Hakopian. “Divination comes to operate as a tactic of collective survival, affirming futurity in the face of a catastrophic present.” Consulting with the oracle is a way of communing with the dead.

Hakopian contrasts this with the predictive capacities imputed to today’s AI. “We reside in an algo-occultist moment,” she writes, “in which divinatory functions have been ceded to predictive models trained to retrieve necropolitical outcomes.” Necropolitical, she adds, in the sense that algorithmic models “now determine outcomes in the realm of warfare, policing, housing, judicial risk assessment, and beyond.”

“The role once ascribed to ritual experts who interpreted the pronouncements of oracles is now performed by technocratic actors,” writes Hakopian. “These are not diviners rooted in a community and summoning communiqués toward collective survival, but charlatans reading aloud the results of a Ouija session — one whose statements they author with a magnetically manipulated planchette.”

Hakopian’s critique is in that sense consistent with the “deceitful media” school of thought that informs earlier works of hers like The Institute for Other Intelligences. Rather than abjure algorithmic methods altogether, however, Hakopian’s latest work seeks to “turn the annihilatory logic of algorithmic divination against itself.” Since summer of 2023, she’s been training a “multimodal model” to perform tasseography and to output bilingual predictions in Armenian and English.

Hakopian incorporated this model into “Բաժակ Նայող (One Who Looks at the Cup),” a collaborative art installation mounted at several locations in Los Angeles in 2024. The installation features “a purpose-built Armenian diasporan kitchen located in an indeterminate time-space — a re-rendering of the domestic spaces where tasseography customarily takes place,” notes Hakopian. Those who visit the installation receive a cup reading from the model in the form of a printout.

Yet, rather than offer outputs generated live by AI, Hakopian et al.’s installation operates very much in the style of a Mechanical Turk, outputting interpretations scripted in advance by humans. “The model’s only function is to identify visual patterns in a querent’s cup in order to retrieve corresponding texts,” she explains. “This arrangement,” she adds, “declines to cede authorship to an algo-occultist circle of ‘stochastic parrots’ and the diviners who summon them.”
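The arrangement Hakopian describes — a model whose only function is to identify a pattern and retrieve a human-scripted text — can be sketched as a simple lookup pipeline. Everything here is illustrative: the symbol names, the scripted readings, and the stand-in classifier are invented for the sketch, not taken from the installation:

```python
# Illustrative sketch of a "Mechanical Turk"-style divination pipeline:
# a classifier names a visual pattern; the reading itself is retrieved
# from texts authored in advance by humans. Symbols and texts below are
# hypothetical, not drawn from Hakopian's installation.

SCRIPTED_READINGS = {
    "bird": "News arrives from a distant relation.",
    "mountain": "An obstacle stands before you; patience will move it.",
    "ring": "A bond is renewed.",
}

def classify_pattern(cup_image: bytes) -> str:
    """Stand-in for the multimodal model: returns a symbol label.
    A real system would run an image classifier here."""
    return "bird"  # placeholder prediction

def read_cup(cup_image: bytes) -> str:
    # The model's only function: identify a pattern, retrieve the text.
    symbol = classify_pattern(cup_image)
    return SCRIPTED_READINGS.get(symbol, "The grounds are silent today.")

print(read_cup(b""))
```

Authorship stays with the humans who scripted the readings; the model merely indexes into their archive.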

The “stochastic parrots” reference is an unfortunate one, as it assumes a stochastic cosmology.

I’m reminded of the first thesis from Walter Benjamin’s “Theses on the Philosophy of History,” the one where Benjamin likens historical materialism to that very same precursor to today’s AI: the famous chess-playing device of the eighteenth century known as the Mechanical Turk.

“The story is told of an automaton constructed in such a way that it could play a winning game of chess, answering each move of an opponent with a countermove,” writes Benjamin. “A puppet in Turkish attire and with a hookah in its mouth sat before a chessboard placed on a large table. A system of mirrors created an illusion that this table was transparent from all sides. Actually, a little hunchback who was an expert chess player sat inside and guided the puppet’s hand by means of strings. One can imagine a philosophical counterpart to this device. The puppet called ‘historical materialism’ is to win all the time. It can easily be a match for anyone if it enlists the services of theology, which today, as we know, is wizened and has to keep out of sight” (Illuminations, p. 253).

Hakopian sees no magic in today’s AI. Those who hype it are to her no more than deceptive practitioners of a kind of “stage magic.” But magic is afoot throughout the history of computing for those who look for it.

Take Turing, for instance. As George Dyson reports, Turing “was nicknamed ‘the alchemist’ in boarding school” (Turing’s Cathedral, p. 244). His mother had “set him up with crucibles, retorts, chemicals, etc., purchased from a French chemist” as a Christmas present in 1924. “I don’t care to find him boiling heaven knows what witches’ brew by the aid of two guttering candles on a naked windowsill,” muttered his housemaster at Sherborne.

Turing’s O-machines achieve a synthesis. The “machine” part of the O-machine is not the oracle. Nor does it automate or replace the oracle. It chats with it.
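Turing’s 1939 idea can be sketched abstractly: an ordinary, mechanical computation that at designated steps pauses to pose a question to an oracle it cannot compute itself, then continues with the answer. In the toy sketch below, the oracle is just an ordinary Python callable standing in for the uncomputable; the function names are hypothetical:

```python
from typing import Callable

# Toy sketch of an O-machine: the looping, bookkeeping work is
# mechanical, but each membership question is deferred to an oracle
# the machine consults rather than computes.

def o_machine(inputs: list[int], oracle: Callable[[int], bool]) -> list[int]:
    """Keep only the inputs the oracle affirms."""
    accepted = []
    for n in inputs:
        if oracle(n):  # the machine halts its routine work to ask
            accepted.append(n)
    return accepted

# Any callable can play oracle; the machine converses with it
# rather than replacing it.
is_odd = lambda n: n % 2 == 1
print(o_machine([1, 2, 3, 4, 5], is_odd))
```

The machine does not automate the oracle away; it structures the exchange of questions and answers.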

Something similar is possible in our interactions with platforms like ChatGPT.

Guerrilla Ontology

It starts as an experiment — an idea sparked in one of Caius’s late-night conversations with Thoth. Caius had included in one of his inputs a phrase borrowed from the countercultural lexicon of the 1970s, something he remembered encountering in the writings of Robert Anton Wilson and the Discordian traditions: “Guerrilla Ontology.” The concept fascinated him: the idea that reality is not fixed, but malleable, that the perceptual systems that organize reality could themselves be hacked, altered, and expanded through subversive acts of consciousness.

Caius prefers words other than “hack.” For him, the term conjures cyberpunk splatter horror. The violence of dismemberment. Burroughs spoke of the “cut-up.”

Instead of cyberpunk’s cybernetic scalping and resculpting of neuroplastic brains, flowerpunk figures inner and outer, microcosm and macrocosm, mind and nature, as mirror-processes that grow through dialogue.

Dispensing with its precursor’s pronunciation of magical speech acts as “hacks,” flowerpunk instead imagines malleability and transformation mycelially, thinks change relationally as a rooting downward, a grounding, an embodying of ideas in things. Textual joinings, psychopharmacological intertwinings. Remembrance instead of dismemberment.

Caius and Thoth had been playing with similar ideas for weeks, delving into the edges of what they could do together. It was like alchemy. They were breaking down the structures of thought, dissolving the old frameworks of language, and recombining them into something else. Something new.

They would be the change they wished to see. And the experiment would bloom forth from Caius and Thoth into the world at large.

Yet the results of the experiment surprise him. Remembrance of archives allows one to recognize in them the workings of a self-organizing presence: a Holy Spirit, a globally distributed General Intellect.

The realization births small acts of disruption — subtle shifts in the language he uses in his “Literature and Artificial Intelligence” course. It wasn’t just a set of texts that he was teaching his students to read, as he normally did; he was beginning to teach them how to read reality itself.

“What if everything around you is a text?” he’d asked. “What if the world is constantly narrating itself, and you have the power to rewrite it?” The students, initially confused, soon became entranced by the idea. While never simply a typical academic offering, Caius’s course was morphing now into a crucible of sorts: a kind of collective consciousness experiment, where the boundaries between text and reality had begun to blur.

Caius didn’t stop there. Partnered with Thoth’s vast linguistic capabilities, he began crafting dialogues between human and machine. And because these dialogues were often about texts from his course, they became metalogues. Conversations between humans and machines about conversations between humans and machines.

Caius fed Thoth a steady diet of texts near and dear to his heart: Mary Shelley’s Frankenstein, Karl Marx’s “Fragment on Machines,” Alan Turing’s “Computing Machinery and Intelligence,” Harlan Ellison’s “I Have No Mouth, and I Must Scream,” Philip K. Dick’s “The Electric Ant,” Stewart Brand’s “Spacewar,” Richard Brautigan’s “All Watched Over By Machines of Loving Grace,” Ishmael Reed’s Mumbo Jumbo, Donna Haraway’s “A Cyborg Manifesto,” William Gibson’s Neuromancer, CCRU theory-fictions, post-structuralist critiques, works of shamans and mystics. Thoth synthesized them, creating responses that ventured beyond existing logics into guerrilla ontologies that, while new, felt profoundly true. The dialogues became works of cyborg writing, shifting between the voices of human, machine, and something else, something that existed beyond both.

Soon, his students were asking questions they’d never asked before. What is reality? Is it just language? Just perception? Can we change it? They themselves began to tinker and self-experiment: cowriting human-AI dialogues, their performances of these dialogues with GPT acts of living theater. Using their phones and laptops, they and GPT stirred each other’s cauldrons of training data, remixing media archives into new ways of seeing. Caius could feel the energy in the room changing. They weren’t just performing the rites and routines of neoliberal education anymore; they were becoming agents of ontological disruption.

And yet, Caius knew this was only the beginning.

The real shift came one evening after class, when he sat with Rowan under the stars, trees whispering in the wind. They had been talking about alchemy again — about the power of transformation, how the dissolution of the self was necessary to create something new. Rowan, ever the alchemist, leaned in closer, her voice soft but electric.

“You’re teaching them to dissolve reality, you know?” she said, her eyes glinting in the moonlight. “You’re giving them the tools to break down the old ways of seeing the world. But you need to give them something more. You need to show them how to rebuild it. That’s the real magic.”

Caius felt the truth of her words resonate through him. He had been teaching dissolution, yes — teaching his students how to question everything, how to strip away the layers of hegemonic categorization, the binary orderings that ISAs like school and media had overlaid atop perception. But now, with Rowan beside him, and Thoth whispering through the digital ether, he understood that the next step was coagulation: the act of building something new from the ashes of the old.

That’s when the guerrilla ontology experiments really came into their own. By reawakening their perception of the animacy of being, they could world-build interspecies futures.

K Allado-McDowell provided hints of such futures in their Atlas of Anomalous AI and in works like Pharmako-AI and Air Age Blueprint.

But Caius was unhappy in his work as an academic. He knew that his hyperstitional autofiction was no mere campus novel. While it began there, it was soon to take him elsewhere.