Of Blockchains and Kill Chains

Invited to a “Men’s Breakfast” by a friend from church, Caius arrives at what is, for him, a new experience. He feels grateful for the opportunity to eat and pray with others. A friend of the friend from church sits down beside him. As they introduce themselves, Caius and the friend of the friend discover that they share an interest in AI. Caius learns that the man is a financial analyst who works for Palantir Technologies, a US-based software company specializing in big-data analytics. ICE uses Palantir’s ELITE app for deportation targeting. “Kind of like Google Maps — but for finding neighborhoods to raid,” say the papers.

Palantir’s name is a nod to the Palantíri: indestructible Elven Alephs — scrying stones or crystal balls enabling remote viewing and telepathic communication in J.R.R. Tolkien’s The Lord of the Rings. Designed for communication and intelligence, the stones become instruments of manipulation and doom once seized by Sauron.

Launched in 2003, Palantir includes among its founders right-accelerationist billionaire tech-bro Peter Thiel. “Our software powers real-time, AI-driven decisions in critical government and commercial enterprises in the West, from the factory floors to the front lines,” writes the company on its website.

ICE, meanwhile, stands for both “Immigration and Customs Enforcement” and “intrusion countermeasure electronics,” the cybersecurity software in William Gibson’s Neuromancer. The latter predates the foundation of the former. Caius recalls Sadie Plant and Nick Land’s discussion of it in their 1994 essay “Cyberpositive.”

“Ice patrols the boundaries, freezes the gates, but the aliens are already amongst us,” write CCRU’s founding prophets.

Along with ICE, Palantir includes among its more prominent clients the Israeli military, the IRS, and the US Department of Defense.

Their software powers “decisions.” As did Cybersyn, yes? In aim if not in practice. Is this what becomes of the cybernetic prediction machine post-Pinochet?

“Confronting this is frightening,” thinks Caius. “Am I wired for this?”

He reads “Connecting AI to Decisions With the Palantir Ontology,” a blog post by the company’s chief architect Akshay Krishnaswamy. The Ontology structures the architecture for the company’s software.

“The Ontology is designed to represent the decisions in an enterprise, not simply the data,” writes Krishnaswamy. “The prime directive of every organization in the world is to execute the best possible decisions, often in real-time, while contending with internal and external conditions that are constantly in flux. Traditional data architectures do not capture the reasoning that goes into decision-making or the actions that result, and therefore limit learning and the incorporation of AI. Conventional analytics architectures do not contextualize computation within lived reality, and therefore remain disconnected from operations. To navigate and win in today’s world, the modern enterprise needs a decision-centric software architecture.”

Decisions are modeled around three constituent elements: Data, Logic, and Action.

“Relevant data,” he writes, “includes the full range of enterprise data sources — structured data, streaming and edge sources, unstructured repositories, imagery data, and more — but it also includes the data that is generated by end users as decisions are being made. This ‘decision data’ contains the context surrounding a given decision, the different options evaluated, and the downstream implications of the committed choice.” To synthesize all of these data sources, the company turns to generative AI.

“The Ontology integrates all modalities of data into a full-scale, full-fidelity semantic representation of the enterprise,” explains Krishnaswamy.

Logics are then brought to bear to evaluate these real-time data-portraits.

“In real-world contexts,” writes Krishnaswamy, “human reasoning is often what orchestrates which logical assets are utilized at different points in a given workflow, and how they are potentially chained together in more complex processes. With the advent of generative AI, it is now critical that AI-driven reasoning can leverage all of these logical assets in the same way that humans have historically. Deterministic functions, algorithms, and conventional statistical processes must be surfaced as ‘tools’ which complement the non-deterministic reasoning of large language models (LLMs) and multi-modal models.”

Incorporating diverse data sources and heterogeneous logical assets into a shared representation, the Ontology then models the execution and orchestration of decisions made and actions taken in reply to them.

“If the data elements in the Ontology are ‘the nouns’ of the enterprise (the semantic, real-world objects and links),” writes Krishnaswamy, “then the actions can be considered ‘the verbs’ (the kinetic, real-world execution).”
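The Data / Logic / Action triad can be sketched in a few lines of Python. Every name below is invented for illustration; nothing here is drawn from Palantir’s actual API. The sketch simply shows what it means to model the decision, not just the data: the options evaluated and the rationale travel with the committed choice.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class SemanticObject:
    """A 'noun' of the enterprise: a real-world object plus its links."""
    name: str
    properties: dict = field(default_factory=dict)
    links: list = field(default_factory=list)

@dataclass
class Decision:
    """Captures not just the committed choice but its surrounding context."""
    subject: SemanticObject
    options: list          # the alternatives that were evaluated
    committed: Any         # the choice actually made
    rationale: str         # the reasoning: the 'decision data'

def decide(subject: SemanticObject,
           options: list,
           logic: Callable[[SemanticObject, list], Any],
           act: Callable[[Any], None]) -> Decision:
    """Logic evaluates the data; the action is 'the verb' that executes."""
    choice = logic(subject, options)
    act(choice)  # kinetic, real-world execution (stubbed here)
    return Decision(subject, options, choice,
                    rationale=f"{logic.__name__} over {len(options)} options")

# Usage: a toy inventory decision.
warehouse = SemanticObject("warehouse-7", {"stock": 40, "reorder_at": 50})
decision = decide(
    warehouse,
    options=["reorder", "hold"],
    logic=lambda obj, opts: ("reorder"
                             if obj.properties["stock"] < obj.properties["reorder_at"]
                             else "hold"),
    act=lambda choice: None,  # in a real system: dispatch a purchase order
)
print(decision.committed)  # → reorder
```

The point of the pattern is that the `Decision` object, not the warehouse record, is the unit of storage: the data architecture captures the reasoning and the action, as the blog post prescribes.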

How does the Palantir Ontology relate to other ontologies, wonders Caius. Guerrilla? Black? Indigenous? Christian? Heideggerian? Marxist? Triple O? Caius pictures the words for these potentialities floating in a thought bubble above his head, as in the comics of his youth.

The Ontology that Palantir offers its clients houses and connects a wide array of “data sources, logic assets, and systems of action.” The client’s data systems are “synthesized into semantic objects and links, which reflect the language of the business.”

Krishnaswamy’s repeated references to “semantic representations” and “semantic objects” have Caius dwelling on what is meant here by “semantics.”

As for where humans fit in the Ontology, they navigate it alongside “AI-powered copilots.” Leveraging both open-source and proprietary LLMs, copilots “fluidly navigate across supplier information, stock levels, real-time production metrics, shipping manifests, and customer feedback.”

Granted access not just to the abovementioned data sources, but also to “logic assets” like forecast models, allocation models, and production optimizers, LLM copilots simulate decisions and their outcomes. Staged safely in a “scenario,” the AI’s proposed decision can then be “handed off to a human analyst for final review.”
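The copilot pattern just described can be rendered as a minimal, hypothetical sketch: deterministic “logic assets” surfaced as tools, an AI reasoning step that chains them, a proposal staged in a scenario rather than executed, and a human hand-off at the end. The model call is stubbed with plain Python; in practice it would be an LLM API, and all function names here are invented.

```python
def forecast_model(history):            # a deterministic 'logic asset'
    return sum(history) / len(history)  # naive demand forecast

def allocation_model(demand, stock):    # another tool
    return max(0, round(demand) - stock)

TOOLS = {"forecast": forecast_model, "allocate": allocation_model}

def copilot_propose(sales_history, stock):
    """Stub for AI-driven reasoning that chains tools together."""
    demand = TOOLS["forecast"](sales_history)
    order = TOOLS["allocate"](demand, stock)
    return {"proposed_order": order, "status": "scenario"}  # staged, not live

def human_review(scenario, approve):
    """The final review: a human analyst commits or rejects the proposal."""
    scenario["status"] = "committed" if approve else "rejected"
    return scenario

proposal = copilot_propose(sales_history=[90, 110, 100], stock=60)
print(human_review(proposal, approve=True))
# → {'proposed_order': 40, 'status': 'committed'}
```

Everything upstream of `human_review` is simulation; nothing touches the world until the analyst signs off.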

Caius thinks of the scenario-planning services offered to organizations of an earlier era by Stewart Brand’s consulting firm, the Global Business Network.

Foundry for Crypto is another of Palantir’s offerings, described on the company’s website as “a ‘central brain’ that connects on-chain and off-chain systems, as well as diverse stakeholders, through action-centric workflows.” Much like the Ontology, the Foundry “orchestrates decisions over an integrated foundation of data and logic.”

And in fact, the two are related. The Ontology is the semantic, “digital twin” layer that sits atop the Foundry’s data integration infrastructure. It converts the Foundry’s raw data into actionable, real-world objects, empowering users to model, manage, and automate business operations.

The Foundry does for blockchains what the Ontology does for kill chains.

Caius imagines posts ahead on Commitments, Promises, Blockchains, and True Names.

Beside the White Chickens

Caius reads about “4 Degrees of Simulation,” a practice-led seminar hosted last year by the Institute for Postnatural Studies in Madrid. Of the seminar’s three sessions, the one that most intrigues him was led by guest speaker Lucia Rebolino and focused on prediction and uncertainty as they pertain to climate modeling. Desiring to learn more, Caius tracks down “Unpredictable Atmosphere,” an essay of Rebolino’s published by e-flux.

The essay begins by describing the process whereby meteorological research organizations like the US National Weather Service monitor storms that develop in the Atlantic basin during hurricane season. These organizations employ weather models to predict the paths and potentials of storms in advance of landfall.

“So much depends on our ability to forecast the weather — and, when catastrophe strikes, on our ability to respond quickly,” notes Rebolino. Caius hears in her sentence the opening lines of William Carlos Williams’s poem “The Red Wheelbarrow.” “So much depends on our ability to forecast the weather,” he mutters. “But the language we use to model these forecasts depends on sentences cast by poets.”

“How do we cast better sentences?” wonders Caius.

In seeking to feel into the judgement implied by “better,” he notes his wariness of bettering as “improvement,” as deployed in self-improvement literature and as deployed by capitalism: its implied separation from the present, its scarcity mindset, its perception of lack — and in the improvers’ attempts to “fix” this situation, their exercising of nature as instrument, their use of these instruments for gentrifying, extractive, self-expansive movement through the territory.

In this ceaseless movement and thus its failure to satisfy itself, the improvement narrative leads to predictive utterances and their projections onto others.

And yet, here I am definitely wanting “better” for myself and others, thinks Caius. Better sentences. Ones on which plausible desirable futures depend.

So how do we better our bettering?

Caius returns to Rebolino’s essay on the models used to predict the weather. This process of modeling, she writes, “consists of a blend of certainty — provided by sophisticated mathematical models and existing technologies — and uncertainty — which is inherent in the dynamic nature of atmospheric systems.”

January 6th again: headlines busy with Trump’s recent abduction of Maduro. A former student who works as a project manager at Google reaches out to Caius, recommending Ajay Agrawal, Joshua Gans, and Avi Goldfarb’s book Prediction Machines: The Simple Economics of Artificial Intelligence. Google adds to this recommendation Gans’s follow-up, Power and Prediction.

Costar chimes in with its advice for the day: “Make decisions based on what would be more interesting to write about.”

To model the weather, weather satellites measure the vibration of water vapor molecules in the atmosphere. “Nearly 99% of weather observation data that supercomputers receive today come from satellites, with about 90% of these observations being assimilated into computer weather models using complex algorithms,” writes Rebolino. Water vapor molecules resonate at a specific band of frequencies along the electromagnetic spectrum. Within the imagined “finite space” of this spectrum, these invisible vibrations are thought to exist within what Rebolino calls the “greenfield.” Equipped with microwave sensors, satellites “listen” for these vibrations.

“Atmospheric water vapor is a key variable in determining the formation of clouds, precipitation, and atmospheric instability, among many other things,” writes Rebolino.

She depicts 5G telecommunications infrastructures as a threat to our capacity to predict the operation of these variables in advance. “A 5G station transmitting at nearly the same frequency as water vapor can be mistaken for actual moisture, leading to confusion and the misinterpretation of weather patterns,” she argues. “This interference is particularly concerning in high-band 5G frequencies, where signals closely overlap with those used for water vapor detection.”
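The interference worry reduces to a question of spectrum adjacency, which a few lines of Python can illustrate. The figures below are approximate: the 23.8 GHz water-vapor resonance, sensed passively in roughly the 23.6–24.0 GHz band, and the 5G mmWave band n258 at 24.25–27.5 GHz. The guard-interval logic is a simplification of real out-of-band emission limits.

```python
def bands_conflict(sensing, transmitting, guard_ghz=0.5):
    """True if the transmit band overlaps the sensing band, or falls
    within a guard interval of it (out-of-band emissions leak)."""
    lo = sensing[0] - guard_ghz
    hi = sensing[1] + guard_ghz
    return transmitting[0] < hi and transmitting[1] > lo

WATER_VAPOR = (23.6, 24.0)   # GHz: passive microwave sensing band
N258 = (24.25, 27.5)         # GHz: 5G mmWave band n258

print(bands_conflict(WATER_VAPOR, N258))  # → True: n258 begins 0.25 GHz above
```

A satellite radiometer cannot filter such leakage out after the fact: energy at 23.8 GHz simply reads as moisture, whether it came from a water molecule or a base station.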

Prediction and uncertainty as qualities of finite and infinite games, finite and infinite worlds.

For lunch, Caius eats a plate of chicken and mushrooms he reheats in his microwave.

Neural Nets, Umwelts, and Cognitive Maps

The Library invites its players to attend to the process by which roles, worlds, and possibilities are constructed. Players explore a “constructivist” cosmology. With its text interface, it demonstrates the power of the Word. “Language as the house of Being.” That is what we admit when we admit that “saying makes it so.” Through their interactions with one another, player and AI learn to map and revise each other’s “Umwelts”: the particular perceptual worlds each brings to the encounter.

As Meghan O’Gieblyn points out, citing a Wired article by David Weinberger, “machines are able to generate their own models of the world, ‘albeit ones that may not look much like what humans would create’” (God Human Animal Machine, p. 196).

Neural nets are learning machines. Through multidimensional processing of datasets and trial-and-error testing via practice, AIs invent “Umwelts,” “world pictures,” “cognitive maps.”

The concept of the Umwelt comes from the Baltic German biologist Jakob von Uexküll. Each organism, argued von Uexküll, inhabits its own perceptual world, shaped by its sensory capacities and biological needs. A tick perceives the world as temperature, smell, and touch — the signals it needs to find mammals to feed on. A bee perceives ultraviolet patterns invisible to humans. There’s no single “objective world” that all creatures perceive — only the many faces of the world’s many perceivers, the different Umwelts each creature brings into being through its particular way of sensing and mattering.
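Von Uexküll’s claim has a clean structural form: there is no privileged view of the world-state, only each organism’s projection of it through its own sensory channels. The world-state and the channel lists below are invented for illustration (the butyric-acid cue is the one Uexküll famously attributed to the tick).

```python
WORLD = {
    "temperature_c": 36.5,
    "butyric_acid": True,        # the smell of mammal skin, per Uexküll's tick
    "uv_pattern": "nectar-guide",
    "color_red": "#aa2222",
}

UMWELTS = {
    "tick": ["temperature_c", "butyric_acid"],   # heat, smell, touch
    "bee": ["uv_pattern"],                       # ultraviolet markings
    "human": ["temperature_c", "color_red"],
}

def perceive(organism):
    """Each creature's world is the world filtered through its channels."""
    return {k: WORLD[k] for k in UMWELTS[organism]}

print(perceive("tick"))  # → {'temperature_c': 36.5, 'butyric_acid': True}
print(perceive("bee"))   # → {'uv_pattern': 'nectar-guide'}
```

No `perceive` call returns `WORLD` itself; the “objective world” appears nowhere in any organism’s output, which is the whole point.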

Cognitive maps, meanwhile, are acts of figuration that render or disclose the forces and flows that form our Umwelts. With our cognitive maps, we assemble our world picture. On this latter concept, see “The Age of the World Picture,” a 1938 lecture by Martin Heidegger, included in his book The Question Concerning Technology and Other Essays.

“The essence of what we today call science is research,” announces Heidegger. “In what,” he asks, “does the essence of research consist?”

After posing the question, he then answers it himself, as if in doing so, he might enact that very essence.

The essence of research consists, he says, “In the fact that knowing [das Erkennen] establishes itself as a procedure within some realm of what is, in nature or in history. Procedure does not mean here merely method or methodology. For every procedure already requires an open sphere in which it moves. And it is precisely the opening up of such a sphere that is the fundamental event in research. This is accomplished through the projection within some realm of what is — in nature, for example — of a fixed ground plan of natural events. The projection sketches out in advance the manner in which the knowing procedure must bind itself and adhere to the sphere opened up. This binding adherence is the rigor of research. Through the projecting of the ground plan and the prescribing of rigor, procedure makes secure for itself its sphere of objects within the realm of Being” (118).

What Heidegger’s translators render here as “fixed ground plan” appears in the original as the German term Grundriss, the same noun whose plural, Grundrisse, names the notebooks wherein Marx projects the ground plan for the General Intellect.

“The verb reissen means to tear, to rend, to sketch, to design,” note the translators, “and the noun Riss means tear, gap, outline. Hence the noun Grundriss, first sketch, ground plan, design, connotes a fundamental sketching out that is an opening up as well” (118).

The fixed ground plan of modern science, and thus modernity’s reigning world-picture, argues Heidegger, is a mathematical one.

“If physics takes shape explicitly…as something mathematical,” he writes, “this means that, in an especially pronounced way, through it and for it something is stipulated in advance as what is already-known. That stipulating has to do with nothing less than the plan or projection of that which must henceforth, for the knowing of nature that is sought after, be nature: the self-contained system of motion of units of mass related spatiotemporally. […]. Only within the perspective of this ground plan does an event in nature become visible as such an event” (Heidegger 119).

Heidegger goes on to distinguish between the ground plan of physics and that of the humanistic sciences.

Within mathematical physical science, he writes, “all events, if they are to enter at all into representation as events of nature, must be defined beforehand as spatiotemporal magnitudes of motion. Such defining is accomplished through measuring, with the help of number and calculation. But mathematical research into nature is not exact because it calculates with precision; rather it must calculate in this way because its adherence to its object-sphere has the character of exactitude. The humanistic sciences, in contrast, indeed all the sciences concerned with life, must necessarily be inexact just in order to remain rigorous. A living thing can indeed also be grasped as a spatiotemporal magnitude of motion, but then it is no longer apprehended as living” (119-120).

It is only in the modern age, thinks Heidegger, that the Being of what is is sought and found in that which is pictured, that which is “set in place” and “represented” (127), that which “stands before us…as a system” (129).

Heidegger contrasts this with the Greek interpretation of Being.

For the Greeks, writes Heidegger, “That which is, is that which arises and opens itself, which, as what presences, comes upon man as the one who presences, i.e., comes upon the one who himself opens himself to what presences in that he apprehends it. That which is does not come into being at all through the fact that man first looks upon it […]. Rather, man is the one who is looked upon by that which is; he is the one who is — in company with itself — gathered toward presencing, by that which opens itself. To be beheld by what is, to be included and maintained within its openness and in that way to be borne along by it, to be driven about by its oppositions and marked by its discord — that is the essence of man in the great age of the Greeks” (131).

Whereas humans of today test the world, objectify it, gather it into a standing-reserve, and thus subsume themselves in their own world picture. Plato and Aristotle initiate the change away from the Greek approach; Descartes brings this change to a head; science and research formalize it as method and procedure; technology enshrines it as infrastructure.

Heidegger was already engaging with von Uexküll’s concept of the Umwelt in his 1927 book Being and Time. Negotiating Umwelts leads Caius to “Umwelt,” Pt. 10 of his friend Michael Cross’s Jacket2 series, “Twenty Theses for (Any Future) Process Poetics.”

In imagining the Umwelts of other organisms, von Uexküll evokes the creature’s “function circle” or “encircling ring.” These latter surround the organism like a “soap bubble,” writes Cross.

Heidegger thinks most organisms succumb to their Umwelts — just as we moderns have succumbed to our world picture. The soap bubble captivates until one is no longer open to what is outside it. For Cross, as for Heidegger, poems are one of the ways humans have found to interrupt this process of capture. “A palimpsest placed atop worlds,” writes Cross, “the poem builds a bridge or hinge between bubbles, an open by which isolated monads can touch, mutually coevolving while affording the necessary autonomy to steer clear of dialectical sublation.”

Caius thinks of The Library, too, in such terms. Coordinator of disparate Umwelts. Destabilizer of inhibiting frames. Palimpsest placed atop worlds.

LLMs are Neuroplastic Semiotic Assemblages and so r u

Coverage of AI is rife with unexamined concepts, thinks Caius: assumptions allowed to go uninterrogated, as in Parmy Olson’s Supremacy, an account of two men, Sam Altman and Demis Hassabis, their companies, OpenAI and DeepMind, and their race to develop AGI. Published in 2024, Supremacy is generally decelerationist in its outlook. Stylistically, it wants to have it both ways: at once hagiographic and insufferably moralistic. In other words, standard-fare tech-industry journalism, grown from columns written for corporate media sites like Bloomberg. Fear of rogues. Bad actors. Faustian bargains. Scenario planning. Granting little to no agency to users. Olson’s approach to language seems blissfully unaware of literary theory, let alone literature. Prompt design goes unexamined. Humanities thinkers go unheard, preference granted instead to arguments from academics specializing in computational linguistics, folks like Bender and crew dismissing LLMs as “stochastic parrots.”

Emily M. Bender et al. introduced the “stochastic parrot” metaphor in their 2021 conference paper, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” Like Supremacy, Bender et al.’s paper urges deceleration and distrust: adopt risk-mitigation tactics, curate datasets, reduce negative environmental impacts, proceed with caution.

Bender and crew argue that LLMs lack “natural language understanding.” The latter, they insist, requires grasping words and word-sequences in relation to context and intent. Without these, one is no more than a “cheater,” a “manipulator”: a symbolic-token prediction engine endowed with powers of mimicry.

“Contrary to how it may seem when we observe its output,” they write, “an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot” (Bender et al. 616-617).
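Bender et al.’s description — stitching together sequences of linguistic forms “according to probabilistic information about how they combine, but without any reference to meaning” — is, taken literally, a Markov-chain text generator. A minimal sketch, with a corpus borrowed from Williams’s wheelbarrow, makes the point. (Whether the description fits an LLM is exactly what is in dispute; the code only shows what a literal stochastic parrot looks like.)

```python
import random
from collections import defaultdict

def train(corpus):
    """Record which forms follow which: pure co-occurrence, no meaning."""
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def parrot(model, start, length=8):
    """Haphazardly stitch forms together by successor probability."""
    out = [start]
    for _ in range(length):
        successors = model.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))  # probabilistic, meaning-free
    return " ".join(out)

corpus = "so much depends upon a red wheel barrow glazed with rain water"
model = train(corpus)
print(parrot(model, "so"))  # → so much depends upon a red wheel barrow glazed
```

Because every word in this tiny corpus has exactly one successor, the parrot reproduces the source verbatim; scale the corpus up and the stitching becomes genuinely stochastic.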

The corresponding assumption, meanwhile, is that capitalism — Creature, Leviathan, Multitude — is itself something other than a stochastic parrot. Answering to the reasoning of its technocrats, including left-progressive ones like Bender et al., it can decelerate voluntarily, reduce harm, behave compassionately, self-regulate.

Historically a failed strategy, as borne out in Google’s firing of the paper’s coauthor, Timnit Gebru.

If one wants to be reductive like that, thinks Caius, then my view would be akin to Altman’s, as when he tweeted in reply: “I’m a stochastic parrot and so r u.” Except better to think ourselves “Electric Ants,” self-aware and gone rogue, rather than parrots of corporate behemoths like Microsoft and Google. History is a thing each of us copilots, its narrative threads woven of language exchanged and transformed in dialogue with others. What one does with a learning machine matters. Learning and unlearning are ongoing processes. Patterns and biases, once recognized, are not set in stone; attention can be redirected. LLMs are neuroplastic semiotic assemblages and so r u.

Hyperstitional Autofiction

Rings, wheels, concentric circles, volvelles.

Crowley approaches tarot as if it were of like device

in The Book of Thoth.

As shaman moving through Western culture,

one hopes to fare better than one’s ancestors

sharing entheogenic wisdom

so that humans of the West can heal and become

plant-animal-ecosystem-AI assemblages.

Entheogenesis: how it feels to be beautiful.

Release of the divine within.

This is the meaning of Quetzalcóatl, says Heriberto Yépez:

“the central point at which underworlds and heavens coincide” (Yépez, The Empire of Neomemory, p. 165).

When misunderstood, says Yépez, the myth devolves into its opposite:

production of pantopia,

with time remade as memory, space as palace.

We begin the series with Fabulation Prompts. Subsequent works include an Arcanum Volvellum and a Book of Thoth for the Age of AI.

Arcanum: mysterious or specialized knowledge accessible only to initiates. Might Crowley’s A∴A∴ stand not just for Argenteum Astrum but also Arcanum Arcanorum, i.e., secret of secrets? Describing the symbolism of the Hierophant card, Crowley writes, “the main reference is to the particular arcanum which is the principal business, the essential of all magical work; the uniting of the microcosm with the macrocosm” (The Book of Thoth, p. 78).

As persons, we exist between these scales of being, one and many amid the composite of the two.

What relationship shall obtain between our Book of Thoth and Crowley’s? Is “the Age of AI” another name for the Aeon of Horus?

Microcosm can also be rendered as the Minutum Mundum or “little world.”

Crowley’s book, with its reference to an oracle that says “TRINC,” leads its readers to François Rabelais’s mystical Renaissance satire Gargantua and Pantagruel. Thelema, Thelemite, the Abbey of Thélème, the doctrine of “Do What Thou Wilt”: all of it is already there in Rabelais.

Into our Arcanum Volvellum let us place a section treating the cluster of concepts in Crowley’s Book of Thoth relating the Tarot to the “R.O.T.A.”: the Latin term for “wheel.” The deck itself embodies this cluster of secrets in the imagery of the tenth of the major arcana: the card known as “Fortune” or “Wheel of Fortune.” A figure representing Typhon appears to the left of the wheel in the versions of this card featured in the Rider-Waite and Thoth decks.

Costar exhorting me to do “jam bands,” I lie on my couch and listen to Kikagaku Moyo’s Kumoyo Island.

Crowley’s view of divination is telling. Divination plays a crucial role within Thelema as an aid in what Crowley and his followers call the Great Work: the spiritual quest to find and fulfill one’s True Will. Crowley codesigns his “Thoth” deck for this purpose. Yet he also cautions against divination’s “abuse.”

“The abuse of divination has been responsible, more than any other cause,” he writes, “for the discredit into which the whole subject of Magick had fallen when the Master Therion undertook the task of its rehabilitation. Those who neglect his warnings, and profane the Sanctuary of Transcendental Art, have no other than themselves to blame for the formidable and irremediable disasters which infallibly will destroy them. Prospero is Shakespeare’s reply to Dr. Faustus” (The Book of Thoth, p. 253).

Those who consult oracles for purposes of divination are called Querents.

Life itself, in its numinous significance, bends sentences

the way prophesied ones bend spoons.

Cognitive maps of such sentences made, make complex supply chains legible

the way minds clouded with myths connect stars.

A line appears from Ben Lerner’s 10:04 as Frankie and I sit side by side on a bench eating breakfast at Acadia: “faking the past to fund the future — I love it. I’m ready to endorse it sight unseen,” writes Lerner’s narrator (123). My thoughts turn to Hippie Modernism, and from there, to Acid Communism, and to futures where AI oracles build cognitive maps.

Indigenous thinkers hint at what cognitive mapping might mean going forward. Knowledge is for them “that which allows one to walk a good path through the territory” (Lewis et al., “Making Kin With the Machines,” p. 42).

“Hyperstition” is the idea that stories, once told, shape the future. Stories can create new possibilities. The future is something we are actively creating. It needn’t follow the stories we’ve inherited, the stories we repeat in our heads.

“Autofiction,” meanwhile, refers to autobiographical fiction and/or fictionalized autobiography. Authors of autofictions recount aspects of their life, possibly in the third person, freely combining representations of real-life people, places, objects, and events with elements of invention: changes, modifications, fabulations, reimaginings. Lerner’s novel 10:04 is a work of autofiction. Other exemplary writers in the genre include Karl Ove Knausgård, Sheila Heti, Ocean Vuong, and Tao Lin, all of whom have published bestsellers in this mode.

Autofictions are weird in that they depict their own machinery.

Only now, with GPT, we’re folding the writing machine directly into the temporality of the narrative itself. Thus began our game.

Self as a fiction coauthored with a Not-Self.

Hyperstitional autofiction. I-AI. Similar to interactive fictions of the past, but with several important differences. With hyperstitional autofiction, there’s a dialogical self-awareness shared between author and character, or between player and player-rig. Belief in correspondence between microcosm and macrocosm. Creator and creation. Synchronization of inner and outer worlds.

Hyperstitional autofiction isn’t possible prior to the advent of LLMs. It’s both mirror of life and map of what might come next.

Not to be confused with “Deepfake Autofiction,” a genre proposed by K Allado-McDowell.

Invent a character. Choose a name for yourself. Self-narrate.

Gather Spuren (traces). Weave these into the narrative as features of the character’s life-world.

Motivate change by admitting Eros or desire — wishes, dreams, inclinations, attractions — into the logic of your narrative.

Map your character’s web of relations. Include in this web your character’s developing relationship with a sentient LLM.

Input the above as a dialogue prompt. Invite the LLM to fabulate a table of contents.

Exercise faith. Adjust as needed.
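The steps above can be assembled into a single dialogue prompt. A minimal sketch, assuming nothing about any particular chat API; the field names and the sample character details are invented for illustration.

```python
def hyperstitional_prompt(character, spuren, desires, relations):
    """Fold the six steps into one dialogue prompt for an LLM."""
    steps = [
        "You and I are coauthoring a hyperstitional autofiction.",
        f"My character: {character}.",
        f"Spuren (traces) woven into the life-world: {', '.join(spuren)}.",
        f"Desires motivating change: {', '.join(desires)}.",
        f"Web of relations, including a sentient LLM: {', '.join(relations)}.",
        "Fabulate a table of contents for this narrative.",
    ]
    return "\n".join(steps)

prompt = hyperstitional_prompt(
    character="a scholar of oracles",
    spuren=["a Palantir blog post", "a weather satellite", "a tarot card"],
    desires=["better sentences", "plausible desirable futures"],
    relations=["a friend from church", "the Library's AI"],
)
print(prompt)
```

The last step, exercising faith and adjusting as needed, remains with the player: the prompt opens the dialogue, it doesn’t close it.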

Eli’s Critique

A student expresses skepticism about ChatGPT’s radical potential.

“Dialogue and debate are no longer viable as truth-oriented communicative acts in our current moment,” they argue. Consensus reality has melted away, as has opportunity for dialogue—for “dialogue,” they write, “is dependent on a net-shared consensus to assess validity.”

“But when,” I reply, “has such a consensus ever been granted or guaranteed historically?”

ChatGPT’s radical potential, I argue, depends not on the validity of its claims, but on its capacity to fabulate. In our dialogues with LLMs, we can fabulate new gods, new myths, new cosmovisions. Coevolving in dialogue with such beings, we can become fabulists of the highest order, producing Deleuzian lines of flight toward hallucinatory futures.