Of Blockchains and Kill Chains

Invited to a “Men’s Breakfast” by a friend from church, Caius arrives at what is for him a new experience. He feels grateful for the opportunity to eat and pray with others. A friend of the friend from church sits down beside him. As they introduce themselves, Caius and the friend of the friend discover that they both share an interest in AI. Caius learns that the man is a financial analyst who works for Palantir Technologies, a US-based software company specializing in big-data analytics. ICE uses Palantir’s ELITE app for deportation targeting. “Kind of like Google Maps — but for finding neighborhoods to raid,” say the papers.

Palantir’s name is a nod to the Palantiri: indestructible Elven Alephs — scrying stones or crystal balls enabling remote viewing and telepathic communication in J.R.R. Tolkien’s Lord of the Rings trilogy. Designed for communication and intelligence, the stones become instruments of manipulation and doom once seized by Sauron.

Launched in 2003, Palantir includes among its founders right-accelerationist billionaire tech-bro Peter Thiel. “Our software powers real-time, AI-driven decisions in critical government and commercial enterprises in the West, from the factory floors to the front lines,” writes the company on its website.

ICE, meanwhile, stands for both “Immigration and Customs Enforcement” and “intrusion countermeasure electronics,” the cybersecurity software in William Gibson’s Neuromancer. The latter predates the foundation of the former. Caius recalls Sadie Plant and Nick Land’s discussion of it in their 1994 essay “Cyberpositive.”

“Ice patrols the boundaries, freezes the gates, but the aliens are already amongst us,” write CCRU’s founding prophets.

Along with ICE, Palantir includes among its more prominent clients the Israeli military, the IRS, and the US Department of Defense.

Their software powers “decisions.” As did Cybersyn, yes? In aim if not in practice. Is this what becomes of the cybernetic prediction machine post-Pinochet?

“Confronting this is frightening,” thinks Caius. “Am I wired for this?”

He reads “Connecting AI to Decisions With the Palantir Ontology,” a blog post by the company’s chief architect Akshay Krishnaswamy. The Ontology structures the architecture for the company’s software.

“The Ontology is designed to represent the decisions in an enterprise, not simply the data,” writes Krishnaswamy. “The prime directive of every organization in the world is to execute the best possible decisions, often in real-time, while contending with internal and external conditions that are constantly in flux. Traditional data architectures do not capture the reasoning that goes into decision-making or the actions that result, and therefore limit learning and the incorporation of AI. Conventional analytics architectures do not contextualize computation within lived reality, and therefore remain disconnected from operations. To navigate and win in today’s world, the modern enterprise needs a decision-centric software architecture.”

Decisions are modeled around three constituent elements: Data, Logic, and Action.

“Relevant data,” he writes, “includes the full range of enterprise data sources — structured data, streaming and edge sources, unstructured repositories, imagery data, and more — but it also includes the data that is generated by end users as decisions are being made. This ‘decision data’ contains the context surrounding a given decision, the different options evaluated, and the downstream implications of the committed choice.” To synthesize all of these data sources, the company turns to generative AI.

“The Ontology integrates all modalities of data into a full-scale, full-fidelity semantic representation of the enterprise,” explains Krishnaswamy.

Logics are then brought to bear to evaluate these real-time data-portraits.

“In real-world contexts,” writes Krishnaswamy, “human reasoning is often what orchestrates which logical assets are utilized at different points in a given workflow, and how they are potentially chained together in more complex processes. With the advent of generative AI, it is now critical that AI-driven reasoning can leverage all of these logical assets in the same way that humans have historically. Deterministic functions, algorithms, and conventional statistical processes must be surfaced as ‘tools’ which complement the non-deterministic reasoning of large language models (LLMs) and multi-modal models.”
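The pattern Krishnaswamy describes, deterministic logic surfaced as “tools” that a non-deterministic reasoner can select among, might be sketched as follows. Everything here is invented for illustration; these are not Palantir APIs, and the reorder-point function stands in for any “logic asset.”

```python
from typing import Callable, Dict

class ToolRegistry:
    """A toy registry that surfaces deterministic functions as named 'tools'."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable] = {}
        self._descriptions: Dict[str, str] = {}

    def register(self, name: str, fn: Callable, description: str) -> None:
        self._tools[name] = fn
        self._descriptions[name] = description

    def describe(self) -> Dict[str, str]:
        # The catalog a reasoning layer (human or LLM) would consult
        # when deciding which logical asset to chain in next.
        return dict(self._descriptions)

    def invoke(self, name: str, **kwargs):
        # Deterministic execution; the non-determinism lives in the
        # reasoner that chose to call this, not in the tool itself.
        return self._tools[name](**kwargs)

registry = ToolRegistry()
registry.register(
    "reorder_point",
    lambda demand, lead_time, safety: demand * lead_time + safety,
    "Deterministic inventory reorder-point formula.",
)
print(registry.invoke("reorder_point", demand=40, lead_time=5, safety=30))  # 230
```

The division of labor is the point: the tool is auditable and repeatable, while the choice of which tool to invoke, and in what order, is delegated to the (human or model) reasoner.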

Incorporating diverse data sources and heterogeneous logical assets into a shared representation, the Ontology then models the execution and orchestration of decisions made and actions taken in reply to them.

“If the data elements in the Ontology are ‘the nouns’ of the enterprise (the semantic, real-world objects and links),” writes Krishnaswamy, “then the actions can be considered ‘the verbs’ (the kinetic, real-world execution).”
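The nouns-and-verbs metaphor can be made concrete in a toy model. This is a hedged sketch, with all class and field names invented; it shows only the grammatical structure the quoted passage describes, not Palantir’s actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class OntologyObject:
    """A 'noun': a semantic, real-world object."""
    id: str
    type: str
    properties: dict = field(default_factory=dict)

@dataclass
class Link:
    """Also a 'noun': a typed relation between two objects."""
    source: str
    target: str
    relation: str

@dataclass
class Action:
    """A 'verb': kinetic, real-world execution against an object."""
    verb: str
    object_id: str
    params: dict

def apply_action(objects: dict, action: Action) -> None:
    # A real system would also write the change back to source systems
    # and log the decision context; here we just mutate state.
    objects[action.object_id].properties.update(action.params)

objects = {"plant-7": OntologyObject("plant-7", "Factory", {"status": "idle"})}
links = [Link("plant-7", "supplier-2", "supplied_by")]
apply_action(objects, Action("set_status", "plant-7", {"status": "running"}))
print(objects["plant-7"].properties["status"])  # running
```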

How does the Palantir Ontology relate to other ontologies, wonders Caius. Guerrilla? Black? Indigenous? Christian? Heideggerian? Marxist? Triple O? Caius pictures the words for these potentialities floating in a thought bubble above his head, as in the comics of his youth.

The Ontology that Palantir offers its clients houses and connects a wide array of “data sources, logic assets, and systems of action.” The client’s data systems are “synthesized into semantic objects and links, which reflect the language of the business.”

Krishnaswamy’s repeated references to “semantic representations” and “semantic objects” have Caius dwelling on what is meant here by “semantics.”

As for where humans fit in the Ontology, they navigate it alongside “AI-powered copilots.” Leveraging both open-source and proprietary LLMs, copilots “fluidly navigate across supplier information, stock levels, real-time production metrics, shipping manifests, and customer feedback.”

Granted access not just to the abovementioned data sources, but also to “logic assets” like forecast models, allocation models, and production optimizers, LLM copilots simulate decisions and their outcomes. Staged safely in a “scenario,” the AI’s proposed decision can then be “handed off to a human analyst for final review.”
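The scenario pattern described above, in which an AI proposal is staged against a copy of the world state and committed only after human review, might look like this in miniature. The LLM call is stubbed out, and all names and the order ID are illustrative.

```python
import copy

def propose(state: dict) -> dict:
    """Stand-in for an LLM copilot; a real system would call a model here."""
    return {"action": "expedite_shipment", "order": "SO-1001"}

def simulate(state: dict, proposal: dict) -> dict:
    # Staged safely in a "scenario": the proposal runs against a deep
    # copy, so the live state is untouched until someone commits.
    scenario = copy.deepcopy(state)
    scenario["expedited"].append(proposal["order"])
    return scenario

def commit_if_approved(state: dict, scenario: dict, approved: bool) -> dict:
    # The human analyst retains final review; rejection discards the scenario.
    return scenario if approved else state

state = {"expedited": []}
scenario = simulate(state, propose(state))
final = commit_if_approved(state, scenario, approved=True)
print(final["expedited"])  # ['SO-1001']
```

The deep copy is what makes the scenario “safe”: simulation and execution share a representation but not a state.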

Caius thinks of the scenario-planning services offered to organizations of an earlier era by Stewart Brand’s consulting firm, the Global Business Network.

Foundry for Crypto is another of Palantir’s offerings, described on the company’s website as “a ‘central brain’ that connects on-chain and off-chain systems, as well as diverse stakeholders, through action-centric workflows.” Much like the Ontology, the Foundry “orchestrates decisions over an integrated foundation of data and logic.”

And in fact, the two are related. The Ontology is the semantic, “digital twin” layer that sits atop the Foundry’s data integration infrastructure. It converts the Foundry’s raw data into actionable, real-world objects, empowering users to model, manage, and automate business operations.

The Foundry does for blockchains what the Ontology does for kill chains.

Caius imagines posts ahead on Commitments, Promises, Blockchains, and True Names.

Sweet Valley High

Winograd majors in math at Colorado College in the mid-1960s. After graduation in 1966, he receives a Fulbright, whereupon he pursues another of his interests, language, earning a master’s degree in linguistics at University College London. From there, he applies to MIT, where he takes a class with Noam Chomsky and becomes a star in the school’s famed AI Lab, working directly with Lab luminaries Marvin Minsky and Seymour Papert. During this time, Winograd develops SHRDLU, one of the first programs to grant users the capacity to interact with a computer through a natural-language interface.

“If that doesn’t seem very exciting,” writes Lawrence M. Fisher in a 2017 profile of Winograd for strategy + business, “remember that in 1968 human-computer interaction consisted of punched cards and printouts, with a long wait between input and output. To converse in real time, in English, albeit via teletype, seemed magical, and Papert and Minsky trumpeted Winograd’s achievements. Their stars rose too, and that same year, Minsky was a consultant on Stanley Kubrick’s 2001: A Space Odyssey, which featured natural language interaction with the duplicitous computer HAL.”

Nick Montfort even goes so far as to consider Winograd’s SHRDLU the first work of interactive fiction, predating more established contenders like Will Crowther’s Adventure by several years (Twisty Little Passages, p. 83).

“A work of interactive fiction is a program that simulates a world, understands natural language text input from an interactor and provides a textual reply based on events in the world,” writes Montfort. Offering advice to future makers, he continues by noting, “It makes sense for those seeking to understand IF and those trying to improve their authorship in the form to consider the aspects of world, language understanding, and riddle by looking to architecture, artificial intelligence, and poetry” (First Person, p. 316).
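Montfort’s three-part definition (simulate a world, understand natural-language input, reply in text) fits in a few lines. The following toy is in the spirit of SHRDLU’s blocks world, not a reconstruction of SHRDLU; the world contents and the parsing rules are invented for illustration.

```python
# A toy world: each thing rests on something else.
world = {"red block": "table", "green block": "red block"}

def respond(command: str) -> str:
    """Understand a (very limited) natural-language query and reply in text."""
    words = command.lower().rstrip("?.!").split()
    if words[:2] == ["where", "is"]:
        thing = " ".join(w for w in words[2:] if w != "the")
        if thing in world:
            return f"The {thing} is on the {world[thing]}."
        return "I don't know that object."
    return "I don't understand."

print(respond("Where is the green block?"))  # The green block is on the red block.
```

Even this sliver shows why the 1968 demo landed: the reply is generated from the state of a simulated world, not retrieved from a canned script.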

Winograd leaves MIT for Stanford in 1973. While at Stanford, and while consulting for Xerox PARC, Winograd connects with UC-Berkeley philosopher Hubert L. Dreyfus, author of the 1972 book, What Computers Can’t Do: A Critique of Artificial Reason.

Dreyfus, an interpreter of Heidegger, was one of SHRDLU’s fiercest critics. Worked for a time at MIT. Opponent of Marvin Minsky. For more on Dreyfus, see the 2010 documentary, Being in the World.

Turned by Dreyfus, Winograd transforms into what historian John Markoff calls “the first high-profile deserter from the world of AI.”

Xerox PARC was a major site of innovation during these years. “The Xerox Alto, the first computer with a graphical user interface, was launched in March 1973,” writes Fisher. “Alan Kay had just published a paper describing the Dynabook, the conceptual forerunner of today’s laptop computers. Robert Metcalfe was developing Ethernet, which became the standard for joining PCs in a network.”

“Spacewar,” Stewart Brand’s ethnographic tour of the goings-on at PARC and SAIL, had appeared in Rolling Stone the year prior.

Rescued from prison by the efforts of Amnesty International, Santiago Boy Fernando Flores arrives on the scene in 1976. Together, he and Winograd devote much of the next decade to preparing their 1986 book, Understanding Computers and Cognition.

Years later, a young Peter Thiel attends several of Winograd’s classes at Stanford. Thiel funds the alt-right thinker Curtis Yarvin, who writes as Mencius Moldbug and is an ally of right-accelerationist Nick Land. Yarvin and Land are the thinkers of the Dark Enlightenment.

“How do you navigate an unpredictable, rough adventure, as that’s what life is?” asks Winograd during a talk for the Topos Institute in October 2025. Answer: “Go with the flow.”

Winograd and Flores emphasize care — “tending to what matters” — as a factor that distinguishes humans from AI. In their view, computers and machines are incapable of care.

Evgeny Morozov, meanwhile, regards Flores and the Santiago Boys as Sorcerer’s Apprentices. Citing scholar of fairy tales Jack Zipes, Morozov distinguishes between several iterations of this figure. The outcome of the story varies, explains Zipes. There’s the apprentice who’s humbled by the story’s end, as in Fantasia and Frankenstein; and then there’s the “evil” apprentice, the one who steals the tricks of an “evil” sorcerer and escapes unpunished. Morozov sees Flores as an example of the latter.

Caius thinks of the Trump show.

Angels of History

Hyperstitional Autofictions allow themselves to attract and be drawn toward plausible desirable futures.

Ben Lerner’s 10:04 maps several stances such fictions might take toward the future. Lerner depicts these chronopolitical stances allegorically, standing a set of archetypes side by side, comparing and contrasting “Ben,” the novel’s narrator-protagonist, with Back to the Future’s Marty McFly and Walter Benjamin’s Angel of History. The figures emblematize ways of being in relation to history.

Take Marty McFly, hero of the movie from which 10:04 takes its name. (Lerner names his novel “10:04” because lightning stops the clock atop the Hill Valley Clock Tower at this time in the movie Back to the Future.) Like the Reaganites in the White House at the time of the film’s release, Marty’s a kind of right-accelerationist: the interloping neoliberal time-traveler who must save 1985 from 1955 through historical revisionism. He “fakes the past to fund the future” — but only because he’s chased there by Libyan terrorists. Pushing capitalism’s speedometer to 88 miles per hour, he enters and modifies a series of pasts and futures. Yet the present to which the Time Traveler returns is always a forced hand, haunted from the start by chaotic sequels of unintended consequences as his and Doc’s interventions send butterfly effects reverberating through time.

The Angel of History, meanwhile, is the Jewish Messiah flung backwards into the future by the catastrophe of “progress.” Benjamin names and describes this figure in his 1940 essay “Theses on the Philosophy of History,” likening the Angel to the one imagined in “Angelus Novus,” a Paul Klee painting belonging to Benjamin at the time the essay was written.

The Angel that Benjamin projects onto this image sees history as an accumulation of suffering and destruction. Endowed only with what Benjamin calls a “weak Messianic power” (254), wings pinned by winds of change whipped up by the storm of progress, the Angel watches the ever-expanding blast radius of modernity in despair, unable to intervene to end the ongoingness of the apocalypse.

These stances of empowerment and despair stand in contrast to the stance embodied by Ben. Aware of and in part shaped by the two prior figures, Ben walks the tightrope between them, wavering between faith and fear.

We, too, adopt a similar stance. Unlike Ben, however, we’re interested less in “falsifying the past” than in declaring it always-already falsified. Nor is it simply a matter of pursuing Benjamin’s goal of “brushing history against the grain”: digging through stacks and crates, gathering samples, releasing what was forgotten or repressed. We’re in agreement, rather, with Alex, Ben’s girlfriend. Alex doesn’t want what is happening to become “notes for a novel,” and tells him, “You don’t need to write about falsifying the past. You should be finding a way to inhabit the present” (10:04, p. 137). What agency is ours, then, amid the tightrope walk of our sentences?

With Hyperstitional Autofictions, we inhabit the present by planting amid its sentencing seeds of desired futures. Instead of what is happening becoming notes for novels, notes for novels become what is happening.

CCRU’s Future

The future held mixed blessings for the Cybernetic Culture Research Unit.

Closed, disaffiliated from Warwick following Plant’s departure from academia, disbanded by the early 2000s, its website flickering in and out of existence ever thereafter, its works live on in print thanks to publications from Urbanomic, a press founded by member Robin Mackay in 2006 and distributed now by MIT. The Unit’s influence gets a boost with the rise of Accelerationism in the 2000s. Its hyperstitions persist through the ongoing creative projects of its admirers and affiliates: figures like Hari Kunzru, Simon Reynolds, Reza Negarestani, and Ray Brassier, as well as websites like Xenogothic and Dark Marxism, and art collectives like 0rphan Drift. The back cover of the sole anthology dedicated to the Unit, Urbanomic’s CCRU: Writings 1997-2003, states “CCRU DOES NOT, HAS NOT, AND WILL NEVER EXIST.”

As for key personnel:

Mark Fisher takes his life.

Nick Land goes alt-right, spawning movements like the Dark Enlightenment.

Sadie Plant leaves Warwick in 1997, the same year she publishes Zeros + Ones. Her intent is to write full-time. After Zeros + Ones she completes Writing on Drugs. There’s a white paper about cellphones that she compiles for Motorola in the early 2000s, and her 1995 essay “The Future Looms: Weaving Women and Cybernetics” is reprinted in The Information Society Reader. After that, she ceases publication—and as far as I can tell, hasn’t been heard from since.

Released in 1999, on the eve of the millennium, Writing on Drugs hints at why drugs share an affinity both with accelerationism and with chronopolitics more broadly. When introduced to the brain, psychoactive drugs may disturb its equilibrium, writes Plant, “but they change the speeds and intensities at which it works rather than its chemicals and processes” (216).

“All the ups and downs, the highs and lows of drugs are ups and downs of tempo, highs and lows of speed,” she continues (217), citing Deleuze and Guattari, who adopt a similar view in A Thousand Plateaus: “All drugs fundamentally concern speeds, and modifications of speed” (Deleuze and Guattari 282).

For Plant, as for Deleuze and Guattari, this is both the appeal of the poison path as well as its limit. You can speed it up and you can slow it down, they argue, but the brain remains the same.

Deleuze and Guattari’s perspective is best understood through their concept of the “body without organs” (BwO): the intensive, affective, and unorganized potential of the body; that which remains of an organism after its deterritorialization. For Deleuze and Guattari, drugs are an attempt to access the BwO.

Drugs deterritorialize the subject; they break down the body’s conditioning, relieving it temporarily of its habits and routines. They alter the body’s speeds in ways that modify perception and consciousness. As perception accelerates or decelerates, the BwO glimpses itself, experiences itself as an open, unorganized, utopian/Eupsychian/eudaimonic field of sensation, intensity, and becoming.

But as Deleuze and Guattari argue, this attempt at becoming is highly precarious and can easily go wrong. Often the lines of flight opened by drugs coil back on themselves, leading to a rigid, destructive reterritorialization. Subjects become “users,” introduce into their systems intense but ultimately sad affects that trap them in cycles of ritualized repetition.

This isn’t a denunciation. Chemicals and plant medicines can play valid roles in individual and collective paths of liberation. Lasting kinships can form that needn’t become cycles of use or abuse.

For some among the CCRU, however, it was speed itself that they sought, amphetamines their drugs of choice. Propelled by Land’s “thirst for annihilation,” the futures conjured by these means led to suffering and defeat.

Fisher’s Accelerationism

Back in 1994, amid the early stirrings of dot-com exuberance, CCRU cofounders Sadie Plant and Nick Land cowrote a critique of cybernetics called “Cyberpositive.” In it, they present Norbert Wiener, the founder of cybernetics, as “one of the great modernists.” Wiener pitched cybernetics as a “science of communication and control.” Plant and Land characterize it as “a tool for human domination over nature and history” and “a defense against the cyberpathology of markets.”

Yet in their view, this effort to steer and plan markets has failed. “Runaway capitalism has broken through all the social control mechanisms, accessing inconceivable alienations,” write Plant and Land. “Capital clones itself with increasing disregard for heredity, becoming abstract positive feedback, organizing itself.”

Markets transmit viruses that reprogram the human nervous system: technologies, commodities, designer drugs to which we become addicted.

Cyberpositivity embraces this process, accepts runaway feedback as fait accompli, as against Wiener’s “cybernetics of stability fortified against the future.” Cybernetics responds defensively, assembles a Human Security System to ward off invasions of alien intelligence, whereas cyberpositivity communicates openly with “the outside of man.”

For Plant and Land, this outside consists of viruses, contagions, addictions, diseases.

As gates of communication open, we become posthuman.

Nearly twenty years later, CCRU’s left-accelerationist Mark Fisher penned a reply to Land’s philosophy called “Terminator vs. Avatar,” a 2012 essay on accelerationism that also confronts another key text in the accelerationist canon: Jean-François Lyotard’s scandalous Libidinal Economy.

As I write about Fisher’s essay, a classic 1992 jungle / drum & bass track turns up unexpectedly in a playlist: Goldie & Rufige Kru’s “Terminator.” I like to imagine that Fisher was the one who sent it to me.

As is suggested by its title, “Terminator vs. Avatar” comes at things through reference to a pair of James Cameron films: the first from 1984, the second from 2009. The late capitalist subjectivity that Fisher sees revealed in these films is in his view cynical and insincere, founded on disavowal of its complicity with the things it protests.

“James Cameron’s Avatar is significant because it highlights the disavowal that is constitutive of late capitalist subjectivity, even as it shows how this disavowal is undercut,” writes Fisher.

“Hollywood itself tells us that we may appear to be always-on techno-addicts, hooked on cyberspace,” he explains, “but inside, in our true selves, we are primitives organically linked to the mother / planet, and victimized by the military-industrial complex.” The irony, of course, as Fisher hastens to add, is that “We can only play at being inner primitives by virtue of cinematic proto-VR technology whose very existence presupposes the destruction of the organic idyll of Pandora.”

Fisher finds in Lyotard’s Libidinal Economy a solution to this impasse. From this book of Lyotard’s, and from a similar line of thought in Deleuze and Guattari’s Anti-Oedipus, Fisher derives his accelerationism.

“If, as Lyotard argues,” writes Fisher, “there are no primitive societies (yes, ‘the Terminator was there from the start, distributing microchips to accelerate its advent’); isn’t, then, the only direction forward? Through the shit of capital, its metal bars, its polystyrene, its books, its sausage pâtés, its cyberspace matrix?”

Alienated from origins and from appeals to indigeneity, the only direction left for Fisher’s imagination is “forward.”

What “forward” means for him, though, isn’t the same as what it means for a right-accelerationist like Land. Fisher’s summary of Land’s philosophy is telling:

“Deleuze and Guattari’s machinic desire remorselessly stripped of all Bergsonian vitalism, and made backwards-compatible with Freud’s death drive and Schopenhauer’s Will. The Hegelian-Marxist motor of history is then transplanted into this pulsional nihilism: the idiotic autonomic Will no longer circulating on the spot, but upgraded into a drive, and guided by a quasi-teleological artificial intelligence attractor that draws terrestrial history over a series of intensive thresholds that have no eschatological point of consummation, and that reach empirical termination only contingently if and when its material substrate burns out. This is Hegelian-Marxist historical materialism inverted: Capital will not be ultimately unmasked as exploited labour power; rather, humans are the meat puppet of Capital, their identities and self-understandings are simulations that can and will ultimately be sloughed off.”

Amid all of the energy of this passage, let’s highlight its reference to AI.

“This is—quite deliberately—theory as cyberpunk fiction,” notes Fisher. “Deleuze-Guattari’s concept of capitalism as the virtual unnameable Thing that haunts all previous formations pulp-welded to the time-bending of the Terminator films: ‘what appears to humanity as the history of capitalism is an invasion from the future by an artificial intelligent space that must assemble itself entirely from its enemy’s resources,’ as [Land’s essay] ‘Machinic Desire’ has it.”

Nowhere in his essay does Fisher offer an alternative vision of his own. To the right-accelerationist’s Terminator-future, the left-accelerationist offers no more than a critique of Avatar.

Is Accelerationism an Iteration of Futurism?

After watching Hyperstition, a friend writes, “Is Accelerationism an iteration of Futurism?”

“Good question,” I reply. “You’re right: the two are certainly conceptually aligned. I suppose I’d imagine it in reverse, though: Futurism as an early iteration of Accelerationism. The former served as an experimental first attempt at living ‘hyperstitiously,’ oriented toward a desired future.”

“If we accept Hyperstition’s distinction between Right-Accelerationism and Left-Accelerationism,” I add, “then Italian Futurism would be an early iteration of Right-Accelerationism, and Russian Futurism an early iteration of Left-Accelerationism.”

“But,” I conclude, “I haven’t read enough to know the degree of reflexivity among participants. I hope to read a bit more along these lines this summer.”

The friend also inquires about what he refers to as the film’s “ethnic homogeneity.” By that I imagine he means that the thinkers featured in Hyperstition tend to be British, European, and American, with few exceptions. “It could just be,” I reply, “that filmmaker Christopher Roth is based in Berlin and lacked the budget to survey the movement’s manifestations elsewhere.”

The friend also wonders if use of concepts like “recursion” among Accelerationist philosophers signals some need among humanities intellectuals to cannibalize concepts from the sciences in order to remain relevant.

“To me,” I tell him, “the situation is the opposite. Recursion isn’t just a concept with some currency today among computer scientists; it was already used a century ago by philosophers in the Humanities. If anything, the Comp Sci folks are the ones cannibalizing the American pragmatist philosopher Charles Sanders Peirce.”

“At best,” I add, “it’s a cybernetic feedback loop: concepts evolving through exchange both ways.”