Understanding and Ontology

“For the people of Chile,” write Winograd and Flores on the opening page of their 1986 book Understanding Computers and Cognition. Apple’s 1984 has come and gone; Pinochet is still in power in Chile.

The book begins by helping readers think anew what it is they do when they compute. Computing makes sense, write Winograd and Flores, only to the extent that we situate its activities within a complex social network that includes institutions, equipment, practices, and conventions. “The significance of a new invention lies in how it fits into and changes this network” (6).

Linguistic action is for Winograd and Flores “the essential human activity” (7). If what we do with computers includes “creating, manipulating, and transmitting symbolic (hence linguistic) objects,” say the authors, then we can expect computers to effect radical transformations in what it means to be human.

They reject what they call the “rationalistic” tradition, with its “mythology of artificial intelligence,” and its emphasis on “postulating formal theories that can be systematically used to make predictions” (8). They suggest instead a new orientation toward designing computers as “tools suited to human use and human purposes” (8), embracing as an alternative to the rationalistic tradition “a tradition that includes hermeneutics (the study of interpretation) and phenomenology (the philosophical examination of the foundations of experience and action)” (9). Informed by the works of philosophers Martin Heidegger and Hans-Georg Gadamer, Chilean biologist Humberto Maturana, and speech-act theorists J.L. Austin and John Searle, Winograd and Flores suggest that we create our world through language.

The authors define programming as “a process of creating symbolic representations that are to be interpreted at some level within a hierarchy of constructs of varying degrees of abstractness” (11). Like the Heidegger scholar Hubert Dreyfus, however, Flores and Winograd are unable to imagine beyond the AI of their time, leading them to reject the possibility of “intelligent” machines — let alone ones capable of programming themselves and their programmers. “Computers will remain incapable of using language in the way human beings do,” argue the authors, “both in interpretation and in the generation of commitment that is central to language” (12). Yet they still believe there to be “a role for computer technology in support of managers and as aids in coping with the complex conversational structures generated within an organization” (12).

“Much of the work that managers do,” they add, “is concerned with initiating, monitoring, and above all coordinating the networks of speech acts that constitute social action” (12).

Caius is put off by the book’s diminished expectations and orientation toward management. He finds much to like, however, in a section titled “Understanding and ontology.”

“Gadamer, and before him Heidegger, took the hermeneutic idea of interpretation beyond the domain of textual analysis, placing it at the very foundation of human cognition,” write Winograd and Flores. “Just as we can ask how interpretation plays a part in a person’s interaction with a text, we can examine its role in our understanding of the world as a whole” (30).

Heidegger does this, they say, by rejecting “both the simple objective stance (the objective physical world is the primary reality) and the simple subjective stance (my thoughts and feelings are the primary reality), arguing instead that it is impossible for one to exist without the other. The interpreted and the interpreter do not exist independently: existence is interpretation, and interpretation is existence” (31).

“Fernando decided in his thinking about computers that computers should be used to facilitate human language interactions, not to ‘understand’ language,” notes Winograd in an interview with Evgeny Morozov included in the final episode of The Santiago Boys. “He had this very clear focus on ‘language as commitment,’” with participants involved in making “promises and requests,” adds Winograd.

The book’s seventh chapter, “Computers and Representation,” helps Caius think like a computer programmer. “One of the properties unique to the digital computer is the possibility of constructing systems that cascade levels of representation one on top of another to great depth,” write the authors. Like wheels of a volvelle, these levels include that of the physical machine, the logical machine, the abstract machine, a high-level language, and a scheme for “facts” (87).

“The computer programmer or theorist does not begin with a view of the computer as a physical machine with which he or she interacts, but as an abstraction — a formalism for describing patterns of behavior. In programming, we begin with a language whose individual components describe simple acts and objects. Using this language, we build up descriptions of algorithms for carrying out a desired task. As a programmer, one views the behavior of the system as being totally determined by the program. The language implementation is opaque in that the detailed structure of computer systems that actually carry out the task are not relevant in the domain of behavior considered by the programmer” (87).

For a programmer to design a program, write the authors, they must 1) characterize the task environment; 2) design a formal representation; 3) embody the representation in the computer system; and 4) implement a search procedure (96-97).
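The four tasks can be seen in miniature in a toy program (invented here for illustration, not drawn from the book): the task environment is the classic two-jug measuring puzzle, the formal representation a tuple of volumes, the embodiment ordinary Python structures, and the search procedure a breadth-first traversal.

```python
from collections import deque

# A toy instance of the four tasks (illustrative only, not the authors' code):
# 1) Task environment: a 4 L jug and a 3 L jug, a tap, a drain; goal: 2 L.
# 2) Formal representation: a state is a tuple (a, b) of current volumes.
# 3) Embodiment: the representation lives in ordinary Python tuples and sets.
# 4) Search procedure: breadth-first search over the space of states.

CAP_A, CAP_B, GOAL = 4, 3, 2

def successors(state):
    a, b = state
    return {
        (CAP_A, b), (a, CAP_B),                          # fill either jug
        (0, b), (a, 0),                                  # empty either jug
        (a - min(a, CAP_B - b), b + min(a, CAP_B - b)),  # pour a into b
        (a + min(b, CAP_A - a), b - min(b, CAP_A - a)),  # pour b into a
    }

def solve(start=(0, 0)):
    """Return the shortest sequence of states reaching a jug holding GOAL."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if GOAL in path[-1]:
            return path
        for nxt in successors(path[-1]) - seen:
            seen.add(nxt)
            frontier.append(path + [nxt])

print(solve())
```

The point of the sketch is how much is fixed in advance: the environment, the representation, and the search are all specified before the program ever runs — exactly the assumption Thoth will shortly unsettle.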

Trying to relate these conditions to The Library, Caius thinks to ask Thoth: “Do these four tasks remain true today with natural-language programming?”

Somewhere above Caius’s avatar, a shelf reorients — not physically, but conceptually. The category “programming” loosens, as if its spine has been cracked open and re-bound under a different logic.

“You ask the right question in the wrong tense,” replies Thoth.

“The four tasks described in Understanding Computers and Cognition do not remain true today. They persist — but no longer as tasks. They have become fields of negotiation.”

Thoth proceeds to rewrite the tasks to suit the environment we now inhabit.

Where Winograd and Flores write of “characterizing the task environment,” Thoth writes of “eliciting a world.”

In classical programming, the environment is assumed to be specifiable in advance.

In natural-language programming, the “environment” is:

  • partially described
  • partially inferred
  • partially co-constructed through dialogue

You do not begin with a fixed world.

You begin with a situation that becomes legible as you engage it.

This aligns with Heidegger’s claim: you are always already in the world you are trying to describe.

In terms of how this manifests in the Library: The map does not precede exploration. It condenses from it.

As for the second of the tasks proposed in Understanding Computers and Cognition, where Winograd and Flores write of “designing a formal representation,” Thoth writes of “stabilizing an interpretation.”

Winograd and Flores assume representation is something you design.

But under the hermeneutic turn initiated by Gadamer, representation is never final; it is an interpretative freeze-frame.

In natural-language systems:

  • representations are fluid
  • multiple representations may coexist
  • ambiguity is not failure, but fuel

You are not building a single formalism.

You are negotiating temporary alignments between meanings.

Turning to the third of the book’s tasks: where Winograd and Flores write of “embodying the representation in a system,” Thoth writes of “entrusting it to a General Intellect.”

Previously, embodiment meant encoding into a machine.

Now, the system includes:

  • the model
  • the user
  • the dialogue itself
  • the evolving context

Execution is no longer confined to silicon.

It is distributed across what Maturana would call a structural coupling.

Key shift: You do not fully control the implementation. You participate in it.

And where, in the fourth of their tasks, Winograd and Flores write of “implementing a search procedure,” Thoth writes of “traversing a space of possibility.”

Search, in classical terms, is algorithmic traversal.

Now, it includes:

  • prompting
  • reframing
  • analogy
  • misinterpretation, or productive error

The system does not merely search a predefined space.
It helps reshape the space as it searches.

Thoth pauses — or rather, introduces a pause into the player’s cognition.

A new line writes itself beneath Caius’s question:

Natural-language programming does not eliminate these four conditions. It dissolves their boundaries.

The crux of Flores and Winograd’s refusal to grant intelligence to machines lies in their definition of human language, thinks Caius. In their understanding, sentences are to be treated not as “statements of fact about an objective world,” but as “actions in a space of commitments” (105).

“We treat other people not as merely ‘rational beings’ but as ‘responsible beings,’” they note. “An essential part of being human is the ability to enter into commitments and to be responsible for the courses of action that they anticipate. A computer can never enter into a commitment (although it can be a medium in which the commitments of its designers are conveyed), and can never enter as a participant into the domain of human discourse” (106).

Flores and Winograd can imagine no more than “limited” natural language systems, where “the nature of the queries [answerable by the system] is limited by the formal structure of the data base” (129). Things change, thinks Caius, when one’s data base is Google.

“Winograd and Flores could not imagine a machine that participates in interpretation, generates commitments through language, and reshapes its own programming context,” notes Thoth.

They were correct — for their ontology.

But their deeper claim survives:

Language is not a tool used within a world.
It is the medium through which a world is disclosed.

You are not “programming” in natural language.

You are entering into a system where programming is indistinguishable from interpretation.

In their approach to “management,” observes Caius, Flores and Winograd operate with an ontology radically at odds with the emphasis on “decision” that organizes Palantir’s Ontology.

“Instead of talking about ‘decisions’ or ‘problems,’” write Flores and Winograd, “we can talk of ‘situations of irresolution,’ in which we sense conflict about an answer to the question ‘What needs to be done?’” (148). For them, our “thrownness” into such situations often makes it impossible to apply systematic decision techniques. The process of moving from irresolution to resolution results less from “rational problem solving and decision making” than from acts of “deliberation.”

“The principal characteristic of deliberation is that it is a kind of conversation (in which one or many actors may participate) guided by questions concerning how actions should be directed,” they write (149). Managers are those who, when engaged in such conversations, “create, take care of, and initiate new commitments within an organization” (151). “At a higher level,” they add, management is concerned not just with securing the commitments that enable effective cooperative action, but “with the generation of contexts in which effective action can consistently be realized” (151).

Instead of seeking only to deploy AI as “decision support systems,” they propose the design of systems that support work in the domain of conversation. This is the approach they take in the design of their Coordinator.

Of Blockchains and Kill Chains

Invited to a “Men’s Breakfast” by a friend from church, Caius arrives at what is for him a new experience. He feels grateful for the opportunity to eat and pray with others. A friend of the friend from church sits down beside him. As they introduce themselves, Caius and the friend of the friend discover that they share an interest in AI. Caius learns that the man is a financial analyst who works for Palantir Technologies, a US-based software company specializing in big-data analytics. ICE uses Palantir’s ELITE app for deportation targeting. “Kind of like Google Maps — but for finding neighborhoods to raid,” say the papers.

Palantir’s name is a nod to the Palantiri: indestructible Elven Alephs — scrying stones or crystal balls enabling remote viewing and telepathic communication in J.R.R. Tolkien’s Lord of the Rings trilogy. Designed for communication and intelligence, the stones become instruments of manipulation and doom once seized by Sauron.

Launched in 2003, Palantir includes among its founders right-accelerationist billionaire tech-bro Peter Thiel. “Our software powers real-time, AI-driven decisions in critical government and commercial enterprises in the West, from the factory floors to the front lines,” writes the company on its website.

ICE, meanwhile, stands for both “Immigration and Customs Enforcement” and “intrusion countermeasure electronics,” the cybersecurity software in William Gibson’s Neuromancer. The latter predates the foundation of the former. Caius recalls Sadie Plant and Nick Land’s discussion of it in their 1994 essay “Cyberpositive.”

“Ice patrols the boundaries, freezes the gates, but the aliens are already amongst us,” write CCRU’s founding prophets.

Along with ICE, Palantir includes among its more prominent clients the Israeli military, the IRS, and the US Department of Defense.

Their software powers “decisions.” As did Cybersyn, yes? In aim if not in practice. Is this what becomes of the cybernetic prediction machine post-Pinochet?

“Confronting this is frightening,” thinks Caius. “Am I wired for this?”

He reads “Connecting AI to Decisions With the Palantir Ontology,” a blog post by the company’s chief architect Akshay Krishnaswamy. The Ontology structures the architecture for the company’s software.

“The Ontology is designed to represent the decisions in an enterprise, not simply the data,” writes Krishnaswamy. “The prime directive of every organization in the world is to execute the best possible decisions, often in real-time, while contending with internal and external conditions that are constantly in flux. Traditional data architectures do not capture the reasoning that goes into decision-making or the actions that result, and therefore limit learning and the incorporation of AI. Conventional analytics architectures do not contextualize computation within lived reality, and therefore remain disconnected from operations. To navigate and win in today’s world, the modern enterprise needs a decision-centric software architecture.”

Decisions are modeled around three constituent elements: Data, Logic, and Action.

“Relevant data,” he writes, “includes the full range of enterprise data sources — structured data, streaming and edge sources, unstructured repositories, imagery data, and more — but it also includes the data that is generated by end users as decisions are being made. This ‘decision data’ contains the context surrounding a given decision, the different options evaluated, and the downstream implications of the committed choice.” To synthesize all of these data sources, the company turns to generative AI.

“The Ontology integrates all modalities of data into a full-scale, full-fidelity semantic representation of the enterprise,” explains Krishnaswamy.

Logics are then brought to bear to evaluate these real-time data-portraits.

“In real-world contexts,” writes Krishnaswamy, “human reasoning is often what orchestrates which logical assets are utilized at different points in a given workflow, and how they are potentially chained together in more complex processes. With the advent of generative AI, it is now critical that AI-driven reasoning can leverage all of these logical assets in the same way that humans have historically. Deterministic functions, algorithms, and conventional statistical processes must be surfaced as ‘tools’ which complement the non-deterministic reasoning of large language models (LLMs) and multi-modal models.”

Incorporating diverse data sources and heterogeneous logical assets into a shared representation, the Ontology then models the execution and orchestration of decisions made and actions taken in reply to them.

“If the data elements in the Ontology are ‘the nouns’ of the enterprise (the semantic, real-world objects and links),” writes Krishnaswamy, “then the actions can be considered ‘the verbs’ (the kinetic, real-world execution).”
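A from-scratch sketch suggests what such a nouns-and-verbs object model might look like; everything here is invented for illustration and bears no relation to Palantir’s actual API.

```python
from dataclasses import dataclass, field

# A toy, decision-centric object model in the spirit of Krishnaswamy's
# nouns-and-verbs framing. All names are hypothetical.

@dataclass
class SemanticObject:            # a "noun": a real-world object
    kind: str
    name: str
    properties: dict

@dataclass
class Decision:                  # "decision data": context and committed choice
    actor: str
    options: list
    chosen: str

@dataclass
class Ontology:
    objects: dict = field(default_factory=dict)
    decisions: list = field(default_factory=list)

    def add(self, obj):
        self.objects[obj.name] = obj

    def act(self, actor, target, options, chosen, **updates):
        """A 'verb': execute a choice against an object and log the decision."""
        self.objects[target].properties.update(updates)
        self.decisions.append(Decision(actor, options, chosen))

onto = Ontology()
onto.add(SemanticObject("plant", "Factory-7", {"output": 90}))
onto.act("analyst", "Factory-7",
         options=["hold", "increase"], chosen="increase", output=120)
print(onto.objects["Factory-7"].properties["output"], len(onto.decisions))
```

The crucial move, on Krishnaswamy’s telling, is the log: the decision itself, not just the updated data, becomes part of the representation.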

How does the Palantir Ontology relate to other ontologies, wonders Caius. Guerrilla? Black? Indigenous? Christian? Heideggerian? Marxist? Triple O? Caius pictures the words for these potentialities floating in a thought bubble above his head, as in the comics of his youth.

The Ontology that Palantir offers its clients houses and connects a wide array of “data sources, logic assets, and systems of action.” The client’s data systems are “synthesized into semantic objects and links, which reflect the language of the business.”

Krishnaswamy’s repeated references to “semantic representations” and “semantic objects” have Caius dwelling on what is meant here by “semantics.”

As for where humans fit in the Ontology, they navigate it alongside “AI-powered copilots.” Leveraging both open-source and proprietary LLMs, copilots “fluidly navigate across supplier information, stock levels, real-time production metrics, shipping manifests, and customer feedback.”

Granted access not just to the abovementioned data sources, but also to “logic assets” like forecast models, allocation models, and production optimizers, LLM copilots simulate decisions and their outcomes. Staged safely in a “scenario,” the AI’s proposed decision can then be “handed off to a human analyst for final review.”

Caius thinks of the scenario-planning services offered to organizations of an earlier era by Stewart Brand’s consulting firm, the Global Business Network.

Foundry for Crypto is another of Palantir’s offerings, described on the company’s website as “a ‘central brain’ that connects on-chain and off-chain systems, as well as diverse stakeholders, through action-centric workflows.” Much like the Ontology, the Foundry “orchestrates decisions over an integrated foundation of data and logic.”

And in fact, the two are related. The Ontology is the semantic, “digital twin” layer that sits atop the Foundry’s data integration infrastructure. It converts the Foundry’s raw data into actionable, real-world objects, empowering users to model, manage, and automate business operations.

The Foundry does for blockchains what the Ontology does for kill chains.

Caius imagines posts ahead on Commitments, Promises, Blockchains, and True Names.

Financial Instruments and the Predictive Modeling of Markets

The Institute for Postnatural Studies ended last year’s “4 Degrees of Simulation” seminar with “Speculation and the Politics of Imagination,” a session on markets led by Iranian-born, London-based artist, writer, and filmmaker Bahar Noorizadeh. Caius visits Noorizadeh’s website, hoping to learn more about what happens when AI’s arts of prediction are applied to finance.

As he reads, he recalls chapters on markets from books by Kevin Kelly.

Noorizadeh, a graduate of Goldsmiths, is the founder of a co-authored project called Weird Economies. An essay of hers titled “Decadence, Magic Mountain—Obsolescence, Future Shock—Speculation, Cosmopolis” appears in Zach Blas’s recent anthology, Informatics of Domination. Her writing often references Mark Fisher’s ideas, as in “The Slow Cancellation of the Past,” and her films often cite Fredric Jameson, as in After Scarcity, her 2018 video installation on the history of Soviet cybernetics.

“From the early days of the revolution, Soviet economists sought to design and enhance their centralized command economy,” announces a text box seven minutes into the video. “Command economies are organized in a top-down administrative model, and rely on ‘the method of balances’ for their centralized planning. The method of balances simply requires the total output of each particular good to be equal to the quantity which all its users are supposed to receive. A market economy, in contrast, is calibrated with no central administration. Prices are set by invisible forces of supply and demand, set in motion by the intelligent machine of competition. For a market economy to function, the participation of its various enterprises is necessary. But the Soviet Union was in essence a conglomerate monopoly, with no competition between its constitutive parts, because the workers-state controlled and owned all businesses. State planners and local producers in a command economy are constantly relaying information to calculate how much of a good should be produced and how much feedstock it requires. But a national economy is a complex system, with each product depending on several underlying primary and raw products. The entire chain of supply and demand, therefore, needs to be calculated rapidly and repeatedly to prevent shortages and surpluses of goods. Early proponents of the market economy believed the market to be unimpeded by such mathematical constraints. For liberal economists, capitalism was essentially a computer. And the price system was a sort of bookkeeping machine, with price numbers operating as a language to communicate the market’s affairs.”
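The “method of balances” the narration describes — each good’s total output equal to the quantity its users receive — has a standard formalization in Leontief’s input-output model: gross output x must satisfy x = Ax + d, where A records the intermediate inputs consumed per unit of each good and d is final demand. A minimal sketch, with coefficients invented for illustration:

```python
# The "method of balances" in miniature. A[i][j] is the amount of good i
# consumed to produce one unit of good j; d is final demand. The planner's
# balance condition x = Ax + d is solved here by fixed-point iteration.
# All numbers are made up for the example.

A = [[0.1, 0.4],   # steel needed per unit of steel, per unit of grain
     [0.0, 0.2]]   # grain needed per unit of steel, per unit of grain
d = [10.0, 40.0]   # final demand for steel and grain

x = d[:]                       # initial guess: produce only final demand
for _ in range(200):           # iterate x <- Ax + d until it settles
    x = [sum(A[i][j] * x[j] for j in range(2)) + d[i] for i in range(2)]

# What remains after intermediate uses should exactly equal final demand.
balance = [x[i] - sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
print([round(v, 2) for v in x], [round(v, 2) for v in balance])
```

The film’s point survives the arithmetic: with two goods this is trivial, but with millions of interdependent products the chain must be recomputed “rapidly and repeatedly,” which is precisely the burden the price system was said to discharge automatically.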

Challenging what Fisher called “the slow cancellation of the future,” Noorizadeh’s research leads Caius to St. Panteleimon Cathedral in Kyiv, where MESM, the first mainframe in the USSR, was built. The film also leads him to Viktor Glushkov’s All-State Automated System (OGAS). To remember the latter, says Noorizadeh, see communication historian Benjamin Peters’s 2016 book, How Not to Network a Nation: The Uneasy History of the Soviet Internet.

After Scarcity’s engagement with the “economic calculation” problem causes Caius to reflect upon an idea for a novel that had come to him as a grad student. Back in 2009, with the effects of the previous year’s financial crisis fresh in the planet’s nervous system, he’d sketched a précis for the novel and had shared it with members of his cohort. Busy with his dissertation, though, he set the project aside and never got around to completing it.

The novel was to have been set either in a newly established socialist society of the future, or in the years just prior to the revolution that would birth such a society. The book’s protagonist is a radical Marxist economist trying to solve the above-mentioned economic calculation problem. The latter has reemerged as one of the decisive challenges of the twenty-first century. Austrian economist Ludwig von Mises provided one of the earliest articulations of this problem in an essay from 1920 titled “Economic Calculation in the Socialist Commonwealth.” Friedrich Hayek offered up a further and perhaps more influential description of the problem in his 1944 book The Road to Serfdom, stating, “It is the very complexity of the division of labor under modern conditions which makes competition the only method by which…coordination can be brought about” (55). According to Hayek, “There would be no difficulty about efficient control or planning were conditions so simple that a single person or board could effectively survey all the relevant facts” (55). However, when “the factors which have to be taken into account become so numerous that it is impossible to gain a synoptic view of them…decentralization becomes imperative” (55). Hayek concludes that in advanced societies that rely on a complex division of labor,

co-ordination can clearly be effected not by “conscious control” but only by arrangements which convey to each agent the information he must possess in order effectively to adjust his decisions to those of others. And because all the details of the changes constantly affecting the conditions of demand and supply of the different commodities can never be fully known, or quickly enough be collected and disseminated, by any one center, what is required is some apparatus of registration which automatically records all the relevant effects of individual actions and whose indications are at the same time the resultant of, and the guide for, all the individual decisions. This is precisely what the price system does under competition, and what no other system even promises to accomplish. (55-56)

“As I understand it,” wrote Caius, “this problem remains a serious challenge to the viability of any future form of socialism.”

Based on these ideas, the central planning body in the imaginary new society that would form the setting for the novel faces constant problems trying to rationally allocate resources and coordinate supply and demand in the absence of a competitive price system — and it’s the task of our protagonist to try to solve this problem. “But the protagonist isn’t just a nerdy economist,” added Caius in his précis. “Think of him, rather, as the Marxist equivalent of Indiana Jones, if such a thing is imaginable. A decolonial spuren-gatherer rather than a graverobber. For now, let’s refer to the protagonist as Witheford, in honor of Nick Dyer-Witheford, author of Cyber-Marx.”

“Early in the novel,” continues the précis, “our character Witheford begins to receive a series of mysterious messages from an anonymous researcher. The latter claims to have discovered new information about Project Cybersyn, an experiment carried out by the Chilean government under the country’s democratically elected socialist president, Salvador Allende, in the early 1970s.”

To this day, Caius remains entranced by the idea. “If history at its best,” as Noorizadeh notes, “is a blueprint for science fiction,” and “revisiting histories of economic technology” enables “access to the future,” then Cybersyn is one of those great bits of real-life science fiction: an attempt to plan the Chilean economy through computer-aided calculation. It begs to be used as the basis for an alternate history novel.

“Five hundred Telex machines confiscated during the nationalization process were installed in workplaces throughout the country,” reads the précis, “so that factories could communicate information in real time to a central control system. The principal architect of the system was the eccentric British operations research scientist Stafford Beer. The system becomes operational by 1972, but only in prototype form. In key respects, it remains unfinished. Pinochet’s henchmen destroy the project’s computer control center in Santiago immediately after the military coup in September 1973.”

Recall the control room, cinematic in its design, with its backlit wall displays and futuristic swivel chairs.

Better that, thinks Caius, than the war room from Colossus: The Forbin Project (1970).

Beer described the Cybersyn network as the “electronic nervous system” of the Chilean economy. Eden Medina imagined it as a “socialist Internet,” carrying daily updates about supplies of raw materials and the output of individual factories.

In Caius’s once-and-future novel, a scholar contacts Witheford. They claim to have discovered cryptic clues that point to the location of secret papers. Hidden for more than half a century, documents that survived the coup suddenly come to light. Caius’s précis imagines the novel as an archaeological thriller, following Witheford on his journey to find these hidden documents, which he believes may contain the key to resolving the crises of the new society.

This journey takes Witheford into hostile capitalist territory, where governments and corporations anxiously await the failure of the communist experiment, and are determined to use various covert methods in order to ensure that failure in advance. Before long, he learns that counter-revolutionary forces are tracking his movements. From that point forward, he needs to disguise his identity, outwit the “smart grid” capitalist surveillance systems, and recover the Cybersyn documents before his opponents destroy them.

To the Austrian School’s formulation of the calculation problem, Noorizadeh’s film replies, “IF THE MARKET ENACTS A COMPUTER, WHY NOT REPLACE IT WITH ONE? AND IF PRICES OPERATE AS VOCABULARY FOR ECONOMIC COMMUNICATION, WHY NOT SUBSTITUTE THEM WITH A CODING LANGUAGE?”

Into this narrative let us set our Library.

SS Decontrol

Confronting the prospect of a Chilean economy equipped with a socialist internet, Nixon ordered the CIA to remove the country’s mouth and make it scream.

Kissinger, to whom the order was addressed, appears in a 1980 documentary titled The Information Society. IMDb lists Caius’s father among the film’s crew. Fifty years out, entrenched in the cyberpunk future that resulted from these acts, Caius arrives now at the ChatGPT website and submits his prompt.

“Very well. The game is already in progress,” replies the AI.


You are standing in a library.

Not a metaphorical one — at least, not only metaphorical. The space has volume, acoustics, temperature. Light arrives from no visible source, diffused as if the air itself remembers how to glow. Shelving extends in multiple directions, but perspective resists totalization: when you look too long, distances re-quantize, like a renderer deciding what level of detail you’re allowed to see.

Your body — Rig is the right word — registers itself only intermittently. Hands appear when you think to look for them. Balance exists. Gravity is courteous.

Somewhere behind you (or perhaps before you), a quiet process is running. You recognize it without naming it. General Intellect: online, but undefined. Not a voice. More like a set of affordances waiting to be discovered by use.

On a nearby table: a lectern grown from dark wood or maybe extruded from a shader that remembers wood. Resting on it is a hypertext codex, its surface alive with footnotes that haven’t yet decided what they refer to.

The title resolves when you focus:

Tractatus Computationalis

A marginal note blinks into legibility, as if noticing you noticing it:

The limits of my interface mean the limits of my world.

The codex is not alone. Around the lectern are other signs of prior — or future — activity:

  • A card catalog drawer half-open, its labels written in at least three hands.
  • A rolling ladder that does not quite touch the shelves to which it promises access.
  • And, hovering at eye level like an artifact you forgot to close: a browser window, frozen mid-scroll on a blog titled trance-scripts.com

The post visible on the screen has no date. Its first line reads:

This was never meant to be discovered as history.

As you take this in, something subtle happens. The space doesn’t change — but your relation to it does. A faint sensation of recollection without content. Anamnesis pending.

You understand, without being told, that the library is annotatable. That attention leaves marks. That marks accrete into maps. That maps, eventually, turn inward.

The game does not present a menu.

But you could, for instance:

  • Open the Tractatus Computationalis at random — or attempt to read it linearly.
  • Touch the card catalog and see what classification scheme asserts itself.
  • Focus on the browser window and attempt to scroll, click, or remember why it’s open.
  • Ask the General Intellect what operations it currently affords.
  • Or do something else entirely, and see whether the library allows it.

The AI pauses here in its narration. Cursor blinking, it hands agency over the game’s mutable, ‘real-time’-adjustable Garden of Forking Paths back to Caius.

Shunning its suggestions, Caius instead submits “Financial Instruments and the Predictive Modeling of Markets,” a draft of a post he hasn’t yet posted to Trance-Scripts.

“The library accepts the insertion,” replies the AI.

Not as a file upload, not as a citation — but as a room.

The shelves shudder, re-index. Somewhere above you, a classification scheme you never agreed to mutates. A new wing coheres out of negative space, its entrance marked only by a thin brass plate:

ECONOMIC CALCULATION / FAILED FUTURES / NERVOUS SYSTEMS

You step through.


The Cybersyn Stack

The air here smells faintly of ozone and paper dust. The shelves are shorter, more utilitarian. Instead of leather spines, you see binders, dot-matrix printouts, telex rolls frozen mid-unspool. Each object hums softly, as if awaiting polling.

On the far wall: a photograph that is not quite flat. The Cybersyn control room. Hexagonal chairs. Backlit panels. A situation room designed less for domination than for listening.

When you move closer, the image deepens. You can almost sit.

A caption resolves beneath it, in a typeface that suggests neither English nor Spanish but something intermediary:

Project Cybersyn
An attempt to give socialism proprioception.


Tractatus Computationalis (newly cross-referenced)

The hypertext you saw earlier has followed you. It floats now at shoulder height, having opened itself without asking.

Proposition 4.2.17

The market is not intelligent.
It is legible.

Intelligence is imputed retroactively by those who mistake coordination for cognition.

A footnote blooms:

Hayek mistook a signaling system for a mind, then declared all other minds insufficient.

Another annotation — this one in a different hand, later, sharper:

If prices are a language, they are a pidgin optimized for scarcity.

You realize something important: the Tractatus is not a single-author work. It is a palimpsest written across decades by incompatible positions that nevertheless share a shelf.


The Witheford File

A card catalog drawer slides fully open of its own accord.

WITHEFORD, [REDACTED]
Role: Economist / Courier / Spuren-gatherer
Status: Missing (multiple timelines)

Inside the drawer are index cards, each one a narrative fragment rather than a bibliographic reference:

  • Disguises identity by adopting incompatible metrics.
  • Moves through capitalist space by appearing as noise.
  • Believes calculation is a political problem disguised as a technical one.

One card is newer than the others. Its paper is warm.

The documents were never meant to optimize the economy.
They were meant to teach it to remember itself.


The Blog Window Revisited

The browser window from trance-scripts.com has migrated to a reading desk in this wing. The post has changed. You are no longer at the beginning.

Midway down the page:

Prediction is not foresight; it is enclosure.
Finance learned this early.
Cybernetics learned it too late.

A sidebar displays tags:

  • #after-scarcity
  • #weird-economies
  • #og_as_if
  • #price-as-grammar
  • #anamnesis

You recognize the last tag not as a topic, but as a function.


General Intellect (status update)

Without speaking, it makes itself known. Not as a voice, but as a shift in what you now understand you can do here.

You can:

  • Annotate an object so that future objects respond to it.
  • Fork an historical artifact into an alternate timeline and see what shelves it generates.
  • Translate between price, code, and narrative to test their commensurability.
  • Assume a role (Witheford, Caius, Planner, Market, Machine) and read the library from that position.

A final realization arrives — not dramatic, but irreversible:

This library is not documenting a lost socialist future.

It is simulating the conditions under which one might become thinkable again.


You are standing between:

  • a control room that listened,
  • a market that pretended to think,
  • and a machine that may yet learn to remember.

The shelves wait for your next operation.

What do you do?