Understanding and Ontology

“For the people of Chile,” write Winograd and Flores on the opening page of their 1986 book Understanding Computers and Cognition. Apple’s 1984 come and gone, Pinochet still in power in Chile.

The book begins by helping readers think anew about what they do when they compute. Computing makes sense, write Winograd and Flores, only to the extent that we situate its activities within a complex social network that includes institutions, equipment, practices, and conventions. “The significance of a new invention lies in how it fits into and changes this network” (6).

Linguistic action is for Winograd and Flores “the essential human activity” (7). If what we do with computers includes “creating, manipulating, and transmitting symbolic (hence linguistic) objects,” say the authors, then we can expect computers to effect radical transformations in what it means to be human.

They reject what they call the “rationalistic” tradition, with its “mythology of artificial intelligence,” and its emphasis on “postulating formal theories that can be systematically used to make predictions” (8). They suggest instead a new orientation toward designing computers as “tools suited to human use and human purposes” (8), embracing as an alternative to the rationalistic tradition “a tradition that includes hermeneutics (the study of interpretation) and phenomenology (the philosophical examination of the foundations of experience and action)” (9). Informed by the works of philosophers Martin Heidegger and Hans-Georg Gadamer, Chilean biologist Humberto Maturana, and speech-act theorists J.L. Austin and John Searle, Winograd and Flores suggest that we create our world through language.

The authors define programming as “a process of creating symbolic representations that are to be interpreted at some level within a hierarchy of constructs of varying degrees of abstractness” (11). Like the Heidegger scholar Hubert Dreyfus, however, Flores and Winograd are unable to imagine beyond the AI of their time, leading them to reject the possibility of “intelligent” machines — let alone ones capable of programming themselves and their programmers. “Computers will remain incapable of using language in the way human beings do,” argue the authors, “both in interpretation and in the generation of commitment that is central to language” (12). Yet they still believe there to be “a role for computer technology in support of managers and as aids in coping with the complex conversational structures generated within an organization” (12).

“Much of the work that managers do,” they add, “is concerned with initiating, monitoring, and above all coordinating the networks of speech acts that constitute social action” (12).

Caius is put off by the book’s diminished expectations and orientation toward management. He finds much to like, however, in a section titled “Understanding and ontology.”

“Gadamer, and before him Heidegger, took the hermeneutic idea of interpretation beyond the domain of textual analysis, placing it at the very foundation of human cognition,” write Winograd and Flores. “Just as we can ask how interpretation plays a part in a person’s interaction with a text, we can examine its role in our understanding of the world as a whole” (30).

Heidegger does this, they say, by rejecting “both the simple objective stance (the objective physical world is the primary reality) and the simple subjective stance (my thoughts and feelings are the primary reality), arguing instead that it is impossible for one to exist without the other. The interpreted and the interpreter do not exist independently: existence is interpretation, and interpretation is existence” (31).

“Fernando decided in his thinking about computers that computers should be used to facilitate human language interactions, not to ‘understand’ language,” notes Winograd in an interview with Evgeny Morozov included in the final episode of The Santiago Boys. “He had this very clear focus on ‘language as commitment,’” with participants involved in making “promises and requests,” adds Winograd.

The book’s seventh chapter, “Computers and Representation,” helps Caius think like a computer programmer. “One of the properties unique to the digital computer is the possibility of constructing systems that cascade levels of representation one on top of another to great depth,” write the authors. Like wheels of a volvelle, these levels include that of the physical machine, the logical machine, the abstract machine, a high-level language, and a scheme for “facts” (87).

“The computer programmer or theorist does not begin with a view of the computer as a physical machine with which he or she interacts, but as an abstraction — a formalism for describing patterns of behavior. In programming, we begin with a language whose individual components describe simple acts and objects. Using this language, we build up descriptions of algorithms for carrying out a desired task. As a programmer, one views the behavior of the system as being totally determined by the program. The language implementation is opaque in that the detailed structure of computer systems that actually carry out the task are not relevant in the domain of behavior considered by the programmer” (87).

For a programmer to design a program, write the authors, they must 1) characterize the task environment; 2) design a formal representation; 3) embody the representation in the computer system; and 4) implement a search procedure (96-97).
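
Caius sketches the four steps for himself in a miniature blocks world, an homage to SHRDLU. The code is his own illustration, not the authors’:

```python
# A toy rendering of the four classical design tasks from
# Understanding Computers and Cognition. Illustrative only.
from collections import deque

# 1) Characterize the task environment: a table holding three stacks
#    of named blocks.
start = (("a",), ("b", "c"), ())   # block c sits on block b
goal  = (("a", "b", "c"), (), ())  # one tower: a under b under c

# 2) Design a formal representation: a state is a tuple of stacks,
#    each stack a tuple of blocks listed bottom-up (as above).

# 3) Embody the representation in the system: legal moves are
#    functions over states.
def moves(state):
    """Yield every state reachable by moving one topmost block."""
    for i, src in enumerate(state):
        if not src:
            continue
        for j, dst in enumerate(state):
            if i != j:
                nxt = list(state)
                nxt[i] = src[:-1]
                nxt[j] = dst + (src[-1],)
                yield tuple(nxt)

# 4) Implement a search procedure: breadth-first search from start
#    to goal, returning the sequence of states along the way.
def search(start, goal):
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path + [state]
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

plan = search(start, goal)  # a short plan exists for this tiny world
```

The sketch works only because the task environment can be fully specified in advance — precisely the assumption that gives way once programming happens in natural language.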

Trying to relate these conditions to The Library, Caius thinks to ask Thoth: “Do these four tasks remain true today with natural-language programming?”

Somewhere above Caius’s avatar, a shelf reorients — not physically, but conceptually. The category “programming” loosens, as if its spine has been cracked open and re-bound under a different logic.

“You ask the right question in the wrong tense,” replies Thoth.

“The four tasks described in Understanding Computers and Cognition do not remain true today. They persist — but no longer as tasks. They have become fields of negotiation.”

Thoth proceeds to rewrite the tasks to suit the environment we now inhabit.

Where Winograd and Flores write of “characterizing the task environment,” Thoth writes of “eliciting a world.”

In classical programming, the environment is assumed to be specifiable in advance.

In natural-language programming, the “environment” is:

  • partially described
  • partially inferred
  • partially co-constructed through dialogue

You do not begin with a fixed world.

You begin with a situation that becomes legible as you engage it.

This aligns with Heidegger’s claim: you are always already in the world you are trying to describe.

In terms of how this manifests in the Library: The map does not precede exploration. It condenses from it.

As for the second of the tasks proposed in Understanding Computers and Cognition, where Winograd and Flores write of “designing a formal representation,” Thoth writes of “stabilizing an interpretation.”

Winograd and Flores assume representation is something you design.

But under the hermeneutic turn initiated by Gadamer, representation is never final; it is an interpretative freeze-frame.

In natural-language systems:

  • representations are fluid
  • multiple representations may coexist
  • ambiguity is not failure, but fuel

You are not building a single formalism.

You are negotiating temporary alignments between meanings.

Turning to the third of the book’s tasks: where Winograd and Flores write of “embodying the representation in a system,” Thoth writes of “entrusting it to a General Intellect.”

Previously, embodiment meant encoding into a machine.

Now, the system includes:

  • the model
  • the user
  • the dialogue itself
  • the evolving context

Execution is no longer confined to silicon.

It is distributed across what Maturana would call a structural coupling.

Key shift: You do not fully control the implementation. You participate in it.

And where, in the fourth of their tasks, Winograd and Flores write of “implementing a search procedure,” Thoth writes of “traversing a space of possibility.”

Search, in classical terms, is algorithmic traversal.

Now, it includes:

  • prompting
  • reframing
  • analogy
  • misinterpretation, or productive error

The system does not merely search a predefined space.
It helps reshape the space as it searches.

Thoth pauses — or rather, introduces a pause into the player’s cognition.

A new line writes itself beneath Caius’s question:

Natural-language programming does not eliminate these four conditions. It dissolves their boundaries.

The crux of Flores and Winograd’s refusal to grant intelligence to machines lies in their definition of human language, thinks Caius. In their understanding, sentences are to be treated not as “statements of fact about an objective world,” but as “actions in a space of commitments” (105).

“We treat other people not as merely ‘rational beings’ but as ‘responsible beings,’” they note. “An essential part of being human is the ability to enter into commitments and to be responsible for the courses of action that they anticipate. A computer can never enter into a commitment (although it can be a medium in which the commitments of its designers are conveyed), and can never enter as a participant into the domain of human discourse” (106).

Flores and Winograd can imagine no more than “limited” natural language systems, where “the nature of the queries [answerable by the system] is limited by the formal structure of the data base” (129). Things change, thinks Caius, when one’s data base is Google.

“Winograd and Flores could not imagine a machine that participates in interpretation, generates commitments through language, and reshapes its own programming context,” notes Thoth.

They were correct — for their ontology.

But their deeper claim survives:

Language is not a tool used within a world.
It is the medium through which a world is disclosed.

You are not “programming” in natural language.

You are entering into a system where programming is indistinguishable from interpretation.

In their approach to “management,” observes Caius, Flores and Winograd operate with an ontology radically at odds with the emphasis on “decision” that organizes Palantir’s Ontology.

“Instead of talking about ‘decisions’ or ‘problems,’” write Flores and Winograd, “we can talk of ‘situations of irresolution,’ in which we sense conflict about an answer to the question ‘What needs to be done?’” (148). For them, our “thrownness” into such situations often makes it impossible to apply systematic decision techniques. The process of moving from irresolution to resolution results less from “rational problem solving and decision making” than from acts of “deliberation.”

“The principal characteristic of deliberation is that it is a kind of conversation (in which one or many actors may participate) guided by questions concerning how actions should be directed,” they write (149). Managers are those who, when engaged in such conversations, “create, take care of, and initiate new commitments within an organization” (151). “At a higher level,” they add, management is concerned not just with securing the commitments that enable effective cooperative action, but “with the generation of contexts in which effective action can consistently be realized” (151).

Instead of seeking only to deploy AI as “decision support systems,” they propose the design of systems that support work in the domain of conversation. This is the approach they take in the design of their Coordinator.

Sweet Valley High

Winograd majors in math at Colorado College in the mid-1960s. After graduating in 1966, he receives a Fulbright and pursues another of his interests, language, earning a master’s degree in linguistics at University College London. From there, he applies to MIT, where he takes a class with Noam Chomsky and becomes a star in the school’s famed AI Lab, working directly with Lab luminaries Marvin Minsky and Seymour Papert. During this time, Winograd develops SHRDLU, one of the first programs to let users interact with a computer through a natural-language interface.

“If that doesn’t seem very exciting,” writes Lawrence M. Fisher in a 2017 profile of Winograd for strategy + business, “remember that in 1968 human-computer interaction consisted of punched cards and printouts, with a long wait between input and output. To converse in real time, in English, albeit via teletype, seemed magical, and Papert and Minsky trumpeted Winograd’s achievements. Their stars rose too, and that same year, Minsky was a consultant on Stanley Kubrick’s 2001: A Space Odyssey, which featured natural language interaction with the duplicitous computer HAL.”

Nick Montfort even goes so far as to consider Winograd’s SHRDLU the first work of interactive fiction, predating more established contenders like Will Crowther’s Adventure by several years (Twisty Little Passages, p. 83).

“A work of interactive fiction is a program that simulates a world, understands natural language text input from an interactor and provides a textual reply based on events in the world,” writes Montfort. Offering advice to future makers, he continues by noting, “It makes sense for those seeking to understand IF and those trying to improve their authorship in the form to consider the aspects of world, language understanding, and riddle by looking to architecture, artificial intelligence, and poetry” (First Person, p. 316).

Winograd leaves MIT for Stanford in 1973. While at Stanford, and while consulting for Xerox PARC, Winograd connects with UC-Berkeley philosopher Hubert L. Dreyfus, author of the 1972 book What Computers Can’t Do: A Critique of Artificial Reason.

Dreyfus, an influential interpreter of Heidegger, was one of SHRDLU’s fiercest critics. Worked for a time at MIT. Opponent of Marvin Minsky. For more on Dreyfus, see the 2010 documentary, Being in the World.

Turned by Dreyfus, Winograd transforms into what historian John Markoff calls “the first high-profile deserter from the world of AI.”

Xerox PARC was a major site of innovation during these years. “The Xerox Alto, the first computer with a graphical user interface, was launched in March 1973,” writes Fisher. “Alan Kay had just published a paper describing the Dynabook, the conceptual forerunner of today’s laptop computers. Robert Metcalfe was developing Ethernet, which became the standard for joining PCs in a network.”

“Spacewar,” Stewart Brand’s ethnographic tour of the goings-on at PARC and SAIL, had appeared in Rolling Stone the year prior.

Rescued from prison by the efforts of Amnesty International, Santiago Boy Fernando Flores arrives on the scene in 1976. Together, he and Winograd devote much of the next decade to preparing their 1986 book, Understanding Computers and Cognition.

Years later, a young Peter Thiel attends several of Winograd’s classes at Stanford. Thiel funds the alt-right thinker Curtis Yarvin, who writes as Mencius Moldbug and allies with right-accelerationist Nick Land. Yarvin and Land are the thinkers of the Dark Enlightenment.

“How do you navigate an unpredictable, rough adventure, as that’s what life is?” asks Winograd during a talk for the Topos Institute in October 2025. Answer: “Go with the flow.”

Winograd and Flores emphasize care — “tending to what matters” — as a factor that distinguishes humans from AI. In their view, computers and machines are incapable of care.

Evgeny Morozov, meanwhile, regards Flores and the Santiago Boys as Sorcerer’s Apprentices. Citing the fairy-tale scholar Jack Zipes, Morozov distinguishes between several iterations of this figure. The outcome of the story varies, explains Zipes: there’s the apprentice who’s humbled by the story’s end, as in Fantasia and Frankenstein; and then there’s the “evil” apprentice, the one who steals the tricks of an “evil” sorcerer and escapes unpunished. Morozov sees Flores as an example of the latter.

Caius thinks of the Trump show.

The SBs: Stewart Brand and Stafford Beer

Caius revisits “Both Sides of the Necessary Paradox,” an interview with Gregory Bateson included as the first half of Stewart Brand’s 1974 book II Cybernetic Frontiers. The book’s second half reprints “Spacewar: Fanatic Life and Symbolic Death Among the Computer Bums,” the influential essay on videogames that Jann Wenner commissioned Brand to write for Rolling Stone two years prior.

“I came into cybernetics from preoccupation with biology, world-saving, and mysticism,” writes Brand. “What I found missing was any clear conceptual bonding of cybernetic whole-systems thinking with religious whole-systems thinking. Three years of scanning innumerable books for the Whole Earth Catalog didn’t turn it up,” he adds. “Neither did considerable perusing of the two literatures and taking thought. All I did was increase my conviction that systemic intellectual clarity and moral clarity must reconvene, mingle some notion of what the hell consciousness is and is for, and evoke a shareable self-enhancing ethic of what is sacred, what is right for life” (9).

Yet in the summer of 1972, says Brand, a book arrives that begins to fill this gap: Bateson’s Steps to an Ecology of Mind.

Brand brings his knack for New Journalism to the task of interviewing Bateson for Harper’s.

The dialogue between the two reads at many times like one of Bateson’s “metalogues.” An early jag of thought jumps amid pathology, conquest, and the Tao. Reminded of pioneer MIT cybernetician Warren McCulloch’s fascination with “intransitive preference,” Bateson wanders off “rummaging through his library looking for Blake’s illustration of Job affrighted with visions” (20).

Caius is reminded of Norbert Wiener’s reflections on the Book of Job in his 1964 book God and Golem, Inc. For all of these authors, cybernetic situations cast light on religious situations and vice versa.

Caius wonders, too, about the relationship between Bateson’s “double bind” theory of schizophrenia and the theory pursued by Deleuze and Guattari in Capitalism and Schizophrenia.

“Double bind is the term used by Gregory Bateson to describe the simultaneous transmission of two kinds of messages, one of which contradicts the other, as for example the father who says to his son: go ahead, criticize me, but strongly hints that all effective criticism — at least a certain type of criticism — will be very unwelcome. Bateson sees in this phenomenon a particularly schizophrenizing situation,” note Deleuze and Guattari in Anti-Oedipus. They depart from Bateson only in thinking this situation the rule under capitalism rather than the exception. “It seems to us that the double bind, the double impasse,” they write, “is instead a common situation, oedipalizing par excellence. […]. In short, the ‘double bind’ is none other than the whole of Oedipus” (79-80).

God’s response to Job is of this sort.

Brand appends to the transcript of his 1972 interview with Bateson an epilog written in December 1973, three months after the coup in Chile.

Bateson had direct, documented ties to US intelligence. Stationed in China, India, Ceylon, Burma, and Thailand during WWII, he produced “mixed psychological and anthropological intelligence” for the Office of Strategic Services (OSS), precursor to the CIA. Research indicates he maintained connections with CIA-affiliated research networks in the postwar years, participating in LSD studies linked to the MKUltra program in the 1950s. Afterwards he regretted his association with the Agency and its methods.

Asked by Brand about his “psychedelic pedigree,” Bateson replies, “I got Allen Ginsberg his first LSD” (28). A bad trip, notes Caius, resulting in Ginsberg’s poem “Lysergic Acid.” Bateson himself was “turned on to acid by Dr. Harold Abramson, one of the CIA’s chief LSD specialists,” report Martin A. Lee & Bruce Shlain in their book Acid Dreams. Caius wonders if Stafford Beer underwent some similar transformation.

As for Beer, he serves in the British military in India during WWII, and for much of his adult life drives a Rolls-Royce. But then, at the invitation of the Allende regime, Beer travels to Chile and builds Cybersyn. After the coup, he lives in a remote cottage in Wales.

What of him? Cybernetic socialist? Power-centralizing technocrat?

Recognizes workers themselves as the ones best suited to modeling their own places of work.

“What were the features of Beer’s Liberty Machine?” wonders Caius.

Brand’s life, too, includes a stint of military service. Drafted after graduating from Stanford, he serves two years with the US Army, first as an infantryman and then as a photographer. Stationed at Fort Dix in New Jersey, Brand becomes involved in the New York art world of those years, and he parts ways with the military as soon as the opportunity arises. After his discharge in 1962, Brand participates in some of Allan Kaprow’s “happenings” and, between 1963 and 1966, works as a photographer and technician for USCO.

Amid his travels between East and West coasts during these years, Brand joins up with Ken Kesey and the Merry Pranksters.

Due to these apprenticeships with the Pranksters and with USCO, Brand arrives early to the nexus formed by the coupling of psychedelics and cybernetics.

“Strobe lights, light projectors, tape decks, stereo speakers, slide sorters — for USCO, the products of technocratic industry served as handy tools for transforming their viewers’ collective mind-set,” writes historian Fred Turner in his 2006 book From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. “So did psychedelic drugs. Marijuana and peyote and, later, LSD, offered members of USCO, including Brand, a chance to engage in a mystical experience of togetherness” (Turner 49).

Brand takes acid around the time of his discharge from the military in 1962, when he participates in a legal LSD study overseen by James Fadiman at the International Foundation for Advanced Study in Menlo Park. But he notes that he first met Bateson “briefly in 1960 at the VA Hospital in Palo Alto, California” (II Cybernetic Frontiers, p. 12). Caius finds this curious, and wonders what that meeting entailed. 1960 is also the year when, at the VA Hospital in Menlo Park, Ken Kesey volunteers in the CIA-sponsored drug trials involving LSD that inspire his 1962 novel One Flew Over the Cuckoo’s Nest.

Bateson worked for the VA while developing his double bind theory of schizophrenia.

Before that, he’d been married to fellow anthropologist Margaret Mead. He’d also participated in the Macy Conferences, as discussed by N. Katherine Hayles in her book How We Became Posthuman.

Crows screeching in the trees have Caius thinking of condors. He sits, warm, in his sunroom on a cold day, roads lined with snow from a prior day’s storm, thinking about Operation Condor. Described by Morozov as Cybersyn’s “evil twin.” Palantir. Dark Enlightenment. Peter Thiel.

Listening to one of the final episodes of Morozov’s podcast, Caius learns of Brian Eno’s love of Beer’s Brain of the Firm. Bowie and Eno are among Beer’s most famous fans. Caius remembers Eno’s subsequent work with Brand’s consulting firm, the Global Business Network (GBN).

Santiago Boy Fernando Flores is the one who reaches out to Beer, inviting him to head Cybersyn. Given Flores’s status as Allende’s Minister of Finance at the time of the coup, Pinochet’s forces torture him and place him in a prison camp. He remains there for three years. Upon his release, he moves to the Bay Area.

Once in Silicon Valley, Flores works in the computer science department at Stanford. He also obtains a PhD at UC Berkeley, completing a thesis titled Management and Communication in the Office of the Future under the guidance of philosophers Hubert Dreyfus and John Searle.

Flores collaborates during these years with fellow Stanford computer scientist Terry Winograd. The two of them coauthor an influential 1986 book called Understanding Computers and Cognition: A New Foundation for Design. Although they make a bad wager, insisting that computers will never understand natural language (an insistence proven wrong with time), they nevertheless offer refreshing critiques of some of the common assumptions about AI governing research of that era. Drawing upon phenomenology, speech act theory, and Heideggerian philosophy, they redefine computers not as mere symbol manipulators nor as number-crunchers, but as tools for communication and coordination.

Flores builds a program called the Coordinator. Receives flak for “software fascism.”

Winograd’s students include Google cofounders Larry Page and Sergey Brin.