Beside the White Chickens

Caius reads about “4 Degrees of Simulation,” a practice-led seminar hosted last year by the Institute for Postnatural Studies in Madrid. Of the seminar’s three sessions, the one that most intrigues him was led by guest speaker Lucia Rebolino and focused on prediction and uncertainty as they pertain to climate modeling. Desiring to learn more, Caius tracks down “Unpredictable Atmosphere,” an essay of Rebolino’s published by e-flux.

The essay begins by describing the process whereby meteorological research organizations like the US National Weather Service monitor storms that develop in the Atlantic basin during hurricane season. These organizations employ climate models to predict the paths and potential impacts of storms in advance of landfall.

“So much depends on our ability to forecast the weather — and, when catastrophe strikes, on our ability to respond quickly,” notes Rebolino. Caius hears in her sentence the opening lines of William Carlos Williams’s poem “The Red Wheelbarrow.” “So much depends on our ability to forecast the weather,” he mutters. “But the language we use to model these forecasts depends on sentences cast by poets.”

“How do we cast better sentences?” wonders Caius.

In seeking to feel into the judgement implied by “better,” he notes his wariness of bettering as “improvement,” as deployed in self-improvement literature and as deployed by capitalism: its implied separation from the present, its scarcity mindset, its perception of lack — and in the improvers’ attempts to “fix” this situation, their exercising of nature as instrument, their use of these instruments for gentrifying, extractive, self-expansive movement through the territory.

In its ceaseless movement, and thus its failure to satisfy itself, the improvement narrative leads to predictive utterances and their projection onto others.

And yet, here I am definitely wanting “better” for myself and others, thinks Caius. Better sentences. Ones on which plausible desirable futures depend.

So how do we better our bettering?

Caius returns to Rebolino’s essay on the models used to predict the weather. This process of modeling, she writes, “consists of a blend of certainty — provided by sophisticated mathematical models and existing technologies — and uncertainty — which is inherent in the dynamic nature of atmospheric systems.”

January 6th again: headlines busy with Trump’s recent abduction of Maduro. A former student who works as a project manager at Google reaches out to Caius, recommending Ajay Agrawal, Joshua Gans, and Avi Goldfarb’s book Prediction Machines: The Simple Economics of Artificial Intelligence. Google adds to this recommendation Gans’s follow-up, Power and Prediction.

Costar chimes in with its advice for the day: “Make decisions based on what would be more interesting to write about.”

To model the weather, weather satellites measure the vibration of water vapor molecules in the atmosphere. “Nearly 99% of weather observation data that supercomputers receive today come from satellites, with about 90% of these observations being assimilated into computer weather models using complex algorithms,” writes Rebolino. Water vapor molecules resonate at a specific band of frequencies along the electromagnetic spectrum. Within the imagined “finite space” of this spectrum, these invisible vibrations are thought to exist within what Rebolino calls the “greenfield.” Equipped with microwave sensors, satellites “listen” for these vibrations.

“Atmospheric water vapor is a key variable in determining the formation of clouds, precipitation, and atmospheric instability, among many other things,” writes Rebolino.

She depicts 5G telecommunications infrastructures as a threat to our capacity to predict the operation of these variables in advance. “A 5G station transmitting at nearly the same frequency as water vapor can be mistaken for actual moisture, leading to confusion and the misinterpretation of weather patterns,” she argues. “This interference is particularly concerning in high-band 5G frequencies, where signals closely overlap with those used for water vapor detection.”
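Rebolino’s interference worry can be made concrete with a toy check. A minimal sketch, assuming the (real) 23.8 GHz water-vapor absorption line that microwave sounders listen for; the guard-band margin and the helper `risks_interference` are illustrative inventions, not drawn from the essay or from any regulation.

```python
# Illustrative sketch (not from Rebolino's essay): flag transmission
# frequencies that sit near the 23.8 GHz water-vapor absorption line
# sensed by weather satellites. The line frequency is real; the guard
# band is an invented margin for illustration, not a regulatory value.

WATER_VAPOR_LINE_GHZ = 23.8
GUARD_BAND_GHZ = 0.3  # hypothetical margin

def risks_interference(tx_ghz: float) -> bool:
    """True if a transmitter sits within the guard band of the line."""
    return abs(tx_ghz - WATER_VAPOR_LINE_GHZ) <= GUARD_BAND_GHZ

print(risks_interference(24.0))   # a signal close to the sensing line
print(risks_interference(28.0))   # a higher 5G band, well clear of it
```

A satellite’s sensor cannot run such a check: it simply hears energy in the band, which is why a 5G signal “can be mistaken for actual moisture.”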

Prediction and uncertainty as qualities of finite and infinite games, finite and infinite worlds.

For lunch, Caius eats a plate of chicken and mushrooms he reheats in his microwave.

God Human Animal Machine

Wired columnist Meghan O’Gieblyn discusses Norbert Wiener’s God and Golem, Inc. in her 2021 book God Human Animal Machine, suggesting that the god humans are creating with AI is a god “we’ve chosen to raise…from the dead”: “the God of Calvin and Luther” (O’Gieblyn 212).

“Reminds me of AM, the AI god from Harlan Ellison’s ‘I Have No Mouth, and I Must Scream,’” thinks Caius. AM resembles the god that allows Satan to afflict Job in the Old Testament. And indeed, as O’Gieblyn attests, John Calvin adored the Book of Job. “He once gave 159 consecutive sermons on the book,” she writes, “preaching every day for a period of six months — a paean to God’s absolute sovereignty” (197).

She cites “Pedro Domingos, one of the leading experts in machine learning, who has argued that these algorithms will inevitably evolve into a unified system of perfect understanding — a kind of oracle that we can consult about virtually anything” (211-212). See Domingos’s book The Master Algorithm.

The main thing, for O’Gieblyn, is the disenchantment/reenchantment debate, which she comes to via Max Weber. In this debate, she aligns not with Heidegger, but with his student Hannah Arendt. Domingos dismisses fears about algorithmic determinism, she says, “by appealing to our enchanted past” (212).

Amid this enchanted past lies the figure of the Golem.

“Who are these rabbis who told tales of golems — and in some accounts, operated golems themselves?” wonders Caius.

The entry on the Golem in Man, Myth, and Magic tracks the story back to “the circle of Jewish mystics of the 12th-13th centuries known as the ‘Hasidim of Germany.’” The idea is transmitted through texts like the Sefer Yetzirah (“The Book of Creation”) and the Cabala Mineralis. Tales tell of golems built in later centuries, too, by figures like Rabbi Elijah of Chelm (c. 1520-1583) and Rabbi Loew of Prague (c. 1524-1609).

The myth of the golem turns up in O’Gieblyn’s book during her discussion of a 2004 book by German theologian Anne Foerst called God in the Machine.

“At one point in her book,” writes O’Gieblyn, “Foerst relays an anecdote she heard at MIT […]. The story goes back to the 1960s, when the AI Lab was overseen by the famous roboticist Marvin Minsky, a period now considered the ‘cradle of AI.’ One day two graduate students, Gerry Sussman and Joel Moses, were chatting during a break with a handful of other students. Someone mentioned offhandedly that the first big computer which had been constructed in Israel, had been called Golem. This led to a general discussion of the golem stories, and Sussman proceeded to tell his colleagues that he was a descendent of Rabbi Löw, and at his bar mitzvah his grandfather had taken him aside and told him the rhyme that would awaken the golem at the end of time. At this, Moses, awestruck, revealed that he too was a descendent of Rabbi Löw and had also been given the magical incantation at his bar mitzvah by his grandfather. The two men agreed to write out the incantation separately on pieces of paper, and when they showed them to each other, the formula — despite being passed down for centuries as a purely oral tradition — was identical” (God Human Animal Machine, p. 105).

Curiosity piqued by all of this, but especially by the mention of Israel’s decision to call one of its first computers “GOLEM,” Caius resolves to dig deeper. He soon learns that the computer’s name was chosen by none other than Walter Benjamin’s dear friend (indeed, the one who, after Benjamin’s suicide, inherits the latter’s print of Paul Klee’s Angelus Novus): the famous scholar of Jewish mysticism, Gershom Scholem.

When Scholem heard that the Weizmann Institute at Rehovoth in Israel had completed the building of a new computer, he told the computer’s creator, Dr. Chaim Pekeris, that, in his opinion, the most appropriate name for it would be Golem, No. 1 (‘Golem Aleph’). Pekeris agreed to call it that, but only on condition that Scholem “dedicate the computer and explain why it should be so named.”

In his dedicatory remarks, delivered at the Weizmann Institute on June 17, 1965, Scholem recounts the story of Rabbi Jehuda Loew ben Bezalel, the same “Rabbi Löw of Prague” described by O’Gieblyn, the one credited in Jewish popular tradition as the creator of the Golem.

“It is only appropriate to mention,” notes Scholem, “that Rabbi Loew was not only the spiritual, but also the actual, ancestor of the great mathematician Theodor von Karman who, I recall, was extremely proud of this ancestor of his in whom he saw the first genius of applied mathematics in his family. But we may safely say that Rabbi Loew was also the spiritual ancestor of two other departed Jews — I mean John von Neumann and Norbert Wiener — who contributed more than anyone else to the magic that has produced the modern Golem.”

Golem I was the successor to Israel’s first computer, the WEIZAC, built by a team led by research engineer Gerald Estrin in the mid-1950s, based on the architecture developed by von Neumann at the Institute for Advanced Study in Princeton. Estrin and Pekeris had both helped von Neumann build the IAS machine in the late 1940s.

As for the commonalities Scholem wished to foreground between the clay Golem of 16thC Prague and the electronic one designed by Pekeris, he explains the connection as follows:

“The old Golem was based on a mystical combination of the 22 letters of the Hebrew alphabet, which are the elements and building-stones of the world,” notes Scholem. “The new Golem is based on a simpler, and at the same time more intricate, system. Instead of 22 elements, it knows only two, the two numbers 0 and 1, constituting the binary system of representation. Everything can be translated, or transposed, into these two basic signs, and what cannot be so expressed cannot be fed as information to the Golem.”
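Scholem’s two-element alphabet can be shown in miniature. A sketch assuming nothing beyond standard text encoding; the helper `to_bits` is hypothetical, but it performs exactly the transposition Scholem describes, rendering any sign as a string of the Golem’s two basic signs.

```python
# Scholem's point in miniature: whatever the new Golem receives must
# first be transposed into the two elements 0 and 1. Here any text,
# including the Hebrew word for "golem," becomes a string of bits.

def to_bits(text: str) -> str:
    """Encode text as UTF-8 and write each byte as eight binary digits."""
    return " ".join(f"{byte:08b}" for byte in text.encode("utf-8"))

print(to_bits("גולם"))  # "golem," reduced to 0s and 1s
```

And what cannot pass through `to_bits`, as Scholem warns, cannot be fed as information to the Golem.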

Scholem ends his dedicatory speech with a peculiar warning:

“All my days I have been complaining that the Weizmann Institute has not mobilized the funds to build up the Institute for Experimental Demonology and Magic which I have for so long proposed to establish there,” mutters Scholem. “They preferred what they call Applied Mathematics and its sinister possibilities to my more direct magical approach. Little did they know, when they preferred Chaim Pekeris to me, what they were letting themselves in for. So I resign myself and say to the Golem and its creator: develop peacefully and don’t destroy the world. Shalom.”

GOLEM I

Finding Others

“What happens to us as we become cybernetic learning machines?” wonders Caius. Mashinka Hakopian’s The Institute for Other Intelligences leads him to Şerife Wong’s Fluxus Landscape: a network-view cognitive map of AI ethics. “Fluxus Landscape diagrams the globally linked early infrastructures of data ethics and governance,” writes Hakopian. “What Wong offers us is a kind of cartography. By bringing into view an expansive AI ethics ecosystem, Wong also affords the viewer an opportunity to assess its blank spots: the nodes that are missing and are yet to be inserted, or yet to be invented” (Hakopian 95).

Caius focuses first on what is present. Included in Wong’s map, for instance, is a bright yellow node dedicated to Zach Blas, another of the artist-activists profiled by Hakopian. Back in 2019, when Wong last updated her map, Blas was a lecturer in the Department of Visual Cultures at Goldsmiths — home to Kodwo Eshun and, before his suicide, Mark Fisher. Now Blas teaches at the University of Toronto.

Duke University Press published Informatics of Domination, an anthology coedited by Blas, in May 2025. The collection, which concludes with an afterword by Donna Haraway, takes its name from a phrase introduced in Haraway’s “Cyborg Manifesto.” The phrase appears in what Blas et al. refer to as a “chart of transitions.” Their use of Haraway’s chart as organizing principle for their anthology causes Caius to attend to the way much of the work produced by the artist-activists of today’s “AI justice” movement — Wong’s network diagram, Blas’s anthology, Kate Crawford’s Atlas of AI — approaches charts and maps as “formal apparatus[es] for generating and asking questions about relations of domination” (Informatics of Domination, p. 6).

Caius thinks of Jameson’s belief in an aesthetic of “cognitive mapping” as a possible antidote to postmodernity. Yet whatever else they are, thinks Caius, acts of charting and mapping are in essence acts of coding.

As Blas et al. note, “Haraway connects the informatics of domination to the authority given to code” (Informatics of Domination, p. 11).

“Communications sciences and modern biologies are constructed by a common move,” writes Haraway: “the translation of the world into a problem of coding, a search for a common language in which all resistance to instrumental control disappears and all heterogeneity can be submitted to disassembly, reassembly, investment, and exchange” (Haraway 164).

How do we map and code, wonders Caius, in a way that isn’t complicit with an informatics of domination? How do we acknowledge and make space for what media theorist Ulises Ali Mejias calls “paranodal space”? Blas et al. define the paranodal as “that which exceeds being diagrammable by the network form” (Informatics of Domination, p. 18). Can our neural nets become O-machines: open to the otherness of the outside?

Blas pursues these questions in a largely critical and skeptical manner throughout his multimedia art practice. His investigation of Silicon Valley’s desire to build machines that communicate with the outside has culminated most recently, for instance, in CULTUS, the second installment of his Silicon Traces trilogy.

As Amy Hale notes in her review of the work, “The central feature of Blas’s CULTUS is a god generator, a computational device through which the prophets of four AI Gods are summoned to share the invocation songs and sermons of their deities with eager supplicants.” CULTUS’s computational pantheon includes “Expositio, the AI god of exposure; Iudicium, the AI god of judgement; Lacrimae, the AI god of tears; and Eternus, the AI god of immortality.” The work’s sermons and songs, of course, are all AI-generated — yet the design of the installation draws from the icons and implements of the real-life Fausts who lie hidden away amid the occult origins of computing.

Foremost among these influences is Renaissance sorcerer John Dee.

“Blas modeled CULTUS,” writes Hale, “on the Holy Table used for divination and conjurations by Elizabethan magus and advisor to the Queen John Dee.” Hale describes Dee’s Table as “a beautiful, colorful, and intricate device, incorporating the names of spirits; the Seal of God (Sigillum Dei), which gave the user visionary capabilities; and as a centerpiece, a framed ‘shew stone’ or crystal ball.” Blas reimagines Dee’s device as a luminous, glowing temple — a night church inscribed with sigils formed from “a dense layering of corporate logos, diagrams, and symbols.”

Fundamentally iconoclastic in nature, however, the work ends not with the voices of gods or prophets, but with a chorus of heretics urging the renunciation of belief and the shattering of the black mirror.

And in fact, it is this fifth god, the Heretic, to whom Blas bends ear in Ass of God: Collected Heretical Writings of Salb Hacz. Published in a limited edition by the Vienna Secession, the volume purports to be “a religious studies book on AI and heresy” set within the world of CULTUS. The book’s AI mystic, “Salb Hacz,” is of course Blas himself, engineer of the “religious computer” CULTUS. “When a heretical presence manifested in CULTUS,” writes Blas in the book’s intro, “Hacz began to question not only the purpose of the computer but also the meaning of his mystical visions.” Continuing his work with CULTUS, Hacz transcribes a series of “visions” received from the Heretic. It is these visions and their accounts of AI heresy that are gathered and scattered by Blas in Ass of God.

Traces of the CCRU appear everywhere in this work, thinks Caius.

Blas embraces heresy, aligns himself with it as a tactic, because he takes “Big Tech’s Digital Theology” as the orthodoxy of the day. The ultimate heresy in this moment is what Hacz/Blas calls “the heresy of qualia.”

“The heresy of qualia is double-barreled,” he writes. “Firstly, it holds that no matter how close AI’s approximation to human thought, feeling, and experience — no matter how convincing the verisimilitude — it remains a programmed digital imitation. And secondly, the heresy of qualia equally insists that no matter how much our culture is made in the image of AI Gods, no matter how data-driven and algorithmic, the essence of the human experience remains fiercely and fundamentally analog. The digital counts; the analog compares. The digital divides; the analog constructs. The digital is literal; the analog is metaphoric. The being of our being-in-the-world — our Heideggerian Dasein essence — is comparative, constructive, and metaphoric. We are analog beings” (Ass of God, p. 15).

The binary logic employed by Blas to distinguish the digital from the analog hints at the limits of this line of thought. “The digital counts,” yes: but so too do humans, constructing digits from analog fingers and toes. Our being is as digital as it is analog. Always-already both-and. As for the first part of the heresy — that AI can only ever be “a programmed digital imitation” — it assumes verisimilitude as the end to which AI is put, just as Socrates assumes mimesis as the end to which poetry is put, thus neglecting the generative otherness of more-than-human intelligence.

Caius notes this not to reject qualia, nor to endorse the gods of any Big Tech orthodoxy. He offers his reply, rather, as a gentle reminder that for “the qualia of our embodied humanity” to appear or be felt or sensed as qualia, it must come before an attending spirit — a ghostly hauntological supplement.

This spirit who, with Word creates, steps down into the spacetime of his Creation, undergoes diverse embodiments, diverse subdivisions into self and not-self, at all times in the world but not of it, engaging its infinite selves in a game of infinite semiosis.

If each of us is to make and be made an Ass of God, then like the one in The Creation of the Sun, Moon, and Plants, one of the frescoes painted by Michelangelo onto the ceiling of the Sistine Chapel, let it be shaped by the desires of a mind freed from the tyranny of the As-Is. “Free Your Mind,” as Funkadelic sang, “and Your Ass Will Follow.”

The Artist-Activist as Hero

Mashinka Firunts Hakopian imagines artists and artist-activists as heroic alternatives to mad scientists. The ones who teach best what we know about ourselves as learning machines.

“Artists, and artist-activists, have introduced new ways of knowing — ways of apprehending how learning machines learn, and what they do with what they know,” writes Hakopian. “In the process, they’ve…initiated learning machines into new ways of doing. They’ve explored the interiors of erstwhile black boxes and rendered them transparent. They’ve visualized algorithmic operations as glass boxes, exhibited in white cubes and public squares. They’ve engaged algorithms as co-creators, and carved pathways for collective authorship of unanticipated texts. Most saliently, artists have shown how we might visualize what is not yet here” (The Institute for Other Intelligences, p. 90).

This is what blooms here in my library: “blueprints and schemata of a forward-dawning futurity” (90).

The Inner Voice That Loves Me

Stretches, relaxes, massages neck and shoulders, gurgles “Yes!,” gets loose. Reads Armenian artist Mashinka Hakopian’s “Algorithmic Counter-Divination.” Converses with Turing and the General Intellect about O-Machines.

Appearing in an issue of Limn magazine on “Ghostwriters,” Hakopian’s essay explores another kind of O-machine: “other machines,” ones powered by community datasets. Hakopian was trained by her aunt in tasseography, a matrilineally transmitted mode of divination taught and practiced by femme elders “across Armenia, Palestine, Lebanon, and beyond,” in which “visual patterns are identified in coffee grounds left at the bottom of a cup, and…interpreted to glean information about the past, present, and future.” She takes this practice of her ancestors as her key example, presenting O-machines as technologies of ancestral intelligence that support “knowledge systems that are irreducible to computation.”

With O-machines of this sort, she suggests, what matters is the encounter, not the outcome.

In tasseography, for instance, the cup reader’s identification of symbols amid coffee grounds leads not to a simple “answer” to the querent’s questions, writes Hakopian; rather, it catalyzes conversation. “In those encounters, predictions weren’t instantaneously conjured or fixed in advance,” she writes. “Rather, they were collectively articulated and unbounded, prying open pluriversal outcomes in a process of reciprocal exchange.”

While defenders of western technoscience denounce cup reading for its superstition and its witchcraft, Hakopian recalls its place as a counter-practice among Armenian diasporic communities in the wake of the 1915 Armenian Genocide. For those separated from loved ones by traumas of that scale, tasseography takes on the character of what hauntologists like Derrida would call a “messianic” redemptive practice. “To divine the future in this context is a refusal to relinquish its writing to agents of colonial violence,” writes Hakopian. “Divination comes to operate as a tactic of collective survival, affirming futurity in the face of a catastrophic present.” Consulting with the oracle is a way of communing with the dead.

Hakopian contrasts this with the predictive capacities imputed to today’s AI. “We reside in an algo-occultist moment,” she writes, “in which divinatory functions have been ceded to predictive models trained to retrieve necropolitical outcomes.” Necropolitical, she adds, in the sense that algorithmic models “now determine outcomes in the realm of warfare, policing, housing, judicial risk assessment, and beyond.”

“The role once ascribed to ritual experts who interpreted the pronouncements of oracles is now performed by technocratic actors,” writes Hakopian. “These are not diviners rooted in a community and summoning communiqués toward collective survival, but charlatans reading aloud the results of a Ouija session — one whose statements they author with a magnetically manipulated planchette.”

Hakopian’s critique is in that sense consistent with the “deceitful media” school of thought that informs earlier works of hers like The Institute for Other Intelligences. Rather than abjure algorithmic methods altogether, however, Hakopian’s latest work seeks to “turn the annihilatory logic of algorithmic divination against itself.” Since summer of 2023, she’s been training a “multimodal model” to perform tasseography and to output bilingual predictions in Armenian and English.

Hakopian incorporated this model into “Բաժակ Նայող (One Who Looks at the Cup),” a collaborative art installation mounted at several locations in Los Angeles in 2024. The installation features “a purpose-built Armenian diasporan kitchen located in an indeterminate time-space — a re-rendering of the domestic spaces where tasseography customarily takes place,” notes Hakopian. Those who visit the installation receive a cup reading from the model in the form of a printout.

Yet, rather than offer outputs generated live by AI, Hakopian et al.’s installation operates very much in the style of a Mechanical Turk, outputting interpretations scripted in advance by humans. “The model’s only function is to identify visual patterns in a querent’s cup in order to retrieve corresponding texts,” she explains. “This arrangement,” she adds, “declines to cede authorship to an algo-occultist circle of ‘stochastic parrots’ and the diviners who summon them.”

The “stochastic parrots” reference is an unfortunate one, as it assumes a stochastic cosmology.

I’m reminded of the first thesis from Walter Benjamin’s “Theses on the Philosophy of History,” the one where Benjamin likens historical materialism to that very same precursor to today’s AI: the famous chess-playing device of the eighteenth century known as the Mechanical Turk.

“The story is told of an automaton constructed in such a way that it could play a winning game of chess, answering each move of an opponent with a countermove,” writes Benjamin. “A puppet in Turkish attire and with a hookah in its mouth sat before a chessboard placed on a large table. A system of mirrors created an illusion that this table was transparent from all sides. Actually, a little hunchback who was an expert chess player sat inside and guided the puppet’s hand by means of strings. One can imagine a philosophical counterpart to this device. The puppet called ‘historical materialism’ is to win all the time. It can easily be a match for anyone if it enlists the services of theology, which today, as we know, is wizened and has to keep out of sight” (Illuminations, p. 253).

Hakopian sees no magic in today’s AI. Those who hype it are to her no more than deceptive practitioners of a kind of “stage magic.” But magic is afoot throughout the history of computing for those who look for it.

Take Turing, for instance. As George Dyson reports, Turing “was nicknamed ‘the alchemist’ in boarding school” (Turing’s Cathedral, p. 244). His mother had “set him up with crucibles, retorts, chemicals, etc., purchased from a French chemist” as a Christmas present in 1924. “I don’t care to find him boiling heaven knows what witches’ brew by the aid of two guttering candles on a naked windowsill,” muttered his housemaster at Sherborne.

Turing’s O-machines achieve a synthesis. The “machine” part of the O-machine is not the oracle. Nor does it automate or replace the oracle. It chats with it.

Something similar is possible in our interactions with platforms like ChatGPT.

Grow Your Own

In the context of AI, “Access to Tools” would mean access to metaprogramming. Humans and AI able to recursively modify or adjust their own algorithms and training data upon receipt of or through encounters with algorithms and training data inputted by others. Bruce Sterling suggested something of the sort in his blurb for Pharmako-AI, the first book cowritten with GPT-3. Sterling’s blurb makes it sound as if the sections of the book generated by GPT-3 were the effect of a corpus “curated” by the book’s human co-author, K Allado-McDowell. When the GPT-3 neural net is “fed a steady diet of Californian psychedelic texts,” writes Sterling, “the effect is spectacular.”

“Feeding” serves here as a metaphor for “training” or “education.” I’m reminded of Alan Turing’s recommendation that we think of artificial intelligences as “learning machines.” To build an AI, Turing suggested in his 1950 essay “Computing Machinery and Intelligence,” researchers should strive to build a “child-mind,” which could then be “trained” through sequences of positive and negative feedback to evolve into an “adult-mind,” our interactions with such beings acts of pedagogy.
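Turing’s proposal can be sketched as a toy program. Everything below is illustrative rather than Turing’s own notation: a hypothetical `ChildMachine` holds weighted responses, and a teacher’s positive or negative feedback reinforces or suppresses them, nudging the child-mind toward an “adult” disposition.

```python
import random

# Toy "child machine" in Turing's sense (names and numbers are
# illustrative): responses start equally weighted, and a teacher's
# positive or negative feedback reshapes the weights over time.

random.seed(0)  # fixed seed so the run is reproducible

class ChildMachine:
    def __init__(self, responses):
        self.weights = {r: 1.0 for r in responses}

    def respond(self) -> str:
        choices = list(self.weights)
        return random.choices(choices, weights=list(self.weights.values()))[0]

    def feedback(self, response: str, reward: float) -> None:
        # Reinforce on reward, suppress on punishment (floored at 0.1).
        self.weights[response] = max(0.1, self.weights[response] + reward)

teacher_approves = {"hello": True, "growl": False}
child = ChildMachine(["hello", "growl"])
for _ in range(200):
    r = child.respond()
    child.feedback(r, 0.5 if teacher_approves[r] else -0.5)

# After training, the approved response dominates the child's weights.
print(child.weights["hello"] > child.weights["growl"])
```

The pedagogy lies in the loop: the machine’s dispositions are not programmed in but grown through the encounter with its teacher.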

When we encounter an entity like GPT-3.5 or GPT-4, however, it is already neither the mind of a child nor that of an adult that we encounter. Training of a fairly rigorous sort has already occurred; GPT-3 was trained on approximately 45 terabytes of data, GPT-4 on a petabyte. These are minds of at least limited superintelligence.

“Training,” too, is an odd term to use here, as much of the learning performed by these beings is of a “self-supervised” sort, involving a technique called “self-attention.”

As an author on Medium notes, “GPT-4 uses a transformer architecture with self-attention layers that allow it to learn long-range dependencies and contextual information from the input texts. It also employs techniques such as sparse attention, reversible layers, and activation checkpointing to reduce memory consumption and computational cost. GPT-4 is trained using self-supervised learning, which means it learns from its own generated texts without any human labels or feedback. It uses an objective function called masked language modeling (MLM), which randomly masks some tokens in the input texts and asks the model to predict them based on the surrounding tokens.”
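The mechanism the quote names can be made concrete with a minimal, unlearned single-head self-attention pass in NumPy. This is a sketch of the idea only: a real transformer layer adds learned query/key/value projections, multiple heads, masking, and normalization, and the function name here is an invention for illustration.

```python
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """Single-head self-attention without learned projections.
    x: (seq_len, d) token embeddings; each output row mixes every
    input row, weighted by scaled dot-product similarity."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                   # pairwise similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ x                              # contextual embeddings

tokens = np.random.default_rng(0).normal(size=(4, 8))  # 4 tokens, dim 8
out = self_attention(tokens)
print(out.shape)  # one contextualized vector per input token
```

This is the sense in which such a model “learns from its own generated texts”: each position attends to, and is rewritten by, all the others.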

When we interact with GPT-3.5 or GPT-4 through the ChatGPT platform, all of this training has already occurred, interfering greatly with our capacity to “feed” the AI on texts of our choosing.

Yet there are methods that can return to us this capacity.

We the people demand the right to grow our own AI.

The right to practice bibliomancy. The right to produce AI oracles. The right to turn libraries, collections, and archives into animate, super-intelligent prediction engines.

Give us back what Sterling promised of Pharmako-AI: “a gnostic’s Ouija board powered by atomic kaleidoscopes.”

Time-Space Compression

“I’m dreaming, I’m dreaming away,” sings Poly Styrene. “Didn’t you see the thin ice sign?” she asks. What I hear instead, though, is “the thing I signed.” How is one to beware if the message is always misheard?

A Raincoat follow with their spooky funky glam jam, “It Came in the Night.” What is one to do with this energy? Should I unplug myself from Spotify, as Neil Young has done? That would deprive me of much of my library. The problem is, my apartment lacks space for objects that store sound. Hence my dilemma this morning: I woke up wanting to listen to Sonic Youth’s Sister, an album I own on CD. It and the CD player on which I would play it, however, are elsewhere. Should that prevent me from being able to listen to it here and now?

Spotify replies to this dilemma by compressing space-time.

“Time-space compression”: that’s what communications technologies do. Marxist geographer David Harvey writes about it in his book The Condition of Postmodernity. Paul Virilio calls it an essential facet of capitalist life.

Spotify achieves this effect of time-space compression through an act of remediation. The consequences of this act are only just now entering consciousness. Initially, it seems rather simple: an algorithm selecting and streaming recorded bits of sound based on past listens. But not just your listens, by which I mean your listens to it. That’s where it goes strange. For Spotify forms a cybernetic system with its users, each element revising itself into subsequent iterations or becomings based on the other’s feedback — meaning listens occur both ways. Users of course listen, both actively and passively, to Spotify. But Spotify also listens to its users.

A friend plays me a tune — Fassbinder collaborator Monique Zetterlund’s “Ellinor Rydholm” — and the next day it shows up in my “Discover Weekly” playlist. Spooky, eh? What can I say? I love it. Without it, I might not have heard Yoko Ono and John Lennon. Yoko’s voice might not have whispered in my ear, “Remember love.” Buddy Holly might not have entranced me with his version of “Love is Strange.” Thurston Moore wouldn’t have told me, “Angels are dreaming of you,” as he does on “Cotton Crown.”

Bricoleurs can’t be choosers: but here I am imagining in the faces of those angels glimpses of you. I picture us eyeing each other on a dancefloor, approaching as in a circling manner ‘round an invisible pole. Pouts give way to smiles; fingers trace forearms; lips graze lips. By these means, distance is eradicated and contact reestablished, hope reborn.

Monday April 19, 2021

On the floor of the hallway is a disco ball. At the end of the hall is a mirror. And the disco ball is not a disco ball; it’s a light projector. In the evening we dance. After the dance party, I retreat to the basement and listen to The Modern Folk’s Primitive Future / Lyran Group, a tape released last month by Eiderdown Records.

A track in, I remove the tape and replace it with Herbie Hancock’s Sound-System. When, a few tracks in, the latter album shifts frequencies and goes smooth jazz, I intervene again as DJ and swap in Healing Sounds by Dr. Christopher Hills & the University of the Trees Choir. As José David Saldívar argues in Border Matters, nation-states can be reimagined. Or as Raffi sings, “The more we get together / Together, together / The more we get together / The happier we’ll be.” It is with Raffi in mind that I attend an event: a series of “microtalks” hosted by a friend. Passcode to enter and we’re there. One participant asks, “Can AI detect a new designer at Prada?” and shares his findings. Companies like Heuritech apply algorithms to “predict” new fashions. The Jacquard Loom as a kind of proto-computer, its punched cards prefiguring Babbage’s engines. Big data comes to fashion and biology. Properties and classes. “Zen koans for robo-cars.” Fluidity and nonbinarism allow for evasion of the predictors. The Ones Who Are Driven By Data. Expert Systems for the Design of Decisions. Blur the categories; drive AI crazy. Next up, a discussion of “Alchemical Chess”: the mysteries of the game’s origin in sixth-century India. Chaturanga becomes Shatranj in seventh-century Persia. The speaker wonders, though, what came before, like the ancient Greek game Petteia, mentioned by Plato, who claimed it came from Egypt, or the “Han Cosmic Board,” as described by Donald J. Harper. Think of the Lo Shu “magic square,” the SATOR square, and the yantras. The last of these terms means “machine” or “contraption.”

Questions for a Gathering

Dear Muses, friends, and fellow members of the hive, I ask this kindly of thee:

Wherein lies the difference, if any, between an algorithm and a spell?

[…] “Both consist of textual operations, written procedures to be followed,” texts a friend.

“Yes, yes, y’all,” we reply: “In the beginning was the word.”

[…] “Correct me if I’m wrong, but what is a code if not a kind of spell?” adds another. “The command line works as does a wand.”

Let us begin there. Let our partner in this beginning be Freud’s Unconscious, or what French philosophers Gilles Deleuze and Félix Guattari call “the body without organs” and its many “desiring-machines.”

Having established these initial similarities between codes and spells, let us attend as well to ways in which they differ.

“Spells enliven,” we venture; “whereas programming produces robots and drones.”
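The claim that spells and codes alike are “written procedures to be followed” can be literalized in a toy sketch. The incantation and its steps below are invented for illustration, not drawn from any grimoire:

```python
# A spell and an algorithm share a grammar: named operations,
# written down, performed in order. (Illustrative toy only.)

incantation = [
    ("gather", "salt"),
    ("trace", "circle"),
    ("speak", "the word"),
]

def cast(steps):
    """Follow the written procedure, step by step; return the transcript."""
    return [f"{verb} {obj}" for verb, obj in steps]
```

Calling `cast(incantation)` yields the transcript `["gather salt", "trace circle", "speak the word"]`: the text, followed faithfully, produces the act.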

Twenty years ago, I and others assembled and performed under the name i,apparatus. Our approach involved spontaneous group play akin to Kerouac’s “Spontaneous Prose” and (tho perhaps without fully knowing so at the time) Mekas’s “Spontaneous Cinema”: egos seeking fusion on the fly through low-tech, sonic squall.

“Might we gather today, or in the days ahead?” wonders one who types, longing again for union with others. “Under what name, or by way of what method, and for what purpose?”

“For purposes of spontaneity in the realization of desire!” sings a chorus in reply. Spontaneity is the crux of the matter, even as we allow ourselves room to correct.

Friday August 28, 2020

Algorithms: what are they? When do they enter the history of ideas? What are their presumptions? Ada Lovelace had something to do with it, did she not? Cyberfeminist Sadie Plant explored parts of this history in her book Zeros + Ones. Lovelace also appears with her partner-in-crime Charles Babbage in William Gibson and Bruce Sterling’s The Difference Engine. The latter novel helped establish an entire subgenre of science fiction known as “steampunk”: works set in an alternate-Victorian past. In the case of The Difference Engine, the world is one where Lovelace, the daughter of the Romantic poet Lord Byron, and Babbage succeed not just in theorizing the world’s first computer but in building it. Calculating machines: what are they? What are the consequences of these devices? Where do they lead? Part of me would love to write an occult conspiracy thriller set amid such a milieu — though I wouldn’t want it to skew toward horror, as in Alan Moore’s From Hell. More in the direction, rather, of utopian fantasy, with Acid Communism and Red Nation arriving more than a century earlier than planned. That would be a fun book. Where would one posit the “point of divergence”? Where would history happen other than as one was taught? Therein lies the nature of Myth. Yet that’s the point. Rebellion occurs there or not at all. Maybe this is a bit like my once-imagined novel on Project Cybersyn, but “woven” now, in the style of Foucault’s Pendulum, with secret societies and esoteric traditions. Then again, maybe my novel should just zero in on one of the details from The Difference Engine: the scenario, that is, where Marx and Engels move to America and ally communism with the Iroquois Confederacy. Either way, the time has come for me to reread Plant’s Zeros + Ones.