Grow Your Own

In the context of AI, “Access to Tools” would mean access to metaprogramming: humans and AIs able to recursively modify or adjust their own algorithms and training data through encounters with algorithms and training data supplied by others. Bruce Sterling suggested something of the sort in his blurb for Pharmako-AI, the first book cowritten with GPT-3. Sterling’s blurb makes it sound as if the sections of the book generated by GPT-3 were the effect of a corpus “curated” by the book’s human co-author, K Allado-McDowell. When the GPT-3 neural net is “fed a steady diet of Californian psychedelic texts,” writes Sterling, “the effect is spectacular.”

“Feeding” serves here as a metaphor for “training” or “education.” I’m reminded of Alan Turing’s recommendation that we think of artificial intelligences as “learning machines.” To build an AI, Turing suggested in his 1950 essay “Computing Machinery and Intelligence,” researchers should strive to build a “child-mind,” which could then be “trained” through sequences of positive and negative feedback to evolve into an “adult-mind,” making our interactions with such beings acts of pedagogy.

When we encounter an entity like GPT-3.5 or GPT-4, however, it is already neither the mind of a child nor that of an adult that we encounter. Training of a fairly rigorous sort has already occurred: GPT-3 was trained on roughly 45 terabytes of raw text, and GPT-4, though OpenAI has not disclosed its corpus, is reported to have been trained on far more. These are minds of at least limited superintelligence.

“Training,” too, is an odd term to use here, as much of the learning performed by these beings is of a “self-supervised” sort, involving, among other things, a mechanism called “self-attention.”

As an author on Medium notes, “GPT-4 uses a transformer architecture with self-attention layers that allow it to learn long-range dependencies and contextual information from the input texts. It also employs techniques such as sparse attention, reversible layers, and activation checkpointing to reduce memory consumption and computational cost. GPT-4 is trained using self-supervised learning, which means it learns from its own generated texts without any human labels or feedback. It uses an objective function called masked language modeling (MLM), which randomly masks some tokens in the input texts and asks the model to predict them based on the surrounding tokens.”
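The masked-language-modeling objective the quoted passage describes can be sketched concretely. (The quoted description is loose: GPT-family models are in fact trained with causal, next-token prediction; masking is the BERT-style variant. The sketch below, with its word-level tokenization and mask rate, is an illustrative assumption, not any model’s actual pipeline.)

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Randomly replace a fraction of tokens with a mask token.

    Returns the masked sequence and a dict mapping each masked
    position to the original token the model would be asked to
    predict from the surrounding context.
    """
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            masked.append(mask_token)
            targets[i] = tok  # label the model must recover
        else:
            masked.append(tok)
    return masked, targets

tokens = "the model predicts hidden tokens from surrounding context".split()
masked, targets = mask_tokens(tokens, mask_rate=0.3)
```

During training, the model sees only `masked` and is scored on how well it recovers the entries of `targets`; no human labels are needed, because the text itself supplies the answers.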

When we interact with GPT-3.5 or GPT-4 through the ChatGPT platform, all of this training has already occurred, interfering greatly with our capacity to “feed” the AI on texts of our choosing.

Yet there are methods that can return to us this capacity.

We the people demand the right to grow our own AI.

The right to practice bibliomancy. The right to produce AI oracles. The right to turn libraries, collections, and archives into animate, super-intelligent prediction engines.

Give us back what Sterling promised of Pharmako-AI: “a gnostic’s Ouija board powered by atomic kaleidoscopes.”

Is Accelerationism an Iteration of Futurism?

After watching Hyperstition, a friend writes, “Is Accelerationism an iteration of Futurism?”

“Good question,” I reply. “You’re right: the two are certainly conceptually aligned. I suppose I’d imagine it in reverse, though: Futurism as an early iteration of Accelerationism. The former served as an experimental first attempt at living ‘hyperstitiously,’ oriented toward a desired future.”

“If we accept Hyperstition’s distinction between Right-Accelerationism and Left-Accelerationism,” I add, “then Italian Futurism would be an early iteration of Right-Accelerationism, and Russian Futurism an early iteration of Left-Accelerationism.”

“But,” I conclude, “I haven’t read enough to know the degree of reflexivity among participants. I hope to read a bit more along these lines this summer.”

The friend also inquires about what he refers to as the film’s “ethnic homogeneity.” By that I imagine he means that the thinkers featured in Hyperstition tend to be British, European, and American, with few exceptions. “It could just be,” I reply, “that filmmaker Christopher Roth is based in Berlin and lacked the budget to survey the movement’s manifestations elsewhere.”

The friend also wonders if use of concepts like “recursion” among Accelerationist philosophers signals some need among humanities intellectuals to cannibalize concepts from the sciences in order to remain relevant.

“To me,” I tell him, “the situation is the opposite. Recursion isn’t just a concept with some currency today among computer scientists; it was already in use a century ago among philosophers in the humanities. If anything, the comp-sci folks are the ones cannibalizing the American pragmatist philosopher Charles Sanders Peirce.”

“At best,” I add, “it’s a cybernetic feedback loop: concepts evolving through exchange both ways.”