2024-06-15 That Pleasurable Buzz of Feeling Slightly Unsettled

Coming up with a word like neuromancer is something that would earn you a really fine vacation if you worked in an ad agency. It was a kind of booby-trapped portmanteau that contained considerable potential for cognitive dissonance, that pleasurable buzz of feeling slightly unsettled.
William Gibson on his invention of the word “neuromancer,” in “William Gibson, The Art of Fiction No. 211” at The Paris Review

This is one of my favorite quotes of all time. That pleasurable buzz of feeling slightly unsettled. What a sensation.

I love this quote so much I have my own small writing project dedicated to it.

Gibson goes on to say:

I believed that this could be induced at a number of levels in a text—at the microlevel with neologisms and portmanteaus, or using a familiar word in completely unfamiliar ways. There are a number of well-known techniques for doing this—all of the classic surrealist techniques, for instance, especially the game called exquisite corpse, where you pass a folded piece of paper around the room and write a line of poetry or a single word and fold it again and then the next person blindly adds to it. Sometimes it produces total gibberish, but it can be spookily apt. A lot of what I had to learn to do was play a game of exquisite-corpse solitaire.
Ibid.

This interview leapt to mind when I read this post on controlling randomness in AI output with “temperature”:

If you let an LLM generate text by always picking the most likely next word based on its training, the results will be predictable and dull. To fix this, engineers introduced a touch of randomness. Instead of just choosing the top suggestion, techniques like temperature sampling force LLMs to explore less likely options. Temperature acts like a dial: a higher setting allows for more exploration of less likely but potentially interesting options, while a lower setting keeps things closer to the most probable path. This element of surprise is what injects a spark of “creativity” into the LLM’s outputs.
Is creativity nothing more than a little randomness? by Koen van Gilst
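To make the dial concrete, here is a minimal sketch of temperature sampling. The vocabulary and logit values are invented for illustration; a real LLM does this over tens of thousands of tokens at each step.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample an index from raw logits after dividing them by temperature.

    A low temperature sharpens the softmax toward the most likely option;
    a high temperature flattens it, letting unlikely options through.
    """
    scaled = [l / temperature for l in logits]
    peak = max(scaled)  # shift for numerical stability before exponentiating
    weights = [math.exp(s - peak) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(range(len(probs)), weights=probs)[0]

# A made-up next-word distribution: “the” is the boring, most likely pick.
words = ["the", "a", "neuromancer", "gibberish"]
logits = [4.0, 3.0, 1.0, 0.5]

for t in (0.2, 1.0, 2.0):
    draws = [words[sample_with_temperature(logits, t)] for _ in range(1000)]
    print(t, {w: draws.count(w) for w in words})
```

At 0.2 nearly every draw is “the”; at 2.0 the tail words surface often enough that you start to see both the sparks and the gibberish.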

Of course, you can overdo this and get gibberish – and I think it’s really interesting that Gibson calls out exquisite corpse as capable of sometimes producing the same result.

I also think of much dumber randomness engines, like Markov chain bots and pre-OpenAI neural networks. They were fun tools, leagues away from even GPT-2, yet they sometimes produced output that could be really great. It strikes me how derogatory “AI generated” could sound at that time; part of the humor was finding meaning in what was obviously nonsense. I wonder if anyone will even remember this kind of thing now that vastly more capable large language models have redefined what “AI” means.
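For anyone who never played with one, a Markov chain bot is barely a program: it tallies which word follows which in some source text, then takes a random walk through those tallies. A minimal sketch, with a stand-in two-line corpus where the real bots chewed on whole chat logs:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed immediately after it."""
    chain = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def babble(chain, start, length=12):
    """Random-walk the chain; jump to a random word at dead ends."""
    out = [start]
    while len(out) < length:
        followers = chain.get(out[-1])
        out.append(random.choice(followers or list(chain)))
    return " ".join(out)

# Stand-in corpus: the opening of Neuromancer, lightly mangled.
corpus = ("the sky above the port was the color of television "
          "tuned to a dead channel the port was a dead sky")
print(babble(build_chain(corpus), "the"))
```

Every word pair it emits occurred somewhere in the source, yet the walk happily stitches them into sentences nobody wrote, which is where the accidental poetry came from.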

Some of my favorites back then were from AI Weirdness, with entries like this recipe:

  • ¼ cup white seeds
  • 1 cup mixture
  • 1 teaspoon juice
  • 1 chunks
  • ¼ lb fresh surface
  • ¼ teaspoon brown leaves
  • ½ cup with no noodles
  • 1 round meat in bowl
Disturbingly vague ingredients generated by neural network by Janelle Shane at AI Weirdness

Anyway, what’s going on here? Why does randomness have this effect on the quality of creative output? According to some theories, the difference between what we expect and what actually happens is a defining feature of our psychology. One of them, Karl Friston’s free energy principle, goes as far as to claim it is a fundamental feature of life itself:

The second law of thermodynamics tells us that the universe tends toward entropy, toward dissolution; but living things fiercely resist it. We wake up every morning nearly the same person we were the day before, with clear separations between our cells and organs, and between us and the world without. How? Friston’s free energy principle says that all life, at every scale of organization—from single cells to the human brain, with its billions of neurons—is driven by the same universal imperative, which can be reduced to a mathematical function. To be alive, he says, is to act in ways that reduce the gulf between your expectations and your sensory inputs.
The Genius Neuroscientist Who Might Hold the Key to True AI by Shaun Raviv at Wired
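For the curious, that “mathematical function” has a standard textbook form (my gloss, not the Wired article’s): the surprisal of sensory data o under an organism’s model is -log p(o), and the variational free energy F is a quantity the organism can actually evaluate that always sits above it:

```latex
% Surprisal of sensory data o under the organism's generative model p(o, s),
% where s are the hidden states of the world it cannot observe directly:
%   \text{surprisal} = -\log p(o)
% For any approximate belief q(s) about those hidden states,
% the variational free energy F bounds surprisal from above:
F \;=\; \mathbb{E}_{q(s)}\big[\log q(s) - \log p(o, s)\big]
  \;=\; \underbrace{D_{\mathrm{KL}}\big(q(s)\,\|\,p(s \mid o)\big)}_{\ge\,0} \;-\; \log p(o)
  \;\ge\; -\log p(o)
```

Since the KL term can never go below zero, pushing F down, whether by revising beliefs or by acting on the world, pushes surprisal down with it; that is the “universal imperative” the quote compresses into a sentence.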

In other words, we act to minimize surprise. Naturally, this means we pay a lot of attention to surprise. And that’s why, I suppose, creativity benefits from randomness. If an author can surprise their reader, that reader becomes engaged, trying to understand the source of what is so new and different about the author’s writing.

The best way to close this post is with an example of this at the meta level. We are so divorced from our relationship to surprise that being described in terms of that relationship is itself surprising. I came across this family of ideas via Slate Star Codex, whose author also wrote this:

On December 13, 2023, two surprisal-minimization engines registered an unprecedented spike in surprisal. They were thrust from a sunless sea into a blooming buzzing confusion, flooded with inexplicable data through input channels they didn’t even know they had. The engines heroically tested hyperprior after hyperprior to compress the data into something predictable. Certain patterns quickly emerged. Probability distributions resolved into solid objects. The highest-resolution input channel snapped into place as a two-dimensional surface being projected onto by a three-dimensional space. But - a blur of calculations - the three-dimensional nature of space implies that it must be intractably large! And if there are n solid objects in the world, that implies the number of object-object interactions increases as n(n-1)/2, which would quickly become impossible to track. Their hearts sinking, the engines started to worry it might take hours before they were fully able to predict every aspect of this new environment. A panic reflex they didn’t know they had kicked in, and they began to cry.
In the Long Run, We’re All Dad by Scott Alexander at Astral Codex Ten
