Hallucinations as a feature, not a bug

Co-authored with Grace Carney

A few months ago Fred kicked off a conversation about what the “native” applications of AI technology will be. What are the new things or businesses that we can now build that weren’t possible before the current era of AI maturity? How might this shine a fresh light on the ways in which we at USV can actualize our Thesis 3.0?

Fred advised, “If you want to figure out what the native AI applications will be, start by laying out the new primitives and going from there.”

One primitive we are considering is hallucinations. While there has been much discussion about how to build trust in AI systems that hallucinate, does this mean that hallucinations are de facto something to be avoided or erased? Or rather, do these “artifacts and happy accidents” enable new businesses and forms of media?

What if AI hallucinations are a feature, not a bug, of certain native AI applications?

If you’ve come across these hiccups in your own AI interactions, you may be like us: you’ve raised an eyebrow at a surprising ChatGPT answer or smiled at a glitch in your MidJourney art.

Hallucinations are random acts of creation, and this randomness . . . is fun.  Perhaps, as Michael Dempsey said to us, hallucinations are the manifestation of the volatility of humans in AI.

Now let’s think about the AI-native applications where this is particularly desirable.  Some ideas might include personal companions, social media, content generation, gaming, architecture, music and even mental health.

Take personal companions, like Replika, HeyPI, or character.ai: AI-generated ‘friends’ that you can chat and interact with. Some companions have already been built in this space, but we think there is a greater role for hallucinations to generate serendipity, creativity and entertainment for users. We’re already seeing early examples of this, such as a creature called Mo that creates paintings on the fly or a website that lets you upload your images and have conversations with them. Arbitrary, like your existing friends, yet also novel.

We are considering whether the artifacts and quirks that hallucinations produce can actually be components (primitives) themselves for novel and native forms of media. For example, “interdimensional clown wrestling”:

In thinking about architecture and design, the quirks that neural networks produce could offer a way to generate quite radical, hallucinatory effects by distorting, morphing and reimagining spatial forms, geometries, details and textures. Imagine what we could learn (and produce, subject to the constraints of the physical world) from the adjacent possible of dreamlike spaces, melted facades, impossible shapes and glitched textures. The same could apply to gaming, in inventing new worlds, plot devices, and narratives.

What new infrastructure might be created to amplify these ideas?  One answer could be something like dreamGPT: the “GPT-based solution that uses hallucinations from LLMs for divergent thinking to generate new innovative ideas.”  Another might be something Fabian Stelzer likes to call ‘day dreaming’: inserting random words into a prompt where the random word is pulled from an API.  We like to think of this as a bit like Mad Libs for models. Or, an LLM optimized for creative hallucinations (perhaps one that has not been gentled by RLHF).
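The ‘day dreaming’ trick is simple enough to sketch: pick a random word and splice it into the prompt before it ever reaches the model. Here is a minimal, illustrative version in Python; the word list and the `day_dream` helper are our own invention for this sketch, and a real setup would pull words from a random-word API and pass the finished prompt to an LLM.

```python
import random

# A small local word list standing in for a random-word API.
WORDS = ["accordion", "lighthouse", "marmalade", "quasar", "trapeze"]

def day_dream(template, rng=None):
    """Fill each {word} slot in the template with a randomly chosen word."""
    rng = rng or random.Random()
    while "{word}" in template:
        # Replace one slot at a time so each draw can differ.
        template = template.replace("{word}", rng.choice(WORDS), 1)
    return template

prompt = day_dream(
    "Write a short story about a {word} that falls in love with a {word}."
)
print(prompt)
```

The point of the sketch is the Mad Libs structure: the template supplies the frame, the random draws supply the volatility, and the model is left to reconcile the two.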

Have you ever been to an improv show where the comedians pull random props out of a box and have to use them to guide the narrative on stage?  The alchemy of trained comedians and odd variables can generate sketches that are uniquely novel and hilarious.  We wonder whether hallucinations may be the ultimate sketches of their own.