AI hallucinations and creativity
Mike Elgan has a piece about AI and its superior brainwashing capabilities:
Puny humans are no match for AI (computerworld.com)
In typical Elgan style, it’s packed with links and references worth reading. One of those is a YouTube clip explaining (to an extent) the issue of AI hallucinations:
The big takeaway from this video, for me, was that hallucinations are actually a good thing, to an extent. But I can already see the pitchforks coming because I said something positive about AI, so I’ll get back to that in a minute.
Like many other topics in the US today, AI splits people into two extreme camps. It’s a political issue like many others, especially since it seems to be automatically associated with rich white cis Republican men who look to AI to deal with their usual bullshit, be it living forever, replacing teachers, or doing government work. Add the fact that big companies are destroying the planet with massive AI datacenters while stealing piles of data and copyrighted work from artists and authors, and you’ve got yourself a lot of enemies, who, surprise, surprise, tend to be more liberal and left-leaning. It’s like 1+1=2; it’s how the world works today.
But AI is a technology, not a political camp or a cesspool of assholes, even if it seems to attract those. It’s a tool. And like any other tool, it can be misused and abused, or it can help with anything from finding cures for various cancer types to fixing typos and making images more accessible on the post you’re reading right now. On that last front, I’ve been a wary, anxious supporter of AI in general for a while. It depends on how you train it, how you use it, and to what ends. The bad parts, just like the good ones, are human, as it’s always been with any other technology.
On that note, back to hallucinations: these are, in a way, the AI’s “creativity.” That’s why they’re called hallucinations: it’s when AI makes up stuff that doesn’t exist based on the data it’s given. This can be useful, for example, when you ask AI for a picture of a knight riding a dragon: you don’t know what a dragon looks like, but you have a rough idea, and you want the knight to look majestic, so chances are you’ll like some of the pictures it creates more than others. Maybe the dragon has two heads, or the knight wears robes over the armor, stuff like that. With guidance, these hallucinations can become rather creative; without it, your knight will have six fingers on one hand and three on the other, the tell-tale sign of lazy marketing. The point remains the same: it’s a tool. You have to mix it with other skills to make something good out of it.
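For the technically curious: one of the knobs behind this trade-off in LLMs is sampling temperature, which controls how willing the model is to pick less likely next words. Here’s a minimal Python sketch of the idea (toy numbers and hypothetical candidate words, not any particular model):

```python
import math
import random

def sample_token(logits, temperature=1.0):
    """Pick the next token from raw model scores (logits).

    Low temperature sharpens the distribution toward the "safe"
    top choice; high temperature flattens it, so unlikely (more
    "creative," more hallucination-prone) tokens come up more often.
    """
    scaled = [score / temperature for score in logits]
    top = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - top) for s in scaled]
    # random.choices accepts relative weights, no need to normalize
    return random.choices(range(len(weights)), weights=weights, k=1)[0]

# Toy scores for four candidate words: "fire", "scales", "wings", "fingers".
logits = [4.0, 2.5, 2.0, 0.5]
print(sample_token(logits, temperature=0.2))  # almost always "fire" (index 0)
print(sample_token(logits, temperature=2.0))  # far more varied picks
```

Turn the dial up and you get variety and surprises; turn it down and you get safer, more predictable output. It’s one reason creativity and hallucination are hard to separate: they come from the same place.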
But AI, especially LLMs (ChatGPT and the like), has put human laziness front and center, as the article shows. People throw questions at it and expect accurate answers within seconds. Because AI is very convincing, especially with all the hype around it, we check the results even less. We’re also expected to use it to produce results faster than ever, without analysis. Is it any surprise, then, what happens when people use it blindly?
I think we’ll eventually get used to this whole thing, after the AI bubble bursts and people calm down a bit. I even let myself be a little optimistic about some of the solutions it will bring. But there’s no replacement for skill and patience, and no one wants to hear that right now.