AI IS A HOT MESS. And it's not as capable as people make it out to be, especially when it comes to creative, human storytelling. Let me explain.
AI breaks whenever it encounters something new or unusual. And in this volatile world, that happens constantly: the data shifts all the time, and AI loses track of what's up and what's down. Worse, AI is clueless about its own failures, so it doesn't know when it has broken. It just keeps running, spreading its errors into everything downstream. Cars crash. Words wound. Friendships dissolve.
This hurts humans in the obvious way: injuring, even killing, us. But it also hurts us in a sneakier way. Because AI can't tolerate even small amounts of change, its makers are pushing to make everything boring and predictable. And since humans are the most unpredictable thing around, we're the ones being tamed and trained: drilled through tests and tasks and rules at school, at work, and everywhere else. We lose the spark, nerve, and wildness that make us remarkable, and we become sad, mad, and bad.
If we want a better future, we need to fix AI's brain problem. Instead of remaking ourselves in AI's image, we should do the opposite: remake AI in ours. Humans are tough and resourceful. We can take a hit and come back stronger, the quality Nassim Taleb calls antifragility. We can face a challenge and win with style.
Making AI like this would be a huge win. And we can do it, if we change how we think.
Thinking Different
First, we have to stop dreaming that AI is better than us. AI thinks in a fundamentally different way from humans: computers have no emotions, so they can't be courageous, and they don't grasp stories, so they can't be genuinely clever. That means AI will never be like us, let alone better than us; it will be a different kind of thing, with its own strengths and weaknesses.
We then have to admit that the main reason AI sucks right now is the thing its builders worship most: optimization.
Optimization is the drive to make AI as correct as possible. In the abstract world of mathematics, that drive costs nothing. But in the real world where AI operates, everything has a price, and for optimization the price is data: more data to make AI more precise, and better data to make it more accurate. To feed that appetite, AI's operators have to surveil everything, harvesting our secrets from our phones and websites, watching us when we're not looking or too tired to care, and buying personal details from shady brokers.
This surveillance is unethical, and it's also shortsighted. The cost of good data keeps rising; the world can never be measured completely, so the gaps get papered over with guesses; and just when the model finally seems complete, something novel shows up and changes the game.
Then the AI goes haywire. It labels dogs as fruit, flags innocent bystanders as suspects, and steers big trucks into little buses it has mistaken for bridges.

The reason AI goes so wrong is that it tries to be too exact. The human brain doesn't work that way. The brain cuts corners: it extrapolates from a few clues, and it doesn't insist on being 100 percent right. It's happy to get by on the bare minimum. If it can survive by being right 1 percent of the time, that's good enough for it.
The brain's corner-cutting is the source of plenty of dumb mistakes that can hurt us: stubbornness, snap judgments, recklessness, defeatism, panic. That's why AI's data-crunching can help us catch our errors and correct our biases. But in compensating for the brain's weaknesses, we don't want to overshoot in the other direction. There's a lot to be said for good enough: it guards against perfectionism, with its attendant stress, anxiety, resentment, envy, burnout, and self-loathing. A more relaxed brain has helped us survive life's twists and turns, which demand flexible plans that can change, on the fly, based on feedback.

These tough-and-resourceful benefits can all be engineered into AI. Instead of building ever-faster data-eaters that gobble up more and more information, we can make AI more tolerant of bad data, off-model users, and chaotic environments. That AI would trade near-perfection for steady adequacy, gaining dependability and usefulness while giving up nothing essential. It would use less power, go haywire less often, and put less pressure on its human partners. It would, in short, have more of the down-to-earth skill called common sense.
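To make that trade concrete, here's a minimal sketch of a "good enough" decision rule: a classifier that acts only when its confidence clears a modest bar, and otherwise abstains and defers to a person. It assumes a scikit-learn-style model exposing predict_proba; the function name and the 0.7 threshold are illustrative choices, not anyone's published method.

```python
import numpy as np

def satisficing_predict(model, x, good_enough=0.7):
    """Act on 'good enough' confidence; otherwise abstain.

    `model` is assumed to be a fitted classifier with a
    scikit-learn-style predict_proba. The threshold is an
    illustrative stand-in for "the bare minimum to get by."
    Returns the index of the winning class, or None to defer.
    """
    probs = model.predict_proba(np.asarray(x).reshape(1, -1))[0]
    best = int(np.argmax(probs))
    if probs[best] >= good_enough:
        return best   # confident enough: act on the guess
    return None       # not confident: abstain and ask a human
```

The trade is deliberate: the system answers fewer questions, but the answers it does give are steadier, and the hard cases land with a person instead of becoming silent failures.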
Here are three ways to do it:
Making AI Brave About Ambiguity
Five hundred years ago, Niccolò Machiavelli, that great realist, observed that worldly success demands a strange kind of courage: the nerve to act beyond what we know for certain. Life, after all, is too unpredictable for us to know everything, and the more we agonize over perfect answers, the more we squander our chance to act. The smarter way is to focus on the information that can be gotten quickly, and to move forward without the rest. Most of the missing information will turn out to be useless anyway; life will shift in a way we didn't expect, solving our ignorance by making it moot.
We can teach AI to work the same way by changing how it handles ambiguity. Right now, when a natural-language-processing system encounters a word like suit, which could mean different things (a garment or a legal action), it grinds through more and more related data to pin down the word's exact meaning.

This is "closing the circle." It uses big data to shrink a ring of possibilities down to a single point. And 99.9 percent of the time, it works: the system correctly decides that suit belongs to a judge's email to a lawyer. The other 0.1 percent of the time, the AI loses it. It mistakes a diving suit for a legal proceeding, shrinking the ring until the true meaning falls outside it, and dives into a sea it has taken for a courtroom.
The alternative: let the circle stay open. Instead of forcing AI to resolve every ambiguous data point, we can program it to make fast-and-dirty retrievals of all plausible meanings, and then carry those diverging options into its next tasks, the way a human brain keeps reading a poem while holding several interpretations in mind at once.
This spares AI the data hunger that traditional machine learning's optimization demands. In many cases, the ambiguity will simply wash out downstream: maybe every pending query works the same under either meaning of suit; maybe the system receives an email discussing a lawsuit over a diving suit; maybe the user realizes that (in a typical human goof) she meant to type suite.
Worst case, if the system hits a point where it can't proceed unless the ambiguity is resolved, it can pause and ask for human help, pairing nerve with good timing. And in every case, the AI won't break itself, spiraling (via a digital version of anxiety) into dumb mistakes because it's too stressed about being perfect.
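Here's a toy sketch of what "keeping the circle open" could look like in code: the system carries every plausible sense of suit forward, prunes only when a later cue actually rules senses out, and asks a human only if it's forced to decide while ambiguity remains. The sense inventory, cue table, and function names are all invented for illustration.

```python
# All senses, cues, and names below are invented for illustration.
CANDIDATE_SENSES = {"suit": {"clothing", "lawsuit", "diving gear"}}
CUES = {
    "judge": {"lawsuit"},
    "tailor": {"clothing"},
    "regulator": {"diving gear"},  # e.g., a scuba regulator
}

def open_circle(word, context, must_decide=False):
    """Carry all plausible senses forward; resolve lazily."""
    senses = set(CANDIDATE_SENSES.get(word, {word}))

    # Shrink the circle only when evidence genuinely rules senses
    # out, and never shrink it to nothing.
    for cue in context:
        if cue in CUES and senses & CUES[cue]:
            senses &= CUES[cue]

    if len(senses) == 1 or not must_decide:
        return senses  # ambiguity is fine: pass it along
    # Forced to act while still ambiguous: escalate to a human.
    return {input(f"'{word}' is ambiguous {sorted(senses)}; which sense? ")}

# open_circle("suit", [])         -> all three senses, carried forward
# open_circle("suit", ["judge"])  -> {"lawsuit"}
```

Nothing here is clever, and that's the point: the work an optimizer would spend closing the circle is simply not done until reality makes it free, or a human makes it cheap.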
Using Data to Help Creativity
The next source of antifragility is creativity. Current AI attempts creativity through a big-data version of divergent thinking, a method devised 70 years ago by Air Force Colonel J.P. Guilford. Guilford did respectable work, insofar as he managed to translate some of creativity into computational steps. But because most biological creativity, as later research has shown, runs on neither data nor logic, divergent thinking is far duller in its results than human imagination. It can spam out enormous quantities of "new" works, but those works are locked into mix-and-match recombinations of prior models, so what divergent thinking gains in volume it loses in range.

The practical limits of this data-driven recipe for invention show up in text and image generators such as GPT-4 and ArtBreeder. Because these systems think with historical datasets, they saturate their creations with inherited bias: aiming to produce the next van Gogh, they instead emit composites of every painter who came before. The downstream result of such pseudo-invention is an AI design culture that fundamentally mistakes optimization for innovation: FaceNet's "deep convolutional network" gets hailed as a breakthrough over earlier face-recognition software when it's more of the same brute-force optimization, like retuning a car's torque band for extra power and calling it a revolution in driving.
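To see why mix-and-match can't escape its sources, consider this deliberately stripped-down sketch of recombination-style generation: every "new" work is a weighted blend of stored examples, so the output can never leave the span of the training data. This is my caricature of divergent thinking, not any specific system's code.

```python
import numpy as np

def divergent_generate(training_vecs, rng=None):
    """'Create' by recombining: a random convex blend of old works.

    Whatever weights are drawn, the result stays inside the convex
    hull of the training examples -- novel-looking, never truly new.
    """
    rng = rng or np.random.default_rng()
    vecs = np.asarray(training_vecs, dtype=float)
    weights = rng.dirichlet(np.ones(len(vecs)))  # random mix ratios
    return weights @ vecs  # weighted average of prior models
```

Scale this up to billions of parameters and the blends get subtler, but the geometry doesn't change: the system interpolates; it does not depart.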
The antifragile option is to flip data from a source of ideas into a source of tests. The test is Karl Popper's idea: ninety years ago, in The Logic of Scientific Discovery, he argued that it's more logical to use facts to falsify ideas than to confirm them. Applied to AI, this Popperian flip turns data from a mass producer of mildly novel ideas into a mass eliminator of everything except the wildly novel.
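Here's a sketch of that flip, under the same toy vector representation as above: candidates can come from anywhere (the source doesn't matter), and the corpus's only job is to kill whatever resembles what already exists. The distance metric and threshold are my illustrative assumptions, not Popper's or anyone's published algorithm.

```python
import numpy as np

def popperian_filter(candidates, corpus, min_distance=2.0):
    """Use data to falsify, not to generate: reject any candidate
    that sits too close to an existing work; keep the survivors."""
    corpus = np.asarray(corpus, dtype=float)
    survivors = []
    for c in candidates:
        # Distance to the nearest existing work; small = derivative.
        nearest = np.min(np.linalg.norm(corpus - np.asarray(c), axis=1))
        if nearest >= min_distance:
            survivors.append(c)  # the data failed to kill it: keep
    return survivors
```

The corpus no longer proposes; it only vetoes. Everything derivative dies on contact with the data, and only the genuinely new survives to be judged on other grounds.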
What are your thoughts? Message me through my website.