We will never have true AI, and that's OK (except when it isn't)
10 minute read

We will never have true AI

Since the dawn of computers, prognosticators have held that true artificial intelligence is just around the corner. As the decades have passed, the notion that we’re on the cusp of artificial intelligence has come in and out of fashion, but we never seem to get around that corner. Part of the problem is a moving-goalpost issue. Abilities that seemed like “intelligence” in the 1930s became commonplace for machines by the 1950s, and our cultural understanding of the requirements for artificial intelligence shifted likewise. For decades, playing Chess was the lofty summit that artificial intelligence strived for, and then Deep Blue came along. Now you can carry an artificial Chess player stronger than Deep Blue in your pocket everywhere you go. The path to “true artificial intelligence” meanders past the many false summits we’ve reached: Chess, Go, translation, image classification, audio recognition, and chat bots. We have had to revise the Turing test several times over, because the naive formulation was too easy to pass. Computers have come so far in the last century, but true artificial intelligence is still, as ever, just around the corner. We are never satisfied with the progress we’ve made, and I don’t think we ever will be.

The fundamental issue is that general AI with human-level reasoning is hard. We’ve set objectives like Chess or Go because we felt they were difficult but achievable. They are hard problems for humans, but easy problems for computers. A game like Go, despite its vast space of possible states, is a problem ideally posed for a computer to solve. The inputs and outputs are all discrete and calculable, and the knowledge required to play well is completely self-contained. You can play it perfectly. There is a right answer. That machines have only recently bested humans at a task so uniquely tailored to computers is less a triumph of AI and more an example of how very, very far we are from human-level artificial intelligence.
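To make the “there is a right answer” point concrete, here is a minimal sketch: exhaustive game-tree search over a tiny Nim-like game (players alternate taking 1 or 2 stones; whoever takes the last stone wins). The game is my own illustration, chosen for brevity, but the principle is the same one that applies to Go: when the rules are discrete and self-contained, perfect play is just a (possibly enormous) computation.

```python
# Toy illustration: solving a trivial Nim-like game exactly by exhaustive
# search. Take 1 or 2 stones per turn; taking the last stone wins.
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones):
    # True if the player to move can force a win from this position.
    if stones == 0:
        return False  # no move available: the previous player took the last stone
    # A position is winning if ANY move leaves the opponent in a losing position.
    return any(not wins(stones - take) for take in (1, 2) if take <= stones)

print([n for n in range(1, 10) if not wins(n)])  # losing positions: multiples of 3
```

The memoized search visits each reachable position exactly once. Go is intractable to solve this way only because its position count is astronomical, not because the question is ill-posed.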

Any modern machine intelligence is rooted in a model its programmers included, such as the rules of Go or the task of clustering images. Our algorithms are bounded by these models in an inescapable fashion. The strictness of these programmed models varies. Most machine learning algorithms use a fairly strict paradigm: highly curated input along with some manner of objective. The objective varies from the highly structured supervised learning paradigms (a picture of a dog should output “dog”) to less structured approaches like auto-encoders (a picture of a dog should output the same picture, but we make it challenging to do). In all cases, the bounds of what can be learned are rigidly defined by the chosen input data, the training method, and the chosen model. An algorithm that only ever looks at pictures of dogs will not be able to make sense of touching a porcupine.
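The difference between the two paradigms comes down to the choice of target. A minimal sketch, where every name and “model” is an illustrative stand-in rather than a real training loop:

```python
# Toy illustration of the two objectives described above. Both "models" are
# just callables; the only difference between the paradigms is the target.

def supervised_loss(model, image, label):
    # Supervised learning: the output should match a human-provided label.
    prediction = model(image)
    return 0.0 if prediction == label else 1.0

def autoencoder_loss(model, image):
    # Auto-encoder: the output should match the *input* itself. (A real
    # auto-encoder adds a bottleneck to make this non-trivial; this toy omits it.)
    reconstruction = model(image)
    return sum((a - b) ** 2 for a, b in zip(image, reconstruction))

# A "model" that echoes its input, and one that always answers "dog".
echo = lambda x: x
always_dog = lambda x: "dog"

image = [0.2, 0.7, 0.1]  # stand-in for pixel values
print(supervised_loss(always_dog, image, "dog"))  # 0.0: label matches
print(autoencoder_loss(echo, image))              # 0.0: perfect reconstruction
```

Either way, the loss function only ever asks questions the designer chose to ask — which is exactly the boundedness the paragraph above describes.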

Researchers have tried to broaden these restrictions in various ingenious ways. For example, there is reinforcement learning, where an algorithm is presented with a goal and a set of available actions, but left to figure out on its own a heuristic for selecting actions to achieve that goal. Reinforcement learning can be performed with very open-ended goals, such as “seek out new things you haven’t seen before”. These curiosity-based objectives work surprisingly well, and by their very nature will expose an algorithm to a broader variety of scenarios than more narrow-minded approaches. In the end, even such curious artificial explorers are bounded by the limits of the simulation or task they’re evaluated on. They, too, cannot escape the model they reside within.
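As one hedged illustration of a curiosity-style objective, here is a toy count-based explorer on a small wrap-around grid. The world and the novelty bonus are my own minimal stand-ins, not any specific published method: the “reward” for a move is simply how rarely the resulting state has been visited.

```python
# Toy count-based "curiosity": the agent greedily moves to whichever
# neighbouring cell it has visited least often ("seek out new things").
import random
from collections import defaultdict

def curious_walk(steps=200, size=5, seed=0):
    rng = random.Random(seed)
    visits = defaultdict(int)
    pos = (0, 0)
    for _ in range(steps):
        candidates = []
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            nxt = ((pos[0] + dx) % size, (pos[1] + dy) % size)
            # Sort key: visit count first, random value to break ties.
            candidates.append((visits[nxt], rng.random(), nxt))
        _, _, pos = min(candidates)  # least-visited neighbour wins
        visits[pos] += 1
    return visits

visits = curious_walk()
print(len(visits))  # number of distinct cells the novelty bonus dragged it through
```

Note that however restless this explorer is, it can never learn anything that isn’t a cell on its grid — which is the boundedness argument above in miniature.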

A rudimentary understanding of neuroscience would tell us that humans, too, are bounded by the confines of their mental model. And it’s true, as far as I’m aware. I know I certainly am. Fortunately, we have the benefit of being bounded by a particularly large and complex model: our reality. Our capacity to understand and process reality varies naturally from person to person, but you can’t fault our learning environment for not being sufficiently diverse or complicated. Ultimately, it may be the case that the base requirement for true, human-level AI is none other than the ability to perfectly simulate human-level reality. Such an endeavour quickly becomes paradoxical. If you take as given that it’s possible to have a computer powerful enough to simulate our reality, that simulation must then be capable of, itself, simulating a computer simulating its reality (which, remember, is already simulating our own reality), and the whole thing devolves into a nasty bit of circular recursion.

Free-form inventiveness of the kind humans practice (inventing, for example, music or poetry or painting) requires a sandbox to play and learn in that is simply too big to build on computers on any near or distant horizon. As we pass false summit after false summit of true AI, it is ultimately the human capacity to produce that which has never existed before that is the mark of human intelligence, and computers as we know them today are never* going to attain that lofty height.

* A caveman could not conceive of an iPhone, and neither can we conceive of how our fundamental understanding of our surroundings could shift over millennia. In deference to this fact, my “never” is a soft never. A never that goes to, but not far past, the point at which I and everyone who remembers me are long dead and forgotten.

…and that’s OK…

Just because computers aren’t going to invent painting, that doesn’t mean they can’t learn to paint. The limitation in inventiveness does not mean that AI will not become useful or magical. There are a tremendous number of problems that current AI is poised to solve, or already has. A great preponderance of the work humans do is very similar to Go, in the sense that it can be modeled tidily without simulating the entire universe, and there is a clear metric for success. This varies from straightforward numerical tasks like logistics or finance, to slightly more ambiguous ones like customer service, to fairly arbitrary ones like composing music or writing novels.

True orthogonal inventiveness, where a wholly new type of thing is deliberately created, may be outside the scope of AI, but inventiveness within a model is totally doable for AI. We are marching toward a future rife with this kind of “smart but dumb” AI. Artificial Intelligence that can write you a novel exactly to your tastes, but which only knows about the world through words and the order that words are placed in. Such AI will be very useful, and it’s hard to think of a single aspect of modern existence that couldn’t be touched by and benefit from such a tool. But we must not forget the fundamental limitations of the model the algorithm exists within.

(except when it isn’t)

The inability to step outside of one’s own reality can present serious problems if that reality is not carefully constructed, and if the potential impact of actions outside that reality is not carefully evaluated. An artificial intelligence algorithm has no broader context; its whole existence is confined by the data it receives and the design of its programming. In this way, AI is a tool. Like a hammer, it swings when we swing our arm, and it drives the nail. Like a hammer, it can also hit our thumb. Like a hammer, it can also be a weapon.

Artificial Intelligence has finally broken out of the laboratories. It is compelling and useful, capable of analyzing photos and audio, playing games, and making decisions. But the onus is on us to think very long and very hard about deploying AI anywhere it could negatively impact people’s lives. An app that uses a GAN (a type of algorithm that generates content) to turn selfies into caricature drawings? Cool! But that same algorithm with different data can do much more malicious things, such as replacing clothes with naked bodies in photos of nonconsenting parties. A recently published app did exactly this, and was removed quickly, but it serves as a potent object lesson.

Some might argue that dangerous algorithms should not be published. They should be kept secret by the researchers who develop them, lest they fall into the wrong hands. I think this shifts the blame to the wrong party. The smith forged the hammer to drive nails, and never considered that it might be used for murder. Even if you could foresee misuse, it’s only a matter of time before somebody with fewer scruples decides to publish it or, worse, use it clandestinely. Revealing dangerous new technology to the public becomes a responsibility, to give society the chance to identify new attacks and prepare defenses against them. There is no way to stopper the genie back in the bottle, or to return the lid to Pandora’s box. The best we can do is try to educate and prepare. In truth, these dilemmas are no different from the ones society has been encountering with new technology since the dawn of time. Bad actors have always been present, and will use new tools to do the same old bad things in brand new ways.

The more insidious issue that AI presents is its use with good intentions but poor consideration. This has started popping up more and more lately: police using criminal profiling algorithms trained on racially biased data, or personalized recommenders creating echo chambers that swing elections. In cases like these, nobody is behaving with malicious intent. The police want to stop criminals, which is a foundational principle of our social contract. Social media wants to give you more of what you like, by giving you more of what you’ve previously liked. Everyone is acting with good or neutral intentions, but when you introduce the tool of AI things can go out of balance very quickly. If you use a complicated tool on a complicated problem, it will behave in unanticipated ways.

Some negative effects can be forestalled, even if they are not foreseen. Before incorporating any algorithm into a system, stop to consider the worst-case scenario. If somebody could choose the output of the algorithm that would cause the greatest harm, what would it be? Identify these scenarios, and plan for them where possible. Often, a simple human double-check is sufficient to avert disaster.
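One way to sketch that human double-check: a wrapper that lets the model act autonomously only when a decision is both low-stakes and high-confidence, and escalates everything else to a person. The thresholds, the harm score, and the model interface here are all assumptions for illustration, not a standard API.

```python
# Illustrative guard rail: auto-approve only low-harm, high-confidence
# decisions; route everything else to a human reviewer.

def guarded_decision(model, case, harm_if_wrong,
                     confidence_floor=0.9, harm_ceiling=0.3):
    label, confidence = model(case)
    if harm_if_wrong > harm_ceiling or confidence < confidence_floor:
        return ("escalate_to_human", label, confidence)
    return ("auto", label, confidence)

# A toy model: flags large transactions as fraud, waves small ones through.
toy_model = lambda case: (("fraud", 0.95) if case["amount"] > 1000
                          else ("ok", 0.97))

print(guarded_decision(toy_model, {"amount": 50}, harm_if_wrong=0.1))
print(guarded_decision(toy_model, {"amount": 5000}, harm_if_wrong=0.8))
```

The point is not the particular thresholds but the shape of the system: the algorithm proposes, and for anything that could really hurt someone, a human disposes.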

Unfortunately, even the most skilled doomsayers have blind spots, scenarios that are so far outside their own personal model of reality that they have no chance of predicting them. To avoid these, we can only make sure to tread very, very lightly. I would argue that a rule of thumb is: the less important a thing is, the safer it is to apply AI with little consideration. A fart noise classifier that listens to your farts and tries to predict what you ate previously (without any personal information)? Probably pretty safe to throw the most advanced machine learning at that problem. But anything that brushes up against finance, society, health, or government should be subject to a long period of careful consideration, along with plenty of checks and balances and ongoing evaluation to make sure the AI isn’t doing anything you didn’t expect it to.

Of course, if your AI is doing something you not only didn’t expect, but which it shouldn’t be able to do based on its model of reality… Well, then I get to write an entirely different essay.

tags:  artificial intelligence  machine learning  ai  ml  computers  ethical AI  go  chess