Like the IBM researcher who presented on a joint IBM-MIT project to create smarter AIs, I'm not a fan of the term AI. But don't get me started on naming because, when I did it, I discovered a new rule: the only thing folks will agree on when it comes to a new name is that the poor sap who came up with it is an idiot. Still, to me, something is either intelligent or it isn't, and if you are going to define a class of intelligence, you should probably connect it to its source: human intelligence, animal intelligence, or, in this case, Machine Intelligence.
Our current AI technology level is stupid. We call it Narrow AI, meaning the AI can do one of a limited number of highly defined tasks somewhat autonomously. I say "somewhat" because these systems make mistakes, and we currently lack good tools to fix those mistakes.
Well, there is a well-funded joint effort between IBM and MIT (currently at $240M over ten years, but expected to go much higher) to create a far smarter AI. That should be relatively easy: current AIs could generally be outperformed, significantly, by a motivated 5-year-old child (in terms of the quality of the decision, not decision speed).
Three things caught my eye during this presentation. First, we currently lack a good AI error-correcting process. Second, we don't have a good defense against someone who wanted to recreate the Terminator movie in real life. Third, Deep Learning AIs (which used to be called neural networks) can't answer even relatively simple questions about complex objects and images.
Let’s talk about the state of fixing these things this week.
AI Error Correction
Right now, if an AI makes a mistake, say a Tesla driving into a median at speed, there isn't a good way to figure out why it made that mistake. You can see what happened, but neither machine learning nor deep learning provides a good way to analyze the error. Think about that the next time you turn on Tesla Autopilot.
This lack of error-correcting capability is one of the reasons running lots and lots of simulations is critical to the current state of the art: you want those mistakes made and mitigated before they translate to the real world. But the process of correcting these mistakes often means recreating the AI from scratch. That practice would be like tossing out your kid because he flunked algebra.
IBM and MIT are working on a process that blends Symbolic AI with more traditional approaches (machine learning and deep learning) to create not only a forensic audit trail that can be analyzed but also ever simpler ways to correct problems without having to start over.
This error-correction capability could massively reduce the amount of simulation time needed because the AI developer can focus on the specific problem, avoid regenerating the AI, and be far less concerned that the fix created new problems. IBM didn't provide numbers, but it should massively reduce the time it currently takes to develop a working AI. Part of that reduction comes from the fact that Symbolic AIs have proven to be more accurate while requiring only about 1% of the training data a Deep Learning AI needs.
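To make the forensic audit trail idea concrete, here is a minimal sketch (my illustration, not IBM's actual system): every decision records which symbolic rule fired, so a bad outcome can be traced back to one rule and fixed in place, instead of regenerating the whole model. All rule names and features here are hypothetical examples.

```python
from dataclasses import dataclass, field

@dataclass
class AuditedClassifier:
    # Each rule is (name, predicate, label); rules here are hypothetical.
    rules: list
    trail: list = field(default_factory=list)

    def classify(self, features: dict):
        # Apply rules in order; log which one fired so mistakes are traceable.
        for name, predicate, label in self.rules:
            if predicate(features):
                self.trail.append((features, name, label))
                return label
        self.trail.append((features, "default", "unknown"))
        return "unknown"

    def explain(self, index: int) -> str:
        # The forensic part: reconstruct exactly why a past decision was made.
        features, rule, label = self.trail[index]
        return f"decision {index}: rule '{rule}' produced '{label}' for {features}"

rules = [
    ("tall_and_rigid", lambda f: f["height"] > 10 and f["rigid"], "building"),
    ("has_foliage",    lambda f: f["foliage"], "tree"),
]
clf = AuditedClassifier(rules)
print(clf.classify({"height": 30, "rigid": True, "foliage": False}))  # building
print(clf.explain(0))  # names the rule that produced the decision
```

If the classifier mislabels something, the trail points at the single rule to adjust; nothing else needs retraining, which is the cost saving the article describes.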
This error-correcting process alone could be a game-changer.
AI Defense
Now I'm not talking about the Lifeboat Foundation's AI Shield effort, which focuses on creating a defense in case someone builds something like Skynet, but rather an effort to make sure no one can turn your AI against you.
If you think about a future where everything from our homes and cars to our cities and aircraft is "smart," the idea that one of the related AIs could become hostile should keep more of us awake at night than it currently does.
MIT and IBM are working to create AI attacks, not so they can turn AIs against their owners, but so they can test the defenses of the AIs they develop and make sure those AIs don't go rogue. I'm pretty sure none of us wants to live in a world of rogue AIs, but if you want to see what that world might look like, I suggest the book Robopocalypse.
With state-level attackers out there and AIs increasingly in control of critical systems, IBM and MIT's efforts are critical to making sure we don't end up in a world where a critical mass of AIs is working against us.
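The attack-to-test-the-defense idea can be sketched in a few lines. This toy example (my illustration, not the MIT-IBM testbed) uses the intuition behind the fast gradient sign method: nudge every input feature slightly in the direction that most pushes a model toward the wrong answer, then check whether the model's decision flips. The weights and inputs are made up for the demo.

```python
def score(w, x):
    """Toy linear model: positive score means 'safe', negative means 'unsafe'."""
    return sum(wi * xi for wi, xi in zip(w, x))

def fgsm_attack(w, x, epsilon):
    """Perturb each feature by epsilon against the sign of its weight,
    the direction that most lowers the score (FGSM intuition)."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - epsilon * sign(wi) for wi, xi in zip(w, x)]

w = [0.5, -0.25, 1.0]   # hypothetical learned weights
x = [0.2, 0.1, 0.1]     # an input the model correctly calls 'safe'
adv = fgsm_attack(w, x, epsilon=0.2)

print(score(w, x))    # positive: classified 'safe'
print(score(w, adv))  # the small perturbation flips the score negative
```

A defender would run exactly this kind of attack against their own AI and then harden it (or reject suspicious inputs) until the flip no longer happens, which is the red-team role the article ascribes to the MIT-IBM work.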
Making Smarter AIs
AIs today are stupid. Yes, they can handle a massive amount of data very quickly, but their ability to make decisions from that data is surprisingly limited. For instance, given a picture of mixed objects like trees and buildings, if you were to ask whether there are more trees than buildings, the AI would be flummoxed because it looks at a picture as a single cohesive object. It can delineate a building from a tree, but reasoning about what it sees is generally beyond its skill set.
Fixing this shortcoming brings us back to Symbolic AI, which breaks the image down into components, converts that information into language, and then creates a program from that language which defines the objects and, most importantly, allows for detailed analysis.
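The pipeline described above can be sketched in miniature (my illustration, not MIT-IBM's actual system): the image is reduced to a symbolic scene, just a list of labeled objects from a hypothetical detector, and the question "are there more trees than buildings?" is compiled into a tiny program of filter-and-count steps that is then executed, answering exactly the kind of question a plain deep-learning model is flummoxed by.

```python
# Hypothetical detector output: the image reduced to symbolic components.
scene = [
    {"kind": "tree"}, {"kind": "tree"}, {"kind": "tree"},
    {"kind": "building"}, {"kind": "building"},
]

def count(scene, kind):
    """Program step 1: filter the scene by object kind and count."""
    return sum(1 for obj in scene if obj["kind"] == kind)

def more_than(scene, kind_a, kind_b):
    """Program step 2: 'Are there more <kind_a> than <kind_b>?' as a
    comparison over two counts, executable and fully inspectable."""
    return count(scene, kind_a) > count(scene, kind_b)

print(more_than(scene, "tree", "building"))  # True: 3 trees vs. 2 buildings
```

Because the answer is produced by an explicit little program rather than an opaque network, each step can be inspected, which is the "detailed analysis" benefit the article points to.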
With this, the AI can become far smarter and, in an increasing number of cases, substantially more useful.
IBM researchers believe we are around 50 years away from our first General AI, one that can perform like a human. While this should give us comfort that many of our jobs may be safe for our lifetimes, be aware that 50-year predictions are the equivalent of saying "we have no clue how long this will take," and things continue to move far faster in this area than earlier forecasts (outside of science fiction) predicted.
But IBM and MIT are making real progress, and while I wonder whether they shouldn't also be thinking about an AI defense like AI Shield, because not everyone will be using an IBM AI, we'll likely owe much of our smart future to this critical effort.