Good Thinking

One Big Question: Why is artificial intelligence still kind of dumb?

While still impressive, AI has a long way to go to rival the human brain
Once the domain of science-fiction authors and script writers, artificial intelligence is steadily marching into the real world. Recently we've seen the technology do everything from reading lips better than the experts to playing in a poker tournament and trouncing its human competition.

But while it seems everyone is jumping on the AI bandwagon (or should that be futuristic car?), we were wondering just how advanced the technology really is. So we put a question to Igal Raichelgauz, the founder of Cortica, an image-recognition company that relies on AI technology to thrive: Why is artificial intelligence still kind of dumb?

Here's what he had to say.

AI will never reach "full" intelligence, because there is no practical limit to the information it can take in. While human beings are constrained by our biological hardware and the slow design process of evolution, AI can theoretically continue to scale and gain intelligence without bound. The real limit on artificial intelligence's "intelligence" is our ability to produce it. That said, the intelligence of today's AI across multiple tasks is far inferior to that of humans and other biological organisms.

For example, despite the significant progress of deep learning over the last five years, these systems have not come close to a person's ability to understand images. AI systems produce unintelligent false positives, are unable to understand contextual information, and are not sufficiently granular. There are, of course, other tasks such as number crunching, playing chess, and mastering Go where today's AI beats human capacity. But the fact remains that AI falls short in the most trivial of tasks for humans – interacting with the physical world and perceiving natural signals – indicating that AI systems are simply powerful computing machines with a misleading title.

For AI to achieve human-level intelligence, the most important milestone is to excel at the fundamental tasks where human beings have been excelling for thousands of years. Visual understanding and the ability to navigate the physical world intelligently, therefore, are better benchmarks for this milestone than playing poker. Matching human-level intelligence in these kinds of natural tasks will bring AI to the threshold of surpassing our intelligence.

To understand the gap that still stands in the way of reaching this milestone, we must look at the differences between biological systems and deep-learning technology.

Creators of machine-learning AI boast that machines can learn and process data on their own. But in actuality, machine-learning technologies follow a top-down approach that prohibits them from doing anything on their own.

In top-down architectures, the system first undergoes training: its algorithm is developed and shown enormous labeled data sets, and only then can it apply that knowledge to new data. Deep-learning systems are fed labeled training data – and told when they get the answer right – until they can successfully produce the desired outputs for data they haven't seen. These machines are built from algorithms with numerous layers that process data at many levels of abstraction. Top-down systems have accomplished great feats, but their reliance on training makes them complex machines, not intelligent ones.
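The top-down loop described above – show labeled examples, correct the machine when it's wrong, repeat until it extrapolates – can be sketched very loosely as a toy perceptron. This is an illustration only, not Cortica's method or a real deep network; all names and the training task (the logical AND function) are hypothetical choices for the sketch.

```python
# Toy illustration of top-down supervised learning: a single perceptron is
# shown labeled examples and nudged toward the correct label after each
# mistake, until it reproduces the rule. Names and task are illustrative.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with labels 0 or 1."""
    w1 = w2 = bias = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            predicted = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
            error = label - predicted       # "told when it gets the answer right"
            w1 += lr * error * x1           # nudge each weight toward the label
            w2 += lr * error * x2
            bias += lr * error
    return w1, w2, bias

# Labeled training set: the logical AND of two binary inputs.
labeled_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(labeled_data)
predict = lambda x1, x2: 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in labeled_data])  # → [0, 0, 0, 1]
```

Note that every step depends on the labels a human supplied up front – which is exactly the dependence the article is criticizing.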

Machine mimics

Most of the learning that humans do happens without supervision. From the day they're born, children learn to navigate the world by constantly absorbing the plethora of information to which they're exposed and learning to make sense of it. To possess the broadest definition of intelligence, machines must mimic the way humans learn and understand – bottom-up. Without training, parameters, or data sets, their algorithms and structures will be able to absorb data, process it, and build their own functions for understanding it. Following logical learning patterns, intelligent machines are those that can understand and learn by generalizing, contextualizing, and utilizing creativity on their own.
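As a loose analogy for that bottom-up idea, an unsupervised algorithm like k-means can discover groupings in data that carries no labels at all. This sketch is a deliberately minimal stand-in, not the system the author describes; the data and function names are invented for illustration.

```python
# Toy illustration of unsupervised (bottom-up) learning: k-means groups
# unlabeled points into clusters with no training labels, discovering the
# structure on its own. A simple 1-D analogy only; real systems are richer.

def kmeans_1d(points, iters=10):
    centers = [min(points), max(points)]  # simple initialization for 2 clusters
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # move each center to the mean of its cluster (keep it if cluster empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]        # no labels attached to anything
centers, clusters = kmeans_1d(data)
print(sorted(round(c, 2) for c in centers))   # → [1.0, 9.07]
```

No one told the algorithm which points belong together; the two groups emerge from the data itself, which is the spirit of the bottom-up approach described above.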

Before co-founding Cortica, I joined a team of neuroscientists and engineers at the Technion – Israel Institute of Technology, aiming to understand how the cortex functions and to engineer a machine that could mimic the process. While our instinct was to throw as much computational power as possible at such a complicated problem, the decision to limit the number of layers in our system proved key to building unsupervised-learning and computer-vision AI.

Today, Cortica's system can differentiate objects on its own, create clusters of similar data, and then label them using information that already exists on the web. The onset of truly intelligent technology will fundamentally improve the way we drive some of the most important technological innovations and will allow us to leverage the seemingly endless amount of visual data that exists.

If AI is developed to mimic human processes, then it could indeed surpass human intelligence. There is no reason to prevent it from developing its own superior AI, but we must give it the correct architecture to do so.

9 comments
Daishi
I've been a vocal critic of legs on robots as a mobility platform for as long as I can remember, and everyone was sure I'd be eating my words about it long before now. It turns out copying humans as a method of mobility is good science fiction but not very good science. I think the same thing mostly applies to how we approach AI. There's a view that if we try to replicate how humans think, it's just going to start learning, become self-aware and then surpass us, but that idea is about as science fiction as legs on robots.
ZachariahE.Nelson
Whenever I see an article like this I think about Stranger In A Strange Land by Robert Heinlein. *spoilers*. There was a scene where Michael Valentine (a human raised by Martians, who had no idea what sarcasm or irony or humor meant to humans, because those aren't things for Martians) was at a zoo watching some animals interact, and when he saw them do something illogical, he burst out laughing for the first time in his life. Even a logical, child-like, machine-minded human finally saw the illogical rationale of humanity in this ironic interaction between some monkeys. And from that point on he started acting "human" for the first time without any help.
I think it's when machines can finally comprehend the rationale of our illogical behavior that they'll start acting like humans, because that seems to be the main thing separating us from other nature-driven animals. We are in a position where we can make mistakes – often willingly – knowing our lives will not be threatened, so we make light of the situation and don't take meaningful steps until it becomes something we perceive as life-threatening.
P51d007
Kind of simple in reality...because they are programmed by humans ;)
Fabrizio
It is not true that machines only mimic humans. Supervised learning is an important part of the learning process, but not the only one. Unsupervised learning, and especially reinforcement learning, are increasingly breaking ground in practical applications, where machines are able to find innovative solutions to existing problems. Learning from a teacher is always a good first step, but the creativity comes out later on, when the brain (either biological or artificial) starts thinking on its own.
PeteMorrison
Until you have a deep neural net which can robustly re-program its own feedback mechanisms, you can't really have human-like unsupervised learning. I don't think we have the computational headroom yet to allow that for a reasonably large network (presumably you'd have to allow it to reallocate its own resources, too). With quantum computing being not too far away, though, we may well soon have that capacity, even with something like a neural net that becomes exponentially more complex with size.
Also, we would need to rethink the programmer's paradigm: a neural net is only as smart as its goal, which at the moment is written by programmers. If you want truly biomimetic learning, I think you would have to allow some fuzzy logic into the goal fulfilment end of things. At that point, you would have to be very careful about what information you provide to such a program. The good news is it'll most likely just find a way to "cheat", i.e. to goal-complete by a trivial solution, in a way not anticipated by the programmer.
neutrino23
Not to be too negative about AI, but most of the predictions about it are based on wishful thinking, not science. We have this notion that somehow if a computer has enough transistors, enough memory it will spontaneously become conscious. Expanding the amount of data available doesn't necessarily improve things. Lots of problems become intractable as you increase the number of degrees of freedom.
AI is based on algorithms and we as yet have no algorithm for sensation or experience. A machine can measure the spectrum of light from the sky with great precision, but even an illiterate child can experience its blueness, which a machine can't.
I'm not saying that we will never be able to mimic human intelligence in a machine or that AI is not useful now. It is just that we don't know the fundamentals of how we think so we can't yet copy that in our machines.
Theo Prinse
It remains to be seen whether non-human artificial intelligence will be able to develop its own superior AI, or even surpass human intelligence by definition. AI, however, will integrate with the human brain. Quantum computing will accelerate scientific research such as AI. Perhaps human intelligence might be revolutionized by integrated quantum-computing implants. There will always be a hierarchy of command. Who is going to command and judge whom? Humans are on the brink of traveling to the stars at relativistic and superluminal instantaneous speeds, and will procreate and panspermiate with stem-cell-cultivated artificial uteruses. Mankind is on the brink of gaining eternal life.
Ralf Biernacki
Several times in the interview, Mr. Raichelgauz speaks of AI surpassing human intelligence as if this was the ultimate and desirable goal of AI research. Coming as it is from one of the top active researchers in the field, I find this deliberately irresponsible attitude very worrisome.
Coca1
What I think would be fun would be having artificial intelligence start my day by telling me about my health. I don't want AI nagging me not to have a beer, but it could encourage me to exercise, check my weight, check my blood pressure – the health list is endless. Make AI into something I can carry with me as I go through my day: something like a smartphone, only a little more attractive, monitoring without requiring that I enter when I have a beer or do a few chin-ups. This is just my thought on where I would like AI to go. AI could help me remember your name when I meet you, and help me remember appointments and other commitments. Maybe AI will LEARN to navigate the world as AI helps me navigate my simple life.