Those who saw IBM’s Watson defeat former winners on Jeopardy! in 2011 might be forgiven for thinking that artificially intelligent computer systems are a lot brighter than they are. While Watson was able to cope with the highly stylized questions posed during the quiz, AI systems are still left wanting when it comes to common sense. Indeed, researchers have now found that one of the best available AI systems has only the average IQ of a four-year-old.
To see just how intelligent AI systems are, a team of artificial and natural knowledge researchers at the University of Illinois at Chicago (UIC) subjected ConceptNet 4 to the verbal portions of the Wechsler Preschool and Primary Scale of Intelligence Test, which is a standard IQ test for young children. ConceptNet 4 is an AI system developed at MIT that relies on a commonsense knowledge base created from facts contributed by thousands of people across the Web.
While the UIC researchers found that ConceptNet 4 is on average about as smart as a four-year-old child, the system performed much better at some portions of the test than others. While it did well on vocabulary and in recognizing similarities, its overall score was brought down dramatically by a bad result in comprehension, or commonsense “why” questions.
“If a child had scores that varied this much, it might be a symptom that something was wrong,” said Robert Sloan, professor and head of computer science at UIC, and lead author on the study. “We’re still very far from programs with common sense – AI that can answer comprehension questions with the skill of a child of eight.”
Sloan says AI systems struggle with commonsense because it relies not only on a large collection of facts, which computers can access easily through a database, but also on obvious things that we don’t even know we know – things that Sloan calls “implicit facts.” For example, a computer may know that water freezes at 32° F (0° C), but it won’t necessarily know that it is cold, which is something that even a four-year-old child will know.
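The gap Sloan describes can be illustrated with a toy sketch (this is not ConceptNet's real API – the fact store and `ask` function are invented for illustration): a database of explicit facts answers exactly what it was told and nothing more, so the implicit inference a child makes automatically simply isn't there.

```python
# Toy illustration (not ConceptNet's actual interface): a store of
# explicit facts can only answer what was entered into it, while the
# "implicit fact" that freezing water feels cold was never written down.
facts = {
    ("water", "freezes_at_f"): 32,
    ("water", "boils_at_f"): 212,
}

def ask(subject, relation):
    """Return a stored fact, or None when the knowledge is only implicit."""
    return facts.get((subject, relation))

print(ask("water", "freezes_at_f"))  # 32 -- explicit fact, stored directly
print(ask("ice", "feels"))           # None -- "ice is cold" was never entered
```

A four-year-old fills that `None` gap effortlessly; the database cannot.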
“All of us know a huge number of things,” says Sloan. “As babies, we crawled around and yanked on things and learned that things fall. We yanked on other things and learned that dogs and cats don’t appreciate having their tails pulled.”
Sloan and his colleagues hope their study will help identify areas for AI research to focus on to improve the intelligence of AI systems. They will present their study on July 17 at the US Artificial Intelligence Conference in Bellevue, Washington.
Source: UIC
That is far ahead of the #2 system, but we are mostly on the cusp of having a supercomputer that surpasses the brain in data processing. That is not to say that programming human intelligence into a supercomputer with over 3 million processors is a trivial task, but we are pretty close to what will be a huge milestone on the way to victory for the machines.
If it's set up like Watson, it's basically a search engine. Enter a question, get an answer from the top of the list of search items found.
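The "search engine" point can be sketched in a few lines (a deliberately naive caricature – Watson's actual pipeline is far more elaborate): score each candidate answer by word overlap with the question and return the top of the list.

```python
# Naive retrieval-style QA sketch: rank candidate sentences by how many
# words they share with the question, then return the best-scoring one.
def score(question, candidate):
    q = set(question.lower().split())
    c = set(candidate.lower().split())
    return len(q & c)  # crude relevance: count of shared words

def answer(question, candidates):
    return max(candidates, key=lambda c: score(question, c))

docs = [
    "Water freezes at 32 degrees Fahrenheit",
    "The capital of France is Paris",
]
print(answer("At what temperature does water freeze?", docs))
# -> "Water freezes at 32 degrees Fahrenheit"
```

Nothing here understands the question; it just surfaces the most lexically similar item, which is the commenter's point.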
Tell it "build me a tower of blocks" and you'll wait forever, because it's no more capable of doing that than your washing machine is.
Then again, in the real world, anthropomorphism isn't a very good measure of the usefulness of an AI system. Watson does what it is designed to do very well.
Learning is a collection of events with interdependencies among them, sorted and categorized into hierarchies governed by external factors (pain, reward, curiosity) as well as past outcomes.
What you really need is a hugely wide bus (2048-4096 bit), with lots of memory and thousands of small analogue logic cores with 12-16 bit A/Ds for their interface to the bus. Each small core needs to hold a few kilobytes of memory, and need only operate at a few MHz. Their internal states and I/O tendencies are in a state of flux dependent on neighboring cores. A specific master core (transaction ASIC) would use the ultra-wide bus to simulate a hyper-connected matrix where the analogue cores believe they are in a many-to-many topology.
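The master-mediated topology described above can be mocked up in software (all names here are invented for this sketch, not part of any real design): a "master" broadcasts every core's state so that each simple core behaves as if it were wired directly to all the others.

```python
# Toy simulation of a master core mediating a many-to-many topology:
# each core only sees what the master broadcasts, yet acts as if it
# were directly connected to every other core.
import random

class Core:
    def __init__(self, ident):
        self.ident = ident
        self.state = random.random()  # stand-in for an analogue state

    def step(self, neighbor_states):
        # Drift halfway toward the mean of what the core hears on the bus.
        mean = sum(neighbor_states) / len(neighbor_states)
        self.state += 0.5 * (mean - self.state)

class Master:
    """Simulates the ultra-wide bus: broadcasts every state to every core."""
    def __init__(self, n):
        self.cores = [Core(i) for i in range(n)]

    def tick(self):
        snapshot = [c.state for c in self.cores]  # one bus cycle
        for c in self.cores:
            others = [s for i, s in enumerate(snapshot) if i != c.ident]
            c.step(others)

m = Master(8)
for _ in range(20):
    m.tick()
spread = max(c.state for c in m.cores) - min(c.state for c in m.cores)
print(f"state spread after 20 ticks: {spread:.6f}")  # shrinks toward 0
```

The cores converge because each one averages against the whole population – the master makes a physically star-shaped layout look fully meshed, which is the trick the comment is proposing in hardware.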
Probably best achieved with an array of modified FPGAs. Unfortunately the FPGAs would have to somehow be modified so that each cause-and-effect event is written to an externally attached core.
The only issue is that off-the-shelf FPGAs cannot be accessed while being written to. Back in earlier years I suggested that the best approach would have been a cascaded FPGA where a master unit would handle connections between sub-units that in themselves would be capable of re-writing a third layer. In this way the "brain" layer would not see the other levels of abstraction, thereby allowing topology changes to take place transparently. This kind of architecture can in principle be flattened into a 3D matrix of interconnecting mesh, where each interconnect is a small analogue processor, a fast A/D and a small bit of volatile memory buffer to hold the last state. This architecture has its drawbacks, but could achieve a limited version of the above on a die with only power and external inputs/stimuli for interface.
I suggest you read Doug Hofstadter's "I Am A Strange Loop" to appreciate the role that recursion and self-reference play in a true AI.
Or you can read my SF work, "Pa'an", due out shortly.