John Parkes
It's impossible to simply create an AI...one must develop an AI and give it the same years to learn and explore the world as a human if we expect it to become a truly autonomous learning being. Children are progressively allowed more freedoms as they learn and gain experience; AIs probably won't experience that same freedom to explore and be 'off the leash' as humans are, simply out of fear. We are nowhere near ready to create a true AI as far as hardware goes; physical autonomy is just not practical yet. It is coming, though, and very rapidly. We lack energy production and power storage technology; by the time individual homes are powered internally without outside resources, we should have the technical ability in other areas to create what we envision as AI.
We need another 200 years, not just for hardware, but for humanity to move past our current reliance on economy, societal issues, and capitalism. Those three are very much in the way of our progress. When education is freely available to even the poorest, regardless of ability to earn good grades or pay for it, we will see the most amazing advancements in our history...and they will come from the most unlikely sources we can imagine today. Even now our brightest minds are working to develop products for corporations to maintain our reliance on an economic system that hinders advancement: we have scientists without funding, engineers without creative license, designers and thinkers selling trinkets and advertising instead of changing the world.
I read science fiction and see in that fiction worlds where everyone eats without economic impact, where healthcare is without cost, where people are free to follow dreams and work on what they have a passion for. It's not real, but it could be. It's not possible in how we think and operate in our world, but it could be. Sure, it's fantasy, and I know we can't simply change the world in one or even five lifetimes, but it could be done.
I see so much potential for humanity, and then see poor children living in slums with no hope of an education and mourn the loss of that child's potential.
Daishi
In 2012, IBM estimated that a human brain can process 36.8 petaflops of data. The fastest supercomputer in the world at the time of that estimate ran at 16.32 petaflops. The top supercomputer on this year's Top500 list from June can process 33.86 petaflops.
That is far ahead of the #2 system, and we are now on the cusp of having a supercomputer that surpasses the brain in raw data processing. That is not to say that programming human intelligence into a supercomputer with over 3 million processor cores is a trivial task, but we are pretty close to what will be a huge milestone on the way to victory for the machines.
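For scale, the gap described by those figures is small enough to check in a couple of lines. A quick sketch using only the numbers quoted above (the machine name Tianhe-2, the #1 system on the June 2013 Top500 list, is added here for context):

```python
# Comparing the figures quoted above. One petaflop = 10**15
# floating-point operations per second, so "petaflops/s" is redundant.
brain_pflops = 36.8         # IBM's 2012 estimate for the human brain
top500_2012_pflops = 16.32  # fastest machine at the time of that estimate
top500_2013_pflops = 33.86  # #1 on the June 2013 Top500 list (Tianhe-2)

ratio = top500_2013_pflops / brain_pflops
print(f"Top machine is at {ratio:.1%} of the brain estimate")
```

By this estimate the top machine sits at roughly 92% of the brain's throughput, which is the "cusp" the comment refers to.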
Jon A.
It's more like it lacks any type of actual human intelligence.
If it's set up like Watson, it's basically a search engine. Enter a question, get an answer from the top of the list of search items found.
Tell it "build me a tower of blocks" and you'll wait forever, because it's no more capable of doing that than your washing machine is.
Then again, in the real world, anthropomorphism isn't a very good measure of the usefulness of an AI system. Watson does what it is designed to do very well.
SciFi9000
You don't just create AI for the sake of AI, you create it to achieve a task... the more dynamic the task, the more complex the AI. Even our own brain: isolate it from the world and it withers and dies. It needs tasks, and tasks are what drive intelligence to develop (it was the complex hand that needed a large brain to work it). As we create machines to function in more dynamic and complex tasks, we will develop the software to deal with those tasks and they will get 'smarter'. Each lesson learned will then be applied to other systems, and so on. The old 'brain in a box' of sci-fi is of no purpose... what will drive AI is setting machines off in a dynamic environment and having them cope with that environment to achieve a goal. As for John's comments above... while I agree, I have to say they're a bit off topic.
Ricky Hall
Oh, but AI for the sake of AI is indeed the goal of many. But what would a machine with AI want? Would it too become egocentric, wanting to ensure its own survival above all else? Of course, this is the very nature of intelligence. Altruism is an ill-defined trait, poorly demonstrated anywhere yet on earth. If we were able to program our societal structures around such compassionate values, we would then be ready to truly endorse an advancement of AI systems that would by their nature have access to our most destructive inventions. But if society were smart enough to be truly compassionate, would we then need AI, or just quick access to real facts?
Nairda
The architecture has to be changed to support human/animal like associative memory.
Learning is a collection of events with inter-dependencies to other events, sorted and categorized between each other in hierarchies governed by external factors (pain, reward, curiosity) as well as past outcomes.
What you really need is a hugely wide bus (2048-4096 bit), with lots of memory and thousands of small analogue logic cores with 12-16 bit A/Ds for their interface to the bus. Each small core needs to hold a few kilobytes of memory and need only operate at a few MHz. Their internal states and I/O tendencies are in a state of flux dependent on neighboring cores. A specific master core (a transaction ASIC) would use the ultra-wide bus to simulate a hyper-connected matrix where the analogue cores believe they are in a many-to-many topology.
Probably best achieved with an array of modified FPGAs. Unfortunately, the FPGAs would have to somehow be modified so that each cause-effect event is written to an externally attached core.
The only issue is that off-the-shelf FPGAs cannot be accessed while being written to. Back in earlier years I suggested that the best approach would have been a cascaded FPGA, where a master unit would handle connections between sub-units that in themselves would be capable of re-writing a third layer. In this way the "brain" layer would not see the other levels of abstraction, thereby allowing topology changes to take place transparently. This kind of architecture can in principle be flattened into a 3D matrix of interconnecting mesh, where each interconnect is a small analogue processor, a fast A/D, and a small bit of volatile memory buffer to hold the last state. This architecture has its drawbacks, but could achieve a limited version of the above on a die with only power and external inputs/stimuli for interface.
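The neighbor-coupled mesh is easier to picture with a toy software model. The grid size, coupling constant, and update rule below are purely illustrative assumptions, not the hardware design described above; the sketch only demonstrates the one property the comment emphasizes, that each core's internal state is in flux dependent on its neighbors:

```python
import random

SIZE = 8  # illustrative 8x8 grid of tiny "cores"

def neighbours(i, j):
    """4-connected neighbours of cell (i, j), clipped at the grid edge."""
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < SIZE and 0 <= nj < SIZE:
            yield ni, nj

def step(grid, coupling=0.25):
    """One synchronous update: nudge each core toward its neighbours' mean."""
    new = [row[:] for row in grid]
    for i in range(SIZE):
        for j in range(SIZE):
            ns = list(neighbours(i, j))
            mean = sum(grid[ni][nj] for ni, nj in ns) / len(ns)
            new[i][j] += coupling * (mean - grid[i][j])
    return new

random.seed(0)
grid = [[random.random() for _ in range(SIZE)] for _ in range(SIZE)]
initial_spread = max(map(max, grid)) - min(map(min, grid))

for _ in range(50):
    grid = step(grid)

spread = max(map(max, grid)) - min(map(min, grid))
print(f"state spread: {initial_spread:.3f} -> {spread:.3f}")
```

Running it shows the spread of states shrinking as the cores pull each other toward agreement; in the real proposal the "nudge" would be an analogue interaction routed through the master core, not a simple averaging rule.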
Ken Brody
Common sense requires a viewpoint, a personal referent - an aspect of consciousness. Without such self-awareness, there is no place to "stand" to see how things relate to you, and all you will get is a fancy search engine.
I suggest you read Doug Hofstadter's "I Am A Strange Loop" to appreciate the role that recursion and self-reference play in a true AI.
Or you can read my SF work, "Pa'an", due out shortly.
Kate Gladstone
It sounds like me at age four. (I have Asperger's Syndrome.) Should someone tell the researchers that they have invented the world's first non-human Aspie?
Mike Brown
Well, first of all...this is to be expected. It's early on in the game. As the person mentioned above, the more dynamic the task, the more the AI will develop. We are in an exponential phase. Yes, in 2013 it's going to be rudimentary...give it 15 years at the rate of exponential increase and we will have something more reasonable then.
Routy
Or less "reasonable". Do you watch much Sci-fi?