26 December 2009

Another Prediction for Human-Level AI: 2025!

Today's AI prediction (via Josh Hall) comes from Shane Legg, whose PhD thesis is entitled Machine Superintelligence (PDF). Shane predicts the arrival of human-level Artificial Intelligence (HL-AI) sometime between 2025 and 2028, with a 90% confidence interval of 2018 to 2036.

Shane begins his discussion by predicting that computers capable of 10^18 flops will be commonplace by 2020. Then he says something interesting about claims of quantum effects in the brain, and about the relationship between computer power and AGI:
I had a chat to a quantum physicist here at UCL about the recent claims that there is some evidence for this. He’d gone through the papers making these claims with some interest as they touch on topics close to his area of research. His conclusion was that it’s a lot of bull as they make assumptions (not backed up with new evidence) in their analysis that essentially everybody in the field believes to be false, among other problems.

Conclusion: computer power is unlikely to be the issue anymore in terms of AGI being possible. _VettaProject
Just as I was starting to think that Shane might have the beginnings of a grip on the AGI problem, he says this:
The main question is whether we can find the right algorithms.
The "algorithm mindset" is a curse that computer science oriented AGI researchers are particularly prone to. It is one reason it has taken so long to begin to understand intelligence, and also one of the main reasons that the flock of optimistic predictions for AI (by 2020, 2025, etc) are likely to be wrong. Shane then has some interesting comments on the idea of the brain as an AGI machine:
At a high level what we are seeing in the brain is a fairly sensible looking AGI design. You’ve got hierarchical temporal abstraction formed for perception and action combined with more precise timing motor control, with an underlying system for reinforcement learning. The reinforcement learning system is essentially a type of temporal difference learning though unfortunately at the moment there is evidence in favour of actor-critic, Q-learning and also Sarsa type mechanisms — this picture should clear up in the next year or so. The system contains a long list of features that you might expect to see in a sophisticated reinforcement learner such as pseudo rewards for informative queues, inverse reward computations, uncertainty and environmental change modelling, dual model based and model free modes of operation, things to monitor context, it even seems to have mechanisms that reward the development of conceptual knowledge.
This is an example of trying to view the brain as a type of computer, optimised to run machine learning algorithms. Although that view is wrong, some fruitful ideas may well come from the enterprise of holding the brain up on one side and an advanced machine learning platform on the other -- like two large mirrors facing each other -- and seeing what sorts of iterative and recursive reflections bounce out into the real world.
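Shane's reference to temporal-difference mechanisms may be easier to follow with a concrete example. Below is a minimal sketch (not from his article) contrasting the Q-learning and Sarsa update rules he mentions, run on a made-up five-state chain; the environment, parameter values, and helper functions are illustrative assumptions only.

```python
# Minimal sketch of two temporal-difference update rules: Q-learning (off-policy)
# and Sarsa (on-policy). The tiny chain environment and all parameters are
# illustrative assumptions, not anything from Shane's article.
import random

N_STATES, ACTIONS = 5, [0, 1]          # states 0..4; action 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Move along the chain; reaching the right end pays a reward of 1."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    return nxt, 1.0 if nxt == N_STATES - 1 else 0.0

def epsilon_greedy(q, state):
    """Mostly pick the highest-valued action, sometimes explore at random."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def train(rule, episodes=200):
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        a = epsilon_greedy(q, s)
        for _ in range(50):
            s2, r = step(s, a)
            a2 = epsilon_greedy(q, s2)
            done = (s2 == N_STATES - 1)
            if rule == "q-learning":
                # Bootstrap on the best available next action (off-policy).
                target = r if done else r + GAMMA * max(q[(s2, b)] for b in ACTIONS)
            else:
                # Sarsa: bootstrap on the action actually taken (on-policy).
                target = r if done else r + GAMMA * q[(s2, a2)]
            q[(s, a)] += ALPHA * (target - q[(s, a)])
            s, a = s2, a2
            if done:
                break
    return q

print("Q-learning value of moving right from start:", train("q-learning")[(0, 1)])
print("Sarsa value of moving right from start:     ", train("sarsa")[(0, 1)])
```

Both rules nudge a value estimate toward a "target" built from the reward plus a discounted estimate of what comes next; the only difference is which next action they bootstrap on, which is the distinction Shane alludes to when he says the neuroscience evidence has not yet settled between actor-critic, Q-learning, and Sarsa-like mechanisms.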

Here is the best part of Shane's article:
The really tough nut to crack will be how the cortical system works. There is a lot of effort going into this, but based on what I’ve seen, it’s hard to say just how much real progress is being made. From the experimental neuroscience side of things we will soon have much more detailed wiring information, though this information by itself is not all that enlightening. What would be more useful is to be able to observe the cortex in action and at the moment our ability to do this is limited. Moreover, even if we could, we would still most likely have a major challenge ahead of us to try to come up with a useful conceptual understanding of what is going on. Thus I suspect that for the next 5 years, and probably longer, neuroscientists working on understanding cortex aren’t going to be of much use to AGI efforts. My guess is that sometime in the next 10 years developments in deep belief networks, temporal graphical models, liquid computation models, slow feature analysis etc. will produce sufficiently powerful hierarchical temporal generative models to essentially fill the role of cortex within an AGI.
The cerebral cortex is difficult for computer scientists to understand because it is not algorithmic -- it is complex and often chaotic (much worse than merely "fuzzy" or "probabilistic"). Some aspects of cortical function are important for AGI researchers to learn; other aspects are irrelevant to AGI. Which is which? And what about subcortical brain centers and modules? What about the neuroendocrine system, or the glial and vascular systems of the brain? What is important enough to be copied, and what can be ignored or vigorously abstracted? Read the whole thing and consider the problem for yourself.
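For readers unfamiliar with the techniques Shane lists as candidate building blocks for a cortex-like hierarchical model, here is a toy sketch of one of them, slow feature analysis. The synthetic mixed signal and the linear, single-layer treatment are illustrative assumptions, not anything taken from the article.

```python
# Toy sketch of linear slow feature analysis (SFA): recover the slowly varying
# component hidden inside a randomly mixed, mostly fast-changing signal.
# The synthetic data and dimensions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Build a signal in which one slow sinusoid is mixed into fast noisy channels.
t = np.linspace(0, 2 * np.pi, 2000)
slow = np.sin(t)                                    # the hidden slow feature
fast = rng.normal(size=(2000, 4))                   # fast distractor channels
X = np.column_stack([slow, fast]) @ rng.normal(size=(5, 5))  # random mixing

# 1. Centre and whiten the data so every direction has unit variance.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
W_whiten = eigvecs @ np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T
Z = Xc @ W_whiten

# 2. Find the direction whose temporal derivative has the smallest variance:
#    that projection is the one that changes most slowly over time.
dZ = np.diff(Z, axis=0)
dcov = np.cov(dZ, rowvar=False)
dvals, dvecs = np.linalg.eigh(dcov)
slow_direction = dvecs[:, 0]                        # eigenvector of smallest eigenvalue
recovered = Z @ slow_direction

# The recovered feature should correlate strongly with the hidden slow sinusoid.
print("correlation with hidden slow signal:",
      abs(np.corrcoef(recovered, slow)[0, 1]))
```

The point of the sketch is only to show the flavour of the idea: instead of being handed labels, the system extracts whatever varies slowly in its input stream, which is one proposal for how a cortex-like hierarchy might build stable features from raw sensory data. Whether stacking such components really "fills the role of cortex within an AGI", as Shane suggests, is exactly the open question.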

Al Fin AGI researchers are fascinated by the rapid development of ever more powerful computing platforms. Better algorithms -- particularly ones that exploit massively parallel processing platforms -- are desperately needed in all areas of computing. But human intelligence is not algorithmic, so the development of human-level AI will not automatically follow from the construction of highly complex and speedy computing platforms. Some new paradigms are clearly called for.

Machine intelligence may find an algorithmic path to human-level intelligence and beyond. But since human intelligence is not algorithmic, such a project would have to proceed without a solid conceptual template. Generally, that kind of innovation takes longer than building from a working proof of concept. It should be fun to watch.

For other essential reading, I suggest a trip to NextBigFuture, where Brian considers fascinating recent developments in technology and science. I discovered the article discussed above by following a link from this NextBigFuture story about climate control machines.
