Thursday 26 June 2008

AI 10: After two AI Winters it is now time for AI to succeed


Since its inception in 1956, Artificial Intelligence has had more periods of hype and disillusionment than any other technology. Everyone knows that AI is the dream ticket for knowledge exchange, but for most it remains no more than a fantasy. There has been good reason for such disillusionment, but at the same time there is cause for real optimism (refer to blog reference AI 1). It is important, though, to explain why AI has gained such a poor reputation and why the hype of unrealistic claims needs to be curtailed.

In his recent book 'The fall of the machines', Michio Kaku quotes MIT's Marvin Minsky, one of the original founders of AI. Minsky summarises AI's problems in this way:

“The history of AI is sort of funny because the first real accomplishments were beautiful things, like a machine that could do proofs in logic or do well in a calculus course. But then we started to try to make machines that could answer questions about the simple kinds of stories that are in a first-grade reader book. There's no machine today that can do that.”


The AI Winters


There have been two AI Winters, 'AI Winter' being the term for a collapse in the perception of artificial intelligence research. The term was coined by analogy with the relentless spiral of a nuclear winter: a chain reaction of pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research.

The two AI Winters have been 1974–80 and 1987–2007.

The first AI Winter included:

• 1966: the failure of machine translation following the ALPAC report – machine translation is now mainstream.

• 1970: the abandonment of connectionism (interconnected networks of simple units) – this has since been solved in commercial R&D.

• 1971–75: DARPA’s frustration with the Speech Understanding Research program – speech recognition is now mainstream.

• 1973: the large decrease in AI research in the United Kingdom in response to the Lighthill Report ('Artificial Intelligence: A General Survey' by Professor Sir James Lighthill). The report gave a very pessimistic prognosis for many core aspects of research in the field, stating that "in no part of the field have discoveries made so far produced the major impact that was then promised".

• 1973–74: DARPA’s cutbacks to academic AI research in general.

The second AI Winter included:

• 1987: the collapse of the Lisp machine market

• 1990: the quiet disappearance of the fifth-generation computer project's original goals, which had attracted over US$2bn of funding

• 1993: expert systems slowly declining to their low point

• 2002: case-based reasoning systems declining to their low point

• 2004: the demise of knowledge management systems

• 2007: ten years of the semantic web reaching a glass ceiling

• 2007: fifteen years of search-engine advancement reaching a glass ceiling


2008: the AI hype starts again

During 2008, organisations such as the Rensselaer Polytechnic Institute have been trialling AI-powered avatars in Second Life, with papers claiming their AI has the IQ of a four-year-old child. At least the reports acknowledge that this is a glass ceiling. If only the claim were true, but it cannot be.


AI cannot replicate the human brain for the foreseeable future

Why? There are three primary reasons.

Firstly, the human brain is still not understood well enough to be represented by a simple equation. Scientists are getting closer to an answer, but without such an equation how can one develop AI that replicates the human brain?

Secondly, even when a simple equation has been formulated, the technical challenges are immense. The human brain is estimated to have 100 billion nerve cells and 500 trillion synaptic connections, and to process the equivalent of 100 trillion instructions per second. These benchmarks are simply not attainable with today’s technology. (See also blog reference AI 26, 'New Imaging of the Brain shows Strings'.)
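To put those numbers in perspective, here is a back-of-the-envelope calculation in Python. The brain figures are the estimates quoted above; the server throughput and bytes-per-synapse values are illustrative assumptions for 2008-era hardware, not measurements.

# Rough comparison of the brain estimates above with 2008-era hardware.
# SERVER_OPS_PER_SEC and BYTES_PER_SYNAPSE are assumptions for
# illustration only.

BRAIN_SYNAPSES = 500e12        # ~500 trillion synaptic connections (estimate above)
BRAIN_OPS_PER_SEC = 100e12     # ~100 trillion instructions per second (estimate above)
SERVER_OPS_PER_SEC = 50e9      # assumed throughput of one 2008-era server
BYTES_PER_SYNAPSE = 4          # assume one 4-byte weight per connection

servers = BRAIN_OPS_PER_SEC / SERVER_OPS_PER_SEC
memory_tb = BRAIN_SYNAPSES * BYTES_PER_SYNAPSE / 1e12

print(f"Servers needed for raw throughput alone: {servers:,.0f}")      # ~2,000
print(f"Memory to store one weight per synapse: {memory_tb:,.0f} TB")  # ~2,000 TB

On those assumptions, matching the brain's raw throughput alone would take roughly 2,000 servers, and storing a single weight per synapse roughly 2,000 TB of memory, which illustrates why such benchmarks remain out of reach.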

Thirdly, conventional AI does not have the equivalent of human judgement needed to avoid over-learning and the decision distortion and decay it causes. Instinctively, humans are better placed to know when to stop learning something, avoiding diminishing returns and possible contamination of their sense-making framework. Black-box AI is the opposite: it can start with, say, a meaningful conversation, but after a while its learning excesses contaminate its reasoning. So it is quite feasible to begin a normal conversation with an AI agent only for it to become a blithering idiot at any moment. The realisation of this fundamental flaw raises both moral and ethical issues. The fear of AI agents telling lies, without any inherent notion of right or wrong, is regarded by many as a risk too far.
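The 'knowing when to stop learning' problem has at least a partial machine-learning counterpart in early stopping: halt training once performance on held-out data stops improving, rather than letting the system over-learn. A minimal sketch in Python follows; the loss curve and the patience value are synthetic illustrations, not taken from any real system.

# Early stopping: stop learning when held-out (validation) performance
# stops improving, instead of training forever and distorting the model.
# The loss curve below is a synthetic stand-in for a real model.

def validation_loss(epoch: int) -> float:
    # Falls at first, then rises again: the signature of over-learning.
    return 1.0 / (1 + epoch) + 0.01 * epoch

PATIENCE = 3  # assumed: give up after 3 epochs without improvement

best_loss, best_epoch = float("inf"), 0
for epoch in range(100):
    loss = validation_loss(epoch)
    if loss < best_loss:
        best_loss, best_epoch = loss, epoch
    elif epoch - best_epoch >= PATIENCE:
        print(f"Stop at epoch {epoch}; best was epoch {best_epoch} "
              f"(loss {best_loss:.3f})")
        break

Run as-is, the synthetic validation loss bottoms out at epoch 9 and training halts at epoch 12 rather than continuing to degrade; black-box AI deployed in open-ended conversation has no such clean stopping signal.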

These explanations simply confirm what we instinctively know: conventional AI cannot deliver a universal and unifying knowledge exchange.


Now the good news


AI is starting to succeed by developing niche applications. With Cloud Computing providing a universal utility, the emergence of many niche AI applications has already begun, and an avalanche of such applications is now inevitable.


There will be no more AI Winters. Around US$100bn has been spent on AI R&D since 1956, but over the next ten years, to 2018, AI R&D expenditure is likely to be around US$600bn (refer to blog reference AI 1).


The future of AI now looks exciting with changes happening faster than most people can imagine.