Thursday 26 June 2008

AI 10: After two AI Winters it is now the time for AI to succeed


Since its inception in 1956, Artificial Intelligence has had more periods of hype and disillusionment than any other technology. Everyone knows that AI is the dream ticket for knowledge exchange, but for most it remains no more than a fantasy. There has been good reason for such disillusionment, but at the same time there is cause for real optimism (refer to blog reference AI 1). It is important, though, to explain why AI has acquired such a poor reputation and why the hype of unrealistic claims needs to be curtailed.

In a recent book, ‘The Fall of the Machines’, Michio Kaku quotes MIT's Marvin Minsky, one of the original founders of AI. Minsky summarises AI's problems in this way:

"The history of AI is sort of funny because the first real accomplishments were beautiful things, like a machine that could do proofs in logic or do well in a calculus course. But then we started to try to make machines that could answer questions about the simple kinds of stories that are in a first-grade reader book. There's no machine today that can do that."


The AI Winters


There have been two AI Winters, a term for the collapse in the perception of artificial intelligence research. The term was coined by analogy with nuclear winter: a chain reaction of pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research.

The two AI Winters ran from 1974–80 and from 1987–2007.

The first AI Winter included:

• 1966: the failure of machine translation, criticised in the ALPAC report – computational linguistics is now mainstream.

• 1970: the abandonment of connectionism, the modelling of intelligence as interconnected networks of simple units – this approach has since been taken up successfully in commercial R&D.

• 1971–75: DARPA’s frustration with the Speech Understanding Research program – speech recognition is now mainstream.

• 1973: the large decrease in AI research in the United Kingdom in response to the Lighthill Report, formally titled "Artificial Intelligence: A General Survey", by Professor Sir James Lighthill. The report gave a very pessimistic prognosis for many core aspects of research in the field, stating that "in no part of the field have discoveries made so far produced the major impact that was then promised".

• 1973–74: DARPA’s cutbacks to academic AI research in general.

The second AI Winter included:

• 1987: the collapse of the Lisp machine market

• 1990: the quiet disappearance of the fifth-generation computer project's original goals, which had attracted over US$2bn of funding

• 1993: the slow decline of expert systems as they proved costly to maintain

• 2002: case-based reasoning systems bottoming out commercially

• 2004: the demise of knowledge management systems

• 2007: ten years of the semantic web reaching a glass ceiling

• 2007: fifteen years of search-engine advancement reaching a glass ceiling


2008 and AI Hype starts again

During 2008, trials of AI-powered Avatars in Second Life by organisations such as the Rensselaer Polytechnic Institute have produced papers claiming AI with the IQ of a four-year-old. At least the reports concede that this is a glass ceiling. If only it were true, but it cannot be.


AI cannot replicate the human brain for the foreseeable future

Why? There are three primary reasons.

Firstly, the human brain is still not understood well enough for it to be represented by a simple equation. Scientists are getting closer to an answer, but without such an equation how can one develop AI to replicate the human brain?

Secondly, even when a simple equation has been formulated, the technical challenges are immense. The human brain is estimated to have 100 billion nerve cells and 500 trillion synaptic connections. It is also estimated that the human brain can process the equivalent of 100 trillion instructions per second. These benchmarks are simply not attainable with today’s technology. Also see AI 26, New Imaging of the Brain shows Strings.
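As a rough sketch of the scale involved, the figures above can be turned into a back-of-envelope calculation. The hardware assumptions (bytes per synapse, instructions per second of a commodity processor) are illustrative only, not measurements:

```python
# Back-of-envelope scale of the brain figures quoted above.
# BYTES_PER_SYNAPSE and CPU_IPS are illustrative assumptions.
NEURONS = 100 * 10**9      # ~100 billion nerve cells
SYNAPSES = 500 * 10**12    # ~500 trillion synaptic connections
BRAIN_IPS = 100 * 10**12   # ~100 trillion instructions per second

BYTES_PER_SYNAPSE = 4      # assume one 32-bit weight per connection
CPU_IPS = 10 * 10**9       # assume a ~10 GIPS commodity processor

storage_pb = SYNAPSES * BYTES_PER_SYNAPSE / 10**15  # petabytes
cpus_needed = BRAIN_IPS // CPU_IPS

print(f"Connections per neuron: ~{SYNAPSES // NEURONS:,}")
print(f"Synaptic storage: ~{storage_pb:.0f} PB")
print(f"Processors for raw throughput: ~{cpus_needed:,}")
```

Even under these generous assumptions the numbers come out at around 2 petabytes of synaptic state and some 10,000 processors just for raw throughput, which makes the point about today's technology.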

Thirdly, conventional AI does not have the equivalent of human judgement to avoid over-learning, which leads to decision distortion and decay. Instinctively, humans are better placed to know when to stop learning something, to avoid diminishing returns and possible contamination of their sense-making framework. Black-box AI is the opposite: it can start with, say, a meaningful conversation, but after a while the learning excesses contaminate its reasoning. So it is quite feasible to begin a normal conversation with an AI Agent, which then becomes a blithering idiot at any moment. The realisation of this fundamental flaw raises both moral and ethical issues. The fear of AI Agents telling lies, without any inherent notion of right or wrong, is regarded by many as a risk too far.
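The decay described here can be illustrated with a toy simulation. It is entirely hypothetical and models no real system: a learner that keeps updating from its own noisy outputs accumulates error like a random walk, while one with a stopping rule (standing in for human judgement) does not:

```python
import random

def final_error(rounds, stop_after=None, seed=0):
    """Online learner that trains on its own noisy outputs each round.

    Each extra round of 'learning' on self-generated feedback adds
    noise rather than signal, so the estimate drifts from the truth.
    stop_after plays the role of human judgement about when to stop.
    """
    rng = random.Random(seed)
    truth, estimate = 1.0, 1.0
    for t in range(rounds):
        if stop_after is not None and t >= stop_after:
            break  # judgement: stop before diminishing returns bite
        estimate += 0.1 * rng.gauss(0.0, 1.0)  # drift per extra round
    return abs(estimate - truth)

# Average drift over several runs: learning forever vs stopping early.
forever = sum(final_error(10_000, seed=s) for s in range(20)) / 20
stopped = sum(final_error(10_000, stop_after=50, seed=s) for s in range(20)) / 20
print(forever > stopped)  # the unstopped learner drifts far further
```

The unstopped learner's error grows with the square root of the number of rounds, which is the "meaningful conversation degrading into incoherence" pattern in miniature.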

These explanations simply confirm what we instinctively know: that conventional AI cannot deliver a universal and unifying knowledge exchange.


Now the good news


AI is starting to succeed by developing niche applications. With Cloud Computing providing a universal utility, the emergence of many niche AI applications has already started, and an avalanche of such applications is now inevitable.


There will be no more AI Winters. Around US$100bn has been spent on AI R&D since 1956, but over the next ten years to 2018 AI R&D expenditure is likely to be around US$600bn (refer to blog reference AI 1).


The future of AI now looks exciting with changes happening faster than most people can imagine.

Wednesday 25 June 2008

AI 9: AI will determine the future of mobile phones as closed systems become the new modus operandi for web business


All the major Cloud Players, such as Google, Microsoft, Yahoo and Apple, are bringing AI to the mobile. It is clear that AI will govern human interaction and the orchestration of services. This interaction will be primarily through a combination of voice and touch.

The importance of the mobile cannot be overestimated: it is the physical gatekeeper to Cloud Services, which are closed systems. So anyone who provides a seamless service from the device to the application service creates a competitive barrier to entry.

What is a closed system?

Web 2.0 marked the end of the internet era of open systems, in which content and click-throughs could be accessed by third parties, in particular search engines like Google. There is growing concern among Web 2.0 authors that their content can be copied and used without permission. This is all about to change.


Web 3.0 and beyond are closed systems that cannot be accessed by third parties, in particular search engines like Google, aggregators of content, and aggregators of click-through intel like ComScore. There are already long-standing examples of closed systems, such as online banking applications. Another example is avatar-based social networks such as Second Life. Google Search cannot see inside these applications. ComScore's share price is now in decline for this reason, especially as Google itself is adopting Web 3.0 closed systems and thus starving ComScore of intel.
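For context, the most visible gate on the open web is robots.txt, which tells crawlers what they may fetch; closed systems go much further by putting content behind logins and inside applications. A minimal sketch using Python's standard library, with rules and URLs invented for illustration:

```python
from urllib.robotparser import RobotFileParser

# How a crawler decides whether content is open to it. The interior
# of a truly closed service is not fetchable at all; robots.txt is
# merely the politest, most visible layer of gatekeeping.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /app/",   # the 'closed' interior of the service
    "Allow: /",          # public pages remain crawlable
])

print(rp.can_fetch("Googlebot", "https://example.com/"))       # True
print(rp.can_fetch("Googlebot", "https://example.com/app/x"))  # False
```

A Web 3.0 closed system is, in effect, one where everything of value sits on the disallowed side of a gate like this, and no robots.txt courtesy is even required.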

There is no doubt that Apple, with the iPhone and its recent 3G capabilities, demonstrates the sheer power of an integrated and seamless closed-system service. The closed service stops Google Search from seeing inside it. Apple has designed not only search but many other ways for customers to find what they want, such as catalogues, notifications, personalised prompts and popularity lists. With the launch of the iPod as part of this closed system, Apple set the new benchmark for extending the closed system to the external device.

It is no wonder Google is becoming more concerned about Apple than Microsoft. Google's Android, though in some difficulties, is crucial for Google to develop a closed system that is attractive to third-party developers and providers (refer to blog reference AI 3).

Extended closed systems that include devices threaten Google’s core business of search-based advertising.

The future of the web is AI (refer to blog reference AI 1). AI is a closed system and signals the end of the old fashioned models of open content and click-throughs.

AI 8: Microsoft launches AI Cloud Services to solve urban traffic jams



Clearflow has emerged from Microsoft's US$1.9bn AI R&D budget (refer to blog reference AI 2) to apply machine learning to the problem of urban traffic jams. The web-based service claims to give drivers accurate alternative-route information because it predicts where drivers will go when they move off congested main roads.

The Clearflow system will be freely available as part of Microsoft’s Cloud service for 72 cities in the United States. Microsoft says it will give drivers alternative route information that is more accurate and attuned to current traffic patterns on both freeways and side streets. The new service will on occasion plan routes that might not be intuitive to a driver. For example, in some cases Clearflow will compute that a trip will be faster if a driver stays on a crowded highway, rather than taking a detour, because side streets are even more backed up by cars that have fled the original traffic jam.
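Under the hood, this kind of routing reduces to shortest-path search over predicted travel times per road segment, which is why a crowded highway can still win once side-street congestion is priced in. A minimal sketch of the idea (the toy network and minute figures are invented, not Clearflow's data or method):

```python
import heapq

def fastest_route(graph, start, goal):
    """Dijkstra's algorithm over predicted travel times (in minutes)."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + minutes, nxt, path + [nxt]))
    return float("inf"), []

# Toy network: the crowded highway A -> B -> goal still beats the
# side-street detour once predicted congestion on the detour is
# counted (17 minutes versus 19).
graph = {
    "A": {"B": 12.0, "side1": 2.0},   # highway crawling at 12 min
    "B": {"goal": 5.0},
    "side1": {"side2": 9.0},          # side streets backed up too
    "side2": {"goal": 8.0},
}
cost, path = fastest_route(graph, "A", "goal")
print(cost, path)
```

The AI part of Clearflow is in predicting those per-segment minutes from live and historical data; the route choice itself is classical search.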

This AI solution is a challenge to Google's and Yahoo's Cloud mapping services and is positioned as a Web 4.0 application (refer to blog reference AI 1) – AI complementing human knowledge.

AI 7: Prototypes already in play for humanoids aimed at the consumer and services market



The pictures show a humanoid called Dexter which has flexible joints, driven by air cylinders.

Dexter is able to walk on hard or soft surfaces, can run over uneven ground and can even jump.

Dexter is built by Anybots, Inc. and performed for the crowds at the Robo Development conference in San Jose.

Like Roomba (refer to blog reference AI 6), Dexter could be upgraded to include mobile technology and connect to a Cloud Service so it can engage in AI dialogue with its owner. This could include health advice (refer to blog reference AI 4) and of course creates new inventory and opportunities for advertising.

Tuesday 24 June 2008

AI 6: Household consumer robots set for big growth as people give them ‘pet’ names


Roomba (see picture) is a home vacuuming product from iRobot, which has already sold over 2 million units. Though it does not look like a robot, it certainly acts like one.

Colin Angle, CEO and co-founder of iRobot, says, "When we started shipping Roomba in 2002, we asked focus groups if it was a robot. They said no, a robot was humanoid and this was an intelligent floor vacuum. Now people are definitely changing to accept robot appliances."

A survey showed most owners gave their Roombas pet names.

James Kuffner, an associate professor at the Robotics Institute at Carnegie Mellon University, has observed the same trend.

However, once Roomba is upgraded to include mobile technology and connects to a Cloud Service, conversational AI can engage in dialogue with its owner. This could include health advice (refer to blog reference AI 4) and, of course, it creates new inventory and opportunities for advertising.

AI 5: AI Personal Assistants are emerging on the market but they still lack adequate IQ and then there is the question of trust!


Any Cloud Device that can display an Avatar talking head will be able to deliver AI. The picture shows one called Ultra Hal Assistant, a digital secretary and all-around e-buddy.

The Ultra Hal Assistant is described as a digital secretary that uses AI to understand spoken English commands and to learn over time. The claim is that Ultra Hal can remember anything you tell it, automatically dial phone numbers, remind you of appointments, do web searches, launch applications, and so on.

In reality, such AI tends to be limited in its ability to learn. However, it should cope with simple commands, especially those related to orchestrating known tasks.
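Orchestrating known tasks from simple commands can be as little as pattern matching plus dispatch. A sketch of the idea (the patterns and responses here are invented for illustration, not Ultra Hal's actual design):

```python
import re

# Toy command router of the kind a digital secretary needs for
# orchestrating known tasks from simple spoken commands.
HANDLERS = [
    (re.compile(r"remind me to (?P<task>.+) at (?P<time>\S+)", re.I),
     lambda m: f"reminder set: {m['task']} at {m['time']}"),
    (re.compile(r"(call|dial) (?P<name>.+)", re.I),
     lambda m: f"dialling {m['name']}"),
    (re.compile(r"search (for )?(?P<query>.+)", re.I),
     lambda m: f"searching the web for {m['query']}"),
]

def handle(utterance):
    """Dispatch an utterance to the first matching task handler."""
    for pattern, action in HANDLERS:
        match = pattern.search(utterance)
        if match:
            return action(match)
    return "sorry, I did not understand that"

print(handle("Please remind me to buy milk at 6pm"))
print(handle("Dial Alice"))
```

Anything outside the known patterns falls through to the apology, which is exactly the "limited ability to learn" ceiling described above.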

There is no doubt that Avatars with conversational capabilities, sometimes known as Chatterbots, will become pervasive once the big issue of AI learning has been properly overcome.

One of several AI issues is that it does not have the equivalent of human judgement to avoid over-learning, which leads to decision distortion and decay. Instinctively, humans are better placed to know when to stop learning something, to avoid diminishing returns and possible contamination of their sense-making framework. Classic AI is the opposite: it can start with, say, a meaningful conversation, but after a while the learning excesses contaminate its reasoning. So it is quite feasible to begin a normal conversation with an Avatar, which then becomes a blithering idiot at any moment. The realisation of this fundamental flaw raises both moral and ethical issues. The fear of AI telling lies, without any inherent notion of right or wrong, is regarded by many as a risk too far.

Once these problems have been overcome then Ultra Hal Assistants or equivalents will become pervasive leading to Web 4.0 and Web 5.0 (refer to blog reference AI 1).

AI 4: Consumer androids to provide personalised health services


Care-O-bot is an android designed to assist elderly or handicapped people in daily life activities. It can manipulate simple objects typically found in home environments. It is equipped with a manipulator arm, adjustable walking supporters, a tilting sensor head containing two cameras and a laser scanner, and a hand-held control panel. Care-O-bot is a development of the Fraunhofer Institute for Manufacturing Engineering and Automation (IPA) in Stuttgart, Germany.

This is an illustration of a consumer robot that can be connected to Cloud Services for digital conversations with patients, alerting nurses, say, when sensed conditions warrant human specialist support.

This market could be worth US$2bn to US$10bn a year by 2018.

These types of android could provide first-level health advice.

The scenario of consumer robots being powered by a Cloud Service (Web 3.0) presents no technical barrier, and it shows the excitement around the roadmap for Web 4.0 and Web 5.0 (refer to blog reference AI 1).

The implications for Microsoft's (refer to blog reference AI 2) and Google's (refer to blog reference AI 3) future competitiveness hang in the balance.

AI 3: Google’s Android is its ‘secret’ platform for Web 4.0 and Web 5.0 AI

For Google to name its mobile platform ‘Android’ provides an interesting insight into its future roadmap for implementing Web 4.0 and 5.0 AI (refer to blog reference AI 1).

Android is technically positioned as both a software platform and an operating system for mobile, open for all to develop applications. But it is Google's associated business strategy and value creation that is far more interesting.

The name Android is closely associated with robots that resemble humans. Android is the means for a digital conversation between a human and AI.

Why is this important?

Let us start the answer with Google and its sponsorship of an Economist report called ‘The Future of Marketing: from monologue to dialogue’. It is a must-read for everyone in the digital ad space.

The question is whether Larry Page, co-founder of Google, believes in digital conversations between AI and humans. Let’s examine some hard facts.

Larry was brought up thinking about AI: his father, the late Dr. Carl Victor Page, was a professor of computer science and artificial intelligence at Michigan State University. Larry went on to study computer engineering at the University of Michigan.

In early 2007, at the annual American Association for the Advancement of Science conference, Google co-founder Larry Page let slip: “We have some people at Google [who] are really trying to build artificial intelligence (AI) and to do it on a large scale… It’s not as far off as people think.” Even more telling, Larry stated that artificial intelligence will be solved by brute force and that Google (which happens to be the biggest owner of computers in the world) is working on it.

And Larry, as a futurist, would know some of the other well-known futurists who helped influence the film Minority Report. The film depicted virtual androids conversing with humans to sell products and is now regarded as the mantra for the future of marketing. This links back to the Economist report, which says this is the new way to engage with customers.

The technology for VoiceXML is well established, and Google's Android is already designed for voice interaction.

So Google’s Android is really about human to AI (H2AI) interaction and will be used not just for smart phones but also consumer robots or indeed any smart device. Android is already being orchestrated as a Google Cloud service with inbuilt Unified Communications.

It is clear that H2AI will become the new landscape for digital advertising and more importantly the link between interaction and transaction.

No wonder Microsoft has redirected 25% of its US$7.5bn R&D budget to AI (refer to blog reference AI 2).

Is Microsoft spending enough to compete with Google’s Android vision?

AI 2: Microsoft lost the Web 2.0 War, but will want to win Web 4.0 + Web 5.0 by spending US$40bn to $60bn over the next ten years

Microsoft in 2007 spent around US$1.9bn on Artificial Intelligence R&D, representing 25% of its total R&D spend.

Having lost to Google in the Web 2.0 Ad War, Microsoft is focused on winning Web 4.0 ‘AI Complementing Humans’ and Web 5.0 ‘AI Supplanting Humans’. They have a formidable Web 3.0 Cloud Computing Infrastructure, but lack 21st Century Applications. They cannot afford to cannibalise their core revenue stream from software licences and therefore need to find new Cloud Apps.

Microsoft is likely to spend US$30bn to US$40bn on AI R&D during 2009–2018. In addition, it will acquire niche AI players during this period for another US$10bn to US$20bn, with an expectation of paying premium prices for AI innovation from third parties as other Cloud Players muscle into AI.

No wonder Microsoft pulled away from the Yahoo deal. The Microsoft Cloud strategy is clear: it's AI, AI, AI!

AI 1: AI resurgence will lead to US$600bn+ R&D spend over the next 10 years


2008 will be seen as the resurgence of sustainable AI after two major AI Winters since 1956 (refer to blog reference AI 10).

AI has been given credibility by web futurists, such as Nick Carr, because they have defined the roadmap after Web 3.0 ‘Cloud Computing’ as Web 4.0 ‘AI Complementing Humans’ and Web 5.0 ‘AI Supplanting Humans’.

The market for robots with embedded Artificial Intelligence is already calculated to be worth US$182bn by 2018. TAITRA President Chao Yuen-Chuan stated in 2007, "Robots will be closely integrated with different kinds of artificial intelligence in the near future," and the Industrial Development Bureau under Taiwan's Ministry of Economic Affairs predicts that Taiwan will take 6% of this market. Refer to blog references AI 4, AI 6 and AI 7 to see emergent consumer robots.

This US$182bn robotic and AI market has been calculated without considering the impact of Web 3.0, 4.0 and 5.0.

Consumer robotics will naturally converge with the Cloud, radically changing the software economics for robots; indeed, this is likely to increase the market size well beyond the US$182bn.


Turning to Web 4.0 and Web 5.0, Microsoft could well spend US$60bn on AI R&D in the ten years to 2018 (refer to blog references AI 2 and AI 8).


With AI rapidly moving towards its natural habitat of mobile phones (refer to blog references AI 3 and AI 9) and voice systems for digital conversations, it would be surprising if Microsoft accounted for more than 10% of total AI R&D spend, including robotics AI.


This means AI R&D over the next ten years is likely to be in excess of US$600bn as AI becomes pervasive, dictating all digital interactions and transactions.
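The arithmetic behind that figure is straightforward; both inputs are this blog's own estimates, not market data:

```python
# Sketch of the US$600bn estimate from the figures above.
microsoft_ai_rnd_bn = 60        # upper end of AI 2's 2009-2018 estimate
microsoft_share_ceiling = 0.10  # assumed maximum share of total AI R&D

implied_total_bn = microsoft_ai_rnd_bn / microsoft_share_ceiling
print(f"Implied total AI R&D to 2018: US${implied_total_bn:.0f}bn+")
```

If Microsoft's US$60bn is at most a tenth of the market, the total must be at least US$600bn, which is where the headline number comes from.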