Monday 13 October 2008

AI 108 Ban the Turing Test as it is Harmful to AI Advancement


This weekend there was another Turing Test, in which six Artificial Conversational Entities (ACEs) tried to fool human interrogators into thinking they, too, were human.

Every ACE managed to fool at least one of its human interrogators, and the organisers feel it is only a matter of time before the test is passed. None of the ACEs, however, reached the threshold Turing set in 1950: fooling 30 per cent of the human interrogators.

The winning machine, known as Elbot (see diagram), achieved only a 25 per cent success rate. Let’s not get excited about fooling the bottom 25 per cent of human interrogators! It is worth revisiting the ACE illustration covered earlier (please refer to AI 100).

Professor Kevin Warwick from the University of Reading's School of Systems Engineering, who organised the test, said: "This has been a very exciting day with two of the machines getting very close to passing the Turing Test for the first time."

Why should we ban the Turing Test?

We need to go back to basics and look at the definition of artificial intelligence.

The Oxford Dictionary definition of human intelligence is the “ability to acquire and apply knowledge”. Therefore, the definition of artificial intelligence ought to be the “ability to acquire, apply and measure knowledge”.

Even if an ACE were able to fool the bottom 30 per cent of human interrogators, would that mean it has the ability to acquire, apply and measure knowledge? Of course not! The Turing Test does not consider the IF-THEN-ELSE knowledge that underpins so many professional disciplines, such as the health procedures needed to address the global health crises (refer to AI 108).
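To make the point concrete, here is a minimal sketch of what IF-THEN-ELSE procedural knowledge looks like when written down; the condition names and thresholds are purely illustrative and not drawn from any real clinical guideline:

```python
# A minimal, illustrative sketch of IF-THEN-ELSE procedural knowledge.
# The rules and thresholds below are invented for illustration only,
# not taken from any actual health procedure.

def triage(temperature_c, has_rash, days_ill):
    """Return an illustrative recommendation from a small chain of rules."""
    if temperature_c >= 39.0 and has_rash:
        return "refer to a clinic immediately"
    elif temperature_c >= 38.0:
        if days_ill > 3:
            return "arrange a medical review"
        else:
            return "monitor at home and re-check in 24 hours"
    else:
        return "no action required"

print(triage(39.5, has_rash=True, days_ill=1))   # refer to a clinic immediately
print(triage(38.2, has_rash=False, days_ill=4))  # arrange a medical review
```

A chatbot can be judged "human enough" by an interrogator without being able to acquire, apply or measure even a simple rule chain like this one.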

We need to replace the Turing Test, designed in 1950, so that artificial intelligence focuses upon a noble cause in line with 21st Century thinking. Just remember the wealth of experience gained in the 58 years since Turing set this challenge, more than 30 years before IBM launched its successful Personal Computer.