July 29, 2007
“The Real Transformers”
There’s a nice feature-length article in the NYTimes Sunday mag today, called “The Real Transformers”, chronicling Rodney Brooks’ and Cynthia Breazeal’s work on social humanoid robots, including their plans for deploying a series of toddler-like robots in a museum in 2009. (There’s a successful precedent for this, you might recall.)
(btw, I’ve been away from blogging for a while, but I plan to get back into the game soon, and perhaps write those follow-up NLU posts I mentioned.)
July 31st, 2007 at 8:11 pm
This cover demonstrates, in its way, that the Turing test is obsolete: machines will become continuous with the category ‘human’ once the essential concepts of humanity (e.g., “understanding”) are sufficiently relativized in the popular mind.
People are already confused about how much intelligence lies behind a voice-recognition telephone menu. As this sort of artificial intelligence grows, it increasingly produces a hazy appearance of human-like understanding.
In other words, the *real* Turing test is not as initially expressed, but rather something more like: “The machine becomes human when we forget it’s a machine.”
Or cynically: “A machine becomes equivocally human when our concept of human falls below that which is easily fooled by modern technology.”
August 1st, 2007 at 3:25 pm
I believe the Turing test is as valid as ever. Voice recognition is not a display of intelligence; it’s just accurate pattern matching.
A successful AI is one that can handle a conversation like this, thus passing the Turing test:
“So, what do you think about X?”
“Really? Why do you think that?”
“But don’t you think that Y is true?”
“If that’s so, prove it!”
“I’m still not convinced. I think you’re trolling! Go to hell!”
An unsuccessful one will likely terminate like this:
“What do you mean by ‘insufficient data for proper answer’?”
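For concreteness, here’s a minimal sketch of Turing’s imitation-game setup as a blind text exchange. The human_stub and machine_stub responders and their canned replies are hypothetical stand-ins (no real chatbot API), just enough to show the protocol: the judge sees only labeled transcripts and must guess which player is the machine.

```python
import random

QUESTIONS = [
    "So, what do you think about X?",
    "Really? Why do you think that?",
    "But don't you think that Y is true?",
]

def human_stub(prompt: str) -> str:
    # Stand-in for a human player typing freely.
    return "Honestly, it depends on what you mean."

def machine_stub(prompt: str) -> str:
    # Stand-in for a contender program; a naive one betrays
    # itself exactly the way predicted above.
    return "Insufficient data for proper answer."

def run_game() -> None:
    # Randomly assign the labels so nothing about ordering
    # reveals which terminal hides the machine.
    labels = ["A", "B"]
    random.shuffle(labels)
    players = dict(zip(labels, [human_stub, machine_stub]))
    for q in QUESTIONS:
        print(f"Judge: {q}")
        for label in sorted(players):
            print(f"  {label}: {players[label](q)}")
    # The machine "passes" if, over many such rounds, judges
    # cannot identify it at better than chance.
    print("Judge must now guess which of A/B is the machine.")

if __name__ == "__main__":
    run_game()
```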
August 3rd, 2007 at 5:20 pm
It may be that the Turing test is as valid as it ever was, but it’s grown increasingly clear that there’s a _de facto_ test in addition to the _de jure_ test elaborated by Alan Turing. I join you in believing that Turing properly describes where people *should* draw the line between machine intelligence and true intelligence. But I strongly disagree that this is the test and the standard that people actually use.
Not to judge a book (or article) by its cover, but one can at least judge a cover by its cover. The cover accompanying the OP suggests that “understanding” is being used in a slippery and vague way — in a way to which Turing would probably have objected. I don’t want to distract too much from the actual article, but I thought this point might be interesting on its own.