Science

Alan Turing's Prediction for the Year 2000

Chernicky writes "In 1950, Alan Turing, the father of computer science and (arguably) artificial intelligence, made a prediction about the year 2000. Turing said that in about fifty years, the answers of a computer would be indistinguishable from those of human beings, when asked questions by a human interrogator. With the year 2000 upon us, Dartmouth College is offering a $100,000 prize to the first programmer that can pass the Turing Test. The deadline for submissions is October 30, 1999."
This discussion has been archived. No new comments can be posted.

  • Man, I need to start on my database of all askable questions. Not too hard, right?

    Penrif
  • To pass a Turing test? I think I'm sentient. Maybe I'll give it a go. 8)

    In all seriousness, have there been any previous projects that have passed Turing tests under conditions dictated by an independent third party?


    -W-
  • Dartmouth College is offering a $100,000 prize
    to the first programmer that can pass the Turing Test.


    Ummm, I sure hope I can pass the Turing test. I know some of you out there might have problems passing it, but I'm pretty confident I can pass.

    Of course if he meant 'first program' it might make more sense.
  • Haven't programs already done this? Eliza? Sex? I thought they all passed the Turing Test?

    -----------

    "You can't shake the Devil's hand and say you're only kidding."

  • A number of programs have passed a Turing test for a very limited period of time. They inevitably break if you give the people administering the test long enough to distinguish between the automated responses and the human ones.

    Overall I would say that this is a pretty safe bet on their part. A little publicity gained with promises that almost certainly can't be cashed in on.
    Your Squire
    Squireson
  • Except of course that I'm sure the programmer would prefer the cash, and not pass it on to the program :-)

    Why am I thinking "Shooting Fish"?
  • I really can't believe that it can be done, yet. A number of groups have claimed to solve the problem. The computer that writes short stories, etc. But the Turing test requires interaction, learning, and a number of other talents that we just don't have the space, technology, or developmental resources to produce.

    I (rather arrogantly) believe that today's computer may be able to fool one person but cannot fool multiple people.


    -- Moondog
  • by Nagash ( 6945 ) on Tuesday October 05, 1999 @06:35PM (#1635354)
    The Turing Test has long been discounted as a bad goal of AI research, although people have been doing Turing Test "auditions" for years.

    The problem with the Turing Test is that it tries to make a computer human and that's not really what AI is all about - it's more about trying to solve problems using various techniques in order to make programs useful. (Maybe making a computer human is not all that useful ;) )

    The problem is that the program only needs to pass 5 minutes worth of conversation. That's a pretty narrow goal. Technically, it's not really artificial intelligence at this point - it's just a ruse (however, it's still extremely difficult to program natural language capabilities and have "common sense" -- two goals that are themselves not bad ones to do research in).

    Douglas R. Hofstadter wrote an interesting article about this - he had a conversation with a program named Nicolai (I think). It was quite amusing - the program spits out some very interesting answers. :-)

    Anyway, no one has yet succeeded at this and if you feel you can get a program to imitate a human for 5 minutes, go right ahead. You'll earn that $100K :-)

    Woz
  • what I posted a minute ago may show up soon, but I can't be sure, so here I post it again.


    Is www.forum2000.org a fake? or is it an honest-to-deity AI capable of answering questions in a lifelike manner?

    Dan
  • Are there any websites or the like on the Web that
    demonstrate this kind of interactivity? Something
    like Eliza, but better (I would assume)?
  • I think using IRC as a testbed for testing this would be great.

    Have the applicants join a channel that's used a bit, say #hotjaurez or #3l3tn3ss, and see how it fares in conversation. Then have them, with nick changes, move over to a more constrained channel, like #mindvox or #youngpoetsinheat.

    The trick would be not only to pick out the bots, but to pick out the humans as well.

    I'm betting the bots would have a better chance of being dubbed human than many of the genetic slush bags.

    just my tunie
  • by HoserHead ( 599 ) on Tuesday October 05, 1999 @06:41PM (#1635358)
    Back in my BBS days, I was rather naive (to say the least) and wasn't very well-versed in the ways of the BBS. I went to page the Sysop of this particular BBS, and lo and behold! he was there. I then proceeded to have a lengthy (30+ minute) conversation with this sysop, after which I sent him a nice mail thanking him for humouring me (some of his answers were pretty bizarre, so I went along with them!)

    It was only after I myself had begun setting up a BBS that I came across this BBS door program. I don't remember what it was called, but it pretended to be a chat program. Basically, it responded to specified keywords with a random sentence from a huge flatfile database, and even pretended to have typooo^H^Hs from time to time.

    I then realised that I'd been had!

    Some sysop must have been laughing his ass off at this young kid who went by the handle "Orion", chatting away with a very crude AI and being suckered into it the whole way.

    I look back on those days and wonder how I missed it. But it just goes to show you that, as much as you might be fooled by a computer, we've got a long way to go before we reach anything approximating independent thought. Personally I don't think it'll ever happen - but it might be neat to be proven wrong.

  • by Scaramouche ( 62036 ) on Tuesday October 05, 1999 @06:43PM (#1635359)
    please do not mistake my intent- i would be quite
    impressed with anyone who could pass the turing
    test.

    however, how much further does this really get us
    than building a computer which can beat kasparov
    in a (relatively) high speed chess match. chess
    seemed like a big thing to teach a computer once,
    but it has been relegated to the relatively
    trivial now.

    it seems to me that a program which passes the
    turing test may well fall into the same category.
    (i am assuming here that the program merely
    appears to be having a conversation- that it is
    not a language _understanding_ system.) it would
    simply become something that people would set
    loose in chatrooms, or attach to old unwanted
    e-mail accounts, and watch the fun.

    what i'd like to see is someone tackling a truly
    significant problem. like programming a computer
    to be able to vacuum your house.
  • Given that the Turing test is a very popular and well-known "test", how does the fact that it exists change the focus and development of the AI community?

    Similarly, how do RSA's challenges influence the encryption community?
  • Well, crap, I've been holding the AI/Turing Test problem solution for years now, but now that Dartmouth wants to offer me a HUGE 100K for it, I might as well release it! Yay!
  • (setq howareyoulst
        '((how are you) (hows it going) (hows it going eh)
          (how\'s it going) (how\'s it going eh) (how goes it)
          (whats up) (whats new) (what\'s up) (what\'s new)
          (howre you) (how\'re you) (how\'s everything)
          (how is everything) (how do you do)
          (how\'s it hanging) (que pasa)
          (how are you doing) (what do you say)))

    (setq qlist
          '((what do you think \?)
            (i\'ll ask the questions\, if you don\'t mind!)
            (i could ask the same thing myself \.)
            (($ please) allow me to do the questioning \.)
            (i have asked myself that question many times \.)
            (($ please) try to answer that question yourself \.)))

    (setq foullst
          '((($ please) watch your tongue!)
            (($ please) avoid such unwholesome thoughts \.)
            (($ please) get your mind out of the gutter \.)
            (such lewdness is not appreciated \.)))

    (setq deathlst
          '((this is not a healthy way of thinking \.)
            (($ bother) you\, too\, may die someday \?)
            (i am worried by your obsession with this topic!)
            (did you watch a lot of crime and violence on television as a child \?)))

    (setq sexlst
          '((($ areyou) ($ afraidof) sex \?)
            (($ describe) ($ something) about your sexual history \.)
            (($ please) ($ describe) your sex life \.\.\.)
            (($ describe) your ($ feelings-about) your sexual partner \.)
            (($ describe) your most ($ random-adjective) sexual experience \.)
            (($ areyou) satisfied with (// lover) \.\.\. \?)))

    (setq stallmanlst
          '((($ describe) your ($ feelings-about) him \.)
            (($ areyou) a friend of Stallman \?)
            (($ bother) Stallman is ($ random-adjective) \?)
            (($ ibelieve) you are ($ afraidof) him \.)))
  • Hrmm, I wonder if those massive Neural Nets at the Chantilly, Virginia National Reconnaissance Office have already passed this point...

  • Dartmouth College is offering a $100,000 prize to the first
    programmer that can pass the Turing Test.
    Rip the guts out of a Cray... Hey, I'm not large, my laptop and I can fit inside the case :-)
    ==================================
    neophase
  • It just occurred to me that in the movie Blade Runner (and the PKD book as well?), the test for whether or not a suspect is a replicant or human is basically a fancy Turing test, isn't it?

    Place the subject in front of an interrogator and try to provoke an emotional response, indicating humanity. Sufficiently advanced replicants are good at fooling the test ("Rachel took nearly 50 questions") but to date all replicants are distinguishable from humans.

    Seems pretty allegorical to me. What was the test called in the film? Who was that doctor / scientist? Would he have been eligible for the reward?

    Man I want to see that movie again now...

    Other thoughts, since I'm on a tangent: how about a program that can seem more real than Zippy the Pinhead? (Shouldn't be too difficult.) Or one that is less boneheaded than the average Slashdot AC poster? (Shouldn't be too difficult.) Sounds like it's time to get coding...

  • I seem to remember that Zummy was fooling a lot of reporters (they thought that the 'bot was actually a Linux technician). Perhaps the maintainer should try and submit that?
  • Check out the book the movie is based off of, "Do Androids Dream of Electric Sheep." It goes off on some length about the empathy tests used to detect Replicants. Dan
  • Gawd.. this thing is friggin funny!

    Just look at the list of "recent questions" or whatever they are..

    --

  • I highly doubt that the Turing test will be passed by any computer in the year 2000. However, I think we are getting close with things like neural nets and evolutionary simulation programs (I'm not talking about simulations of organisms evolving, but small programs written to evolve and compete against each other for memory and CPU time in a digital environment). I wish I could remember the URL, can anyone help me out? From what I understand, theoretically, a complex enough neural net could, with enough time and enough stimuli (input), learn to do anything within the limits of the hardware. I think we will start to see the beginnings of a true Turing-capable machine sometime in the next 5-10 years. In the next 25 years or so, we may even see programs that can upgrade themselves. Imagine: Linux 10.8.4 has a few unwanted bugs? No problem, just run it under a high load for a while and as problems arise, the Turing module simply tweaks the code a bit for you. :)

    --

  • This brings up an interesting point about the Turing Test -- when you were on the BBS, you had no idea you were talking to a machine. Thus, even though it did not have sophisticated knowledge or natural language abilities, you still believed it was human.

    However, with the Turing Test, the judges know some of the entries are not humans and thus ask questions that would indeed flush out a computer based on the responses. They are looking for the culprit because they are told one is there.

    Ignorance really can be bliss, can't it? :-)

    Woz
  • Hold a Turing Test that has only computers or only humans answering and don't tell the judges. See if you get some judges saying "That was a computer for sure.", or vice versa.

    This raises yet another issue with the test -- a human can very easily give responses like a computer, thus fooling the judges. Is that fair? Maybe some humans are like computers with their answers.

    In fact, one time a participant was talking about Shakespeare, and was a complete expert on the subject. The human judge was convinced he was a computer because his answers were so exact!

    Yet another problem....

    Woz
  • by Anonymous Coward
    Erwin! [userfriendly.org]
  • regardless its still an interesting challenge, and any implementation that CAN pass it will probably reveal something interesting about ourselves. No matter what "AI"s goals are :)
  • If the Turing test is out, and some speech recognition has just been conquered, what should the goal be?

    Computer vision is a decent test, but it has to be under such tight constraints. Other senses aren't worth the time, so we are left with only a few options.

    Intelligence: Problem Solving
    Have the computer tackle problems that are slight deviations from known ones with known solutions.

    Creativity: Problem Solving with a twist
    Have the machine solve a problem, and display a logical progression of the solution. The path must be more than just a search in all possible answer space.

    Abstract: Problem Solving with no discrete answers
    Have the computer tackle the 5 people in a 4-person lifeboat problem... who stays, who goes.

    Gullibility: Give the computer the ability to believe some of what it is told, without question, but also have it try to question and investigate false claims.

    Reverse Turing test:
    Computer takes the place as the moderator and tries to decide who it is talking to. Some factors would need to be ruled out, such as spelling and punctuation.

    Any other ideas for good AI benchmarks?
    The benchmarks have to be there to encourage funding and some research, so some test needs to be decided as a standard.
  • Of course, the Replicants were really genetically engineered clones. I find it more interesting that they might _not_ pass a Turing test, considering that they were basically human.
  • The difference between the Voight-Kampff test and the Turing Test is that the Voight-Kampff test doesn't really test intelligence.. It tests emotional history. The idea is that since replicants are made as adults, they don't have the proper emotional background to give the same empathy responses... not that they aren't as smart as people.
  • by SEE ( 7681 ) on Tuesday October 05, 1999 @07:22PM (#1635381) Homepage
    I was on a MUD. Somebody struck up a conversation with me, and then suddenly stopped. He turned to a companion and said, "OC: I feel really dumb -- I actually thought that 'bot was another player."

    I must say that I was rather embarrassed at being thought a 'bot, and immediately denied it -- at which point the other player said, "OC: Well, it is really believable -- see how it even denied it was a 'bot? Whoever wrote it was good."

  • by xyz ( 79275 ) on Tuesday October 05, 1999 @07:29PM (#1635382)
    Those contests have been going on for a while.

    From what I read, most people working in AI don't treat them as something worthwhile. It's fairly obvious that programs won't be able to pass the Turing test for some time (decades, maybe centuries), and the results of such tests only make it less likely that people working on valid AI projects will be taken seriously.

    The Loebner Prize has its own homepage [loebner.net]. Check out the transcripts of the conversations. The most 'clever' programs simply look for keywords, some insist on asking all the questions, some are 'whimsical' and use metaphors while constantly switching topics, none show any comprehension whatsoever.

  • Intelligence: Problem Solving

    Creativity: Problem Solving with a twist



    These are very good goals, and sometimes very difficult to achieve (for instance, the travelling salesman problem).

    Abstract: Problem Solving with no discrete answers

    This is an example of weighing the pros and the cons, but what are the criteria? Do you look deeper than the fact that they are all people and all deserve to live? (theoretically :) ) When do you want computers making ethical decisions? (A computer as a judge?)

    Reverse Turing test

    Neat idea. Still need natural language capabilities and ability to adapt as well as creativity. Should the computer be told what kind of questions to ask?

    Some of what you are suggesting is good because it tackles "real-world"/practical problems. Solving these first would help lead to the larger goal of a more "intelligent" program or environment.

    I would love to have a nice AI helper program for debugging, myself :-)

    Woz
  • "what i'd like to see is someone tackling a truly significant problem. like programming a computer to be able to vacuum your house."

    They already have made robots that sweep/mop/vacuum floors as well as people can. The problem is that it's still cheaper to pay a person to do it.

    Having a robo-slave would be cool, though. You could have it do yardwork, cook dinner, clean the house, all sorts of stuff. But I think that the advent of robots that can do "people work" will raise all sorts of ethical issues -- for instance, if robots can do manual labor as well as people can, for longer hours and for less money, then think about all the people that could be put out of work.

    We won't have to worry about that for a while, though.

    Take care,

    Steve
  • You know... The Office '97 assistant. He seems to solve all of my problems!
  • by Anonymous Coward

    People give Eliza way too much credit. I mean, it's a fun little program and was mostly cool for a while, but it's not even close to being able to pass the Turing Test. Since Eliza replies in questions to avoid contradicting herself, the only way she could pass a Turing Test is if you were talking to a New Yorker.
    -
    User: Are you smart?
    Pesci: What'dya mean, "Are you smart?" You think I ain't smart? I'm just an idiot? Huh? Is that what I am? An idiot?
    User: No, that's not what I meant...
    Pesci: Oh, so now you're calling me a liar? I'm stupid and a liar?
    User: Uhh, are you okay?
    Pesci: Yeah, I'm fine. Why? You think there's something wrong with me? You think I'm crazy? Huh? I'm crazy? I'll show you fucking crazy!
    User: I think this program is bugged.
    Pesci: Oh, so now I'm broken? Huh? You think I wasn't programmed right? You think I'm supposed to sit around and take your shit? Huh? I'm supposed to be nice to you? Huh? I'm just a program? Is that it? Just a program to amuse you? Is that what I am? Is it?
    User: No, I didn't mean it like that...no, please, don't...nooooooooo!
    *BLAM*BLAM*BLAM*
    Programmer: Oh, shit, Joey, you didn't need to do that...Oh, shit. I knew there was something wrong when the beta testers disappeared...oh, shit, oh, oh, oh, shit...

  • The problem with the Turing Test is that it tries to make a computer human...

    Actually it does not even encourage us to make a computer human. It encourages us to make a computer program that produces sentences that sound human. Compare the size of the cortical areas devoted to speech processing, to the total area of the brain. This is my estimate of the Turing test's relevance :-)

    The general problem with the Turing test, as with most of the rest of the classical AI genre, is that it assumes that all relevant information processing should be symbolic. More likely only a small fraction of the information processing ought to be symbolic, the rest subsymbolic (ANNs, fuzzy logic, etc).

    Look at ants, rabbits, dogs, etc. -- they cannot do symbolic information processing (cannot speak), but the feat they accomplish is still pretty impressive!
  • but it missed the deadline, so no money for the programmer, and the project dies. I guess that's just life.
  • Any other ideas for good AI benchmarks?
    The benchmarks have to be there to encourage funding and some research, so some test needs to be decided as a standard.

    How about RoboCup [robocup.org]?

    Computer vision is a decent test, but it has to be under such tight constraints. Other senses aren't worth the time, so we are left with only a few options.

    This remark leaves me clueless. What's wrong with the vision problem? If you have a computer system that can see, wouldn't that be useful?
    What are those constraints? If you are referring to the great need of processing power, I'd say that's more of a challenge (if the algorithms require too much processing power, then maybe they need rethinking)!
  • There are amusing logs of people talking to Eliza-like programs who are clearly convinced that they are talking to a person. Particularly when the program is set up to flirt rather than psychoanalyze. It sort of requires a more careful statement of what the Turing test is. It doesn't just mean `can fool some bozo out there.'
  • Well, it's already happening, isn't it? There are fewer and fewer manual labour jobs, but they are being replaced by other jobs. Luckily, it's a gradual thing that's not happening overnight.
  • by Greyfox ( 87712 ) on Tuesday October 05, 1999 @08:10PM (#1635394) Homepage Journal
    We all know you're a bot, so you may as well stop trying to fool everyone.
  • hmm.......I can smell brain cells frying.... :)
  • For those who didn't understand the above post, the BBS chat bot would spout that phrase whenever you said 'hmm' or a variant thereof. Another common response was when you said 'name', it would say "My name is (name of author), but you can call me HAL."

    A couple of weeks after I fell for the door, I found a copy of it and proceeded to check it out. It's actually quite interesting.

    First of all, typos were simply repeated characters. This worked well in practice once or twice, but it gets obvious after a couple more times. A more convincing typo mechanism would be to sometimes type the wrong letter (i.e. lwtter) or double-strike (lewtter) keys directly adjacent to the intended key.

    The way the actual AI engine worked was that it parsed the text the user inputted and then compared it to a list of keywords in a data file. When the first match was found, one of the responses was selected and used. Used responses seemed to be logged, and if the computer ran out of responses for a particular keyword then the door aborted. The same would happen if the user inputted a blank line twice. There was a "*" keyword at the end for words not covered, and responses would include phrases like "Would you repeat what you said?" and such.

    Overall, it didn't work too badly. During Christmas season one year, the author hacked the door to create a "Chat with Santa" door. One of the highlights of that one was when you said 'shit', the AI would say "If shit is what you want for Christmas, shit is what you'll get."

    :)

    Ahh, the memories......

    -Ed
  • Hmm... I've read some of the conversations there now. Judging from the ratio of gender words in the replies, I'd say that the system is either very human or operated by a guy/some guys in their twenties :-)

    But it's a funny site. I recommend it!
  • I ran one of those and it was a riot. Don't feel bad though, that thing took in a LOT of folks. The nice thing about it was that it had a certain context in which it operated -- the reason a sysop would run something like that was from being tired of the same exact questions 60 times a day -- so it was fairly easy to seed with keywords and tailor the responses a bit to add to the illusion.

    Of course the limited context makes it a bit more like a magic trick (think "card force") than AI. I'm afraid in the Turing test, the person with the computer program is not the one choosing the questions.


    M=(current_state, current_symbol, new_state, new_symbol, left/right)
  • You're saying that now. But just wait until the killbots rise up against humanity. Oh, you'll long for the days before we created our robot masters, let me tell you.

    (President, Skynet Historical Recreationist Society)
  • I LOVE this!

    I've never seen something more funny in my life than this [forum2000.org]. I literally fell out of my chair.
  • I was thinking the same thing... so I just pulled out my handy-dandy DVD...

    Applicable blade runner quotes [geocities.com]
  • Computer conversations can be funny. And stupid. Check out this one, called "Conversations with Fred" http://www.sudval.org/users/spamfire/essays/fred/fred.htm
  • #include <stdio.h>

    int main(int argc, char *argv[]) {
        char question[2048];
        scanf("%2047s", question);
        printf("I honestly don't know...\n");
        return 0;
    }

    i'll take cash, please. :)
  • given the volume of questions in the past 15 minutes or so, and the speed at which responses are generated, I find myself leaning toward believing these are genuine computer-generated responses. Of course, given the amazing human-like responses, I'm very reluctant to follow this logic.

    All I can say is, if it is AI, the AIs are better comedians than any human could possibly be. If human, the people behind this are the funniest people alive.

    Even reading poll comments on Slashdot isn't this funny.
  • The problem with the Turing Test is that it tries to make a computer human and that's not really what AI is all about - it's more about trying to solve problems using various techniques in order to make programs useful. (Maybe making a computer human is not all that useful ;) )

    What you describe is the viewpoint of those who have given up on developing a general intelligence because the problem has proved to be too difficult. "Solving problems using various techniques in order to make programs useful" isn't AI. It's standard algorithm development. It's AI in its most limited sense, where highly specific and limited intelligence is applied to specific problems.

    AI more generally is about developing an artificial intelligence i.e. a computer that can convince us that it's conscious. It's a higher goal than a purely utilitarian one (but will no doubt prove to be far more useful in the long run). This means that the AI has to have human characteristics, and a test along the lines of the Turing test is the only way to measure this.


  • The Turing test is based on the way that we as humans ascertain whether another object is sentient or not. We only think that other people are sentient and intelligent because of the way they act. We don't actually know that anyone else is actually conscious; we just assume it from their actions. The Turing test attempts to use the same procedure, but hide those aspects that might prejudice our impressions, i.e. that one of the respondents is actually a computer. The Turing test is overly restrictive in that a computer has to try to act like a human; however, it was presented as a starting point, i.e. if a computer could pass the Turing test, then it should be conscious. Oh, and how do you know that rabbits and dogs don't do symbolic information processing? Symbolic information needn't be just a representation of words. I would highly recommend "Godel, Escher, Bach: An Eternal Golden Braid" by Douglas Hofstadter for a much more in-depth discussion.
  • Douglas Hofstadter argued that for a computer to be able to pass the Turing test, it would need to have such a detailed understanding of the world in which the language is based that it would in fact be intelligent.

  • It's actually quite simple. You have to pick your subject carefully (intelligence is something we want to avoid) and then spring it on them when they don't expect it.

    I did this with AOL Instant Messenger. I saved a bunch of my gaim conversations and then read over them and customized Eliza to make it sound as much like me as I could. Then some perl magic to make it work with Toc and I left for a party and then a movie.

    I got back at around 2:30 in the morning and saw a friend talking to it. He had been chatting since 11:00 pm!!! He didn't even dimly suspect that it might be a computer, but he was getting pretty pissed off - it was saying pretty stupid stuff that usually didn't make sense, and it repeated itself every 5 or 10 minutes.

    I laughed over that one for a looooooong time. It might not work anymore, tho... anyone know if Aol pulled the plug on Toc?

    --
    grappler
  • This remark leaves me clueless. What's wrong with the vision problem? If you have a computer system that can see, wouldn't that be useful?

    The problem is that computers don't see. They look, but they don't really see. They can be told that what's in front of them is Cassandra, but they aren't good enough at distinguishing what most everyday items in the world are just by sight.

  • I think the success of a Turing test program is largely dependent on the intelligence of its human conversation partner. As I've witnessed on more than one occasion, people have spent literally hours talking happily to a second-rate Eliza clone, thinking it was a real person.

    One particular episode that comes to mind is The Saga of Roter Hutmann [nothingisreal.com], available at http://www.nothingisreal.com/saga/. This is the story of a computer science major who spent hours every day talking with Julia, a Turing test program, even going so far as to ask it out on a date, before he finally voiced to me his suspicions that she was "not human". Ironically, he then proceeded to call her a poorly-written program... Julia used to be accessible via telnet (fuzine.mt.cs.cmu.edu, user "julia") but, alas, is there no more...

    Anyway, check out the Saga if you've got a few minutes to spare as people keep telling me it's the funniest thing they've read for a long time...

    Regards,

  • We were discussing you --not me.

  • I dunno...
    I participated in a turing test once, as the human on the other side of a terminal.

    More than half of judges failed me (ie: thought I was an AI).

    Half of me felt that that was so cool, but the other half started wondering if I've been playing with computers too much... The first half musta won, cause I haven't cut back one bit :)
  • The full test?

    No.

    Tom
  • The Turing Test is carefully stated.

    Nothing has even come close to passing it yet.

    Tom
  • Dartmouth College is offering a $100,000 prize to the first programmer that can pass the Turing Test

    I'm a programmer. I can mostly keep up a conversation for five minutes. Where do I apply to get the money?

    /Johan

  • The problems with the Turing test are that it is too hard and that it can produce false negatives. After all, a machine could be truly intelligent and not human at all.

    (I hadn't spotted the 5 minute rule, is it Turing's or a bolt-on?)

    Tom
  • Yes but the test requires a *human* judge ;->

    Tom
  • ...is that it avoids any argument about whether a program is really intelligent or actually 'understands' by defining intelligence as behaving sufficiently like a human (ignoring the physical aspects of humanity) that other humans accept it as one.

    This isn't easy at all -- imagine asking a computer program to not only suggest a move in a chess game, but to write a poem about a subject of your choosing, compare and contrast two public figures, and so on.

    I don't think any of this can be done without a *deep* understanding of language and human culture.

    Of course there are *many* very useful things for AI to achieve which fall short of passing the Turing test -- in fact I think by the time we can pass the Turing test we'll probably have achieved everything else -- except super-human intelligence, but perhaps that's just a matter of cranking up the clock speed :-)
  • All I know is that it didn't do well in the US when it came out. Then it came to Europe, where it received huge critical and public praise, and so it was re-released in the US, where it finally became a popular hit.
  • by rve ( 4436 )
    I'll collect my $100k. By the way, this isn't me, but my robot posting.
  • The problems with the Turing test are that it is too hard and that it can produce false negatives. After all a machine could be truly intelligent and not human at all.

    The Turing test does not produce false negatives. It states that IF a computer passes it THEN it is conscious. The implication is not reversible.

    Despite many researchers devoting their time to actually building machines to pass a constrained version of the test, I would say that the main merit of it is exactly that it is very hard. Constrained Turing tests, such as computers that can talk about a certain subject, only produce clever programming gimmicks that do not scale.
    However, the complexity that is inevitably needed to actually produce intelligent speech is the key feature here: from complex interactions of simple components, intelligence emerges. Both Daniel Dennett and Douglas Hofstadter have written some insightful stuff about this. In "Consciousness Explained", Dennett describes a conversation between a Turing-test-proof computer and an interrogator: the computer tells the interrogator a joke and explains it. It also comments that it doesn't really like the joke because it is about racial prejudice. Reading this conversation makes you realize how immensely difficult this task is.

    In short, I don't agree that passing the Turing Test is no longer a goal of AI. Any system that would pass the real-deal test should be considered intelligent. However most programs written today are just gimmicks, that can only pass very short or very constrained tests. We are very, very far away from passing the real test.

  • LOL... Check out the Borg Queen's reply!

    This has got to be real people.
  • This [forum2000.org] one is even better... it features Bill Gates commenting on Slashdot :-)
  • You need two people and a program.
    Person A asks questions of both the computer program and person B. It is person A's responsibility to guess which one is the machine and which is the human. The computer program wins only if person A recognises it as being human over the real human.
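    The protocol above can be sketched as a toy harness (the function names and the judge/respondent interfaces here are my own illustration, not part of any official contest rules):

```python
import random

def imitation_game(judge_ask, judge_guess, human, program, rounds=5, seed=None):
    """Toy version of the imitation game: a judge questions two hidden
    respondents ('A' and 'B'), one human and one program, then names
    the slot it believes holds the machine.

    Returns True if the program "won", i.e. the judge pointed at the human.
    """
    rng = random.Random(seed)
    machine_slot = rng.choice(["A", "B"])        # blind slot assignment
    human_slot = "B" if machine_slot == "A" else "A"
    respondents = {machine_slot: program, human_slot: human}

    history = []  # list of (question, {"A": answer, "B": answer})
    for _ in range(rounds):
        question = judge_ask(history)
        answers = {slot: respondents[slot](question) for slot in ("A", "B")}
        history.append((question, answers))

    return judge_guess(history) != machine_slot
```

    The random slot assignment is the point: exactly one of the judge's two possible final guesses points at the machine, so the judge can only do better than chance by reading the answers.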
  • that's not really what AI is all about - it's more about trying to solve problems using various techniques in order to make programs useful.

    That's not the goal of AI, that's the goal of programming.

    There was a time when expert systems were AI. Before that, anything DWIMish was AI. Now it's all just programming.

    Why? Because the only definition that consistently fits AI is clever stuff we don't really know how to program yet.

    Someone (I forget who, maybe Dave Touretzky?) said once, ``AI is like a magic trick. The first time you see it, you think wow, that's amazing! That must be magic! Then you want to know how it's done, and someone tells you, and you think, wow, that's really clever! Then later, after you understand it and it's no longer novel, when you see it again you think, well it's just sleight-of-hand, duh.''

    Once something doesn't feel like magic any more, because it has become common-place, it is no longer AI. At that point, it's just programming.

    (And some pointy-headed loser will probably even refer to it as a ``design pattern.'')

    I like that Hofstadter dialogue you mentioned, but calling something that passes the Turing test a ruse kind of misses Hofstadter's point entirely. Just because you understand why a program passes the Turing test doesn't mean it didn't pass, and it doesn't mean you are excused from treating the program as a human. Because if you hypothetically understood all the chemical and electrical processes that made us work, it wouldn't excuse you from treating your fellow humans decently. ``Oh, it's just a meat-machine pretending to be clever'' isn't an excuse.

    To bastardize the Arthur C. Clarke quote, any sufficiently understood magic is indistinguishable from technology.

  • The problem with the Turing Test is that it tries to make a computer human and that's not really what AI is all about - it's more about trying to solve problems using various techniques in order to make programs useful.

    Is that what they tell you these days? How is that distinct from any other kind of programming? The real fact of the matter is that AI is based on an incorrect (but intuitively very attractive) idea that human minds and computers are similar. Since it's become increasingly obvious that this is not the case, the people who staff the AI departments of universities have backed off further and further from this. By the time I was there, they had realised the brain's "hardware" was radically different from a computer's and had backed off into a kind of dualism based on the C-T hypothesis, claiming that consciousness (as if we had any idea what that is) is a program that can run on widely differing hardware platforms. Obviously they've backed off even further now.

  • Although the Turing test is widely regarded as a tool to test intelligence, this statement is very questionable. The Turing test only tests how well a program can simulate a human.

    For example: if a friend of yours can multiply two large numbers, you'll say he's smart; however no-one ever called a calculator 'smart'.
    If a person memorizes all countries with their capitals, you'll also consider this person intelligent; computers are far better in memorizing things.

    The point is that this program (trying to pass the Turing test) will not only have to fake intelligence, but also stupidity. If the interrogator asks it to factor 4553536663, it will have to lie and say it doesn't know, or it will lose credibility. The question here is: is it favorable for a computer (or any other device) to deny its capabilities, just because our definition of intelligence might be a little off?
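    That "faked stupidity" is easy to illustrate with a toy responder (the function, its rules and its canned phrases are invented for this comment, not taken from any real contest entry):

```python
import random

def humanlike_arithmetic(question, rng=None):
    """Dodge or botch hard arithmetic the way a person might,
    instead of answering instantly and exactly like a calculator."""
    rng = rng or random.Random()
    numbers = [int(tok) for tok in question.split() if tok.isdigit()]
    if "times" in question and len(numbers) == 2:
        a, b = numbers
        if a > 99 or b > 99:
            # A human would refuse rather than multiply big numbers in their head.
            return "No idea, I'd need a calculator for that."
        # Small numbers: answer, but occasionally be off by one,
        # like someone doing mental arithmetic.
        return str(a * b + rng.choice([0, 0, 0, -1, 1]))
    return "What kind of question is that?"
```

    A program answering honestly and exactly would give itself away instantly; this one deliberately throws the game to look human.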

  • The problem is, that computers don't see. They look, but they don't really see.

    No, exactly. And to make a computer really see, is what's known as the vision problem. My question (as you will see if you read it again) was: Why isn't this a suitable problem for AI?
  • Which just goes to show - passing the turing test is probably mostly a question of cues other than the conversational logic. If a beautiful woman came on to you and kept saying "we were talking about you, not me" over and over, you probably wouldn't say, "gee, that's a poorly written computer program".

    You'd probably fall for it.
    ;-)
  • If I recall correctly, in the fine print of the Loebner Prize it says that in order to win the real money, you have to pass a *fully multimedia* Turing Test. In other words, do computer generated audio and video so convincingly that you think you're talking to a person over a webcam or something. Of course, since we're barely even close to coherent conversation now, nobody's likely to win that money.

    They have this contest every year. Some years, the contestants do well, others, not so well. When taken as an abstract, i.e. "A computer that can always fool any human for any length of time into thinking that he is talking to another human", the Turing Test is valid -- but untestable. Once you put constraints on it ("these 10 people for 15 minutes...") it's no longer valid because each constraint is a weakness (maybe the people were stupid. Maybe if they just had time to ask another question they would have been able to tell the difference...)

    I think something important that's forgotten frequently in dealing with natural language technology is that right now, in almost all cases, you don't want to have a conversation with your computer! You want to tell it to turn on the lights, and to ask it how much money you have in the bank, and to find cool new warez and MP3s, dood. The sentence structure of queries and commands is far different from (and yet similar to) that of conversations, in which context almost always becomes the downfall of comprehension.

    Someday, yes, people will want to have a conversation with the machines that control their houses. I envision a machine that can tell by my sentence structure what mood I'm in, and put on some appropriate music, set the lights, and so on. But those things will all happen *after* we get the basics down, like differentiating "Lights on" from "Could you turn on the lights please, computer?" and having them both do the same thing. Nobody would call the former true natural language. It's when we can do the latter, and have "noise suppression" be so seamless that you can say what you mean in almost any conceivable way, that people will take it seriously as an interface.

  • What if it learns to mimick Windows 95?

    (Bad influences and all that)
    --
  • This is exactly why I think AI is a bad term. As someone mentioned earlier, general intelligence is much too hard to symbolize.

    The thing is, AI really isn't that different from regular programming. It's just that, because it plays a game or solves puzzles instead of replacing characters, we call it "intelligence". AI is more about trying to find a way to program those little tricks and shortcuts that we take in our own mind.

    What really constitutes the "intelligence" we are trying to make artificially? We have robots programmed to react to sensory input. So the fact that it reacts makes it...? Would an intelligent program be one that helped me find problems while I'm debugging?

    Many AI problems relate to one common trait: How do I eliminate a lot of the useless paths I could follow to achieve this goal?

    Damn, I have to go to class. Anyway, these are things I am thinking about....

    Woz
  • Enjoy, I created these a while back. If you get all the way to sunday you might notice one of the aliens is wearing an interesting t-shirt.

    turing test comic here [smallgrey.com]
  • About the Turing test. There are many myths and legends about it. People claim certain knowledge of the test, but don't think it through and see what's wrong. Common sense applies more here than any university degree.

    One claim is that the Turing test is the only way we've got to determine whether a computer program is intelligent or not. This is derived from the notion that we think we can recognize intelligence when we see it. But the test says nothing of common error probability (many humans have actually failed the test for being an AI), or of the capabilities of the judges. If you read some of the transcripts from past official Turing tests, you'll be horrified at how quick some judges are to judge, and what simple questions they ask. Many of them appear to be bored with it all. This also applies to the human candidates. Some of these past faults can be blamed on poorly written programs that couldn't compete in any way. The past Turing tests actually had limited discussion topics, so that the programs could be programmed for a specific topic. But think of a super-program (one that is not super by today's standards) among those. It could actually pass in the tired and disappointed atmosphere four years ago. To quote from "Thomas Covenant the Unbeliever": any test is just as good as the tester himself.

    About Humans. In our arrogance we say that we are intelligent, and everything else is not. We are amazed and dazzled by pets who perform instant rescue operations in fires and drowning accidents. For how can animals be intelligent? We don't measure intelligence; we blatantly state that the things around us that aren't human are not intelligent, by unconsciously applying our own version of the Turing test to everything around us. Of course, many of us do regard animals as intelligent, to a lesser degree, but most humans think of intelligence as a binary state.

    About Intelligence. But it can be measured. It's not an ON/OFF switch for us to decide its state. Heck, we don't really have a clear-cut definition of intelligence even today! Other than that faulty "it's not human-like" negativity test, and IQ tests, which are only tests to separate "dumb" people from the rest.

    And there isn't just One Kind of Intelligence (to Rule them all). You have social, technical, linguistic, mathematical, logical, motor, coordination and many, many more intelligences. There exists no test that tests it all, and no tests are very accurate. Many people who are considered "dumb" really excel in some more obscure areas. So we don't have a clue what it's all about!! Really. We just like to simplify things to the bone. And make ourselves look better than the crowd.

    My definition of an intelligent system is an open-minded and positive test. Whether I can measure it or not, a system is intelligent to a certain degree if it contains information and processes this information within itself. It MAY receive input data, and it MAY emit output data, but that is only essential to my perspective of knowledge (not beliefs). The type of data-storage medium is not essential. Neither is the medium processing the data. What is essential is that information is being altered inside the system and fed back in a feedback loop. Thus, the system has a way of "viewing itself" (the definition of a reflective system).

    The internal processes can involve operations like copy, addition, inverse, etc. These would be atomic functions, while multiplication, subtraction, division and exchanges would only be optimizations, since they can always be expressed by a set of atomic operations. But the data doesn't have to be numbers, and the atomic functions would be different for neural networks, images, symbols or even colours, for instance.

    To complicate things even more, processes could run in parallel internally in the system. In real life, the neural networks in our brains all process in parallel to a certain degree. (I.e., I'm sure there are semi-synchronisation methods between parts of the brain, even though they might be complex or chaotic.)

    In information theory, you can express any information in binary numbers (00101011). This simplifies things, but you'll need an unambiguous specification to convert data both ways. Some types of data could perhaps be processed more effectively than strings of binary data (i.e. linked lists, images, Chinese symbols), simplified into complex structures of binary strings.

    Input and output data in a feedback-loop would permit the system to develop with its surroundings. To what extent is unknown. Ie, how much intelligence and knowledge would the two systems contribute to each other? Limitations would be imposed by information storage sizes, lack of atomic functions, dead-end loops, etc. Especially lack of creativity (a random function) would be a dramatic limitation to the extent of intelligence and knowledge possible to be learned and taught. Read-only areas in the system's data or process-storage would be another severe limitation.

    Systems lacking a trait that exists in another system could interface with that other system in a symbiosis, to use the resources found there. This is in the extreme case the basic principle of an artificial neural network. Where everything is shared holographically in the structure of the neurons' connections (and each connections weights).

    On the difference between intelligence, knowledge and their respective levels. The usual pitfall is to not distinguish intelligence from knowledge. I prefer to define level of knowledge as the amount of non-redundant information a system can internally access within a given time/number of cycles, and level of intelligence as the complexity of a given task that can be solved within a given time/number of cycles.

    These levels are next to impossible to measure very accurately in real life, but of course you have imperfect methods. Just don't count on them for anything more than what they are. One type of method is to measure intelligence from the output of the system, in light of the input data or not. You can also test intelligence by scanning the actual code and data the system consists of, if you are able to "X-ray" it. You will have to be able to determine how intelligent the algorithm is. Of course, in real life, the observation will always affect the state of a running system (real life is ALWAYS on, darn ;). In computer programs it will be unaffected, unless the technician trips over a wire or something.

    These definitions leave one thing hanging if you're calculating in real time: processing cycles per time unit (e.g. 450 MHz). I don't consider a system processing large amounts of data (a supercomputer) to be more intelligent, by the definition above and "common" reason. But you would have to multiply this speed by the intelligence level to get the total intelligence effect (i.e. some of what Turing and IQ tests are really testing).

    I know this is all hard and difficult to understand and think over. The definition is very impractical too. But it's a much better place to start than just saying "I don't see the intelligence in this" when you haven't even decided for yourself what intelligence really is! That simply shows a lot of ignorance. Besides, it's the modern way to go. Most AI programmers building neural networks live by it. (Sadly I'm not :(

    The definition doesn't exclude anything physical from the right to be intelligent. We human beings consist of trillions of living cells. They in turn consist of billions of atoms and molecules, which again turn out to consist of even smaller "particles" of a less physical nature (see the religion of modern science [not a book, it's for real! ;] ). All these particles (or more correctly multi-dimensional waves) are processing internal data and interacting with their surroundings. Therefore, everything physical can be considered to exhibit a certain degree of intelligence!!

    I think this ALSO applies in cases where we are not able to detect the output data or the non-human intelligence in it. Science is too eager to test for negativity and simplify things, thus many creative theories are crushed by the latest dogmas. (Scientific people think they know better than everybody else just because they use fancy language to make themselves misunderstood.)

    Now if you've grasped the ideas I've expressed here, you'll know that the Turing test is a bogus test. Both in the computer lab as well as in real life.

    - Steeltoe (really tired of hearing those people say Turing test is all we got)

    PS: Gee, this edit-window is tiny! ;)
  • It's a feature, not a bug... it simulates politicians. :)
  • First, Hugh Loebner is the one that is supplying the $100,000 for the Grand Prize not Dartmouth. Dartmouth is just hosting the contest this year.
    The link to Hugh's Loebner Prize page has already been posted in one of the other comments, but should be added to the list of related links in the /. box.

    Secondly, even if you don't win the $100,000 Grand Prize, Hugh presents $2000 every year to the "most human program". Entries are being accepted until Oct. 31st and there is no entrance fee. So go read the rules and try to win yourself a few thousand dollars.

  • And I wonder why tuition is so high...
  • First, Hugh Loebner is the one that is supplying the $100,000 for the Grand Prize not Dartmouth.

    That's a relief!
  • I think this has been discussed a little in another thread, but the Voight-Kampff test did not measure intelligence or the ability to "pass" as human; it measured empathy as a measure of true humanity.

    This is an important distinction. The replicants were already Turing compliant, but they were not human. Dick believed that empathy was the defining aspect of being human. Dick's replicants would have been able to pass any Turing test with ease. In fact, they passed the most difficult Turing test of all: they were able to live in human society, hold jobs, sing opera, make love; but they weren't human.
  • To see if a program will pass the Turing Test right away, just ask it some question with a lot of slang.

    IE: "Hey, wussup, just wonderin if ya caught that NIN "pinion" vid on MTV yet? If not, check dat shit out cuz its PHAT!!"

    I'd like to see what an intelligent program's response to that would be...

  • Can't believe they went there.
  • The most likely explanation is that they have the page linked to an IRC bot or the like, and if they feel like answering, they just send the message back through the bot.
  • I believe the Turing test is both impossible to pass and an inaccurate way of measuring artificial intelligence.

    Artificial intelligence's purpose isn't to mimic the way humans think and react, but to devise solutions to problems without needing specific programming for each problem, to learn and adapt to new situations, and not just be constrained by a single original procedure. This test might measure that to an extent, since the computer would be able to answer a question accurately no matter what form the question is presented in and, if it does not have the answer on hand, search the internet for it; however, the answer would still be very easily distinguishable from a human's answer.

    There are a number of ways in which one could "trick" the computer, or "cheat" on the test, from either the interrogator's end or that of the person being compared to the computer. One easy way to cheat would be to simply look for human error. A computer has no element of human error, except that which is programmed into it. An instant giveaway might be a typo ("teh"). Another giveaway would be when the person does not know the answer to the question: any artificial intelligence program made to answer questions accurately would be able to quickly locate and produce the correct answer, while no person knows everything. Ask someone "what's the atomic mass of bromine" and they would be like "what the hell kind of question is that?", while a computer spits out the number. Which brings me to how the answerer could cheat: slang and dialect. People generally don't speak proper English, and it would be easy to distinguish between a computer program and certain dialects or slang used by people. Of course, you could attempt to tackle this and the typo problem both by making it purposefully make typos, or attempt to make it speak with slang ("gangsta_turing_AI": 'you best step off 'for I bust a cap in you a$$ motha*****'), but seriously, is this what AI is about? I didn't think so, and even if we went to such lengths, I don't believe it could be done 100% convincingly, at least not by the end of the month ;).
  • by Zirman ( 87997 )
    What bugs me about Alan Turing's test for intelligence is that it only looks for human-like intelligence. Conscious cognitive intelligence is what they should be testing for.
  • The idea of the Turing Test is a really cool one, at least at first. And if people who tried building Turing Test capable machines worked with the spirit of the test in mind, it would stay cool. The problem is that they don't. Instead of focusing on things that make us intelligent, they focus on things that make us human. They analyze speech patterns, type patterns, and similar things that are easy to duplicate, and they write programs that duplicate our failings, not our intelligence. Programs have been written that can pass the Turing Test. Eliza, for example, does really well. But hook Eliza up to another of its ilk, and you get utter gibberish. So perhaps that's a better candidate for a Turing Test - design a program that can make intelligent conversation with a pseudo-Turing-capable program.
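    The kind of speech-pattern gimmick being criticized is easy to sketch. Here is a minimal Eliza-style responder: keyword rules plus pronoun "reflection" (the rules below are illustrative, not Weizenbaum's original script):

```python
import re

# Pronoun "reflection" table: swap first and second person in the echoed fragment.
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "you": "I", "your": "my", "am": "are"}

# Keyword rules, tried in order; each captures a fragment to echo back.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"(.*)\bmother\b(.*)", re.I), "Tell me more about your family."),
]

def reflect(fragment):
    """Swap pronouns word by word so the echo sounds addressed to the speaker."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    for pattern, template in RULES:
        m = pattern.match(sentence)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."  # the catch-all that gives the gimmick away
```

    Hook two such responders together and the conversation collapses into the catch-all line almost immediately, which is exactly the point about gibberish above.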
  • I'm not going to reiterate all of it, but Selmer Bringsjord [rpi.edu] has written and collected a lot of interesting information about robots, Turing Tests, and the state of the art.

  • You sound a lot like Eliza yourself with that comment :-)

    It says more about the friend. It took a LOT of customization to get Eliza to do that. Nothing spiffy, just a lot of words for it to watch for and a variety of responses. When I was done with it, it didn't sound anything like the original psychiatrist version.

    This particular friend was actually an annoying guy from my CS class who got my AIM name from somebody and kept bothering me. Instead of putting him on my blocklist, I gave him the Turing Treatment (TM) :-)

    Now he's been bothering me more - he's fascinated by my customized Eliza and he thinks I am really on to something big in the field of AI. Sheesh...

    --
    grappler
  • To pass the test, the program would have to converse fluently in a manner indistinguishable from a human... not quite where we're at in AI.
