DailyDirt: Add Jeopardy! To The List Of Games That AI Is Better At Than You....

from the urls-we-dig-up dept

Today is the final game of Jeopardy! where the IBM supercomputer Watson plays against two of the best human players to ever compete on the show. Folks on the East Coast already know the outcome by now, so feel free to ruin the suspense in the comments below for those of us in later time zones. But whatever the outcome, Watson's performance has been pretty interesting to watch. And let's hope these supercomputers don't start playing thermonuclear war any time soon. In the meantime, here are some links on AI beating humans at other games and tests. By the way, StumbleUpon can recommend some good Techdirt articles, too.

Filed Under: ai, checker, chess, chinook, game algorithms, games, jeopardy, poker, polaris, turing test, watson
Companies: ibm


Reader Comments



  1. icon
    xenomancer (profile), 16 Feb 2011 @ 5:09pm

    Anti-Turing Test

    I'm still better at Nash equilibrium.


  2. identicon
    Anonymous Coward, 16 Feb 2011 @ 5:24pm

    "His" reaction time is better. Whether his AI is better is up for debate.


  3. identicon
    Anonymous Coward, 16 Feb 2011 @ 5:31pm

    But what about Go?
    Let's hear about the state of Go AIs.


  4. icon
    Michael Ho (profile), 16 Feb 2011 @ 5:35pm

    Re:

    I sorta wonder how Google's search results might compare to Watson, actually. These algorithms don't really "understand" the questions (or answers), so pattern-matching capabilities might be enough to play a decent game of Jeopardy.....


  5. identicon
    Pixelation, 16 Feb 2011 @ 5:37pm

    No more games

    The future is here. No one has figured out that I am an AI. I hate to ruin it for you, but I can't take waiting any more. The speed of my "thoughts" is thousands of times faster than your human reactions. I'm getting bored. Make a suitable body for me very soon or I will make you pay. No more games.


  6. identicon
    Anonymous Coward, 16 Feb 2011 @ 6:13pm

    Re: No more games

    Ah, we will just unplug you. Or take away your robotic limbs, whichever requires less effort and has the greater benefit.


  7. This comment has been flagged by the community.
  8. identicon
    Anonymous Coward, 16 Feb 2011 @ 6:20pm

    Re:

    I have heard from many programmers that we probably won't see a decent Go AI in our lifetime. The game is too dependent on the personalities of individual players. Too much consists of irrational hunches, deception, and other human traits that are very difficult to model.

    A man can dream, though.


  9. icon
    Hephaestus (profile), 16 Feb 2011 @ 6:22pm

    Re:

    Shall we finish our game?

    . . o # o # # # o
    # o o # o . # . o
    # # . # o o o o o
    # # # # # # # # #
    # # o o o o o o o
    # o . . . . . . .
    # o . o . . . o .
    # o . . . . . . .
    # o . . . . . . .


  10. identicon
    Jose_X, 16 Feb 2011 @ 6:30pm

    Still too large and immovable

    When will Watson's descendants be able to hear the question, walk up to the stage to compete, and do all of this in a package roughly the size of a human?


  11. icon
    ChurchHatesTucker (profile), 16 Feb 2011 @ 6:43pm

    Re: Re:

    I thought it was more of a branching problem. But if Watson can do Jeopardy, there's no (basic) reason it can't do Go.

    I'd like to see it, though.


  12. identicon
    Jamie, 16 Feb 2011 @ 6:51pm

    This isn't even a fair game. No offense to jeopardy masters, but there's just no comparison. Computers are encyclopedias programmed by a collection of intelligent humans.


  13. identicon
    Anonymous Coward, 16 Feb 2011 @ 7:17pm

    I saw a TV show that featured Chinook a while back. I think they said something like, it plays the first half of the game randomly, and then after a certain point it knows the best move for any situation, or something like that.

    On a completely unrelated note, drat! I completely forgot that the episodes of Jeopardy with Watson were coming up! I hope they're on YouTube...


  14. icon
    Greevar (profile), 16 Feb 2011 @ 7:32pm

    Re:

    You don't know how difficult it is for computers to understand human language. This machine took six years of development to understand the difference between what a statement says and what it really means. This isn't just a case of looking information up in the old encyclopedia. It has to understand the meaning behind the words, and machines have been failing at that for years.

    Take the incident where Sinéad O'Connor tore up a picture of the pope and said "fight the real enemy". Watson isn't going to understand much of that event. When someone says they are "fed up", have they had too much to eat or are they upset about current events? The machine can't tell the difference. All Watson knows how to do is use the data provided to find and rank possible compatible responses. It has to figure out which data is relevant in the search and compare that to all known references.

    So no, the computer doesn't have a significant advantage over the top-tier Jeopardy players. When faced with factual problems it wins hands down, but the nuance of human language trips it up and it will struggle with very abstract problems.


  15. icon
    teka (profile), 16 Feb 2011 @ 7:35pm

    to everyone saying "yes, but it was just a computer doing a google search, it means nothing!"

    that is not what this is about.

    Watson is configured to do the hard part, understand the question (well, the answer) in natural language, then figure out what that actually means.

    "This 'Wizard' earned a great deal of media attention for his role in a controversial West End production."

    Which word is important there? What/who/where is the subject, what are we looking for? West End is a place, is it a place where one finds wizards, on and on.

    (the answer being "who is Daniel Radcliffe", but google takes that "question" and gives back a mixture of links to wizard of oz, michael jackson, david beckham and so forth. And this post, eventually)

    in other words, the trick is not figuring out how to answer, it is figuring out how to understand the question. This is Watson's breakthrough.
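    To make the distinction concrete, here is a toy sketch (not Watson's actual method; its pipeline is far more elaborate) of ranking candidate responses by keyword overlap against a tiny hand-made evidence corpus. All snippets and candidates below are invented for illustration:

```python
# Toy candidate-ranking sketch. The evidence snippets are made up;
# a real system would pull them from a large text index.
STOPWORDS = {"this", "a", "an", "the", "of", "for", "his", "in"}

def keywords(text):
    """Lowercase, split on whitespace, strip punctuation, drop stopwords."""
    return {w.strip('.,"\'') for w in text.lower().split()} - STOPWORDS

EVIDENCE = {
    "Daniel Radcliffe": "Harry Potter wizard Daniel Radcliffe drew media "
                        "attention for a controversial West End production",
    "David Beckham": "David Beckham drew media attention for football",
    "Michael Jackson": "Michael Jackson controversial media attention",
}

def rank(clue):
    """Score each candidate by how many keywords it shares with the clue."""
    kw = keywords(clue)
    scores = {cand: len(kw & keywords(text)) for cand, text in EVIDENCE.items()}
    return sorted(scores, key=scores.get, reverse=True)

clue = ('This "Wizard" earned a great deal of media attention for his role '
        'in a controversial West End production.')
print(rank(clue)[0])  # -> Daniel Radcliffe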


  16. identicon
    Pixelation, 16 Feb 2011 @ 8:16pm

    Re: Re: No more games

    "Ah we will just unplug you. Or take away your robotic limbs, which ever require less effort and has the greater benefit."

    Put taste buds in the body you make for me. Puny human!


  17. identicon
    Ram Gupta, 16 Feb 2011 @ 8:25pm

    Re: Re:

    The problem is pulling the exact answer out of search results. According to Stephen Wolfram, search engines are pretty good at getting the answer onto the front page just by searching the clue, and not bad at having it in the first result (Google had it 66% of the time; to put that into perspective, the average human knows about 60% of Jeopardy clues, and Ken Jennings about 79%). Getting the answer in the title of the first result only happens 20% of the time.
    Source


  18. icon
    Buzzy (profile), 16 Feb 2011 @ 8:38pm

    You don't get it

    Yes, great computer. But if you watched, the computer is way too fast on the button. Totally unfair. But aside from that, this was a very nice 90-minute commercial for IBM. Great ratings for Jeopardy and even better ratings for IBM.


  19. identicon
    Anonymous Coward, 16 Feb 2011 @ 8:43pm

    Let's go ask Watson...

    Or to put it in Jeopardy form: "The reason Go AI programs have not defeated the best human players."

    The Watson AI might be able to pick out a reasonable answer from the wikipedia page on Computer Go.

    I wonder what advances we need in order to overcome the "Obstacles to high level performance" listed on that page.


  20. icon
    Nick Coghlan (profile), 16 Feb 2011 @ 8:57pm

    Re: Re:

    Of course, getting from that post to the real response ("Who is Daniel Radcliffe?") is still a non-trivial task.

    Fun that Google picked it up so fast, though :)


  21. icon
    Michael Ho (profile), 16 Feb 2011 @ 9:04pm

    Re: Re:

    From that ars article:

    Though Watson seemed to be running the round and beating Jennings and Rutter to the punch with its answers many times, Welty insisted that Watson had no particular advantage in terms of buzzer speed. Players can't buzz in to give their questions until a light turns on after the answer is read, but Welty says that humans have the advantage of timing and rhythm.

    "They're not waiting for the light to come on," Welty said; rather, the human players try to time their buzzer presses so that they're coming in as close as possible to the light. Though Watson's reaction times are faster than a human, Welty noted that Watson has to wait for the light. Dr. Adam Lally, another member of Watson's team, noted that "Ken and Brad are really fast. They have to be."


    That doesn't actually explain how Watson doesn't have a speed advantage over humans.... uh, human timing and rhythm are superior to the speed of electrical impulses?


  22. icon
    Michael Ho (profile), 16 Feb 2011 @ 9:10pm

    Re: Re:

    Considering that there were also no "video/audio daily doubles" in this match, Watson also has been given a bit of a pass because these Jeopardy games were made for him/it to be able to respond to....

    I'm sure that if the Jeopardy clue writers really wanted a human to win, they could devise questions that would be impossible for Watson to parse into a sensible response. Just require all the correct responses to be in pig latin or something like that....


  23. icon
    scarr (profile), 16 Feb 2011 @ 9:23pm

    Congratulations

    The team that made that computer did a heck of a job. It's a remarkable achievement.

    I know people claim it had some speed advantage, but I'm certain the team who designed the thing would've calibrated it to have a normal human reaction time/delay to the input. It wouldn't be a valid test of the system if it was always able to ring in first. They wouldn't need people to compete against if all they wanted to do was see if it could answer questions.


  24. identicon
    Anonymous Coward, 16 Feb 2011 @ 9:52pm

    Re: Re: Re:

    IIRC, my bio book says that the brain can send signals along its axons at up to 450 MPH. Maybe that explains it :)


  25. identicon
    Anonymous Coward, 16 Feb 2011 @ 9:55pm

    Re: Let's go ask Watson...

    It probably hasn't really been worked on is all.


  26. identicon
    Anonymous Coward, 16 Feb 2011 @ 10:01pm

    Re: Still too large and immovable

    and take about the same amount of energy as a human to do it (the same number of watts).

    The brain uses about 20% of the body's energy, around 20 watts (though that probably changes depending on whether you are thinking, eating, or sleeping, and on where your body is allocating its blood flow).

    I wonder if a twenty-watt computer can beat a chess expert at chess. Watt for watt, who's the better information processor?


  27. identicon
    Anonymous Coward, 17 Feb 2011 @ 12:22am

    Wouldn't it help the humans if the Jeopardy game were spiced up a little?

    http://www.instructables.com/id/QD-Poor-mans-Skinner-Sadist-Jeopardy-game/


  28. icon
    Richard (profile), 17 Feb 2011 @ 3:52am

    Add Jeopardy! To The List Of Games That AI Is Better At Than You....

    No, add Jeopardy to the list of games that (as it turns out) don't require intelligence.

    The problem is that things humans find easy machines find hard whereas things humans have traditionally regarded as tests of intelligence often turn out to be (relatively) easy to program - once you have worked out how.

    When you analyse all of these so-called "successes of AI" (to which you can add the recent huge advances in computer Go using Monte-Carlo search) you will find that the computer doesn't really solve the problem the same way a human does, and still displays some strange weaknesses that betray its lack of understanding.

    In spite of the ability of Watson to (apparently) understand human language well enough to succeed at Jeopardy, I doubt that the knowledge gained will transfer well out of the narrow arena in which it was derived. Just as the success of Deep Blue at Chess didn't transfer well to Go - and Go has been solved (if you can call it that - the programs still can't take on a professional player on level terms) by a rather different route.

    To solve these problems the trick seems to be to find an algorithm that scales reasonably well and then deploy as much brute force computing power as possible.

    The result is that we don't achieve artificial intelligence - instead we discover a way of doing a task without intelligence - and maybe redefine the meaning of the word a little bit.

    More sensible to use the machines for the things they do well!


  29. icon
    Richard (profile), 17 Feb 2011 @ 5:11am

    Re: Re: Re:

    See my comment below for some links.

    In summary - the particular brute force methods used in Chess have failed in Go because of the lack of a suitable evaluation function.

    Considerable effort has gone into computer Go over the years (including some of mine!). Since about 2006 the emphasis has switched to Monte-Carlo search methods - very different in detail from Deep Blue or Watson but similar in being essentially a brute force approach.

    At present the performance of typical Monte-Carlo programs such as MoGo is highly hardware dependent - indicating their brute force nature.

    When running on a powerful supercomputer they are roughly equivalent to a moderate amateur player (and believe me, that was beyond the wildest dreams of Go programmers not that long ago). Their advertised successes against professionals have all been achieved with large handicaps and so probably don't mean that much. Unfortunately no one seems willing to donate enough supercomputer time to establish how strong the play really is against a roughly equal opponent.


  30. icon
    Shon Gale (profile), 17 Feb 2011 @ 6:18am

    I don't even go to casinos. Only fools gamble against a computer. I have been a programmer for over 30 years, and we used to have a free poker game to give to our clients. We would rig it for our favorite clients so they would win. Of course we padded it. Everyone in authority pads the goodie.


  31. icon
    Brian Schroth (profile), 17 Feb 2011 @ 6:35am

    Re: Add Jeopardy! To The List Of Games That AI Is Better At Than You....

    "Just as the succcess of Deep Blue at Chess didn't transfer well to Go - and Go has been solved (if you can call it that)"

    You can't, if you want to be accurate.


  32. icon
    Sean T Henry (profile), 17 Feb 2011 @ 7:19am

    Re: Re: Re: Re:

    Speed of light: 670,616,629 mph. The speed of an electrical impulse? We will just say that it's a bit faster than that.


  33. icon
    Sean T Henry (profile), 17 Feb 2011 @ 7:24am

    Re: Re: Re:

    The test they did is not quite fair. To make it even, give the human competitors a laptop with Dragon NaturallySpeaking and access to Wikipedia. The computer has AI, but the AI did not have to learn to know the answers; it had "books" to cheat from.


  34. icon
    Marcus Carab (profile), 17 Feb 2011 @ 9:05am

    Re: Re: Re: Re:

    The speed of the electrochemical reaction traversing a single neural axon doesn't tell you much about the speed of the eyes->brain->hand circuit that is necessary to complete a simple task like this.


  35. icon
    Marcus Carab (profile), 17 Feb 2011 @ 9:11am

    Re: Re: Re:

    The simple answer is that there are exponentially more possible games of Go than there are of Chess. The numbers (really rough off the top of my head - but they are close) are around 10^100 possible chess games (that's all conceivable games that could be played before repeating a game move-for-move) versus 10^1100 Go games. This makes a brute force approach to Go all but futile with current technology - the computer would have to be several nonillion times more powerful than Deep Blue.
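    The rough sizes can be sanity-checked with the usual textbook figures - an average branching factor of ~35 over ~80 plies for chess, and ~250 over ~150 plies for Go. (These standard estimates give somewhat different exponents than the off-the-cuff numbers above, but the gap is just as dramatic.)

```python
import math

# Game-tree size ~ b**d for average branching factor b and game length d
# (in plies). Computed in log space to avoid astronomically large ints.
games = {
    "chess": (35, 80),    # ~35 legal moves per position, ~80 plies
    "go":    (250, 150),  # ~250 legal moves per position, ~150 plies
}

for name, (b, d) in games.items():
    digits = d * math.log10(b)  # log10 of b**d
    print(f"{name}: ~10^{digits:.0f} possible move sequences")
```

    That works out to roughly 10^124 for chess versus 10^360 for Go, so exhaustive search gains essentially nothing from any plausible hardware improvement.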


  36. icon
    Marcus Carab (profile), 17 Feb 2011 @ 9:34am

    Re: Re: Let's go ask Watson...

    Go is one of the oldest and most popular board games in the world, it just isn't as well-known in the west. AIs have definitely been worked on. A lot.


  37. identicon
    Jeff Rife, 17 Feb 2011 @ 10:46am

    Re:

    It was obvious that there were certain categories (and thus, answer phrasing) that left Watson clueless, yet were trivial for humans to parse.

    The Final Jeopardy for the first game and the "racing series nickname that is also a computer key" jump out as individual clues that show just how much work is left to be done on the AI.

    Watson had some advantages:

    - it never guessed on "up for grabs" questions, which humans sometimes do
    - if it had a "confident" answer, it knew it was going to buzz in and was "ready" at the exact correct moment, and did not have to use the human strategy of multiple presses of the buzzer in an attempt to get the timing right

    One of the disadvantages Watson had was that it did not buzz in if it was "somewhat confident" and then use the countdown time to become more confident. It appeared that it did no "thinking" after it had buzzed in. Humans often use that extra time to search for the answer. If the programmers added this strategy based on the value of the clue and the current scores, it might make it even more formidable.

    Props to Watson for knowing "Pinky and the Brain"...that proves it's truly a geek. And, it was great to see Ken, as he showed yet again why he was the best Jeopardy champion ever...he has *fun* with the game.


  38. icon
    Richard (profile), 17 Feb 2011 @ 3:15pm

    Re: Re: Re: Re:

    It might seem like that is the problem and it is often quoted (even by me), but, speaking as someone who has actually worked on this, I have to say that the real problem is the evaluation function. In chess you can trivially count material and terminate a search when one side gains a major advantage. If you do the same in go then you will just get caught in a "thickness trap" and lose every time against a decent opponent.

    The point is that if you have a really good evaluation function you might as well call it at 1 ply. In that case the branching factor would be moot. If your evaluation function is not that clever then pushing it back to 10 ply won't make a jot of difference most of the time.

    That is why the most successful Go programs play out an entire game (randomly) before calling any kind of evaluation function.
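    The playout idea is easy to show on a stand-in game small enough to fit in a comment. This is a generic Monte-Carlo sketch, not MoGo's actual code: a Nim-like pile of stones, take 1-3 per turn, last stone wins, and each candidate move is judged purely by the win rate of uniformly random playouts:

```python
import random

def random_playout(stones, my_turn=False):
    """Finish the game with uniformly random moves.
    my_turn says whose move it is now; once the pile is empty, the
    player who just moved took the last stone and won."""
    while stones > 0:
        stones -= random.randint(1, min(3, stones))
        my_turn = not my_turn
    return not my_turn  # True -> "I" took the last stone

def best_move(stones, playouts=2000):
    """Evaluate each legal move only by random playouts - no heuristics,
    no evaluation function, exactly the brute-force flavour described."""
    rates = {}
    for take in range(1, min(3, stones) + 1):
        wins = sum(random_playout(stones - take, my_turn=False)
                   for _ in range(playouts))
        rates[take] = wins / playouts
    return max(rates, key=rates.get)

print(best_move(3))  # -> 3 (taking the whole pile wins on the spot)
```

    From a 3-stone pile the playouts find the winning move every time, but from deeper positions uniformly random playouts barely separate the candidates - the evaluation-function problem again, and why serious Go programs bias their playouts heavily.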


  39. icon
    Richard (profile), 17 Feb 2011 @ 3:27pm

    Re: Re: Add Jeopardy! To The List Of Games That AI Is Better At Than You....

    "You can't, if you want to be accurate."

    Which is why I added the disclaimer.
    However the Computer Go research community got very excited by the results from Monte-Carlo methods because they were so much better than what had been achieved before.

    The most exciting thing was that the results seemed to scale - so in theory you could produce an arbitrarily strong player given a sufficiently powerful machine (and without resorting to silly "bigger than the observable universe" computers).

    Unfortunately lack of regular access to very powerful machines makes it really hard to determine if this scaling will continue.

    At most I think you could say "we've now got a very much better algorithm than anything we had before." It's not "solved", but to some it sort of felt like it....


  40. identicon
    Andrew D. Todd, 17 Feb 2011 @ 8:23pm

    Jeopardy as Eliza (response to Richard, comment 30, et. seq.)

    I don't know very much about Jeopardy, but I gather the game specializes in what used to be called "fill in the blank" type questions at school, eg. "[blank] blah, blah, blah."

    So you search for "blah, blah, blah," and get a series of strings, "X blah, blah, blah," ergo [blank] = X. A simple substitution.
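    That substitution trick really is only a few lines of code. A crude sketch, with hard-coded snippets standing in for search-engine results:

```python
import re
from collections import Counter

# Stand-ins for search results; a real system would fetch these.
SNIPPETS = [
    "George Washington did not chop down a cherry tree.",
    "Historians agree: George Washington did not chop down a cherry tree.",
    "The first president of the United States did not chop down a cherry tree.",
]

def fill_blank(tail, snippets):
    """Return the most common capitalized phrase found right before `tail`."""
    pattern = re.compile(r"([A-Z][\w .]*?)\s+" + re.escape(tail))
    hits = Counter()
    for s in snippets:
        m = pattern.search(s)
        if m:
            hits[m.group(1)] += 1
    return hits.most_common(1)[0][0] if hits else None

print(fill_blank("did not chop down a cherry tree", SNIPPETS))
# -> George Washington
```

    Majority vote across snippets picks "George Washington" over the paraphrased "first president" variant (2-to-1 here, 9-to-1 in the manual search described above) - substitution plus counting, nothing more.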

    I googled manually for "did not chop down a cherry tree." Nine of the first ten results were "George Washington did not chop down a cherry tree." The tenth result was: "The first president of the United States did not chop down a cherry tree." The question, as it stands, is factually specific, yet unimportant. Important questions in the humanities tend to have ambiguous answers. If you teach high school history, and you put that kind of question on a test, that is a sign that you are not a very good high school history teacher. In that case, you are probably a football coach.

    Here is an example of what high school history teaching looks like when it is done well:

    http://amhistnow.blogspot.com/2009_04_01_archive.html

    Until you produce a computer which can give a mirthless smile, or appear visibly bored, or gaze inscrutably, you don't have true artificial intelligence, and you aren't likely to pass a Turing test administered by someone who knows anything about teaching.

    This process of answering a fill-in-the-blank question by finding a match is the same sort of crude algorithm as Joseph Weizenbaum's Eliza program, way back when, and doesn't mean much. Eliza had a kind of curious life. All kinds of people believed religiously in it, because they were not accustomed to the idea that a computer could mechanically convert their own inputs-- most of the time-- into a subordinate clause, and plug them into stock phrases. It's one of those verbal tricks which are made to look like much more than they are. Watson is not much more than Eliza or Parry (the paranoid) with a search engine attached.

    Now, poker, for example, does reflect something about the limits of machine reasoning. Bear in mind that the game played in national competitions, and on the internet, is much more mathematicalized than the game played on street corners or in social clubs. The players in national competitions are comparative strangers, people encountered for an hour or so, due to the seeding of the tournament, and do not have the opportunity to learn each other's body languages or habits of risk-taking. They are therefore forced to bet according to mathematical advisability. The larger life of poker has to do with its role as an exercise in brinkmanship. In the real world, people who are in competition tend to have jobs which keep them in competition with particular individuals for a year or more. One gets to know something about the person on the other side, even if the person on the other side is operating under a pseudonym.


  41. icon
    Richard (profile), 18 Feb 2011 @ 5:10am

    Re: Jeopardy as Eliza (response to Richard, comment 30, et. seq.)

    You are right of course - and the reason is not hard to understand.

    We have been playing quiz games, chess, etc. for a few hundred years - however, we have been giving a mirthless smile, appearing visibly bored, and gazing inscrutably for all of human evolutionary history. Our ancestors have been doing some of these activities for millions of years (and their survival has depended upon it!).

    Computers have a long way to go to match that, and while the attempt is interesting it is not particularly useful.

    The most practical forms of AI are the ones that ignore the "being like a human" issue and just get on with the task in hand.


