DailyDirt: Add Jeopardy! To The List Of Games That AI Is Better At Than You....
from the urls-we-dig-up dept
Today is the final game of Jeopardy! where the IBM supercomputer Watson plays against two of the best human players ever to compete on the show. Folks on the East Coast already know the outcome by now, so feel free to ruin the suspense in the comments below for those of us in later time zones. But whatever the outcome, Watson's performance has been pretty interesting to watch. And let's hope these supercomputers don't start playing thermonuclear war any time soon. In the meantime, here are some links on AI beating humans at other games and tests.
- Deep Blue won its first game of chess against Garry Kasparov in 1996. The computer didn't win the match that year, but it won the rematch in 1997. [url]
- Checkers was declared "solved" in 2007 by the Chinook project. Chinook was actually stronger than any human player by 1996, but it took a few more years for Chinook to realize checkers was a futile game (like tic-tac-toe) and retire. [url]
- A few years ago, the Polaris poker bot beat a few professionals at Texas hold'em. So be careful playing poker online... [url]
- The famous long bet between Mitchell Kapor and Ray Kurzweil has $20,000 riding on the question of whether or not AI will pass a Turing test by 2029. The bet started in 2002, and Kapor even suggested back then that a machine might win at a Jeopardy! game show. [url]
- To discover more interesting stuff on artificial intelligence, check out what the robots at StumbleUpon suggest. [url]
Filed Under: ai, checker, chess, chinook, game algorithms, games, jeopardy, poker, polaris, turing test, watson
Companies: ibm
Reader Comments
Anti-Turing Test
Re:
Re: Re:
Source
Re: Re: Re:
Re:
Re: Re:
That doesn't actually explain how Watson doesn't have a speed advantage over humans.... uh, human timing and rhythm are superior to the speed of electrical impulses?
Re: Re: Re:
Re: Re: Re: Re:
Re: Re: Re: Re:
Re:
The Final Jeopardy! from the first game and the "racing series nickname that is also a computer key" clue jump out as individual examples of just how much work is left to be done on the AI.
Watson had some advantages:
- it never guessed on "up for grabs" questions, which humans sometimes do
- if it had a "confident" answer, it knew it was going to buzz in and was "ready" at the exact correct moment, and did not have to use the human strategy of multiple presses of the buzzer in an attempt to get the timing right
One of the disadvantages Watson had was that it did not buzz in if it was "somewhat confident" and then use the countdown time to become more confident. It appeared that it did no "thinking" after it had buzzed in. Humans often use that extra time to search for the answer. If the programmers added this strategy based on the value of the clue and the current scores, it might make it even more formidable.
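The buzz-in logic described above is easy to picture in code. Here is a minimal sketch of such a policy, assuming a confidence score between 0 and 1; the function name, the threshold, and the adjustment weights are all invented for illustration, not taken from IBM's actual system:

```python
# Hypothetical confidence-threshold buzz policy, as suggested above.
# None of these names or numbers come from Watson itself.

def should_buzz(confidence, clue_value, my_score, leader_score,
                base_threshold=0.5):
    """Buzz when confidence clears a threshold that loosens as the
    clue's payoff grows or as we fall further behind the leader."""
    deficit = max(leader_score - my_score, 0)
    # Trailing badly, or chasing a high-value clue, justifies more risk.
    adjustment = 0.1 * (clue_value / 2000) + 0.1 * min(deficit / 10000, 1)
    return confidence >= base_threshold - adjustment

# A "somewhat confident" answer on a $2000 clue while far behind:
print(should_buzz(0.45, 2000, 5000, 20000))  # True: take the gamble
```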
Props to Watson for knowing "Pinky and the Brain"...that proves it's truly a geek. And, it was great to see Ken, as he showed yet again why he was the best Jeopardy champion ever...he has *fun* with the game.
Let's hear about the state of Go AIs.
Re:
A man can dream, though.
Re: Re:
I'd like to see it, though.
Let's go ask Watson...
The Watson AI might be able to pick out a reasonable answer from the Wikipedia page on Computer Go.
I wonder what advances we need in order to overcome the "Obstacles to high level performance" listed on that page.
Re: Let's go ask Watson...
Re: Re: Let's go ask Watson...
Re: Re: Re:
In summary: the particular brute-force methods used in chess have failed in Go because of the lack of a suitable evaluation function.
Considerable effort has gone into computer Go over the years (including some of mine!). Since about 2006 the emphasis has switched to Monte-Carlo search methods, very different in detail from Deep Blue or Watson, but similar in being essentially a brute-force approach.
At present the performance of typical Monte-Carlo programs such as MoGo is highly hardware-dependent, indicating their brute-force nature.
When running on a powerful supercomputer they are roughly equivalent to a moderate amateur player (and believe me, that was beyond the wildest dreams of Go programmers not that long ago). Their advertised successes against professionals have all been achieved with large handicaps, and so probably don't mean that much. Unfortunately, no one seems to be willing to donate enough supercomputer time to establish how strong the play really is against a roughly equal opponent.
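For readers curious what "Monte-Carlo search methods" means concretely: programs like MoGo pair random playouts with a bandit-style move-selection rule (UCT). A minimal sketch of the standard UCB1 scoring formula, with the exploration constant set to a conventional value rather than one taken from any particular program:

```python
import math

# UCB1 score used by UCT-style Go programs to decide which move to
# explore next: balance the observed win rate against an exploration
# bonus for under-visited moves. c ~ 1.4 (sqrt(2)) is conventional.
def ucb1(wins, visits, parent_visits, c=1.4):
    if visits == 0:
        return float("inf")  # always try unvisited moves first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)
```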
Re: Re: Re:
Re: Re: Re: Re:
The point is that if you have a really good evaluation function you might as well call it at 1 ply. In that case the branching factor would be moot. If your evaluation function is not that clever then pushing it back to 10 ply won't make a jot of difference most of the time.
That is why the most successful Go programs play out an entire game (randomly) before calling any kind of evaluation function.
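That playout idea fits in a few lines. A minimal sketch, assuming a hypothetical `position` object with `game_over`, `legal_moves`, `play`, and `winner` methods standing in for a real Go implementation:

```python
import random

# Monte-Carlo evaluation as described above: instead of a clever static
# evaluation function, finish the game with random legal moves many
# times and average the outcomes.

def random_playout(position):
    while not position.game_over():
        position = position.play(random.choice(position.legal_moves()))
    return 1 if position.winner() == "black" else 0

def evaluate(position, n_playouts=1000):
    """Estimated probability that black wins from this position."""
    return sum(random_playout(position) for _ in range(n_playouts)) / n_playouts
```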
Re:
. . o # o # # # o
# o o # o . # . o
# # . # o o o o o
# # # # # # # # #
# # o o o o o o o
# o . . . . . . .
# o . o . . . o .
# o . . . . . . .
# o . . . . . . .
No more games
Re: No more games
Re: Re: No more games
Put taste buds in the body you make for me. Puny human!
Re:
Still too large and immovable
Re: Still too large and immovable
The brain uses about 20% of the body's energy, about 20 watts (though that probably changes depending on whether you are thinking, eating, or sleeping, and on where your body is allocating its blood flow).
I wonder if a twenty-watt computer can beat a chess expert at chess. Watt for watt, who's the better information processor?
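For a rough sense of scale: Watson's hardware was widely reported to draw on the order of 80 kW (a figure from outside this thread, so treat it as an estimate). Against the ~20 W brain figure above:

```python
# Back-of-the-envelope power comparison; the 80 kW Watson figure is a
# commonly cited outside estimate, not a number from this discussion.
brain_watts = 20
watson_watts = 80_000
print(watson_watts / brain_watts)  # ~4000x the brain's power budget
```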
Re:
Take the incident where Sinéad O'Connor tore up a picture of the pope and said "fight the real enemy". Watson isn't going to understand much of that event. When someone says they are "fed up", have they had too much to eat or are they upset about current events? The machine can't tell the difference. All Watson knows how to do is use the data provided to find and rank possible compatible responses. It has to figure out which data is relevant in the search and compare that to all known references.
So no, the computer doesn't have a significant advantage over the top-tier Jeopardy players. When faced with factual problems it wins hands down, but the nuance of human language trips it up and it will struggle with very abstract problems.
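The "find and rank possible compatible responses" step can be caricatured in a few lines. This toy sketch scores candidates by keyword overlap with the clue; it illustrates the general idea only, not Watson's actual evidence-scoring pipeline:

```python
# Toy retrieve-and-rank: score each candidate answer by how many clue
# keywords appear in its reference text. Purely illustrative.

def rank_candidates(clue, candidates):
    """candidates: dict mapping candidate answer -> reference snippet."""
    clue_words = set(clue.lower().split())
    scores = {
        answer: len(clue_words & set(text.lower().split()))
        for answer, text in candidates.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```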
Re: Re:
I'm sure if the Jeopardy! question creators really wanted a human to win, they could devise clues that would be impossible for Watson to parse into a sensible answer. Just require all the correct responses to be in Pig Latin or something like that...
On a completely unrelated note, drat! I completely forgot that the episodes of Jeopardy with Watson were coming up! I hope they're on YouTube...
That is not what this is about.
Watson is configured to do the hard part: understand the question (well, the "answer") in natural language, then figure out what it actually means.
"This 'Wizard' earned a great deal of media attention for his role in a controversial West End production."
Which word is important there? Who, what, or where is the subject, and what are we looking for? West End is a place; is it a place where one finds wizards? And on and on.
(The answer being "Who is Daniel Radcliffe?", but Google takes that "question" and gives back a mixture of links to The Wizard of Oz, Michael Jackson, David Beckham, and so forth. And this post, eventually.)
In other words, the trick is not figuring out how to answer; it is figuring out how to understand the question. That is Watson's breakthrough.
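As a toy illustration of that "understand the question" step (nothing like Watson's real clue analysis), here is a naive pass that pulls the quoted focus word and the capitalized entities out of the Radcliffe clue:

```python
import re

clue = ("This 'Wizard' earned a great deal of media attention for his "
        "role in a controversial West End production.")

# The quoted word is the focus; capitalized phrases are candidate
# constraints. Crude: this also catches the sentence-opening "This".
focus = re.findall(r"'([^']+)'", clue)
entities = re.findall(r"[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*", clue)
print(focus)     # ['Wizard']
print(entities)  # ['This', 'Wizard', 'West End']
```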
Re:
freakin' google.
Re: Re:
Fun that Google picked it up so fast, though :)
You don't get it
Congratulations
I know people claim it had some speed advantage, but I'm certain the team who designed the thing would've calibrated it to have a normal human reaction time/delay to the input. It wouldn't be a valid test of the system if it was always able to ring in first. They wouldn't need people to compete against if all they wanted to do was see if it could answer questions.
http://www.instructables.com/id/QD-Poor-mans-Skinner-Sadist-Jeopardy-game/
Add Jeopardy! To The List Of Games That AI Is Better At Than You....
The problem is that things humans find easy, machines find hard, whereas things humans have traditionally regarded as tests of intelligence often turn out to be (relatively) easy to program, once you have worked out how.
When you analyse all of these so-called "successes of AI" (to which you can add the recent huge advances in computer Go using Monte-Carlo search), you will find that the computer doesn't really solve the problem the same way a human does, and still displays some strange weaknesses that betray its lack of understanding.
In spite of Watson's ability to (apparently) understand human language well enough to succeed at Jeopardy!, I doubt that the knowledge gained will transfer well out of the narrow arena in which it was derived, just as the success of Deep Blue at chess didn't transfer well to Go. And Go has been solved (if you can call it that; the programs still can't take on a professional player on level terms) by a rather different route.
To solve these problems the trick seems to be to find an algorithm that scales reasonably well and then deploy as much brute force computing power as possible.
The result is that we don't achieve artificial intelligence - instead we discover a way of doing a task without intelligence - and maybe redefine the meaning of the word a little bit.
More sensible to use the machines for the things they do well!
Re: Add Jeopardy! To The List Of Games That AI Is Better At Than You....
You can't, if you want to be accurate.
Re: Re: Add Jeopardy! To The List Of Games That AI Is Better At Than You....
Which is why I added the disclaimer...
However, the Computer Go research community got very excited by the results from Monte-Carlo methods because they were so much better than what had been achieved before.
The most exciting thing was that the results seemed to scale, so in theory you could produce an arbitrarily strong player given a sufficiently powerful machine (and without going to silly "bigger than the observable universe" computers).
Unfortunately, lack of regular access to very powerful machines makes it really hard to determine whether this scaling will continue.
At most I think you could say "we've now got a very much better algorithm than anything we had before." It's not "solved", but to some it sort of felt like it...
Jeopardy as Eliza (response to Richard, comment 30, et seq.)
So you search for "blah, blah, blah," and get a series of strings, "X blah, blah, blah," ergo [blank] = X. A simple substitution.
I googled manually for "did not chop down a cherry tree." Nine of the first ten results were "George Washington did not chop down a cherry tree." The tenth result was: "The first president of the United States did not chop down a cherry tree." The question, as it stands, is factually specific, yet unimportant. Important questions in the humanities tend to have ambiguous answers. If you teach high school history, and you put that kind of question on a test, that is a sign that you are not a very good high school history teacher. In that case, you are probably a football coach.
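That substitution trick really is just a few lines of code. A minimal sketch using the cherry-tree hits quoted above (illustrative only):

```python
import re

# Fill in the blank by peeling off whatever precedes the clue text in
# each search hit: "[blank] did not chop down a cherry tree".
clue = "did not chop down a cherry tree"
hits = [
    "George Washington did not chop down a cherry tree",
    "The first president of the United States did not chop down a cherry tree",
]

for hit in hits:
    match = re.match(r"(.+?)\s+" + re.escape(clue) + r"$", hit)
    if match:
        print(match.group(1))  # candidate value for [blank]
```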
Here is an example of what high school history teaching looks like when it is done well:
http://amhistnow.blogspot.com/2009_04_01_archive.html
Until you produce a computer which can give a mirthless smile, or appear visibly bored, or gaze inscrutably, you don't have true artificial intelligence, and you aren't likely to pass a Turing test administered by someone who knows anything about teaching.
This process of answering a fill-in-the-blank question by finding a match is the same sort of crude algorithm as Joseph Weizenbaum's Eliza program, way back when, and doesn't mean much. Eliza had a curious kind of life. All kinds of people believed religiously in it, because they were not accustomed to the idea that a computer could mechanically convert their own inputs-- most of the time-- into a subordinate clause, and plug them into stock phrases. It's one of those verbal tricks which are made to look like much more than they are. Watson is not much more than Eliza or Parry (the paranoid) with a search engine attached.
Now, poker, for example, does reflect something about the limits of machine reasoning. Bear in mind that the game played in national competitions, and on the internet, is much more mathematical than the game played on street corners or in social clubs. The players in national competitions are comparative strangers, people encountered for an hour or so, due to the seeding of the tournament, and do not have the opportunity to learn each other's body language or habits of risk-taking. They are therefore forced to bet according to mathematical advisability. The larger life of poker has to do with its role as an exercise in brinkmanship. In the real world, people who are in competition tend to have jobs which keep them in competition with particular individuals for a year or more. One gets to know something about the person on the other side, even if the person on the other side is operating under a pseudonym.
Re: Jeopardy as Eliza (response to Richard, comment 30, et. seq.)
We have been playing quiz games, chess, etc. for a few hundred years; however, we have been giving a mirthless smile, appearing visibly bored, and gazing inscrutably, for all of human evolutionary history. Our ancestors have been doing some of these activities for millions of years (and their survival has depended upon it!).
Computers have a long way to go to match that, and while the attempt is interesting it is not particularly useful.
The most practical forms of AI are the ones that ignore the "being like a human" issue and just get on with the task in hand.