DailyDirt: Can't We Just Play Games For Fun?
from the urls-we-dig-up dept
We've seen plenty of advances in game algorithms that make us humans look pretty weak compared to the best chess (and checkers and poker and RPS and air hockey and Flappy Bird and...) playing computers. Computers aren't having any fun beating us at all of these games, but they do it nonetheless. As always, let's just hope they figure out quickly that no one wins at thermonuclear war.
- It seems a bit irrational for humans to keep playing a game that a computer can play better than 99.999999% of all humans, but that doesn't mean we shouldn't try to create better and better chess-playing algorithms. A deep learning program called Giraffe has taught itself to play chess at FIDE International Master level in just three days (on a modern mainstream PC, not a supercomputer). It's not playing at a (super-)Grandmaster level yet, but it's also not evaluating millions of moves per second the way Deep Blue and other chess supercomputers do -- there's a rough sketch of the idea after this list. [url]
- Google's DeepMind AI is beating humans at more classic video games -- now up to 31 titles, such as Q*Bert and Zaxxon. However, it hasn't yet mastered games like Ms. Pac-Man or Asteroids. Phew! We're not obsolete yet.... (A toy sketch of the reward-driven learning loop it builds on is also below.) [url]
- If you think humans are safe as long as they stick to sports like soccer, basketball or baseball, you might want to see a few of the robots being developed to play some of these sports. It might take some time for the robots to catch up, but I doubt anyone really wants to play any kind of full-contact sport against a robot, anyway. [url]
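For the curious, here's a rough, self-contained sketch of the general idea behind a learned chess evaluator like Giraffe -- not its actual code, features or network (the feature size, weights and toy "moves" below are all made up, and the little network is untrained): instead of grinding through millions of positions per second, encode a position as a feature vector, let a learned network score it, and search only a handful of moves deep.

```python
# Illustrative only: a tiny "score positions with a learned network" sketch.
# The feature size, network shape and toy "moves" are hypothetical, not Giraffe's.
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 64                              # hypothetical size of a position's feature vector
W1 = rng.standard_normal((N_FEATURES, 32))   # untrained weights, stand-ins for learned ones
W2 = rng.standard_normal(32)

def evaluate(position_features):
    """Score a position with a tiny two-layer network (higher = better for us)."""
    hidden = np.tanh(position_features @ W1)
    return float(hidden @ W2)

def best_move(position_features, candidate_moves, apply_move):
    """One-ply search: pick the move whose resulting position scores highest."""
    return max(candidate_moves,
               key=lambda m: evaluate(apply_move(position_features, m)))

# Toy usage: "moves" here just perturb the feature vector a little.
position = rng.standard_normal(N_FEATURES)
moves = list(range(10))
nudge = lambda pos, m: pos + rng.standard_normal(N_FEATURES) * 0.1
print("chosen move:", best_move(position, moves, nudge))
```

The real program, of course, trains those weights (it taught itself, remember) and searches deeper than one ply; the point is just that a good learned evaluation lets you look at far fewer positions than a Deep Blue-style brute-force search.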
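As for the DeepMind item: the blurb doesn't say how the agent works, but the published approach is deep reinforcement learning, where a network learns action values from raw pixels and the game score. Here's a toy, tabular sketch of that same learn-from-reward loop -- the five-state corridor "game", the parameters, and the table standing in for a deep network are all made up for illustration:

```python
# Illustrative only: tabular Q-learning on a made-up 5-state corridor.
# DeepMind's Atari agent replaces the table with a deep network reading pixels,
# but the basic loop -- act, observe reward, update value estimates -- is the same idea.
import random

N_STATES, N_ACTIONS = 5, 2               # states 0..4; actions: 0 = left, 1 = right
q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.95, 0.2   # learning rate, discount, exploration rate

def step(state, action):
    """Toy environment: +1 reward for reaching the rightmost state."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    state = random.randrange(N_STATES - 1)   # start somewhere that isn't the goal
    done = False
    while not done:
        # epsilon-greedy: mostly exploit current estimates, occasionally explore
        if random.random() < epsilon:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: q[state][a])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted best future value
        q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
        state = nxt

print("learned action values:", q)   # "go right" should end up valued higher in every state
```

After training, the right-moving action carries the higher value everywhere, which is the kind of behavior-from-score-alone learning that lets a single agent pick up dozens of different Atari games without being told their rules.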
Filed Under: ai, artificial intelligence, chess, deep blue, deepmind, game algorithms, robotics, robots, sports, video games
Companies: google, ibm