Ok, so it’s not League of Legends or StarCraft II – but Google’s AI can now add beating humans at classic arcade games to its accomplishments.

The project, from DeepMind Technologies (acquired by Google last year), is an artificially intelligent computer program that teaches itself to play Atari 2600 games. The AI proved better at some games than others. Video Pinball, for example, was a breeze for the AI, but Ms. Pac-Man proved a bit tougher to grasp.

In the study’s abstract, the researchers write, “We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters.”

Check out this video to see the AI’s first attempts at Breakout, and then what happens after it plays it several hundred times.

This is pretty cool, but what are some other applications for this AI? You have to remember that the purpose of AI is to learn, not to be taught. Imagine Google's self-driving cars. What if they learned from driving experience instead of being taught how to drive? Let's just hope Google puts that to the test on a closed road. If the AI's first attempt at Breakout is any indication, Google might want to limit the number of curves on the road during the first attempt, too.

In a blog post, Dharshan Kumaran and Demis Hassabis of Google DeepMind explain how the process works.


“DQN incorporated several key features that for the first time enabled the power of Deep Neural Networks (DNN) to be combined in a scalable fashion with Reinforcement Learning (RL)—a machine learning framework that prescribes how agents should act in an environment in order to maximize future cumulative reward (e.g., a game score). Foremost among these was a neurobiologically inspired mechanism, termed “experience replay,” whereby during the learning phase DQN was trained on samples drawn from a pool of stored episodes—a process physically realized in a brain structure called the hippocampus through the ultra-fast reactivation of recent experiences during rest periods (e.g., sleep). Indeed, the incorporation of experience replay was critical to the success of DQN: disabling this function caused a severe deterioration in performance.”
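To make the "experience replay" idea above a bit more concrete, here is a minimal sketch of a replay buffer in Python. The class and its names are illustrative, not DeepMind's actual code: the point is simply that transitions are stored in a fixed-size pool and sampled at random during training, rather than learned from in the order they happened.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size pool of past transitions, sampled uniformly at random."""

    def __init__(self, capacity):
        # Oldest experiences fall out automatically once capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        # Store one transition observed while playing.
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Draw a random minibatch; sampling out of temporal order
        # decorrelates the training examples, which is the property
        # the blog post credits for DQN's stability.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

# Hypothetical usage: record a few transitions, then sample a minibatch.
buf = ReplayBuffer(capacity=1000)
for t in range(5):
    buf.add(t, 0, 1.0, t + 1, False)
batch = buf.sample(3)
```

Disabling this mechanism, per the quote above, is like forcing the agent to learn only from its most recent moments of play, which is exactly what caused the "severe deterioration in performance" the researchers observed.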

Some of this is a bit over my head, but the results are pretty damn amazing.

The blog post talks a bit more about the AI. If you want to dive deep into the science behind it, check out the full study here.

