Google DeepMind AI's Ability To Discern Physical Objects Is Mere Child's Play

Google's DeepMind has been working on some truly incredible things over the past couple of years. Just last week, we learned that DeepMind would be teaching itself how to play StarCraft II, and that's hardly its first gaming focus. Before Google acquired DeepMind a couple of years ago, its AI was used to learn and conquer Atari games, and more recently, it taught itself how to beat a world-class player at Go.

Since then, we've seen DeepMind used to enhance AI speech generation, and even to help combat blindness. Now it's going to teach itself to discern the physical properties of objects in a virtual world.


As we covered last month, DeepMind's engineers have been hard at work on helping its AI teach itself, and today we're seeing an extension of that effort. In a new experiment, the AI is tasked with teaching itself much like an infant or child does: through experimentation. This particular test features five blocks of differing masses, and by touching each one, the AI begins to learn the differences between them. A hollow block couldn't be sat on, for example, while a solid one could.
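
To make that setup concrete, here's a minimal Python sketch of that kind of environment. This isn't DeepMind's actual simulator (the real experiments ran inside a full physics engine); the BlockEnvironment class, its poke() method, and the noise it adds are all illustrative assumptions. The point is simply that mass is hidden, and the agent can only infer it from how blocks respond when prodded.

    import random

    class BlockEnvironment:
        """A toy stand-in for the virtual scene described above: a handful
        of blocks whose masses can only be learned through interaction.
        (Hypothetical sketch; not DeepMind's environment.)"""

        def __init__(self, num_blocks=5, seed=None):
            self._rng = random.Random(seed)
            # Hidden ground truth; the agent never reads these directly.
            self.masses = [self._rng.uniform(1.0, 10.0) for _ in range(num_blocks)]

        def poke(self, block_index, force=5.0):
            """Apply a fixed force to one block and return its displacement.
            Lighter blocks move farther, so displacement is a noisy,
            indirect clue about mass, and the only clue the agent gets."""
            displacement = force / self.masses[block_index]
            return displacement + self._rng.gauss(0.0, 0.05)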

One specific task had the AI poke each block to figure out which had the greatest mass, and later, multiple blocks were combined to shake things up. Through something called "reinforcement learning", the AI is rewarded or punished based on its correct and incorrect answers, which helps it further develop its "brain". A real child, for example, quickly learns that hitting themselves in the face hurts, so they work to avoid it in the future. It's the same concept here.
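
To show what that reward-and-punishment loop can look like, here's an equally stripped-down sketch that reuses the BlockEnvironment above. It deliberately skips the poking phase and the deep networks DeepMind actually used, boiling the task down to pure trial and error: the agent guesses the heaviest block, earns +1 for a right answer and -1 for a wrong one, and nudges a value estimate accordingly. The function name, learning rate, and epsilon-greedy exploration are all assumptions made for illustration, not DeepMind's method.

    import random

    def train_heaviest_block_guesser(env, episodes=2000, epsilon=0.1, lr=0.1):
        """Bare-bones reinforcement learning: guess, get rewarded or
        punished, and adjust. (Hypothetical sketch, not DeepMind's code.)"""
        num_blocks = len(env.masses)
        values = [0.0] * num_blocks  # learned preference for naming each block
        heaviest = max(range(num_blocks), key=lambda i: env.masses[i])

        for _ in range(episodes):
            # Explore occasionally; otherwise exploit the current best guess.
            if random.random() < epsilon:
                guess = random.randrange(num_blocks)
            else:
                guess = max(range(num_blocks), key=lambda i: values[i])

            # The agent never sees the masses, only this reward signal.
            reward = 1.0 if guess == heaviest else -1.0
            values[guess] += lr * (reward - values[guess])

        return max(range(num_blocks), key=lambda i: values[i])

    env = BlockEnvironment(seed=42)
    print("Agent's answer:", train_heaviest_block_guesser(env))
    print("True heaviest:", max(range(len(env.masses)), key=lambda i: env.masses[i]))

Even this crude version reliably converges on the right block, and that's the whole idea: behavior shaped entirely by reward, with no direct access to the answer.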

For all its underlying complexity, this kind of AI is still primitive, but over time it can learn much more about the world, and that could ultimately help it tackle seriously advanced problems that require the kind of troubleshooting that's difficult for humans. On one hand, the prospect is scary; on the other, it's downright amazing.