From The Eyes of Google's AI

By Peter R - 20 Jun '15 14:28PM

Artificial intelligence is trained to do things the way humans want it to. Researchers at Google got neural networks to do some "thinking" of their own with the information they had learnt. The result: Inceptionism.

In a blog post, software engineer Alexander Mordvintsev, along with intern Christopher Olah, describes how various neural networks trained for image recognition were made to reveal what they had learnt. Essentially, the researchers turned the networks 'around' to understand how image interpretation differs between humans and artificial intelligence, while exposing flaws in the training of artificial neural networks.

"Neural networks that were trained to discriminate between different kinds of images have quite a bit of the information needed to generate images too," the post's author discovered. However the images produced are not we'd expect.

For instance, when the network was asked to produce an image of dumbbells, the resulting image revealed to researchers possible errors in training.

"There are dumbbells in there alright, but it seems no picture of a dumbbell is complete without a muscular weightlifter there to lift them. In this case, the network failed to completely distill the essence of a dumbbell. Maybe it's never been shown a dumbbell without an arm holding it. Visualization can help us correct these kinds of training mishaps," the post reads.

Neural networks are essentially several stacked layers of artificial neurons. Each layer extracts different features, with higher layers building on the output of lower layers to pick out progressively more sophisticated features of an image. Images generated from these layers reveal that networks often over-interpret what they see.
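That over-interpretation comes from a related trick: instead of maximising a class score, the researchers feed in an image and amplify whatever an intermediate layer already detects in it, so the network starts to "see" patterns that are barely there. A minimal sketch of that idea, again in PyTorch with an assumed GoogLeNet layer (inception4c) and made-up hyperparameters, might look like this:

    # Illustrative sketch under assumptions: amplify an intermediate layer's
    # activations by gradient ascent on the pixels (the "over-interpretation").
    import torch
    from torchvision import models

    model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
    model.eval()

    features = {}
    model.inception4c.register_forward_hook(
        lambda module, inp, out: features.update(layer=out))

    img = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a photo
    optimizer = torch.optim.Adam([img], lr=0.02)

    for step in range(100):
        optimizer.zero_grad()
        model(img)                              # forward pass fills `features`
        (-features["layer"].norm()).backward()  # boost whatever the layer detects
        optimizer.step()

Choosing a lower layer tends to amplify simple strokes and textures, while a higher layer brings out object-like shapes, which is the effect the post describes.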
