A neural network has been taught how to judge your selfie. “Creator Andrej Karpathy, who previously worked with Google Research on its learning algorithm program DeepMind, fed the program with more than five million photos tagged #selfie before purging it down to around two million self-portraits worth using. The network was then programmed to determine whether or not a selfie was a good one by analyzing social signals such as likes and shares for the photo.” I think I’ve taken one selfie in my life, and that was when I was a teenager. It wasn’t called a selfie then, though; it was called Tara being stupid with a camera.
It’s not as entertaining as Deep Dream, but Google is training its AI to detect pedestrians. Quickly. “We present a new real-time approach to object detection that exploits the efficiency of cascade classifiers with the accuracy of deep neural networks. Deep networks have been shown to excel at classification tasks, and their ability to operate on raw pixel input without the need to design special features is very appealing. However, deep nets are notoriously slow at inference time. In this paper, we propose an approach that cascades deep nets and fast features, that is both extremely fast and extremely accurate.”
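The cascade idea in that abstract is a general one: run a cheap filter over every candidate window and save the expensive deep net for the few windows that survive. Here’s a minimal sketch of that structure; both scoring functions are stand-ins I made up for illustration, not the paper’s actual models.

```python
# Sketch of a detection cascade: a cheap first stage prunes candidate
# windows so the expensive classifier only runs on a small fraction.
# Both scorers below are illustrative stand-ins, not real detectors.

def fast_filter(window):
    # Cheap stage stand-in: a simple threshold on mean intensity.
    return sum(window) / len(window) > 0.5

def deep_net(window):
    # Expensive stage stand-in: pretend this is a deep net's
    # pedestrian probability for the window.
    return sum(v * v for v in window) / len(window)

def cascade_detect(windows, threshold=0.6):
    detections = []
    for i, w in enumerate(windows):
        if not fast_filter(w):       # reject early; skip the deep net
            continue
        if deep_net(w) > threshold:  # costly check on survivors only
            detections.append(i)
    return detections

windows = [[0.1, 0.2, 0.1], [0.9, 0.8, 0.9], [0.6, 0.4, 0.5]]
print(cascade_detect(windows))  # → [1]
```

The speed comes entirely from the ordering: the deep net never sees the windows the fast stage throws out, which in detection is almost all of them.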
I’m sorry, I can’t stop thinking about Google’s Deep Dream toy — erm, I mean neural network tool. Yeah, that’s what I mean. Anyway, now researchers are deep-dreaming maps. “Last week, Google released the “DeepDream” code to the public, so that anyone with some programming skills could process their own images with a psychedelic glaze. Naturally, a couple of brave mapmakers stepped in and produced some geo-visualizations—now, the hills literally have eyes.”
You remember that mention I made last week of Google’s research that let a neural network make trippy art? Google has open-sourced the tool. “A small group of Google software engineers have open sourced a new tool that can take an image and create an artistic spin on it using deep neural networks…. To use the tool, people will also need to set up NumPy, SciPy, PIL, IPython, or a scientific python distribution such as Anaconda or Canopy.”
Google is training its neural networks to augment images, and in the process is making downright trippy art. “What Google is doing here is essentially reversing image recognition, and telling its computers to use the images they already know to augment new images. As Singularity Hub (via Engadget) explains: ‘Where the software was allowed to “free associate” and then forced into feedback loops to reinforce these associations — it found images and patterns (often mash-ups of things it had already seen) where none existed previously.’” One of the resulting images is below.
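The “feedback loop” Singularity Hub describes is gradient ascent on the input image: instead of updating the network’s weights, you repeatedly nudge the pixels to amplify whatever the network already responds to. Here’s a toy sketch of that loop, with a random linear “layer” standing in for a trained network — the real DeepDream does this on the layers of a deep convolutional net.

```python
import numpy as np

# Toy DeepDream-style loop: gradient ascent on the *input*, not the
# weights, amplifying whatever a (here: random, untrained) layer
# already responds to. The real tool uses a trained conv net.

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))        # stand-in "layer" weights
image = rng.normal(size=64) * 0.01   # flattened 8x8 "image" of noise

def activation_energy(x):
    # The objective being maximized: the layer's response strength.
    return 0.5 * np.sum((W @ x) ** 2)

for step in range(100):
    grad = W.T @ (W @ image)         # gradient of the energy w.r.t. pixels
    image += 0.01 * grad             # nudge pixels to boost the response
    image /= np.linalg.norm(image)   # keep the image bounded

# After the loop, the image has drifted toward the layer's dominant
# pattern and excites it far more than same-sized random noise would.
print(activation_energy(image))
```

That drift toward patterns the network already “knows” — reinforced every iteration — is exactly why the published images are full of recurring eyes and dog faces.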