Algorithm Watch: Google apologizes after its Vision AI produced racist results

Algorithm Watch: Google apologizes after its Vision AI produced racist results. “In an experiment that went viral on Twitter, AlgorithmWatch showed that Google Vision Cloud, a computer vision service, labeled an image of a dark-skinned individual holding a thermometer ‘gun’ while a similar image with a light-skinned individual was labeled ‘electronic device’. A subsequent experiment showed that the image of a dark-skinned hand holding a thermometer was labelled ‘gun’ and that the same image with a salmon-colored overlay on the hand was enough for the computer to label it ‘monocular’.”

TechCrunch: Glisten uses computer vision to break down product photos to their most important parts

TechCrunch: Glisten uses computer vision to break down product photos to their most important parts. “It’s amazing that in this day and age, the best way to search for new clothes is to click a few check boxes and then scroll through endless pictures. Why can’t you search for ‘green patterned scoop neck dress’ and see one? Glisten is a new startup enabling just that by using computer vision to understand and list the most important aspects of the products in any photo.”

Input: Google AI no longer uses gender binary tags on images of people

Input: Google AI no longer uses gender binary tags on images of people. “Google’s image-labeling AI tool will no longer label pictures with gender tags like ‘man’ and ‘woman,’ according to an email seen by Business Insider. In the email, Google cites its ethical rules on AI as the basis for the change.”

BetaNews: IBM launches new open source tool to label images using AI

BetaNews: IBM launches new open source tool to label images using AI. “Images for use in development projects need to be correctly labeled to be of use. But adding labels is a task that can involve many hours of work by human analysts painstakingly applying manual labels to images, time that could be better spent on other, more creative, tasks. In order to streamline the labelling process IBM has created a new automated labeling tool for the open source Cloud Annotations project that uses AI to ‘auto-label’ images and thus speed up the process.”

The Register: Inside the 1TB ImageNet data set used to train the world’s AI: Nude kids, drunken frat parties, porno stars, and more

The Register: Inside the 1TB ImageNet data set used to train the world’s AI: Nude kids, drunken frat parties, porno stars, and more. “ImageNet – a data set used to train AI systems around the world – contains photos of naked children, families on the beach, college parties, porn actresses, and more, scraped from the web to train computers without those individuals’ explicit consent. The library consists of 14 million images, each placed into categories that describe what’s pictured in each scene. This pairing of information – images and labels – is used to teach artificially intelligent applications to recognize things and people caught on camera.”

Fast Company: The world’s most-advanced AI can’t tell what’s in these photos. Can you?

Fast Company: The world’s most-advanced AI can’t tell what’s in these photos. Can you? “Is that a manhole cover or dragonfly sitting on a table? Is that a green iguana or just a squirrel running with some nuts? Is that a unicycle or a crocodile crossing the road? To humans, the answer is obvious. But the best image-identifying artificial intelligence in the world hasn’t a clue.”

MakeUseOf: 8 Nifty Apps to Identify Anything Using Your Phone’s Camera

MakeUseOf: 8 Nifty Apps to Identify Anything Using Your Phone’s Camera. “For many people, your phone’s camera is one of its most important aspects. It has a ton of uses, from superimposing wild creatures into reality with AR apps to taking sharp pictures even at night. But you might be missing out on another major ability your phone’s camera has: it can work as a visual search engine and identify just about anything you see in the world. Here are the best identification apps for Android and iPhone.”