Salina Post: Web-based AI program encourages users to submit photos of bees for IDs. “A Kansas State University researcher’s effort to develop an artificial intelligence tool for identifying bees has created quite a buzz already. Brian Spiesman, an assistant professor in K-State’s Department of Entomology, launched the website…earlier this year to relieve a backlog of information needed to help track trends in bee populations across the world.”

Algorithm Watch: Google apologizes after its Vision AI produced racist results. “In an experiment that became viral on Twitter, AlgorithmWatch showed that Google Vision Cloud, a computer vision service, labeled an image of a dark-skinned individual holding a thermometer ‘gun’ while a similar image with a light-skinned individual was labeled ‘electronic device’. A subsequent experiment showed that the image of a dark-skinned hand holding a thermometer was labelled ‘gun’ and that the same image with a salmon-colored overlay on the hand was enough for the computer to label it ‘monocular’.”

TechCrunch: Glisten uses computer vision to break down product photos to their most important parts. “It’s amazing that in this day and age, the best way to search for new clothes is to click a few check boxes and then scroll through endless pictures. Why can’t you search for ‘green patterned scoop neck dress’ and see one? Glisten is a new startup enabling just that by using computer vision to understand and list the most important aspects of the products in any photo.”

Input: Google AI no longer uses gender binary tags on images of people. “Google’s image-labeling AI tool will no longer label pictures with gender tags like ‘man’ and ‘woman,’ according to an email seen by Business Insider. In the email, Google cites its ethical rules on AI as the basis for the change.”

BetaNews: IBM launches new open source tool to label images using AI. “Images for use in development projects need to be correctly labeled to be of use. But adding labels is a task that can involve many hours of work by human analysts painstakingly applying manual labels to images, time that could be better spent on other, more creative, tasks. In order to streamline the labelling process IBM has created a new automated labeling tool for the open source Cloud Annotations project that uses AI to ‘auto-label’ images and thus speed up the process.”
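
The general pattern behind auto-labeling tools like this is simple: run an existing model over the unlabeled images and keep only its confident predictions as draft annotations for a human to review. Below is a minimal sketch of that idea; it uses a pretrained torchvision detector rather than IBM's Cloud Annotations tooling, and the folder name, output file, and 0.8 score cutoff are assumptions.

```python
# Sketch only: generate draft labels with a pretrained detector for later human
# review. This is not IBM's Cloud Annotations code; paths and the score cutoff
# are assumptions for illustration.
import json
from pathlib import Path

import torch
from torchvision import models
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

detector = models.detection.fasterrcnn_resnet50_fpn(pretrained=True).eval()

draft = {}
for path in Path("unlabeled_images").glob("*.jpg"):      # hypothetical folder
    img = convert_image_dtype(read_image(str(path)), torch.float)
    with torch.no_grad():
        pred = detector([img])[0]
    keep = pred["scores"] > 0.8                          # keep only confident boxes
    draft[path.name] = [
        {"label": int(label), "box": [round(v, 1) for v in box.tolist()]}
        for label, box in zip(pred["labels"][keep], pred["boxes"][keep])
    ]

# Draft annotations go to a file a human annotator can correct instead of
# labeling every image from scratch.
Path("draft_annotations.json").write_text(json.dumps(draft, indent=2))
```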

Phys.org: Deep learning enables real-time imaging around corners. “Researchers have harnessed the power of a type of artificial intelligence known as deep learning to create a new laser-based system that can image around corners in real time. With further development, the system might let self-driving cars ‘look’ around parked cars or busy intersections to see hazards or pedestrians. It could also be installed on satellites and spacecraft for tasks such as capturing images inside a cave on an asteroid.”

MIT Technology Review: Machine vision can spot unknown links between classic artworks. “One job of the art historian is to tease apart this web, to study the human poses used by different artists and glimpse the forces that influenced them. Today, that gets easier thanks to the work of Tomas Jenicek and Ondrej Chum at the Czech Technical University in Prague. These guys have used a machine vision system to analyze the poses of human subjects in fine art paintings throughout history. They then search for other paintings that contain people in the same poses.”
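
As a rough illustration of the matching step (this is not the researchers' code, and the figures below are random placeholders): once a pose estimator has extracted 2-D body keypoints for each painted figure, paintings can be ranked by how closely their normalized keypoints align.

```python
# Simplified sketch of pose-based matching across paintings. Keypoints would
# come from a pose estimator (e.g. OpenPose) run on each artwork; here they
# are random placeholders.
import numpy as np

def normalize_pose(keypoints: np.ndarray) -> np.ndarray:
    """Center a (num_joints, 2) array of keypoints and scale it to unit size."""
    centered = keypoints - keypoints.mean(axis=0)
    return centered / (np.linalg.norm(centered) + 1e-8)

def pose_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Smaller values mean the two figures strike more similar poses."""
    return float(np.linalg.norm(normalize_pose(a) - normalize_pose(b)))

query = np.random.rand(17, 2)                                   # figure in the query painting
corpus = {f"painting_{i}": np.random.rand(17, 2) for i in range(5)}
matches = sorted(corpus, key=lambda name: pose_distance(query, corpus[name]))
print("closest poses:", matches[:3])
```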

Hackaday: Leigh Johnson’s Guide To Machine Vision On Raspberry Pi. “We salute hackers who make technology useful for people in emerging markets. Leigh Johnson joined that select group when she accepted the challenge to build portable machine vision units that work offline and can be deployed for under $100 each. For hardware, a Raspberry Pi with camera plus screen can fit under that cost ceiling, and the software to give it sight is the focus of her 2018 Hackaday Superconference presentation. (Video also embedded below.)”
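
For a sense of how little code the inference side needs, here is a bare-bones sketch using the TensorFlow Lite runtime, which runs comfortably on a Raspberry Pi. The model file, input image, and quantized uint8 input are assumptions for illustration, not details taken from the presentation.

```python
# Minimal on-device classification sketch for a Raspberry Pi.
# Assumes a quantized uint8 TFLite model; the file names are placeholders.
import numpy as np
from PIL import Image
import tflite_runtime.interpreter as tflite   # pip install tflite-runtime

interpreter = tflite.Interpreter(model_path="mobilenet_v2.tflite")  # hypothetical model
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Resize the camera frame to the model's expected input shape.
height, width = inp["shape"][1], inp["shape"][2]
frame = np.array(Image.open("frame.jpg").resize((width, height)), dtype=np.uint8)

interpreter.set_tensor(inp["index"], frame[np.newaxis, ...])
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])[0]
print("top class index:", int(np.argmax(scores)))
```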

EurekAlert: Brown researchers teach computers to see optical illusions. “For the study, the team led by [Thomas] Serre, who is affiliated with Brown’s Carney Institute for Brain Science, started with a computational model constrained by anatomical and neurophysiological data of the visual cortex. The model aimed to capture how neighboring cortical neurons send messages to each other and adjust one another’s responses when presented with complex stimuli such as contextual optical illusions.”

MIT News: Helping computers fill in the gaps between video frames. “Given only a few frames of a video, humans can usually surmise what is happening and will happen on screen. If we see an early frame of stacked cans, a middle frame with a finger at the stack’s base, and a late frame showing the cans toppled over, we can guess that the finger knocked down the cans. Computers, however, struggle with this concept.”

Pete Warden: What Image Classifiers Can Do About Unknown Objects. “A few days ago I received a question from Plant Village, a team I’m collaborating with, about a problem that’s emerged with a mobile app they’re developing. It detects plant diseases, and is delivering good results when it’s pointed at leaves, but if you point it at a computer keyboard it thinks it’s a damaged crop. This isn’t a surprising result to computer vision researchers, but it is a shock to most other people, so I want to explain why it’s happening, and what we can do about it.”
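
One common mitigation, roughly in the spirit of what the post goes on to discuss, is to stop trusting the top class whenever the classifier is not confident. The sketch below is an assumption-laden illustration: the model file and class names are hypothetical, and a real fix usually also involves training on explicit "unknown" examples.

```python
# Minimal sketch: treat low-confidence predictions as "unknown" instead of
# returning the top class. Model path and class names are hypothetical.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("plant_disease_model.h5")  # hypothetical file
class_names = ["healthy", "leaf_blight", "rust"]              # hypothetical labels

def classify_with_unknown(image, threshold=0.6):
    """Return a class name, or 'unknown' when the classifier is not confident.

    `image` is a preprocessed array shaped like the model's input.
    """
    probs = model.predict(image[np.newaxis, ...], verbose=0)[0]
    top = int(np.argmax(probs))
    if probs[top] < threshold:
        # A keyboard photo tends to spread probability across the plant
        # classes, so the maximum falls below the threshold.
        return "unknown"
    return class_names[top]
```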

TechCrunch: Want to fool a computer vision system? Just tweak some colors. “Research into machine learning and the interesting AI models created as a consequence are popular topics these days. But there’s a sort of shadow world of scientists working to undermine these systems — not to show they’re worthless but to shore up their weaknesses. A new paper demonstrates this by showing how vulnerable image recognition models are to the simplest color manipulations of the pictures they’re meant to identify.”
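
You can get a feel for this brittleness with a quick check of your own: shift an image's hue and compare the classifier's answers before and after. The sketch below uses torchvision's pretrained ResNet-50 and a plain hue rotation; the attack in the paper is more sophisticated, and "example.jpg" is a placeholder for any test photo.

```python
# Rough sanity check of color sensitivity, not the paper's attack:
# classify an image, apply a large hue shift, and classify again.
import torch
from torchvision import models, transforms
from torchvision.transforms import functional as F
from PIL import Image

model = models.resnet50(pretrained=True).eval()
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def top_class(img: Image.Image) -> int:
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
    return int(logits.argmax(dim=1))

img = Image.open("example.jpg").convert("RGB")   # placeholder test photo
shifted = F.adjust_hue(img, 0.3)                 # big hue rotation, still recognizable to humans
print("original prediction:", top_class(img))
print("hue-shifted prediction:", top_class(shifted))  # often differs for brittle models
```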

Wired: AI Has A Hallucination Problem That’s Proving Tough To Fix. “Tech companies are rushing to infuse everything with artificial intelligence, driven by big leaps in the power of machine learning software. But the deep-neural-network software fueling the excitement has a troubling weakness: Making subtle changes to images, text, or audio can fool these systems into perceiving things that aren’t there.”