Salina Post: Web-based AI program encourages users to submit photos of bees for IDs

Salina Post: Web-based AI program encourages users to submit photos of bees for IDs. “A Kansas State University researcher’s effort to develop an artificial intelligence tool for identifying bees has created quite a buzz already. Brian Spiesman, an assistant professor in K-State’s Department of Entomology, launched the website…earlier this year to relieve a backlog of information needed to help track trends in bee populations across the world.”

CNET: Australia has invented Shazam for spiders

CNET: Australia has invented Shazam for spiders. “Critterpedia is a collaboration between creators Nic and Murray Scarce and Australia’s National Science Agency, CSIRO. It’s a machine learning engine designed to automatically identify different species of spiders and snakes. An AI-powered algorithm like Critterpedia requires hundreds of thousands of images to become accurate in its assessments, so CSIRO and Data61 are hoping to get as many people as possible to download Critterpedia and upload pictures of spiders and snakes they might see in the wild.”

Algorithm Watch: Google apologizes after its Vision AI produced racist results

Algorithm Watch: Google apologizes after its Vision AI produced racist results. “In an experiment that became viral on Twitter, AlgorithmWatch showed that Google Vision Cloud, a computer vision service, labeled an image of a dark-skinned individual holding a thermometer ‘gun’ while a similar image with a light-skinned individual was labeled ‘electronic device’. A subsequent experiment showed that the image of a dark-skinned hand holding a thermometer was labelled ‘gun’ and that the same image with a salmon-colored overlay on the hand was enough for the computer to label it ‘monocular’.”
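
For context, the labels AlgorithmWatch compared come from Cloud Vision’s label-detection endpoint. Here is a minimal sketch of that kind of request, assuming the official google-cloud-vision Python client (v2 or later) and valid Google Cloud credentials; the filename is a placeholder.

```python
# A minimal sketch of the kind of request AlgorithmWatch compared, assuming
# the official google-cloud-vision Python client (v2 or later) and valid
# Google Cloud credentials. The filename is a placeholder.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("hand_with_thermometer.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```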

TechCrunch: Glisten uses computer vision to break down product photos to their most important parts

TechCrunch: Glisten uses computer vision to break down product photos to their most important parts. “It’s amazing that in this day and age, the best way to search for new clothes is to click a few check boxes and then scroll through endless pictures. Why can’t you search for ‘green patterned scoop neck dress’ and see one? Glisten is a new startup enabling just that by using computer vision to understand and list the most important aspects of the products in any photo.”

Input: Google AI no longer uses gender binary tags on images of people

Input: Google AI no longer uses gender binary tags on images of people. “Google’s image-labeling AI tool will no longer label pictures with gender tags like ‘man’ and ‘woman,’ according to an email seen by Business Insider. In the email, Google cites its ethical rules on AI as the basis for the change.”

BetaNews: IBM launches new open source tool to label images using AI

BetaNews: IBM launches new open source tool to label images using AI. “Images for use in development projects need to be correctly labeled to be of use. But adding labels is a task that can involve many hours of work by human analysts painstakingly applying manual labels to images, time that could be better spent on other, more creative, tasks. In order to streamline the labelling process IBM has created a new automated labeling tool for the open source Cloud Annotations project that uses AI to ‘auto-label’ images and thus speed up the process.”
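
To make the “auto-label” idea concrete, here is a generic sketch of the pattern, not IBM’s Cloud Annotations tool itself: a pretrained classifier drafts a label for each image, and a human reviewer then corrects the drafts. It assumes torchvision 0.13 or later.

```python
# A sketch of the auto-labeling pattern only, NOT IBM's Cloud Annotations
# tool. A pretrained torchvision classifier drafts one label per image;
# a human reviewer would then correct the drafts.
from pathlib import Path

import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

def auto_label(image_dir: str) -> dict[str, str]:
    """Map each .jpg filename in image_dir to the model's top-1 class name."""
    drafts = {}
    for path in Path(image_dir).glob("*.jpg"):
        batch = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            top = model(batch).softmax(dim=1).argmax().item()
        drafts[path.name] = weights.meta["categories"][top]
    return drafts
```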

The Register: Inside the 1TB ImageNet data set used to train the world’s AI: Nude kids, drunken frat parties, porno stars, and more

The Register: Inside the 1TB ImageNet data set used to train the world’s AI: Nude kids, drunken frat parties, porno stars, and more. “ImageNet – a data set used to train AI systems around the world – contains photos of naked children, families on the beach, college parties, porn actresses, and more, scraped from the web to train computers without those individuals’ explicit consent. The library consists of 14 million images, each placed into categories that describe what’s pictured in each scene. This pairing of information – images and labels – is used to teach artificially intelligent applications to recognize things and people caught on camera.”

Fast Company: The world’s most-advanced AI can’t tell what’s in these photos. Can you?

Fast Company: The world’s most-advanced AI can’t tell what’s in these photos. Can you? “Is that a manhole cover or dragonfly sitting on a table? Is that a green iguana or just a squirrel running with some nuts? Is that a unicycle or a crocodile crossing the road? To humans, the answer is obvious. But the best image-identifying artificial intelligence in the world hasn’t a clue.”

MakeUseOf: 8 Nifty Apps to Identify Anything Using Your Phone’s Camera

MakeUseOf: 8 Nifty Apps to Identify Anything Using Your Phone’s Camera. “For many people, your phone’s camera is one of its most important aspects. It has a ton of uses, from superimposing wild creatures into reality with AR apps to taking sharp pictures even at night. But you might be missing out on another major ability your phone’s camera has: it can work as a visual search engine and identify just about anything you see in the world. Here are the best identification apps for Android and iPhone.”

Tech Xplore: Team locates nearly all US solar panels in a billion images with machine learning

Tech Xplore: Team locates nearly all US solar panels in a billion images with machine learning. “Knowing which Americans have installed solar panels on their roofs and why they did so would be enormously useful for managing the changing U.S. electricity system and for understanding the barriers to greater use of renewable resources. But until now, all that has been available are essentially estimates. To get accurate numbers, Stanford University scientists analyzed more than a billion high-resolution satellite images with a machine learning algorithm and identified nearly every solar power installation in the contiguous 48 states.”
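
The underlying pattern in surveys like this is tile-and-classify: slice each large aerial image into fixed-size tiles and keep the tiles a trained detector flags. A toy sketch, where `panel_score` is a hypothetical function returning the probability that a tile contains a panel:

```python
# A toy sketch of the tile-and-classify pattern such surveys imply; the
# Stanford pipeline itself is far more involved. `panel_score` is a
# hypothetical function returning the probability a tile contains a panel.
import numpy as np

TILE = 299       # tile edge in pixels (assumption)
THRESHOLD = 0.5  # decision cutoff (assumption)

def find_panel_tiles(image: np.ndarray, panel_score) -> list[tuple[int, int]]:
    """Return (row, col) offsets of tiles predicted to contain solar panels."""
    hits = []
    height, width = image.shape[:2]
    for top in range(0, height - TILE + 1, TILE):
        for left in range(0, width - TILE + 1, TILE):
            tile = image[top:top + TILE, left:left + TILE]
            if panel_score(tile) >= THRESHOLD:
                hits.append((top, left))
    return hits
```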

Ars Technica: How computers got shockingly good at recognizing images

Ars Technica: How computers got shockingly good at recognizing images. “Right now, I can open up Google Photos, type ‘beach,’ and see my photos from various beaches I’ve visited over the last decade. I never went through my photos and labeled them; instead, Google identifies beaches based on the contents of the photos themselves. This seemingly mundane feature is based on a technology called deep convolutional neural networks, which allows software to understand images in a sophisticated way that wasn’t possible with prior techniques.”
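
To make “deep convolutional neural network” concrete, here is a minimal PyTorch example. Production photo-search models are far deeper, but the building blocks (stacked convolutions, pooling, a final classifier) are the same.

```python
# A minimal deep convolutional network in PyTorch, just to make the term
# concrete. Real photo-search models are far deeper, but the building
# blocks (stacked convolutions, pooling, a final classifier) are the same.
import torch
from torch import nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

logits = TinyConvNet()(torch.randn(1, 3, 32, 32))  # one fake 32x32 RGB image
print(logits.shape)  # torch.Size([1, 10])
```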

Wired: When in Nature, Google Lens Does What the Human Brain Can’t

Wired: When in Nature, Google Lens Does What the Human Brain Can’t. “AI-powered visual search tools, like Google Lens and Bing Visual Search, promise a new way to search the world—but most people still type into a search box rather than point their camera at something. We’ve gotten used to manually searching for things over the past 25 years or so that search engines have been at our fingertips. Also, not all objects are directly in front of us at the time we’re searching for information about them.”

Software to Help Identify Civil War Photos Launches August 1

My Twitter buddy Steve D. clued me in on this site I hadn’t heard of: https://www.civilwarphotosleuth.com/. It’s an initiative to identify people in US Civil War photos. The public release of the software is August 1, and a launch event will be held at NARA. You can RSVP and get more details via this Google Doc.

Pete Warden: What Image Classifiers Can Do About Unknown Objects

Pete Warden: What Image Classifiers Can Do About Unknown Objects. “A few days ago I received a question from Plant Village, a team I’m collaborating with, about a problem that’s emerged with a mobile app they’re developing. It detects plant diseases, and is delivering good results when it’s pointed at leaves, but if you point it at a computer keyboard it thinks it’s a damaged crop. This isn’t a surprising result to computer vision researchers, but it is a shock to most other people, so I want to explain why it’s happening, and what we can do about it.”
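
The most obvious mitigation, and a useful baseline, is to refuse to answer when the model’s confidence over its known classes is low. A hedged sketch follows; the threshold is an assumption, and real systems tune it on held-out out-of-domain examples (or train an explicit “unknown” class).

```python
# A sketch of the most obvious mitigation: refuse to answer when the top
# softmax probability over the known classes is low. The 0.6 threshold is
# an assumption; real systems tune it on held-out out-of-domain examples.
import numpy as np

def classify_with_unknown(probs: np.ndarray, labels: list[str],
                          threshold: float = 0.6) -> str:
    """probs is a softmax distribution over the known classes."""
    top = int(np.argmax(probs))
    return labels[top] if probs[top] >= threshold else "unknown"

# A keyboard photo might spread probability across the crop classes:
print(classify_with_unknown(np.array([0.40, 0.35, 0.25]),
                            ["healthy leaf", "blight", "rust"]))  # unknown
```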