Tech Xplore: Team locates nearly all US solar panels in a billion images with machine learning. “Knowing which Americans have installed solar panels on their roofs and why they did so would be enormously useful for managing the changing U.S. electricity system and to understanding the barriers to greater use of renewable resources. But until now, all that has been available are essentially estimates. To get accurate numbers, Stanford University scientists analyzed more than a billion high-resolution satellite images with a machine learning algorithm and identified nearly every solar power installation in the contiguous 48 states.”
Ars Technica: How computers got shockingly good at recognizing images. “Right now, I can open up Google Photos, type ‘beach,’ and see my photos from various beaches I’ve visited over the last decade. I never went through my photos and labeled them; instead, Google identifies beaches based on the contents of the photos themselves. This seemingly mundane feature is based on a technology called deep convolutional neural networks, which allows software to understand images in a sophisticated way that wasn’t possible with prior techniques.”
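For readers curious what that looks like in practice, here is a minimal sketch of labeling one photo with a pretrained deep convolutional neural network, using the open-source torchvision library rather than anything Google-specific. The model choice and file name are illustrative assumptions on my part, not a description of Google Photos' actual pipeline.

```python
# Minimal sketch: label a photo with a pretrained convolutional neural network.
# Uses torchvision's off-the-shelf ResNet-50; Google Photos' real system is
# proprietary and surely different. "vacation_photo.jpg" is a placeholder name.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT        # ImageNet-pretrained weights
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()                # resize/crop/normalize as the model expects

image = Image.open("vacation_photo.jpg")
batch = preprocess(image).unsqueeze(0)           # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]       # probabilities over 1,000 classes

top_prob, top_idx = probs.max(dim=0)
label = weights.meta["categories"][int(top_idx)] # e.g. "seashore"
print(f"{label}: {top_prob.item():.1%}")
```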
Engadget: Google Goggles is officially dead. “Google signed Goggles’ death warrant the moment it launched Lens, and now it looks like the tech giant is ready to bid farewell to its old image recognition app. As Android Police has noticed, the only thing you’ll see when you fire up the Goggles app is a note that says it’s going away.”
Wired: When in Nature, Google Lens Does What the Human Brain Can’t. “AI-powered visual search tools, like Google Lens and Bing Visual Search, promise a new way to search the world—but most people still type into a search box rather than point their camera at something. We’ve gotten used to manually searching for things over the past 25 years or so that search engines have been at our fingertips. Also, not all objects are directly in front of us at the time we’re searching for information about them.”
My Twitter buddy Steve D. clued me in on this site I hadn’t heard of: https://www.civilwarphotosleuth.com/ . It’s an initiative to identify people in US Civil War photos. The public release of the software is August 1, and a launch event will be held at NARA. You can RSVP and get more details via this Google Doc.
Pete Warden: What Image Classifiers Can Do About Unknown Objects. “A few days ago I received a question from Plant Village, a team I’m collaborating with about a problem that’s emerged with a mobile app they’re developing. It detects plant diseases, and is delivering good results when it’s pointed at leaves, but if you point it at a computer keyboard it thinks it’s a damaged crop. This isn’t a surprising result to computer vision researchers, but it is a shock to most other people, so I want to explain why it’s happening, and what we can do about it.”
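The underlying issue is that a classifier trained on a fixed set of classes has to spread all of its probability across those classes, so an off-topic photo still gets a label. Here is a minimal sketch of that failure mode and one common baseline mitigation, refusing low-confidence answers; the class names and threshold are hypothetical, and this is a generic illustration rather than Plant Village's code or Warden's full recommendation.

```python
# Minimal sketch: a closed-set classifier always picks one of its known classes,
# so plain argmax will call a keyboard a crop disease. Rejecting low-confidence
# predictions is the simplest (imperfect) mitigation. Classes and threshold are
# made up for illustration.
import numpy as np

CLASSES = ["healthy_leaf", "leaf_blight", "leaf_rust"]   # hypothetical classes
THRESHOLD = 0.6                                          # hypothetical cutoff

def predict_with_unknown(logits: np.ndarray) -> str:
    """Return the top class, or 'unknown' if the model isn't confident."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    top = int(probs.argmax())
    if probs[top] < THRESHOLD:
        return "unknown"          # probably not a leaf at all
    return CLASSES[top]

# A keyboard tends to produce weak, similar logits; argmax alone would still
# pick a disease class, but the threshold catches it.
print(predict_with_unknown(np.array([0.2, 0.3, 0.25])))   # -> "unknown"
print(predict_with_unknown(np.array([0.1, 4.0, 0.2])))    # -> "leaf_blight"
```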
Lifehacker: How To Use Google Lens’ New Features. “Google Lens, once a Pixel-only feature, is now a part of the Google Photos app (or a standalone Android download). During Google I/O this year, Google announced a number of new features for Google Lens, and you can play with them on both iOS and Android right now – assuming your device now supports Lens in its Camera app (or the standalone Lens app, if it doesn’t).”