South China Morning Post: China makes ‘world’s largest satellite image database’ to train AI better. “A satellite imaging database containing detailed information of more than a million locations has been launched in China to help reduce artificial intelligence’s errors when identifying objects from space, the Chinese Academy of Sciences said on Wednesday. The fine-grained object recognition in high-resolution remote sensing imagery (FAIR1M) database was tens or even hundreds of times larger than similar data sets used in other countries, it said.”

Input Magazine: DoNotPay’s new tool makes your photos undetectable to facial recognition software. “With the new Photo Ninja feature, users upload a photo of themselves to DoNotPay and its algorithms insert hidden changes that confuse facial recognition tools. This type of masked picture can be referred to as an ‘adversarial example,’ exploiting the way artificial intelligence algorithms work to disrupt their behavior.”
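DoNotPay has not published how Photo Ninja's perturbation works, but a common way to build such an "adversarial example" is a gradient-sign nudge: change every pixel slightly in the direction that most lowers the recognizer's match score. A minimal sketch, using a toy linear scorer in place of a real face-recognition model (the model, sizes, and epsilon are all hypothetical):

```python
# Illustrative only: a hand-rolled FGSM-style perturbation against a toy
# linear classifier, standing in for whatever model Photo Ninja targets.
import numpy as np

def fgsm_perturb(image, weights, epsilon=0.05):
    """Nudge each pixel against the gradient of a linear score.

    For a linear score w.x the gradient w.r.t. x is just w, so
    subtracting epsilon * sign(w) lowers the score while keeping
    each per-pixel change at most epsilon (visually tiny).
    """
    return image - epsilon * np.sign(weights)

rng = np.random.default_rng(0)
weights = rng.normal(size=64)   # stand-in for a recognition model
image = rng.uniform(size=64)    # stand-in for a photo's pixels

adv = fgsm_perturb(image, weights)
score_before = float(weights @ image)
score_after = float(weights @ adv)
# The per-pixel change is bounded by epsilon, yet the match score drops.
```

A real attack would backpropagate through a deep network rather than a linear score, but the mechanism — tiny, structured pixel changes that exploit how the model computes its output — is the same.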

The Next Web: Check if your photos were used to develop facial recognition systems with this free tool. “The search engine checks whether your photos were included in the datasets by referencing Flickr identifiers such as username and photo ID. It doesn’t use any facial recognition to detect the images. If it finds an exact match, the results are displayed on the screen. The images are then loaded directly from Flickr.com.”
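As the article notes, the check involves no facial recognition at all — it is exact matching on Flickr identifiers against an index of the datasets. Modeled as a simple set lookup (the identifier format and IDs below are made up for illustration):

```python
# Sketch of an identifier-based membership check: no image analysis,
# just exact matching of photo IDs against a dataset's index.
def photos_in_dataset(user_photo_ids, dataset_index):
    """Return the user's photo IDs that appear in the dataset index."""
    return sorted(set(user_photo_ids) & dataset_index)

# Hypothetical identifiers for illustration.
dataset_index = {"flickr:12345", "flickr:67890", "flickr:24680"}
my_photos = ["flickr:12345", "flickr:99999"]

matches = photos_in_dataset(my_photos, dataset_index)  # → ["flickr:12345"]
```

Because the lookup is exact-match only, a photo re-uploaded under a different ID would not be found — which is why the tool can honestly say it never inspects faces.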

Wired: This Site Published Every Face From Parler’s Capitol Riot Videos. “Late last week, a website called Faces of the Riot appeared online, showing nothing but a vast grid of more than 6,000 images of faces, each one tagged only with a string of characters associated with the Parler video in which it appeared. The site’s creator tells WIRED that he used simple open source machine learning and facial recognition software to detect, extract, and deduplicate every face from the 827 videos that were posted to Parler from inside and outside the Capitol building on January 6, the day when radicalized Trump supporters stormed the building in a riot that resulted in five people’s deaths.”
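The article describes a three-step pipeline — detect faces, extract them, deduplicate — without naming the specific tools. The deduplication step typically works on face *embeddings* (vectors a recognition model produces, where similar faces land close together): keep a face only if it isn't too close to one already kept. A sketch with made-up embeddings and threshold:

```python
# Hedged sketch of embedding-based dedup; the actual software behind
# Faces of the Riot is described only as "simple open source" tooling.
import numpy as np

def deduplicate(embeddings, threshold=0.5):
    """Greedy dedup: keep an embedding only if it is farther than
    `threshold` (Euclidean distance) from every embedding kept so far."""
    kept = []
    for emb in embeddings:
        if all(np.linalg.norm(emb - k) > threshold for k in kept):
            kept.append(emb)
    return kept

faces = [np.array([0.0, 0.0]),   # person A
         np.array([0.1, 0.0]),   # person A again (near-duplicate frame)
         np.array([3.0, 4.0])]   # person B

unique = deduplicate(faces)  # two distinct faces remain
```

The threshold choice matters: too low and the same person appears many times across video frames; too high and distinct people get merged — a likely source of the rough "more than 6,000" count.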

Russia Beyond: How a neural network learned to recognize Russia. “Yandex, Russia’s biggest Internet company, has released an online game that invites players to guess where photos were taken. The images are taken from a database of photos uploaded by users to the Yandex.Maps app (similar to Google Maps). The human players compete against the specially trained neural network Alice, which is already used as a voice assistant in many Yandex products.”

Computerworld: Seeing the signs (and locating them) with Google Street View and deep learning. “Street signs are everywhere, but where they are precisely is not always known by the local government authorities that manage them. Councils and governments keep datasets of all signs in an area – a record of location data is mandatory – but as roads are redeveloped they are increasingly incomplete and due to errors by humans doing field surveys, often inaccurate.”

The Verge: AI is worse at identifying household items from lower-income countries. “Object recognition algorithms sold by tech companies, including Google, Microsoft, and Amazon, perform worse when asked to identify items from lower-income countries. These are the findings of a new study conducted by Facebook’s AI lab, which shows that AI bias can not only reproduce inequalities within countries, but also between them.”

Google AI Blog: Announcing Open Images V5 and the ICCV 2019 Open Images Challenge. “In 2016, we introduced Open Images, a collaborative release of ~9 million images annotated with labels spanning thousands of object categories. Since then we have rolled out several updates, culminating with Open Images V4 in 2018. In total, that release included 15.4M bounding-boxes for 600 object categories, making it the largest existing dataset with object location annotations, as well as over 300k visual relationship annotations. Today we are happy to announce Open Images V5, which adds segmentation masks to the set of annotations, along with the second Open Images Challenge, which will feature a new instance segmentation track based on this data.”

Ars Technica: How computers got shockingly good at recognizing images. “Right now, I can open up Google Photos, type ‘beach,’ and see my photos from various beaches I’ve visited over the last decade. I never went through my photos and labeled them; instead, Google identifies beaches based on the contents of the photos themselves. This seemingly mundane feature is based on a technology called deep convolutional neural networks, which allows software to understand images in a sophisticated way that wasn’t possible with prior techniques.”
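The core operation a convolutional network repeats, thousands of filters deep, is sliding a small learned filter across an image and recording how strongly each patch matches it. A minimal sketch with one hand-written 3x3 filter (a vertical-edge detector) rather than learned weights:

```python
# One convolution with a fixed filter — the single building block that,
# stacked and learned end to end, lets networks recognize "beach" photos.
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core op of a conv layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A 5x5 "image": dark left half, bright right half.
image = np.repeat([[0, 0, 1, 1, 1]], 5, axis=0).astype(float)
vertical_edge = np.array([[-1, 0, 1],
                          [-1, 0, 1],
                          [-1, 0, 1]], dtype=float)

feature_map = conv2d(image, vertical_edge)
# Windows straddling the dark/bright boundary respond strongly;
# uniform regions produce zero.
```

A deep network replaces the hand-written filter with thousands of learned ones, stacked in layers, so early layers detect edges like this and later layers combine them into textures, parts, and whole objects — the "sophisticated understanding" the article refers to.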