MIT Technology Review: Facebook is creating an AI assistant for Minecraft. “Minecraft is the best-selling video game of all time, having moved over 170 million copies. More than 90 million people play every month. But what makes it useful for AI research is that while the Minecraft world offers infinite variety, its rules are also simple and predictable within certain limits. AI researchers have already begun to use it to train and test various kinds of AI systems.”

Digital Trends: Google’s soccer-playing A.I. hopes to master the world’s most popular sport. “Think the player A.I. in FIFA ‘19 was something special? You haven’t seen anything yet! That’s because search giant Google is developing its own soccer-playing artificial intelligence. And, if the company’s history with machine intelligence is anything to go by, it’ll be something quite special.”

The Verge: Facebook open-sources algorithms for detecting child exploitation and terrorism imagery. “Facebook will open-source two algorithms it uses to identify child sexual exploitation, terrorist propaganda, and graphic violence, the company said today. PDQ and TMK+PDQF, a pair of technologies that store files as digital hashes and compare them with known examples of harmful content, have been released on Github, Facebook said in a blog post.”
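
The matching step the post describes is, at its core, a nearest-neighbor check between perceptual hashes: a candidate file's hash counts as a match if it lies within a small Hamming distance of a hash of known harmful content. As a rough illustration only, using toy 16-bit values and a made-up threshold (real PDQ hashes are 256-bit, and this is not Facebook's implementation), a minimal Python sketch of that comparison might look like:

```python
# Toy sketch of perceptual-hash matching: a candidate hash "matches" a bank of
# known hashes if it is within a small Hamming distance of any of them.
# Hash values, widths, and the threshold below are illustrative only.

def hamming_distance(a: int, b: int) -> int:
    """Count the number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_known_content(candidate: int, known_hashes: set, threshold: int) -> bool:
    """Return True if the candidate is within `threshold` bits of any known hash."""
    return any(hamming_distance(candidate, known) <= threshold for known in known_hashes)

# Example with toy 16-bit values (real PDQ hashes are 256 bits).
known = {0b1010101010101010, 0b1111000011110000}
print(matches_known_content(0b1010101010101011, known, threshold=2))  # True: 1 bit differs
print(matches_known_content(0b0000111100001111, known, threshold=2))  # False
```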

TechCrunch: Facebook is creating photorealistic homes for AIs to work and learn in. “If AI-powered robots are ever going to help us out around the house, they’re going to need a lot of experience navigating human environments. Simulators, virtual worlds that look and behave just like real life, are the best place for them to learn, and Facebook has created one of the most advanced such systems yet.”
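
For a sense of what "learning in a simulator" looks like in practice, here is a minimal, generic agent-environment loop in Python. The `HomeSimulator` class, its observations, and its actions are hypothetical placeholders used only to show the shape of the loop; they are not the API of Facebook's actual simulation platform.

```python
# Hedged sketch of the agent/simulator training loop the article alludes to.
# `HomeSimulator` and its methods are made-up stand-ins for illustration.
import random

class HomeSimulator:
    """Toy stand-in for a photorealistic indoor simulator."""
    def __init__(self, num_steps: int = 10):
        self.num_steps = num_steps
        self.step_count = 0

    def reset(self):
        self.step_count = 0
        return {"rgb": None, "agent_position": (0.0, 0.0)}  # placeholder observation

    def step(self, action: str):
        self.step_count += 1
        observation = {"rgb": None, "agent_position": (self.step_count * 0.1, 0.0)}
        reward = 1.0 if action == "move_forward" else 0.0
        done = self.step_count >= self.num_steps
        return observation, reward, done

def random_policy(observation) -> str:
    """Placeholder policy; a real agent would map camera frames to actions."""
    return random.choice(["move_forward", "turn_left", "turn_right"])

env = HomeSimulator()
obs, total_reward, done = env.reset(), 0.0, False
while not done:
    obs, reward, done = env.step(random_policy(obs))
    total_reward += reward
print(f"Episode finished with total reward {total_reward}")
```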

EurekAlert: Researchers unveil tool to debug ‘black box’ deep learning algorithms. “Deep learning systems do not explain how they make their decisions, and that makes them hard to trust. In a new approach to the problem, researchers at Columbia and Lehigh universities have come up with a way to automatically error-check the thousands to millions of neurons in a deep learning neural network. Their tool, DeepXplore, feeds confusing, real-world inputs into the network to expose rare instances of flawed reasoning by clusters of neurons. Researchers present it on Oct. 29 at ACM’s Symposium on Operating Systems Principles in Shanghai.”
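
The core idea the researchers describe is a form of differential testing: take an input, then perturb it until networks trained for the same task start to disagree, and flag the disagreement-inducing input for inspection. The PyTorch sketch below shows only that disagreement-maximization step on tiny, randomly initialized models; it is not the DeepXplore implementation, which additionally tracks and maximizes neuron coverage and works with trained networks and realistic seed inputs.

```python
# Simplified illustration of gradient-based differential testing in the spirit
# of DeepXplore: perturb an input so that two models for the same task disagree.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_model() -> nn.Module:
    return nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))

model_a, model_b = make_model(), make_model()
model_a.eval()
model_b.eval()

# Treat the input itself as the optimization variable.
x = torch.randn(1, 10, requires_grad=True)
optimizer = torch.optim.Adam([x], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    probs_a = torch.softmax(model_a(x), dim=1)
    probs_b = torch.softmax(model_b(x), dim=1)
    disagreement = (probs_a - probs_b).abs().sum()
    (-disagreement).backward()  # maximize disagreement by minimizing its negative
    optimizer.step()

with torch.no_grad():
    pred_a = model_a(x).argmax(dim=1).item()
    pred_b = model_b(x).argmax(dim=1).item()
print(f"after perturbation: model_a predicts {pred_a}, model_b predicts {pred_b}")
```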