Ars Technica: Cloudy with a chance of neurons: The tools that make neural networks work

Ars Technica: Cloudy with a chance of neurons: The tools that make neural networks work. “Artificial Intelligence—or, if you prefer, Machine Learning—is today’s hot buzzword. Unlike many buzzwords that have come before it, though, this stuff isn’t vaporware dreams—it’s real, it’s here already, and it’s changing your life whether you realize it or not.” Deep dive with lots of resources.

Ars Technica: How neural networks work—and why they’ve become a big business

Ars Technica: How neural networks work—and why they’ve become a big business. “Computer scientists have been experimenting with neural networks since the 1950s. But two big breakthroughs—one in 1986, the other in 2012—laid the foundation for today’s vast deep learning industry. The 2012 breakthrough—the deep learning revolution—was the discovery that we can get dramatically better performance out of neural networks with not just a few layers but with many. That discovery was made possible thanks to the growing amount of both data and computing power that had become available by 2012. This feature offers a primer on neural networks. We’ll explain what neural networks are, how they work, and where they came from. And we’ll explore why—despite many decades of previous research—neural networks have only really come into their own since 2012.”
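To make the “not just a few layers but many” point concrete, here is a minimal sketch (not from the article; it assumes PyTorch, with illustrative layer sizes) contrasting a shallow network with a deeper stack of layers of the kind the post-2012 deep learning boom is built on:

```python
# Minimal sketch: a one-hidden-layer network versus a deeper stack of layers.
# Layer widths and the 784-dimensional input (a flattened 28x28 image) are
# illustrative assumptions, not details from the article.
import torch
import torch.nn as nn

shallow = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),   # a single hidden layer
    nn.Linear(128, 10),
)

deep = nn.Sequential(                 # the post-2012 recipe: many stacked layers
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64),  nn.ReLU(),
    nn.Linear(64, 10),
)

x = torch.randn(32, 784)                 # a batch of 32 flattened images
print(shallow(x).shape, deep(x).shape)   # both map each image to 10 class scores
```

Both models map the same input to the same output shape; the difference the article describes is that, given enough data and compute, the deeper stack can learn far richer intermediate representations.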

Cornell Chronicle: AI tool detects global fashion trends

Cornell Chronicle: AI tool detects global fashion trends. “GeoStyle analyzes public Instagram and Flickr photos to map trends using computer vision and neural networks, a kind of artificial intelligence often used to sort images. Its models help researchers understand existing trends in specific cities and around the world over time, and its trend forecasts are up to 20% more accurate than previous methods.”

ZDNet: AI can now read the thoughts of paralysed patients as they imagine they are writing

ZDNet: AI can now read the thoughts of paralysed patients as they imagine they are writing. “Handwriting is becoming a rare skill in the digital age. But researchers have now discovered a new application that could significantly improve the way tetraplegic people, who are often also unable to speak, communicate with the outside world.”

Phys.org: Researchers make neural networks successfully detect DNA damage caused by UV radiation

Phys.org: Researchers make neural networks successfully detect DNA damage caused by UV radiation. “Researchers at Tomsk Polytechnic University jointly with the University of Chemistry and Technology (Prague) conducted a series of experiments which proved that artificial neural networks can accurately identify DNA damage caused by UV radiation. In the future, this approach can be used in modern medical diagnostics. An article dedicated to these studies was published in the journal Biosensors and Bioelectronics.”

Neowin: Neural network system has achieved remarkable accuracy in detecting brain hemorrhages

Neowin: Neural network system has achieved remarkable accuracy in detecting brain hemorrhages. “Deep learning and its applications have grown in recent years. Recently, researchers from ETH Zurich used the technique to study dark matter in an industry first. Now, a team working with the University of California, Berkeley, and the University of California, San Francisco (UCSF) School of Medicine has trained a convolutional neural network dubbed ‘PatchFCN’ that detects brain hemorrhages with remarkable accuracy.”

The Register: Inside the 1TB ImageNet data set used to train the world’s AI: Nude kids, drunken frat parties, porno stars, and more

The Register: Inside the 1TB ImageNet data set used to train the world’s AI: Nude kids, drunken frat parties, porno stars, and more. “ImageNet – a data set used to train AI systems around the world – contains photos of naked children, families on the beach, college parties, porn actresses, and more, scraped from the web to train computers without those individuals’ explicit consent. The library consists of 14 million images, each placed into categories that describe what’s pictured in each scene. This pairing of information – images and labels – is used to teach artificially intelligent applications to recognize things and people caught on camera.”
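The “pairing of information—images and labels” the article describes is exactly what supervised image classifiers consume. Below is a rough sketch, assuming PyTorch/torchvision, of how such pairs are fed to a model during training; the directory name `./imagenet_subset` and the ResNet-18 backbone are placeholders chosen for illustration, not details from the article.

```python
# Rough sketch: consuming image-label pairs in supervised training.
# Assumes torchvision is installed and ./imagenet_subset/ is a placeholder
# directory laid out with one sub-folder per label (ImageFolder convention).
import torch
import torch.nn as nn
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# ImageFolder pairs each image with an integer label derived from its folder name.
dataset = datasets.ImageFolder("./imagenet_subset", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=len(dataset.classes))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:          # the image-label pairing in action
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

The privacy concern the article raises follows directly from this workflow: whatever photos end up in those labeled folders, consented or not, are what the model learns to recognize.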