MIT News: System brings deep learning to “internet of things” devices. “Deep learning is everywhere. This branch of artificial intelligence curates your social media and serves your Google search results. Soon, deep learning could also check your vitals or set your thermostat. MIT researchers have developed a system that could bring deep learning neural networks to new — and much smaller — places, like the tiny computer chips in wearable medical devices, household appliances, and the 250 billion other objects that constitute the ‘internet of things’ (IoT).”

University of Amsterdam: Google Streetview shows social importance pedestrian friendly environment. “With Google Streetview and Deep Learning, researchers at the University of Amsterdam and the University of Twente identified how the urban environment is linked to the vitality of social organisations and neighbourhoods. They conclude that, if an environment provides more space to pedestrians, this will be conducive to neighbourhood-based social organisations’ chances of survival.”

MIT News: Shrinking deep learning’s carbon footprint. “Some of the excitement over AI’s recent progress has shifted to alarm. In a study last year, researchers at the University of Massachusetts at Amherst estimated that training a large deep-learning model produces 626,000 pounds of planet-warming carbon dioxide, equal to the lifetime emissions of five cars. As models grow bigger, their demand for computing is outpacing improvements in hardware efficiency. Chips specialized for neural-network processing, like GPUs (graphics processing units) and TPUs (tensor processing units), have offset the demand for more computing, but not by enough.”

Synced: AraNet: New Deep Learning Toolkit for Arabic Social Media. “The performance of natural language processing (NLP) systems has dramatically improved on tasks such as reading comprehension and natural language inference, and with these advances have come many new application scenarios for the tech. Unsurprisingly, English is where most NLP R&D has been focused. Now, a team of researchers from the Natural Language Processing Lab at the University of British Columbia in Canada have proposed AraNet, a deep learning toolkit designed for Arabic social media processing.”

Phys.org: Deep learning enables real-time imaging around corners. “Researchers have harnessed the power of a type of artificial intelligence known as deep learning to create a new laser-based system that can image around corners in real time. With further development, the system might let self-driving cars ‘look’ around parked cars or busy intersections to see hazards or pedestrians. It could also be installed on satellites and spacecraft for tasks such as capturing images inside a cave on an asteroid.”

Morning Brew: Finland Expands AI Basics Course to EU. “Finland will relinquish the rotating presidency of the Council of the EU at the end of the year. Its outgoing gift = expanding Elements of AI to 1% of the EU population by 2021. Starting next year, the course will be available in all 24 official EU languages. But since there are no restrictions on who can take the course, this is basically a Christmas present to anyone who speaks one of those languages. Since it launched, over 220,000 people from 110 countries have signed up to take the class (it was available online in English).” I signed up, said I lived in the United States; no problem.

Ars Technica: Deep Learning breakthrough made by Rice University scientists. “In an earlier deep learning article, we talked about how inference workloads—the use of already-trained neural networks to analyze data—can run on fairly cheap hardware, but running the training workload that the neural network ‘learns’ on is orders of magnitude more expensive. In particular, the more potential inputs you have to an algorithm, the more out of control your scaling problem gets when analyzing its problem space. This is where MACH, a research project authored by Rice University’s Tharun Medini and Anshumali Shrivastava, comes in.”

The Verge: AI R&D is booming, but general intelligence is still out of reach. “Trying to get a handle on the progress of artificial intelligence is a daunting task, even for those enmeshed in the AI community. But the latest edition of the AI Index report — an annual rundown of machine learning data points now in its third year — does a good job confirming what you probably already suspected: the AI world is booming in a range of metrics covering research, education, and technical achievements.”

Ars Technica: Cloudy with a chance of neurons: The tools that make neural networks work. “Artificial Intelligence—or, if you prefer, Machine Learning—is today’s hot buzzword. Unlike many buzzwords that have come before it, though, this stuff isn’t vaporware dreams—it’s real, it’s here already, and it’s changing your life whether you realize it or not.” Deep dive with lots of resources.

Ars Technica: How neural networks work—and why they’ve become a big business. “Computer scientists have been experimenting with neural networks since the 1950s. But two big breakthroughs—one in 1986, the other in 2012—laid the foundation for today’s vast deep learning industry. The 2012 breakthrough—the deep learning revolution—was the discovery that we can get dramatically better performance out of neural networks with not just a few layers but with many. That discovery was made possible thanks to the growing amount of both data and computing power that had become available by 2012. This feature offers a primer on neural networks. We’ll explain what neural networks are, how they work, and where they came from. And we’ll explore why—despite many decades of previous research—neural networks have only really come into their own since 2012.”
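The “many layers” idea the article describes can be reduced to a few lines of code: each layer is just a linear map followed by a nonlinearity, and a network is “deep” when many such layers are stacked. The following is a minimal, illustrative sketch in NumPy (not code from the article; layer sizes and random weights are placeholders for the sake of the example):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Standard nonlinearity: without one, stacked linear layers
    # would collapse into a single linear map.
    return np.maximum(0.0, x)

# A toy network: 4 inputs -> two hidden layers of 8 units -> 2 outputs.
layer_sizes = [4, 8, 8, 2]
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    # Apply each hidden layer (linear map + ReLU) in sequence;
    # stacking more entries in `weights` makes the network deeper.
    for w in weights[:-1]:
        x = relu(x @ w)
    return x @ weights[-1]  # final layer left linear

out = forward(np.ones(4))
print(out.shape)  # (2,)
```

Training (adjusting `weights` from data via backpropagation) is what the 1986 breakthrough enabled at scale; this sketch shows only the forward pass.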

Phys.org: Deep learning to analyze neurological problems. “Getting to the doctor’s office for a check-up can be challenging for someone with a neurological disorder that impairs their movement, such as a stroke. But what if the patient could just take a video clip of their movements with a smart phone and forward the results to their doctor? Work by Dr. Hardeep Ryait and colleagues at CCBN-University of Lethbridge in Alberta, Canada, publishing November 21 in the open-access journal PLOS Biology, shows how this might one day be possible.”

Arizona State University: Social media text mining can predict a company’s ‘brand personality’. “‘Brand personality scales’ have been around for many years, using consumers’ feedback to attribute human characteristics to companies. These scales, which find that Cracker Barrel is ‘wholesome’ and Sephora is ‘contemporary,’ have proven to be reliable marketing tools. Now, a team including an Arizona State University professor and IBM researchers have harnessed machine learning to accurately predict brand personality ratings by analyzing hundreds of thousands of social media posts.”

ScienceBlog: Researchers Find Way To Harness AI Creativity. “A team led by Alexander Wong, a Canada Research Chair in the area of AI and a professor of systems design engineering at the University of Waterloo, developed a new type of compact family of neural networks that could run on smartphones, tablets, and other embedded and mobile devices. The networks, called AttoNets, are being used for image classification and object segmentation, but can also act as the building blocks for video action recognition, video pose estimation, image generation, and other visual perception tasks.”

Newswise: Using deep learning to improve traffic signal performance. “Urban traffic congestion currently costs the U.S. economy $160 billion in lost productivity and causes 3.1 billion gallons of wasted fuel and 56 billion pounds of harmful CO2 emissions, according to the 2015 Urban Mobility Scorecard. Vikash Gayah, associate professor of civil engineering, and Zhenhui “Jessie” Li, associate professor of information sciences and technology [both at Penn State], aim to tackle this issue by first identifying machine learning algorithms that will provide results consistent with traditional (theoretical) solutions for simple scenarios, and then building upon those algorithms by introducing complexities that cannot be readily addressed through traditional means.”