Synced: AraNet: New Deep Learning Toolkit for Arabic Social Media

Synced: AraNet: New Deep Learning Toolkit for Arabic Social Media. “The performance of natural language processing (NLP) systems has dramatically improved on tasks such as reading comprehension and natural language inference, and with these advances have come many new application scenarios for the tech. Unsurprisingly, English is where most NLP R&D has been focused. Now, a team of researchers from the Natural Language Processing Lab at the University of British Columbia in Canada have proposed AraNet, a deep learning toolkit designed for Arabic social media processing.”

Phys.org: Deep learning enables real-time imaging around corners

Phys.org: Deep learning enables real-time imaging around corners. “Researchers have harnessed the power of a type of artificial intelligence known as deep learning to create a new laser-based system that can image around corners in real time. With further development, the system might let self-driving cars ‘look’ around parked cars or busy intersections to see hazards or pedestrians. It could also be installed on satellites and spacecraft for tasks such as capturing images inside a cave on an asteroid.”

Morning Brew: Finland Expands AI Basics Course to EU

Morning Brew: Finland Expands AI Basics Course to EU. “Finland will relinquish the rotating presidency of the Council of the EU at the end of the year. Its outgoing gift = expanding Elements of AI to 1% of the EU population by 2021. Starting next year, the course will be available in all 24 official EU languages. But since there are no restrictions on who can take the course, this is basically a Christmas present to anyone who speaks one of those languages. Since it launched, over 220,000 people from 110 countries have signed up to take the class (it was available online in English).” I signed up, said I lived in the United States, no problem.

Ars Technica: Deep Learning breakthrough made by Rice University scientists

Ars Technica: Deep Learning breakthrough made by Rice University scientists. “In an earlier deep learning article, we talked about how inference workloads—the use of already-trained neural networks to analyze data—can run on fairly cheap hardware, but running the training workload that the neural network ‘learns’ on is orders of magnitude more expensive. In particular, the more potential inputs you have to an algorithm, the more out of control your scaling problem gets when analyzing its problem space. This is where MACH, a research project authored by Rice University’s Tharun Medini and Anshumali Shrivastava, comes in.”

The Verge: AI R&D is booming, but general intelligence is still out of reach

The Verge: AI R&D is booming, but general intelligence is still out of reach. “Trying to get a handle on the progress of artificial intelligence is a daunting task, even for those enmeshed in the AI community. But the latest edition of the AI Index report — an annual rundown of machine learning data points now in its third year — does a good job confirming what you probably already suspected: the AI world is booming in a range of metrics covering research, education, and technical achievements.”

Ars Technica: Cloudy with a chance of neurons: The tools that make neural networks work

Ars Technica: Cloudy with a chance of neurons: The tools that make neural networks work. “Artificial Intelligence—or, if you prefer, Machine Learning—is today’s hot buzzword. Unlike many buzzwords that have come before it, though, this stuff isn’t vaporware dreams—it’s real, it’s here already, and it’s changing your life whether you realize it or not.” Deep dive with lots of resources.

Ars Technica: How neural networks work—and why they’ve become a big business

Ars Technica: How neural networks work—and why they’ve become a big business. “Computer scientists have been experimenting with neural networks since the 1950s. But two big breakthroughs—one in 1986, the other in 2012—laid the foundation for today’s vast deep learning industry. The 2012 breakthrough—the deep learning revolution—was the discovery that we can get dramatically better performance out of neural networks with not just a few layers but with many. That discovery was made possible thanks to the growing amount of both data and computing power that had become available by 2012. This feature offers a primer on neural networks. We’ll explain what neural networks are, how they work, and where they came from. And we’ll explore why—despite many decades of previous research—neural networks have only really come into their own since 2012.”
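The “many layers” point is easier to see in code than in prose. Here is a minimal sketch (my own illustration in NumPy, not code from the Ars Technica feature) in which depth is nothing more than the number of stacked weight matrices, so going from “a few layers” to “many” is a one-line change:

    import numpy as np

    def make_layers(sizes, rng):
        # One random weight matrix per consecutive pair of layer sizes.
        return [rng.standard_normal((a, b)) * np.sqrt(2.0 / a)
                for a, b in zip(sizes[:-1], sizes[1:])]

    def forward(x, layers):
        # Pass the input through every layer, with a ReLU between layers.
        for w in layers[:-1]:
            x = np.maximum(x @ w, 0.0)
        return x @ layers[-1]  # last layer left linear

    rng = np.random.default_rng(0)
    shallow = make_layers([784, 32, 10], rng)               # a few layers
    deep = make_layers([784] + [64] * 20 + [10], rng)       # many layers

    x = rng.standard_normal((1, 784))                       # one fake flattened image
    print(forward(x, shallow).shape, forward(x, deep).shape)  # both (1, 10)

Training those weights is the expensive part; the article’s point is that backpropagation (the 1986 breakthrough) plus the data and computing power available by 2012 are what finally made the many-layer version practical.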

Phys.org: Deep learning to analyze neurological problems

Phys.org: Deep learning to analyze neurological problems. “Getting to the doctor’s office for a check-up can be challenging for someone with a neurological disorder that impairs their movement, such as a stroke. But what if the patient could just take a video clip of their movements with a smart phone and forward the results to their doctor? Work by Dr. Hardeep Ryait and colleagues at CCBN-University of Lethbridge in Alberta, Canada, publishing November 21 in the open-access journal PLOS Biology, shows how this might one day be possible.”

Arizona State University: Social media text mining can predict a company’s ‘brand personality’

Arizona State University: Social media text mining can predict a company’s ‘brand personality’. “‘Brand personality scales’ have been around for many years, using consumers’ feedback to attribute human characteristics to companies. These scales, which find that Cracker Barrel is ‘wholesome’ and Sephora is ‘contemporary,’ have proven to be reliable marketing tools. Now, a team including an Arizona State University professor and IBM researchers have harnessed machine learning to accurately predict brand personality ratings by analyzing hundreds of thousands of social media posts.”

ScienceBlog: Researchers Find Way To Harness AI Creativity

ScienceBlog: Researchers Find Way To Harness AI Creativity. “A team led by Alexander Wong, a Canada Research Chair in the area of AI and a professor of systems design engineering at the University of Waterloo, developed a new type of compact family of neural networks that could run on smartphones, tablets, and other embedded and mobile devices. The networks, called AttoNets, are being used for image classification and object segmentation, but can also act as the building blocks for video action recognition, video pose estimation, image generation, and other visual perception tasks.”

Newswise: Using deep learning to improve traffic signal performance

Newswise: Using deep learning to improve traffic signal performance. “Urban traffic congestion currently costs the U.S. economy $160 billion in lost productivity and causes 3.1 billion gallons of wasted fuel and 56 billion pounds of harmful CO2 emissions, according to the 2015 Urban Mobility Scorecard. Vikash Gayah, associate professor of civil engineering, and Zhenhui “Jessie” Li, associate professor of information sciences and technology [both at Penn State], aim to tackle this issue by first identifying machine learning algorithms that will provide results consistent with traditional (theoretical) solutions for simple scenarios, and then building upon those algorithms by introducing complexities that cannot be readily addressed through traditional means.”

EurekAlert: UCI researchers’ deep learning algorithm solves Rubik’s Cube faster than any human

EurekAlert: UCI researchers’ deep learning algorithm solves Rubik’s Cube faster than any human. “Since its invention by a Hungarian architect in 1974, the Rubik’s Cube has furrowed the brows of many who have tried to solve it, but the 3D logic puzzle is no match for an artificial intelligence system created by researchers at the University of California, Irvine.”

UChicago News: Scientists use technology to examine questions around climate, biodiversity

UChicago News: Scientists use technology to examine questions around climate, biodiversity. “A clam shell may be a familiar find on the beach, but its intricate curves and markings tell a rich tale. For centuries, biologists have collected, drawn, measured and compared the shells of bivalve species, pursuing knowledge about how the environment and behavior shape biodiversity. Now, University of Chicago scientists are combining high-resolution 3-D imaging with new geometric deep learning approaches to reveal a fuller version of the story hidden in shells.”

The Next Web: A beginner’s guide to AI: Supervised and unsupervised learning

The Next Web: A beginner’s guide to AI: Supervised and unsupervised learning. “The AI we use everyday in our phones, cameras, and smart devices usually falls into the category of deep learning. We’ve previously covered algorithms and artificial neural networks – concepts surrounding deep learning – but this time we’ll take a look at how deep learning systems actually learn.”
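For readers who want the distinction in concrete terms, here is a rough sketch of the two categories the article covers (my own example, using scikit-learn with classic models for brevity; the same split applies to deep learning systems):

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    X, y = load_iris(return_X_y=True)

    # Supervised: the data comes with labels (y), and the model learns to predict them.
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print("supervised prediction:", clf.predict(X[:1]))

    # Unsupervised: no labels at all; the model finds structure (here, clusters) on its own.
    km = KMeans(n_clusters=3, n_init=10).fit(X)
    print("unsupervised cluster:", km.labels_[0])

Same data either way; the difference is only whether the answer (the label) is part of what the algorithm sees while it learns.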

Computerworld: Seeing the signs (and locating them) with Google Street View and deep learning

Computerworld: Seeing the signs (and locating them) with Google Street View and deep learning. “Street signs are everywhere, but where they are precisely is not always known by the local government authorities that manage them. Councils and governments keep datasets of all signs in an area – a record of location data is mandatory – but as roads are redeveloped they are increasingly incomplete and, due to errors by humans doing field surveys, often inaccurate.”