Rice University: Early Bird uses 10 times less energy to train deep neural networks. “Researchers from Rice and Texas A&M University unveiled Early Bird April 29 in a spotlight paper at ICLR 2020, the International Conference on Learning Representations. A study by lead authors Haoran You and Chaojian Li of Rice’s Efficient and Intelligent Computing (EIC) Lab showed Early Bird could use 10.7 times less energy to train a DNN to the same level of accuracy or better than typical training. EIC Lab director Yingyan Lin led the research along with Rice’s Richard Baraniuk and Texas A&M’s Zhangyang Wang.”
Towards Data Science: Shakespeare Meets Google’s Flax. “Google researchers introduced Flax, a rising star in machine learning, a few months ago. A lot has happened since then and the pre-release has improved tremendously. My own experiments with CNNs on Flax are bearing fruit and I am still amazed by its flexibility compared to TensorFlow. Today I will show you an application of RNNs in Flax: a character-level language model.”
The Serious Computer Vision Blog: Training a Rap Machine. “In my previous post, I gave a short tutorial on how to use the Google AI platform for small garage projects. In this post, I am going to follow up and talk about how I built (or, more accurately, attempted to build) my holiday project: a machine that completes your rap lyrics using the ‘Transformer’ neural network.” I played with it a little using lyrics from G YAMAZAWA’s “North Cack”. It was… pretty good?
Neowin: Neural networks are now being used to track exotic particles at CERN. “Research within the domain of physics has profited from the rise of artificial neural networks and deep learning. In the past, we’ve seen them being applied to study dark matter and massive galaxies. Continuing this pattern, we now have artificial neural networks being used in the study of exotic particles.”
Geekologie: Woman Trains Neural Network To Create Self Portraits Of Her. “This is a short video of the generative adversarial neural network self portraits created by Ellie O’Brien using the NVIDIA StyleGAN model retrained with 7000 images of herself.”
The Next Web: New Zealand’s first AI police officer reports for duty. “New Zealand Police has recruited an unusual new officer to the force: an AI cop called Ella. Ella is a life-like virtual assistant that uses real-time animation to emulate face-to-face interaction in an empathetic way.”
The Next Web: Reuters built a prototype for automated news videos using Deepfakes tech. “The Reuters news company and an AI startup named Synthesia today unveiled a new project they’ve partnered on that uses Deepfakes-style technology to generate automated news reports in real time.”
Boing Boing: Neural network restores and colorizes old movies. “From the excellent ‘Two Minute Papers’ YouTube channel, a discussion of a paper titled ‘DeepRemaster: Temporal Source-Reference Attention Networks for Comprehensive Video Enhancement,’ that demonstrates the results of a neural network that fixes and colorizes aged, blurry, scratchy films.” My husband and I watched this last night. I’m kind of a snob about AI-based colorizing, so that was eh, but the restoration of old/degraded video was absolutely remarkable.
The Register: Facebook mulls tagging pics with ‘radioactive’ markers to trace the origin of photos used to build image-recog AI. “Facebook researchers have developed a digital watermarking technique that allows developers to tell if a particular machine-learning model was trained using marked images.”
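For the curious, the idea is easy to illustrate in miniature. This is not Facebook’s actual pipeline — just a toy NumPy sketch, with all numbers invented, of the general principle: nudge marked training examples along a secret direction, and a linear classifier trained on them ends up with weights measurably aligned with that direction, while a classifier trained on clean data does not.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 512                                   # feature dimension (invented)

# The data owner's secret "radioactive" direction.
u = rng.normal(size=d)
u /= np.linalg.norm(u)

# Two classes of toy features; the marked class is shifted by eps * u.
eps = 0.5
class0 = rng.normal(0.0, 1.0, (1000, d))
marked = rng.normal(0.0, 1.0, (1000, d)) + eps * u
clean = rng.normal(0.0, 1.0, (1000, d))   # same class, no mark

# A nearest-centroid linear classifier: weight = difference of class means.
w_marked = marked.mean(axis=0) - class0.mean(axis=0)
w_clean = clean.mean(axis=0) - class0.mean(axis=0)

def cosine(x, y):
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

# Random directions in 512-D have |cosine| around 1/sqrt(512) ~ 0.044,
# so the marked model's alignment with u stands out clearly.
print(round(cosine(w_marked, u), 3), round(abs(cosine(w_clean, u)), 3))
```

The statistical test in the real method is more careful than an eyeball comparison, but the detection signal is the same: alignment with a direction only the data owner knows.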
TechCrunch: Facebook speeds up AI training by culling the weak. “Training an artificial intelligence agent to do something like navigate a complex 3D world is computationally expensive and time-consuming. In order to better create these potentially useful systems, Facebook engineers derived huge efficiency benefits from, essentially, leaving the slowest of the pack behind.”
Ars Technica: How Google researchers used neural networks to make weather forecasts. “The researchers say their results are a dramatic improvement over previous techniques in two key ways. One is speed. Google says that leading weather forecasting models today take one to three hours to run, making them useless if you want a weather forecast an hour in the future. By contrast, Google says its system can produce results in less than 10 minutes—including the time to collect data from sensors around the United States.”
News@Northeastern: He’s Training Computers To Find New Molecules With The Machine Learning Algorithms Used By Facebook And Google. “For more than a decade, Facebook and Google algorithms have been learning as much as they can about you. It’s how they refine their systems to deliver the news you read, those puppy videos you love, and the political ads you engage with. These same kinds of algorithms can be used to find billions of molecules and catalyze important chemical reactions that are currently induced with expensive and toxic metals, says Steven A. Lopez, an assistant professor of chemistry and chemical biology at Northeastern.”
Analytics India: 10 Free Resources To Learn GAN In 2020. “Generative Adversarial Networks, or GANs, one of the most interesting advances of the decade, have been used to create art, generate fake images, and swap faces in videos, among other things. GANs are a subclass of deep generative models that aim to learn a target distribution in an unsupervised manner. The resources listed below will help a beginner kick-start learning and understand how this model works. In this article, we list 10 free resources to learn GANs in 2020.” These are informational, not tools.
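If you want the one-paragraph version before diving into those resources: a GAN trains a generator to fool a discriminator while the discriminator learns to tell real samples from generated ones. Here is a deliberately tiny sketch — plain NumPy, 1-D data, a linear generator, hand-derived gradients, nothing like a production GAN — that shows the adversarial loop pulling generated samples toward the real distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Real data ~ N(4, 1.25). Generator G(z) = a*z + b; discriminator
# D(x) = sigmoid(w*x + c). Both updated by hand-derived gradients.
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr, batch = 0.03, 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(3000):
    # Discriminator step: maximize log D(real) + log(1 - D(fake)).
    real = rng.normal(4.0, 1.25, batch)
    fake = a * rng.normal(0.0, 1.0, batch) + b
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * np.mean(-(1 - d_real) * real + d_fake * fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step: minimize -log D(fake) (the non-saturating loss).
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    dx = -(1 - sigmoid(w * fake + c)) * w   # dLoss/dfake
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

samples = a * rng.normal(0.0, 1.0, 2000) + b
print(round(float(samples.mean()), 2))   # drifts toward the real mean of 4
```

The generator starts producing samples around 0 and, purely by chasing the discriminator’s judgment, shifts them toward the real data’s mean of 4 — the whole GAN idea in forty lines.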
Morning Brew: Finland Expands AI Basics Course to EU. “Finland will relinquish the rotating presidency of the Council of the EU at the end of the year. Its outgoing gift = expanding Elements of AI to 1% of the EU population by 2021. Starting next year, the course will be available in all 24 official EU languages. But since there are no restrictions on who can take the course, this is basically a Christmas present to anyone who speaks one of those languages. Since it launched, over 220,000 people from 110 countries have signed up to take the class (it was available online in English).” I signed up, said I lived in the United States, no problem.
Ars Technica: Deep Learning breakthrough made by Rice University scientists. “In an earlier deep learning article, we talked about how inference workloads—the use of already-trained neural networks to analyze data—can run on fairly cheap hardware, but running the training workload that the neural network ‘learns’ on is orders of magnitude more expensive. In particular, the more potential inputs you have to an algorithm, the more out of control your scaling problem gets when analyzing its problem space. This is where MACH, a research project authored by Rice University’s Tharun Medini and Anshumali Shrivastava, comes in.”
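As I understand the approach, MACH sidesteps that scaling problem by never training one classifier over millions of labels: each of a few repetitions hashes every label into a small number of buckets, a small model scores the buckets, and a label’s final score is the average of its buckets across repetitions. A toy sketch of that decoding step — all sizes and scores invented, not the authors’ code:

```python
import numpy as np

rng = np.random.default_rng(2)

N = 100_000   # number of labels (e.g. distinct classes) -- invented size
B = 32        # buckets per repetition: each small model outputs only B scores
R = 4         # independent hash repetitions

# Each repetition hashes every label id into one of B buckets.
hashes = [rng.integers(0, B, N) for _ in range(R)]

# Stand-in for the R small models' bucket scores on one input.
bucket_scores = rng.random((R, B))

# Decode: a label's score is the average score of its bucket in each repetition.
class_scores = np.mean([bucket_scores[r][hashes[r]] for r in range(R)], axis=0)
pred = int(np.argmax(class_scores))
print(pred, class_scores.shape)   # one winner out of all N labels
```

The payoff is the memory math: R small models with B outputs each replace one model with N outputs, while the independent hashes keep label collisions from being fatal.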