EurekAlert: How to make AI trustworthy. “One of the biggest impediments to the adoption of new technologies is trust in AI. Now, a new tool developed by USC Viterbi Engineering researchers generates automatic indicators of whether the data and predictions produced by AI algorithms are trustworthy. Their research paper, ‘There Is Hope After All: Quantifying Opinion and Trustworthiness in Neural Networks’ by Mingxi Cheng, Shahin Nazarian and Paul Bogdan of the USC Cyber Physical Systems Group, was featured in Frontiers in Artificial Intelligence.”
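
The “opinion” in the paper’s title is a term from subjective logic, which divides probability mass among belief, disbelief, and explicit uncertainty. A minimal sketch of such an opinion (the class and numbers here are illustrative, not the authors’ implementation):

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """Subjective-logic opinion: belief + disbelief + uncertainty = 1."""
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float = 0.5  # prior probability absent any evidence

    def expected_probability(self) -> float:
        # Projected probability: belief plus the base-rate share of uncertainty.
        return self.belief + self.base_rate * self.uncertainty

# A prediction backed by strong evidence vs. one that is mostly guesswork.
confident = Opinion(belief=0.8, disbelief=0.1, uncertainty=0.1)
unsure = Opinion(belief=0.2, disbelief=0.1, uncertainty=0.7)
print(confident.expected_probability())  # 0.85
print(unsure.expected_probability())     # 0.55
```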

Phys.org: Machine-learning model finds SARS-CoV-2 growing more infectious. “The model, developed by lead researcher Guowei Wei, professor in the departments of Mathematics and Biochemistry and Molecular Biology, analyzed SARS-CoV-2 genotyping from more than 20,000 viral genome samples. The researchers analyzed mutations to the spike protein—a protein primarily responsible for facilitating infection—and found that five of the six known virus subtypes are now more infectious.”
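
At its core, this kind of analysis compares each sample’s spike sequence against a reference and tallies the substitutions before grouping them by subtype. A toy sketch of the comparison step (the sequences and sample names are hypothetical; the study worked with more than 20,000 real genomes):

```python
# Truncated reference spike fragment, for illustration only.
REFERENCE_SPIKE = "MFVFLVLLPLVSSQ"

samples = {
    "sample_1": "MFVFLVLLPLVSGQ",
    "sample_2": "MFVFLVLLPLVSSQ",
}

def substitutions(reference: str, sequence: str) -> list[tuple[int, str, str]]:
    """Return (position, reference_residue, mutant_residue) for each mismatch."""
    return [
        (i, ref, mut)
        for i, (ref, mut) in enumerate(zip(reference, sequence))
        if ref != mut
    ]

for name, seq in samples.items():
    print(name, substitutions(REFERENCE_SPIKE, seq))
```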

InformationWeek: Google, Harvard, and EdX Team Up to Offer TinyML Training. “Online learning platform EdX; Google’s open-source machine learning platform, TensorFlow; and HarvardX have put together a certification program to train tech professionals to work with tiny machine learning (TinyML). The program is meant to support this specialized segment of development that can include edge computing with smart devices, wildlife tracking, and other sensors. The program comprises a series of courses that can be completed at home.”
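
A typical TinyML workflow, and the kind this certification targets, trains a small TensorFlow model and then shrinks it for edge deployment. A minimal sketch of the conversion step (the model is a placeholder, not course material):

```python
import tensorflow as tf

# A deliberately tiny model of the sort TinyML targets.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Convert to TensorFlow Lite with default size/latency optimizations, the
# usual first step toward running on an edge device or microcontroller.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```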

MIT News: Shrinking deep learning’s carbon footprint. “Some of the excitement over AI’s recent progress has shifted to alarm. In a study last year, researchers at the University of Massachusetts at Amherst estimated that training a large deep-learning model produces 626,000 pounds of planet-warming carbon dioxide, equal to the lifetime emissions of five cars. As models grow bigger, their demand for computing is outpacing improvements in hardware efficiency. Chips specialized for neural-network processing, like GPUs (graphics processing units) and TPUs (tensor processing units), have offset the demand for more computing, but not by enough.”
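
The five-car comparison is simple arithmetic on the study’s numbers; the per-car figure used below (roughly 126,000 pounds of CO2 over a car’s lifetime, fuel included) is the UMass study’s assumption, not a number from this article:

```python
# Sanity-checking the quoted comparison from the UMass Amherst study.
training_emissions_lbs = 626_000   # CO2 from training one large model
lifetime_per_car_lbs = 126_000     # assumed lifetime emissions per car, fuel included
print(training_emissions_lbs / lifetime_per_car_lbs)  # ~4.97, i.e. "five cars"
```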

CNET: Australia has invented Shazam for spiders. “Critterpedia is a collaboration between creators Nic and Murray Scarce and Australia’s National Science Agency, CSIRO. It’s a machine learning engine designed to automatically identify different species of spiders and snakes. An AI-powered algorithm like Critterpedia requires hundreds of thousands of images to become accurate in its assessments, so CSIRO and Data61 are hoping to get as many people as possible to download Critterpedia and upload pictures of spiders and snakes they might see in the wild.”
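
Why do the uploads matter? Species classifiers like this are commonly built by transfer learning: a backbone pretrained on generic images is frozen, and only a small head is trained on the crowd-sourced photos. A minimal sketch of that recipe (illustrative; the article does not describe Critterpedia’s actual architecture):

```python
import tensorflow as tf

# Frozen, ImageNet-pretrained backbone supplies general-purpose features.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet", pooling="avg"
)
base.trainable = False

# Only this small head is trained on the uploaded, labeled photos.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. spider vs. snake
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(...) would then run on the crowd-sourced dataset.
```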

MIT Technology Review: The field of natural language processing is chasing the wrong goal. “What has the world really gained if a massive neural network achieves SOTA on some benchmark by a point or two? It’s not as though anyone cares about answering these questions for their own sake; winning the leaderboard is an academic exercise that may not make real-world tools any better. Indeed, many apparent improvements emerge not from general comprehension abilities, but from models’ extraordinary skill at exploiting spurious patterns in the data. Do recent ‘advances’ really translate into helping people solve problems?”
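
The “spurious patterns” complaint is easy to reproduce: if an arbitrary token co-occurs with a label in a benchmark, a model will lean on it, and the leaderboard rewards that. A toy illustration on synthetic data (hypothetical examples, not from the article):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# In this synthetic "benchmark," the meaningless word "definitely" happens
# to co-occur with the positive label, so a classifier can score well
# without anything resembling comprehension.
train_texts = [
    "the movie was definitely great",
    "definitely a fine film",
    "the movie was terrible",
    "a dull, plodding film",
]
train_labels = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(train_texts)
clf = LogisticRegression().fit(X, train_labels)

# The cue word carries a large positive weight despite meaning nothing.
cue = vectorizer.vocabulary_["definitely"]
print(clf.coef_[0][cue])
```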

Engadget: DeepMind and Oxford University researchers on how to ‘decolonize’ AI. “In a moment where society is collectively reckoning with just how deep the roots of racism reach, a new paper from researchers at DeepMind — the AI lab and sister company to Google — and the University of Oxford presents a vision to ‘decolonize’ artificial intelligence. The aim is to keep society’s ugly prejudices from being reproduced and amplified by today’s powerful machine learning systems.”

EurekAlert: New machine learning method allows hospitals to share patient data — privately. “To answer medical questions that can be applied to a wide patient population, machine learning models rely on large, diverse datasets from a variety of institutions. However, health systems and hospitals are often resistant to sharing patient data, due to legal, privacy, and cultural challenges. An emerging technique called federated learning is a solution to this dilemma, according to a study published Tuesday in the journal Scientific Reports, led by senior author Spyridon Bakas, PhD, an instructor of Radiology and Pathology & Laboratory Medicine in the Perelman School of Medicine at the University of Pennsylvania.”
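
In federated learning, each hospital trains on its own records and only model weights travel; a central server averages them. A minimal sketch of federated averaging on a toy linear model (the general technique, not the study’s specific setup):

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's gradient-descent refinement of the shared linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three "hospitals," each with private data that never leaves the site.
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    sites.append((X, y))

global_w = np.zeros(2)
for _ in range(20):
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(local_ws, axis=0)  # server averages weights only

print(global_w)  # approaches [2.0, -1.0] without pooling raw data
```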

EurekAlert: New learning algorithm should significantly expand the possible applications of AI. “The high energy consumption of artificial neural networks’ learning activities is one of the biggest hurdles for the broad use of Artificial Intelligence (AI), especially in mobile applications. One approach to solving this problem can be gleaned from knowledge about the human brain. Although it has the computing power of a supercomputer, it needs only 20 watts – about a millionth of the energy a supercomputer consumes. One of the reasons for this is the efficient transfer of information between neurons in the brain. Neurons send short electrical impulses (spikes) to other neurons – but, to save energy, only as often as absolutely necessary.”
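
That spike-only-when-necessary behavior is captured by the textbook leaky integrate-and-fire neuron: charge leaks away each step, and the neuron fires only when its membrane potential crosses a threshold. A minimal sketch (illustrative background, not the new learning algorithm the release announces):

```python
def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Integrate input, leak charge each step, and spike only when needed."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = leak * potential + current
        if potential >= threshold:
            spikes.append(1)   # fire a short impulse...
            potential = 0.0    # ...and reset
        else:
            spikes.append(0)   # stay silent, costing (almost) no energy
    return spikes

print(simulate_lif([0.3, 0.3, 0.3, 0.6, 0.1, 0.0, 0.9, 0.9]))
# -> [0, 0, 0, 1, 0, 0, 0, 1]: sparse spikes, not a continuous signal
```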

Fast Company: Twitter automatically flags more than half of all tweets that violate its rules. “More than 51% of tweets that violate Twitter’s Terms of Service are now automatically flagged by machine learning systems, Twitter CEO Jack Dorsey said Thursday. The tweets are then handed to human workers for review, in a process that Dorsey said should ease the burden on people who receive harassing messages on the platform, since they won’t have to manually report as many offensive messages.”
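
The pattern described, a model flags and humans decide, reduces to thresholding a score and routing content to a review queue. A toy sketch of that routing (hypothetical threshold and scorer, not Twitter’s system):

```python
REVIEW_THRESHOLD = 0.8  # hypothetical operating point

def route(tweets, score):
    """Split tweets into a human-review queue and a pass-through list."""
    review_queue, passed = [], []
    for tweet in tweets:
        if score(tweet) >= REVIEW_THRESHOLD:
            review_queue.append(tweet)  # humans make the final call
        else:
            passed.append(tweet)
    return review_queue, passed

# Stand-in scorer; a real system would use a trained abuse classifier.
def toy_score(tweet: str) -> float:
    return 0.9 if "you are an idiot" in tweet else 0.1

queue, passed = route(["nice weather today", "you are an idiot"], toy_score)
print(queue, passed)
```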

Arab News: Google’s new tool lets you translate Ancient Egyptian hieroglyphics. “If you’ve ever wondered what messages the Ancient Egyptians were trying to convey with their hieroglyphics, Google’s new tool might just be able to help. In celebration of the anniversary of the discovery of the Rosetta Stone, Google Arts and Culture has released a new AI-powered tool, Fabricius, that allows you to decode and translate the ancient symbols and characters into both Arabic and English.”

University of Connecticut: UConn Library, School of Engineering to Expand Handwritten Text Recognition. “The UConn Library and the School of Engineering are working to develop new technology that applies machine learning to handwritten text recognition, giving researchers improved access to handwritten historic documents. Handwritten documents are essential for researchers, but are often inaccessible because they cannot be searched even after they are digitized. The Connecticut Digital Archive, a project of the UConn Library, is working to change that with a $24,277 grant awarded through the Catalyst Fund of LYRASIS, a nonprofit organization that supports access to academic, scientific, and cultural heritage.”
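
The payoff of recognition for an archive is searchability: once pages become text, even an elementary inverted index makes them findable. A toy sketch with hypothetical transcriptions:

```python
from collections import defaultdict

# Hypothetical output of a handwritten-text-recognition pass over two pages.
transcriptions = {
    "letter_1842_p1": "dear sir I write concerning the harvest",
    "letter_1842_p2": "the harvest this year has failed entirely",
}

# Map each word to the set of pages it appears on.
index = defaultdict(set)
for page, text in transcriptions.items():
    for word in text.split():
        index[word].add(page)

print(sorted(index["harvest"]))  # ['letter_1842_p1', 'letter_1842_p2']
```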

The Register: MIT apologizes, permanently pulls offline huge dataset that taught AI systems to use racist, misogynistic slurs. “The training set, built by the university, has been used to teach machine-learning models to automatically identify and list the people and objects depicted in still images. For example, if you show one of these systems a photo of a park, it might tell you about the children, adults, pets, picnic spreads, grass, and trees present in the snap. Thanks to MIT’s cavalier approach when assembling its training set, though, these systems may also label women as whores or bitches, and Black and Asian people with derogatory language. The database also contained close-up pictures of female genitalia labeled with the C-word.”

BetaNews: How COVID-19 sparked a revolution in healthcare machine learning and AI. “As with nearly every element of the healthcare system, applications of machine learning and artificial intelligence (AI) have also been transformed by the pandemic. Although the power of machine learning and AI was being put to significant use prior to the Coronavirus outbreak, there is now increased pressure to understand the underlying patterns to help us prepare for any epidemic that might hit the world in the future.”