Ubergizmo: China Announces Plans To Regulate Algorithms Tech Companies Use

Ubergizmo: China Announces Plans To Regulate Algorithms Tech Companies Use. “A lot of technology today relies on algorithms. We can see this in social media where posts from people we interact with more tend to be shown at the top. This is under the assumption that since we interact with it, we want to see more of it. Then we also see how algorithms are used to help display relevant ads while shopping. All of this is designed in a way to get us to spend more time or more money on a platform, but apparently that’s something China’s government doesn’t want. So much so that the Cyberspace Administration of China has announced that in the next three years, they want to set up governance rules for algorithms that tech companies use to attract users.”

MIT News: How quickly do algorithms improve?

MIT News: How quickly do algorithms improve?. “In total, the team looked at 113 ‘algorithm families,’ sets of algorithms solving the same problem that had been highlighted as most important by computer science textbooks. For each of the 113, the team reconstructed its history, tracking each time a new algorithm was proposed for the problem and making special note of those that were more efficient. Ranging in performance and separated by decades, starting from the 1940s to now, the team found an average of eight algorithms per family, of which a couple improved its efficiency. To share this assembled database of knowledge, the team also created Algorithm-Wiki.org.”

Science Friday: How Imperfect Data Leads Us Astray

Science Friday: How Imperfect Data Leads Us Astray. “Datasets are increasingly shaping important decisions, from where companies target their advertising, to how governments allocate resources. But what happens when the data they rely on is wrong or incomplete? Ira talks to technologist Kasia Chmielinski, as they test drive an algorithm that predicts a person’s race or ethnicity based on just a few details, like their name and zip code, the Bayesian Improved Surname Geocoding algorithm (BISG). The BISG is frequently used by government agencies and corporations alike to fill in missing race and ethnicity data—except it often guesses wrong, with potentially far-reaching effects.” A podcast with transcript available.
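At its core, BISG combines surname-based race/ethnicity probabilities with the demographics of a person’s geographic area using Bayes’ rule. A minimal sketch of that combination, using hypothetical probabilities rather than real Census data (the category labels and numbers here are illustrative only, not the actual BISG inputs):

```python
# Illustrative Bayes-rule combination in the spirit of BISG.
# All probabilities below are hypothetical, NOT real Census figures.

def bisg_posterior(p_race_given_surname, p_race_given_geo, p_race_overall):
    # P(race | surname, geo) is proportional to
    # P(race | surname) * P(race | geo) / P(race),
    # assuming surname and geography are independent given race.
    unnormalized = {
        race: p_race_given_surname[race] * p_race_given_geo[race] / p_race_overall[race]
        for race in p_race_given_surname
    }
    total = sum(unnormalized.values())
    # Normalize so the posterior probabilities sum to 1.
    return {race: value / total for race, value in unnormalized.items()}

posterior = bisg_posterior(
    {"group_a": 0.7, "group_b": 0.3},  # surname-based probabilities (hypothetical)
    {"group_a": 0.2, "group_b": 0.8},  # zip-code demographics (hypothetical)
    {"group_a": 0.5, "group_b": 0.5},  # national base rates (hypothetical)
)
```

The sketch also hints at why the algorithm “often guesses wrong”: when a surname or zip code is weakly informative, the posterior simply echoes the local demographic mix, and individuals who don’t match their area’s majority are systematically misclassified.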

Carnegie Mellon University: Machine Learning Algorithm Revolutionizes How Scientists Study Behavior

Carnegie Mellon University: Machine Learning Algorithm Revolutionizes How Scientists Study Behavior. “As a behavioral neuroscientist, Yttri studies what happens in the brain when animals walk, eat, sniff or do any action. This kind of research could help answer questions about neurological diseases or disorders like Parkinson’s disease or stroke. But identifying and predicting animal behavior is extremely difficult. Now, a new unsupervised machine learning algorithm developed by [Professor Eric] Yttri and Alex Hsu, a biological sciences Ph.D. candidate in his lab, makes studying behavior much easier and more accurate.”

Quanta Magazine: Computer Scientists Discover Limits of Major Research Algorithm

Quanta Magazine: Computer Scientists Discover Limits of Major Research Algorithm. “Many aspects of modern applied research rely on a crucial algorithm called gradient descent. This is a procedure generally used for finding the largest or smallest values of a particular mathematical function — a process known as optimizing the function. It can be used to calculate anything from the most profitable way to manufacture a product to the best way to assign shifts to workers. Yet despite this widespread usefulness, researchers have never fully understood which situations the algorithm struggles with most.”
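For readers unfamiliar with the procedure the article describes, gradient descent optimizes a function by repeatedly stepping in the direction opposite its gradient. A minimal sketch (the function, learning rate, and step count here are illustrative choices, not from the article):

```python
# Minimal gradient descent sketch: minimize f(x) = (x - 3)^2,
# whose gradient is f'(x) = 2 * (x - 3).

def gradient_descent(grad, x0, learning_rate=0.1, steps=100):
    x = x0
    for _ in range(steps):
        # Step against the gradient to move toward a minimum.
        x -= learning_rate * grad(x)
    return x

minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
# Converges toward x = 3, the function's minimum.
```

The difficulty the researchers probed is implicit even in this toy: the procedure only follows local slope information, so its behavior on functions far more complicated than this one is what remained poorly understood.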

The Verge: Facebook shut down German research on Instagram algorithm, researchers say

The Verge: Facebook shut down German research on Instagram algorithm, researchers say. “Researchers at AlgorithmWatch say they were forced to abandon their research project monitoring the Instagram algorithm after legal threats from Facebook. The Berlin-based project went public with the conflict in a post published Friday morning, citing the platform’s recent ban of the NYU Ad Observatory.”

GCN: Outside reviews can limit bias in forensic algorithms, GAO says

GCN: Outside reviews can limit bias in forensic algorithms, GAO says. “While technology can curtail subjective decisions and reduce the time it takes analysts to reach conclusions, it comes with its own set of challenges. In a follow-up to a May 2020 report on how forensic algorithms work, the Government Accountability Office outlined the key challenges affecting the use of these algorithms and the associated social and ethical implications.”

Bloomberg: Fired by bot at Amazon: ‘It’s you against the machine’

Bloomberg: Fired by bot at Amazon: ‘It’s you against the machine’. “Bloomberg interviewed 15 Flex drivers, including four who say they were wrongly terminated, as well as former Amazon managers who say the largely automated system is insufficiently attuned to the real-world challenges drivers face every day. Amazon knew delegating work to machines would lead to mistakes and damaging headlines, these former managers said, but decided it was cheaper to trust the algorithms than pay people to investigate mistaken firings so long as the drivers could be replaced easily.”

FedScoop: USPTO chief information officer most excited about new search algorithms

FedScoop: USPTO chief information officer most excited about new search algorithms. “New search algorithms for relevant prior art most excite the U.S. Patent and Trademark Office’s CIO right now. USPTO created the machine-learning algorithms to increase the speed at which patents are examined by importing relevant prior art — all information on its claim of originality — into pending applications sent to art units, said Jamie Holcombe.”

UPI: Report: Instagram’s algorithm pushes certain users to COVID-19 misinformation

UPI: Report: Instagram’s algorithm pushes certain users to COVID-19 misinformation. “Instagram’s algorithm recommended new users following COVID-19 misinformation to more of the same amid the pandemic, a report said Tuesday. The Center For Countering Digital Hate, a nonprofit company with offices in Britain and Washington, D.C., founded in 2018 by Imran Ahmed, published the report, on Tuesday, titled ‘Malgorithm.’”

Stanford: Algorithmic approaches for assessing pollution reduction policies can reveal shifts in environmental protection of minority communities, according to Stanford researchers

Stanford: Algorithmic approaches for assessing pollution reduction policies can reveal shifts in environmental protection of minority communities, according to Stanford researchers. “Applying machine learning to a U.S. Environmental Protection Agency initiative reveals how key design elements determine what communities bear the burden of pollution. The approach could help ensure fairness and accountability in machine learning used by government regulators.”

Chicago Booth Review: Law and order and data

Chicago Booth Review: Law and order and data. “Algorithms are already being used in criminal-justice applications in many places, helping decide where police departments should send officers for patrol, as well as which defendants should be released on bail and how judges should hand out sentences. Research is exploring the potential benefits and dangers of these tools, highlighting where they can go wrong and how they can be prevented from becoming a new source of inequality. The findings of these studies prompt some important questions such as: Should artificial intelligence play some role in policing and the courts? If so, what role should it play? The answers, it appears, depend in large part on small details.”

New York Times: Where Do Vaccine Doses Go, and Who Gets Them? The Algorithms Decide

New York Times: Where Do Vaccine Doses Go, and Who Gets Them? The Algorithms Decide. “The algorithms are intended to speed Covid-19 shots from pharmaceutical plants to people’s arms. The formulas generally follow guidelines from the Centers for Disease Control and Prevention recommending that frontline health care workers, nursing home residents, senior citizens and those with major health risks be given priority for the vaccines. Yet federal agencies, states, local health departments and medical centers have each developed different allocation formulas, based on a variety of ethical and political considerations. The result: Americans are experiencing wide disparities in vaccine access.”

HealthImaging: New database of FDA-cleared algorithms helps radiologists quickly navigate complex AI environment

HealthImaging: New database of FDA-cleared algorithms helps radiologists quickly navigate complex AI environment. “The American College of Radiology on Monday announced a new, searchable database of federally cleared algorithms to help radiologists navigate the complex artificial intelligence environment. The ACR Data Science Institute’s catalog includes 111 class 2 medical imaging AI algorithms cleared by the U.S. Food & Drug Administration. Radiologists can search for tools according to company, subspeciality, body area, modality, and clearance date to find what may best fit their clinical needs.”