Search Engine Journal: BERT Explained: What You Need to Know About Google’s New Algorithm

Search Engine Journal: BERT Explained: What You Need to Know About Google’s New Algorithm. “BERT will impact around 10% of queries. It will also impact organic rankings and featured snippets. So this is no small change! But did you know that BERT is not just any algorithmic update, but also a research paper and machine learning natural language processing framework?” A webinar recap but also a deep dive.

ZDNet: Google’s new AI tool could help decode the mysterious algorithms that decide everything

ZDNet: Google’s new AI tool could help decode the mysterious algorithms that decide everything. “While most people come across algorithms every day, not that many can claim that they really understand how AI actually works. A new tool unveiled by Google, however, hopes to help common humans grasp the complexities of machine learning.”

Harvard Business Review: When Algorithms Decide Whose Voices Will Be Heard

Harvard Business Review: When Algorithms Decide Whose Voices Will Be Heard. “Are we giving up our freedom of expression and action in the name of convenience? While we may have the perceived power to express ourselves digitally, our ability to be seen is increasingly governed by algorithms — with lines of codes and logic — programmed by fallible humans. Unfortunately, what dictates and controls the outcomes of such programs is more often than not a black box.”

Search Engine Journal: Yandex’s Artificial Intelligence & Machine Learning Algorithms

Search Engine Journal: Yandex’s Artificial Intelligence & Machine Learning Algorithms. “It’s been a decade since Yandex first introduced machine learning in search with the launch of Matrixnet. The search engine has since gone on to improve its AI and ML capabilities with further updates including Palekh and Korolyov.”

Caltech: Algorithms Seek Out Voter Fraud

Caltech: Algorithms Seek Out Voter Fraud. “Concerns over voter fraud have surged in recent years, especially after federal officials reported that Russian hackers attempted to access voter records in the 2016 presidential election. Administrative voting errors have been reported, too; for example, an audit by state officials revealed that 84,000 voter records were inadvertently duplicated by the California Department of Motor Vehicles (DMV) in the 2018 June primary election. Michael Alvarez, professor of political science at Caltech, and his team are helping with the situation by developing new algorithms for tracking voter data.”

Washington Post: Racial bias in a medical algorithm favors white patients over sicker black patients

Washington Post: Racial bias in a medical algorithm favors white patients over sicker black patients. “A widely used algorithm that predicts which patients will benefit from extra medical care dramatically underestimates the health needs of the sickest black patients, amplifying long-standing racial disparities in medicine, researchers have found.”

ScienceDaily: Boosting the popularity of social media posts

ScienceDaily: Boosting the popularity of social media posts. “Computer scientists created a new algorithm to recommend tags for social media posts which should boost the popularity of the post in question. This algorithm takes into account more kinds of information than previous algorithms with a similar goal. The result is a measurably improved view count for posts which use the tags recommended by this new algorithm.”

TechCrunch: Facebook isn’t free speech, it’s algorithmic amplification optimized for outrage

TechCrunch: Facebook isn’t free speech, it’s algorithmic amplification optimized for outrage. “The problem is that Facebook doesn’t offer free speech; it offers free amplification. No one would much care about anything you posted to Facebook, no matter how false or hateful, if people had to navigate to your particular page to read your rantings, as in the very early days of the site. But what people actually read on Facebook is what’s in their News Feed … and its contents, in turn, are determined not by giving everyone an equal voice, and not by a strict chronological timeline.”

The MIT Press Reader: Algorithms Are Redrawing the Space for Cultural Imagination

The MIT Press Reader: Algorithms Are Redrawing the Space for Cultural Imagination. “…the promised salvation of algorithmic theology stubbornly remains in the distant future: the clunky, disjointed implementation of the computational layer on cultural life leaves much yet to be desired. Algorithms still get it wrong far too often to make a believable case for transcendent truth.”

Artsy: An online image database will remove 600,000 pictures after an art project revealed the system’s racist bias.

Artsy: An online image database will remove 600,000 pictures after an art project revealed the system’s racist bias. “ImageNet, a popular online database of images, will remove 600,000 pictures of people from its system after an art project revealed the depths of the racial biases of the system’s artificial intelligence.”

ZDNet: Google’s public image disconnect: Smart engineers and dumb algorithms

ZDNet: Google’s public image disconnect: Smart engineers and dumb algorithms. “Google looks smart and its people behave smart, but that doesn’t mean its algorithms are smart. Machine learning works well when it comes to images, not language. Google’s dirty little secret is that its algorithms are quite dumb and have trouble understanding what they see and read.”

Ars Technica: WSJ: Amazon changed search results to boost profits despite internal dissent

Ars Technica: WSJ: Amazon changed search results to boost profits despite internal dissent. “The goal was to favor Amazon-made products as well as third-party products that rank high in ‘what the company calls “contribution profit,” considered a better measure of a product’s profitability because it factors in non-fixed expenses such as shipping and advertising, leaving the amount left over to cover Amazon’s fixed costs,’ the WSJ said.”

Ars Technica: Algorithms should have made courts more fair. What went wrong?

Ars Technica: Algorithms should have made courts more fair. What went wrong? “Kentucky lawmakers thought requiring that judges consult an algorithm when deciding whether to hold a defendant in jail before trial would make the state’s justice system cheaper and fairer by setting more people free. That’s not how it turned out.”

SwissInfo: Study finds Big Data eliminates confidentiality in court judgements

SwissInfo: Study finds Big Data eliminates confidentiality in court judgements. “Swiss researchers have found that algorithms that mine large swaths of data can eliminate anonymity in federal court rulings. This could have major ramifications for transparency and privacy protection.”