Harvard Business Review: We Need Transparency in Algorithms, But Too Much Can Backfire

Harvard Business Review: We Need Transparency in Algorithms, But Too Much Can Backfire. “Companies and governments increasingly rely upon algorithms to make decisions that affect people’s lives and livelihoods – from loan approvals, to recruiting, legal sentencing, and college admissions. Less vital decisions, too, are being delegated to machines, from internet search results to product recommendations, dating matches, and what content goes up on our social media feeds. In response, many experts have called for rules and regulations that would make the inner workings of these algorithms transparent. But as Nass’s experience makes clear, transparency can backfire if not implemented carefully. Fortunately, there is a smart way forward.”

VentureBeat: Pymetrics open-sources Audit AI, an algorithm bias detection tool

VentureBeat: Pymetrics open-sources Audit AI, an algorithm bias detection tool. “AI startup Pymetrics today announced it has open-sourced its tool for detecting bias in algorithms. Available for download on GitHub, Audit AI is designed to determine whether a specific statistic or trait fed into an algorithm is being favored or disadvantaged at a statistically significant, systematic rate, leading to adverse impact on people underrepresented in the data set.”
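
Audit AI’s own API isn’t reproduced here, but the check the excerpt describes is a standard one. Below is a minimal Python sketch, assuming nothing beyond scipy, of how a bias-audit tool might test whether one group is disadvantaged at a statistically significant, systematic rate; the data and threshold choices are illustrative only.

```python
import numpy as np
from scipy.stats import chi2_contingency

def adverse_impact(passed_a, total_a, passed_b, total_b, alpha=0.05):
    """Compare selection rates for two groups and flag possible bias."""
    rate_a, rate_b = passed_a / total_a, passed_b / total_b
    # "Four-fifths rule": a selection-rate ratio below 0.8 is a common
    # regulatory red flag for adverse impact.
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    # Chi-squared test on the 2x2 pass/fail table: is the gap systematic
    # rather than noise?
    table = np.array([[passed_a, total_a - passed_a],
                      [passed_b, total_b - passed_b]])
    _, p_value, _, _ = chi2_contingency(table)
    return {"ratio": round(ratio, 3),
            "p_value": round(p_value, 4),
            "flagged": ratio < 0.8 and p_value < alpha}

# Example: group B passes at 30% vs. group A's 45% -- flagged.
print(adverse_impact(passed_a=180, total_a=400, passed_b=120, total_b=400))
```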

TechCrunch: Are algorithms hacking our thoughts?

TechCrunch: Are algorithms hacking our thoughts? “As Facebook shapes our access to information, Twitter dictates public opinion, and Tinder influences our dating decisions, the algorithms we’ve developed to help us navigate choice are now actively driving every aspect of our lives. But as we increasingly rely on them for everything from how we seek out news to how we relate to the people around us, have we automated the way we behave? Is human thinking beginning to mimic algorithmic processes? And is the Cambridge Analytica debacle a warning sign of what’s to come–and of what happens when algorithms hack into our collective thoughts?”

World Wide Web Foundation: How Facebook manages your information diet: Argentina case study

World Wide Web Foundation: How Facebook manages your information diet: Argentina case study. “As more people get online, we are seeing the construction and consolidation of the digital public square. Increasingly, as people spend more time online, this digital public square is becoming where people define and redefine their identities, civic discussions take place, and political organisation leads to tangible shifts in power. As with physical public squares, the architecture and rules that govern the space will determine the power dynamics that will shape our society. With the power to decide what we see and what we don’t, private companies and their algorithms have a tremendous influence over public discourse and the shape of the digital public square. Focusing on Facebook, our new research seeks to better understand the algorithms that manage our daily news diets and what we can do to make sure they work in our best interests.”

TechCrunch: Our “modern” Congress doesn’t understand 21st century technology

TechCrunch: Our “modern” Congress doesn’t understand 21st century technology. “Facebook is a business that sells social connection, its algorithms are made for targeted advertising. The data that we users provide via friends, likes and shares makes their model lucrative. But connecting a person to a pair of shoes cannot be the same engagement algorithm that we use to build a cohesive democratic society. Watch any hearing on Capitol Hill. It’s a durable, if old fashioned bridge between leaders and citizens. Informed deliberation could be a lot more compelling, but it can never compete on the same turf with funny GIFs and targeted videos. Algorithms optimized for commercial engagement do not protect public goods like democratic discourse. They are built for shareholders, not citizens. To the contrary, they can exploit and damage democracy’s most precious resource – civic trust.”

EurekAlert: A research study analyzes the influence of algorithms on online publicity and advertising

EurekAlert: A research study analyzes the influence of algorithms on online publicity and advertising. “When we look for information on the internet, buy online or use social networks we often see ads relating to our likes or profile. To what extent are these ads chosen by the web’s algorithms? A group of researchers are trying to answer this question under the name of «MyBubble», a science project from the Massachusetts Institute of Technology (MIT), Universidad Carlos III de Madrid (UC3M) and IMDEA Networks Institute.”

MIT Technology Review: An ex-Google engineer is scraping YouTube to pop our filter bubbles

MIT Technology Review: An ex-Google engineer is scraping YouTube to pop our filter bubbles. “YouTube—whose more than a billion users watch over a billion hours per day—shows us some data, like how many times a video has been viewed, liked, or disliked. But it hides more granular details about each video, like how often the site recommended it to other people. Without the full picture, it can be hard to know why, exactly, its algorithm is steering you in a certain direction. Guillaume Chaslot, a computer programmer who spent some time working on recommendations at YouTube and on display advertising at its parent company, Google, thinks this is a problem, and he’s fighting to bring more transparency to the ways videos are recommended.”
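
Chaslot’s actual scraper isn’t shown here, but the general shape of such a project is a crawl of the “recommended next” graph. A rough Python sketch follows, where get_recommended_ids is a hypothetical stand-in for whatever fetches a watch page or calls the YouTube Data API:

```python
from collections import Counter, deque

def crawl_recommendations(seed_ids, get_recommended_ids, max_videos=500):
    """Breadth-first walk of the recommendation graph, tallying how often
    each video is surfaced -- the count YouTube itself doesn't publish."""
    seen, queue = set(seed_ids), deque(seed_ids)
    times_recommended = Counter()
    while queue and len(seen) < max_videos:
        video_id = queue.popleft()
        for rec in get_recommended_ids(video_id):  # hypothetical fetcher
            times_recommended[rec] += 1
            if rec not in seen:
                seen.add(rec)
                queue.append(rec)
    # Most-recommended videos first: the granular detail the site hides.
    return times_recommended.most_common()
```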

Quartz: AI experts want government algorithms to be studied like environmental hazards

Quartz: AI experts want government algorithms to be studied like environmental hazards. “Artificial intelligence experts are urging governments to require assessments of AI implementation that mimic the environmental impact reports now required by many jurisdictions. AI Now, a nonprofit founded to study the societal impacts of AI, said an algorithmic impact assessment (AIA) would assure that the public and governments understand the scope, capability, and secondary impacts an algorithm could have, and people could voice concerns if an algorithm was behaving in a biased or unfair way.”

The Guardian: Algorithms have become so powerful we need a robust, Europe-wide response

The Guardian: Algorithms have become so powerful we need a robust, Europe-wide response. “Whenever the nefarious consequences of their profit models are exposed, tech companies essentially reply, ‘don’t regulate us, we’ll improve our behaviour’. But self-regulation is simply not working well enough, especially when we have no way of knowing whether tweaking algorithms makes matters better or worse. Opaque algorithms in effect challenge the checks and balances essential for liberal democracies and market economies to function. As the EU builds a digital single market, it needs to ensure that market is anchored in democratic principles. Yet the software codes that determine which link shows up first, second, third and onwards, remain protected by intellectual property rights as ‘trade secrets’.”

Contexts: the algorithmic rise of the “alt-right”

Contexts: the algorithmic rise of the “alt-right”. “There are two strands of conventional wisdom unfolding in popular accounts of the rise of the alt-right. One says that what’s really happening can be attributed to a crisis in White identity: the alt-right is simply a manifestation of the angry White male who has status anxiety about his declining social power. Others contend that the alt-right is an unfortunate eddy in the vast ocean of Internet culture. Related to this is the idea that polarization, exacerbated by filter bubbles, has facilitated the spread of Internet memes and fake news promulgated by the alt-right. While the first explanation tends to ignore the influence of the Internet, the second dismisses the importance of White nationalism. I contend that we have to understand both at the same time.”

Techdirt: Crowdfunded OpenSCHUFA Project Wants To Reverse-Engineer Germany’s Main Credit-Scoring Algorithm

Techdirt: Crowdfunded OpenSCHUFA Project Wants To Reverse-Engineer Germany’s Main Credit-Scoring Algorithm. “As well as asking people for monetary support, OpenSCHUFA wants German citizens to request a copy of their credit record, which they can obtain free of charge from SCHUFA. People can then send the main results — not the full record, and with identifiers removed — to OpenSCHUFA. The project will use the data to try to understand what real-life variables produce good and bad credit scores when fed into the SCHUFA system. Ultimately, the hope is that it will be possible to model, perhaps even reverse-engineer, the underlying algorithm.”
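
The “model, perhaps even reverse-engineer” step admits a simple illustration. Below is a toy Python sketch of fitting an interpretable model to volunteered (attributes, score) pairs; the feature names and synthetic data are hypothetical, since SCHUFA’s real inputs are exactly what’s unknown.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Stand-in data: columns might be age, number of accounts, past defaults.
X = rng.normal(size=(200, 3))
# Synthetic "scores" with a hidden structure the fit should recover.
scores = 600 + 30 * X[:, 0] - 80 * X[:, 2] + rng.normal(scale=5, size=200)

model = LinearRegression().fit(X, scores)
for name, coef in zip(["age", "n_accounts", "past_defaults"], model.coef_):
    print(f"{name}: {coef:+.1f}")   # large |coef| => the variable matters
print("fit quality (R^2):", round(model.score(X, scores), 3))
```

A high R² on real submissions would suggest the score is close to a simple weighted sum of the collected variables; a poor fit would point to nonlinear rules or inputs the project hasn’t captured.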

MediaPost: EU May Require Search Engines To Reveal Ranking Factors

MediaPost: EU May Require Search Engines To Reveal Ranking Factors. “Search engines could be forced to reveal their ranking formulas, often viewed as the secret intellectual property behind their business model. The European Commission (EC) has proposed new rules that could require search engines, commerce sites and online platforms to explain how they rank results. In addition, they want these companies to reveal why they penalize or remove content on their sites from search results.”

New York Times: YouTube, the Great Radicalizer

New York Times: YouTube, the Great Radicalizer. “At one point during the 2016 presidential election campaign, I watched a bunch of videos of Donald Trump rallies on YouTube. I was writing an article about his appeal to his voter base and wanted to confirm a few quotations. Soon I noticed something peculiar. YouTube started to recommend and ‘autoplay’ videos for me that featured white supremacist rants, Holocaust denials and other disturbing content.”

Fast Company: Algorithms Are Creating A “Digital Poorhouse” That Makes Inequality Worse

Fast Company: Algorithms Are Creating A “Digital Poorhouse” That Makes Inequality Worse. “In Los Angeles, an algorithm helps decide who–out of 58,000 homeless people–gets access to a small amount of available housing. In Indiana, the state used a computer system to flag any mistake on an application for food stamps, healthcare, or cash benefits as a ‘failure to cooperate;’ 1 million people lost benefits. In Pittsburgh, a child protection agency is using an algorithm to try to predict future child abuse, despite the algorithm’s problems with accuracy.”

Select All: It’s Time to End ‘Trending’

Select All: It’s Time to End ‘Trending’. “What does it mean, exactly, for something to be ‘trending’? YouTube, Facebook, and Twitter all make frequent use of the term, but none of them have a public or transparent definition — let alone a common one. When we sort through our feeds, ‘latest’ has an obvious chronological sorting mechanism; even ‘popular’ has a fairly clear and agreed-upon definition. ‘Trending,’ however, does not.”
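
The point generalizes into code. In the toy Python example below (made-up posts and numbers), “popular” has one natural sort key, while two equally defensible “trending” scores rank the same two posts in opposite orders:

```python
posts = [  # (name, views_last_hour, views_prev_hour, total_views)
    ("old_hit", 1000, 1000, 500000),
    ("riser",    900,  100,   2000),
]

# "Popular" has an agreed-upon key: total views.
popular   = sorted(posts, key=lambda p: p[3], reverse=True)
# "Trending" does not. Growth rate and raw recent volume both sound right,
# yet they disagree about which post is trending.
by_growth = sorted(posts, key=lambda p: p[1] - p[2], reverse=True)
by_recent = sorted(posts, key=lambda p: p[1], reverse=True)

print([p[0] for p in popular])    # ['old_hit', 'riser']
print([p[0] for p in by_growth])  # ['riser', 'old_hit']
print([p[0] for p in by_recent])  # ['old_hit', 'riser']
```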