EurekAlert: A research study analyzes the influence of algorithms on online publicity and advertising

EurekAlert: A research study analyzes the influence of algorithms on online publicity and advertising. “When we look for information on the internet, buy online or use social networks we often see ads relating to our likes or profile. To what extent are these ads chosen by the web’s algorithms? A group of researchers are trying to answer this question under the name of «MyBubble», a science project from the Massachusetts Institute of Technology (MIT), Universidad Carlos III de Madrid (UC3M) and IMDEA Networks Institute.”

Help Net Security: Researchers develop algorithm to detect fake users on social networks

Help Net Security: Researchers develop algorithm to detect fake users on social networks. “Ben-Gurion University of the Negev and University of Washington researchers have developed a new generic method to detect fake accounts on most types of social networks, including Facebook and Twitter.”
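The article doesn’t detail how the method works, but fake-account detectors of this kind typically exploit graph structure: genuine users mostly befriend people who already share mutual friends, while fake accounts send connection requests more or less at random. A minimal sketch of that intuition (toy data, and a deliberately simple neighborhood-overlap score, not the researchers’ actual algorithm):

```python
def jaccard(a, b):
    """Jaccard similarity between two neighbor sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def suspicion_scores(graph):
    """For each node, average the Jaccard overlap between its
    neighborhood and each neighbor's neighborhood. Accounts that
    connect at random share few mutual friends with their contacts,
    so they score low. Returns {node: score}; lower = more suspicious."""
    scores = {}
    for node, nbrs in graph.items():
        if not nbrs:
            scores[node] = 0.0
            continue
        sims = [jaccard(nbrs - {n}, graph[n] - {node}) for n in nbrs]
        scores[node] = sum(sims) / len(sims)
    return scores

# Toy undirected network: two tight friend clusters (a,b,c) and
# (d,e,f), plus a "fake" account that bridges them with no mutual
# friends on either side.
graph = {
    "a": {"b", "c", "fake"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"e", "f", "fake"},
    "e": {"d", "f"},
    "f": {"d", "e"},
    "fake": {"a", "d"},
}

scores = suspicion_scores(graph)
print(min(scores, key=scores.get))  # → fake
```

Real systems add many more signals (account age, posting behavior, request timing), but the neighborhood-overlap idea is the core of most graph-based approaches.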

Techdirt: Again, Algorithms Suck At Determining ‘Bad’ Content, Often To Hilarious Degrees

Techdirt: Again, Algorithms Suck At Determining ‘Bad’ Content, Often To Hilarious Degrees. Warning: there is a swear word in this quote. “A few weeks back, Mike wrote a post detailing how absolutely shitty algorithms can be at determining what is ‘bad’ or ‘offensive’ or otherwise ‘undesirable’ content. While his post detailed failings in algorithms judging such weighty content as war-crime investigations versus terrorist propaganda, and Nazi hate-speech versus legitimate news reporting, the central thesis in all of this is that relying on platforms to host our speech and content when those platforms employ very, very imperfect algorithms as gatekeepers is a terrible idea. And it leads to undesirable outcomes at levels far below those of Nazis and terrorism.”

Quartz: AI experts want government algorithms to be studied like environmental hazards

Quartz: AI experts want government algorithms to be studied like environmental hazards. “Artificial intelligence experts are urging governments to require assessments of AI implementation that mimic the environmental impact reports now required by many jurisdictions. AI Now, a nonprofit founded to study the societal impacts of AI, said an algorithmic impact assessment (AIA) would assure that the public and governments understand the scope, capability, and secondary impacts an algorithm could have, and people could voice concerns if an algorithm was behaving in a biased or unfair way.”

The Guardian: Algorithms have become so powerful we need a robust, Europe-wide response

The Guardian: Algorithms have become so powerful we need a robust, Europe-wide response. “Whenever the nefarious consequences of their profit models are exposed, tech companies essentially reply, ‘don’t regulate us, we’ll improve our behaviour’. But self-regulation is simply not working well enough, especially when we have no way of knowing whether tweaking algorithms makes matters better or worse. Opaque algorithms in effect challenge the checks and balances essential for liberal democracies and market economies to function. As the EU builds a digital single market, it needs to ensure that market is anchored in democratic principles. Yet the software codes that determine which link shows up first, second, third and onwards, remain protected by intellectual property rights as ‘trade secrets’.”

Contexts: the algorithmic rise of the “alt-right”

Contexts: the algorithmic rise of the “alt-right”. “There are two strands of conventional wisdom unfolding in popular accounts of the rise of the alt-right. One says that what’s really happening can be attributed to a crisis in White identity: the alt-right is simply a manifestation of the angry White male who has status anxiety about his declining social power. Others contend that the alt-right is an unfortunate eddy in the vast ocean of Internet culture. Related to this is the idea that polarization, exacerbated by filter bubbles, has facilitated the spread of Internet memes and fake news promulgated by the alt-right. While the first explanation tends to ignore the influence of the Internet, the second dismisses the importance of White nationalism. I contend that we have to understand both at the same time.”

Techdirt: Crowdfunded OpenSCHUFA Project Wants To Reverse-Engineer Germany’s Main Credit-Scoring Algorithm

Techdirt: Crowdfunded OpenSCHUFA Project Wants To Reverse-Engineer Germany’s Main Credit-Scoring Algorithm. “As well as asking people for monetary support, OpenSCHUFA wants German citizens to request a copy of their credit record, which they can obtain free of charge from SCHUFA. People can then send the main results — not the full record, and with identifiers removed — to OpenSCHUFA. The project will use the data to try to understand what real-life variables produce good and bad credit scores when fed into the SCHUFA system. Ultimately, the hope is that it will be possible to model, perhaps even reverse-engineer, the underlying algorithm.”
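The modeling step the project describes — relating real-life variables to the scores people report — amounts to fitting an interpretable model to crowdsourced (attributes, score) pairs and reading off which inputs move the score. A minimal sketch of that idea, with entirely invented feature names and data (the real SCHUFA inputs are not public):

```python
def fit_linear_model(rows, targets, lr=0.01, epochs=50000):
    """Fit score ~ w.x + b by gradient descent on mean squared error.
    The learned weights suggest how each (hypothetical) input variable
    moves the predicted score."""
    n_feat = len(rows[0])
    w = [0.0] * n_feat
    b = 0.0
    n = len(rows)
    for _ in range(epochs):
        grad_w = [0.0] * n_feat
        grad_b = 0.0
        for x, y in zip(rows, targets):
            err = sum(wi * xi for wi, xi in zip(w, x)) + b - y
            for j in range(n_feat):
                grad_w[j] += err * x[j]
            grad_b += err
        for j in range(n_feat):
            w[j] -= lr * grad_w[j] / n
        b -= lr * grad_b / n
    return w, b

# Hypothetical crowdsourced records: (years_of_credit_history,
# missed_payments) -> reported score. The "true" relationship here,
# score = 70 + 3*history - 10*missed, is invented for the demo.
records = [[5, 0], [2, 1], [10, 0], [1, 3], [7, 2], [3, 0]]
scores = [70 + 3 * h - 10 * m for h, m in records]

w, b = fit_linear_model(records, scores)
print([round(x, 1) for x in w], round(b, 1))  # → [3.0, -10.0] 70.0
```

With only anonymized partial records, the recovered weights would be noisy approximations at best, which is why the project frames the goal as modeling the algorithm rather than reproducing it exactly.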