Economic Times: Microsoft building tool to spot bias in artificial intelligence algorithms. “After Facebook announced its own tool to detect bias in an algorithm earlier this month, a new report suggests that Microsoft is also building a tool to automate the identification of bias in a range of different Artificial Intelligence (AI) algorithms.”

Motherboard: A Startup Media Site Says AI Can Take Bias Out of News. “The artificial intelligence boom has expanded into creative fields once deemed uniquely human, like music, poetry, and even narrative podcasts. AI has also started writing rudimentary news articles and assisting reporters, but a new startup launched Wednesday says it will use AI to publish breaking news about a wide variety of topics. The site is called ‘Knowhere,’ and its creators say that they believe AI can be used to write unbiased news. The site will publish three versions of every article, aggregated from right-, left-, and center-leaning websites.”

Harvard Business Review: Do Academic Journals Favor Researchers from Their Own Institutions? “Are academic journals impartial? While many would suggest that academic journals work for the advancement of knowledge and science, we show this is not always the case. In a recent study, we find that two international relations (IR) journals favor articles written by authors who share the journal’s institutional affiliation. We term this phenomenon ‘academic in-group bias.’”

Inverse: How to Create Socially Responsible Algorithms, According to AI Institute. “AI Now’s report, Algorithmic Impact Assessments: Toward Accountable Automation in Public Agencies, outlines the need for transparency when it comes to deploying algorithms. Algorithms have a huge impact on our daily lives, but their impact sometimes goes unnoticed. Because they are baked into the infrastructure of social media and video platforms, for example, it’s easy to forget that programs often determine what content is pushed to internet users. It’s only when something goes wrong, like a conspiracy theory video reaching the top of YouTube’s trending list, that we scrutinize the automated decision procedures that shape online experiences.”

Digital Trends: Facial recognition has a race problem — here’s how Gfycat is fixing that. “A couple years back, Google was embarrassed when its algorithms incorrectly labeled a black couple as ‘gorillas.’ Unable to stop its image-recognition algorithms from working in this way, Google recently ‘fixed’ them by removing ‘gorilla’ as a classification altogether. The maker of smart GIF search engine Gfycat recently took on a similar problem — but unlike Google, it did it in a way that didn’t just remove the image identification feature to pretend there was no problem.”

Ars Technica: Is “Big Data” racist? Why policing by data isn’t necessarily objective. “Algorithmic technologies that aid law enforcement in targeting crime must compete with a host of very human questions. What data goes into the computer model? After all, the inputs determine the outputs. How much data must go into the model? The choice of sample size can alter the outcome. How do you account for cultural differences? Sometimes algorithms try to smooth out the anomalies in the data—anomalies that can correspond with minority populations. How do you address the complexity in the data or the ‘noise’ that results from imperfect results? The choices made to create an algorithm can radically impact the model’s usefulness or reliability. To examine the problem of algorithmic design, imagine that police in Cincinnati, Ohio, have a problem with the Bloods gang—a national criminal gang, originating out of Los Angeles, that signifies membership by wearing the color red.”