University at Buffalo: Tool ‘teaches’ computers to correctly annotate medical images. “… because machine learning is so complex, medical professionals typically rely on computer engineers to ‘train’ or modify neural networks to properly annotate or interpret medical images. Now, UB researchers have developed a tool that lets medical professionals analyze images without engineering expertise. The tool and the image data that were used for its development are publicly available online.”
I’m not sure how useful this is, but it’s fascinating. Futurism: This Site Uses Deep Learning to Generate Fake Airbnb Listings. “A new website called This Airbnb Does Not Exist uses machine learning to whip up plausible-yet-slightly-incoherent apartment listings — from a description to ersatz photos of the interior. The site’s creator, Christopher Schmidt, was inspired by This Person Does Not Exist, another recent viral site that uses a neural network to generate photos of nonexistent people. Schmidt trained This Airbnb Does Not Exist’s image generator using a dataset of apartment interiors and its text generator using actual Airbnb listings. The result: fully furnished figments of the digital imagination.” Also gloriously weird.
The Sociable: ‘We paid little attention to vulnerabilities in machine learning platforms’: DARPA. “Dr. Hava Siegelmann, program manager in the Defense Advanced Research Projects Agency’s (DARPA) Information Innovation Office (I2O), introduced the Guaranteeing AI Robustness against Deception (GARD) program earlier this month to address vulnerabilities in machine learning (ML) platforms and to develop a new generation of defenses against adversarial deception attacks on ML models.”
EdScoop: Google’s first machine learning program launches at Mills College. “Over the next ten weeks, the Applied Machine Learning Intensive — a boot camp-like course with a project-based curriculum — will expose students to the fundamentals of machine learning and related computer science fields. Twenty students who have been accepted into the course will work with industry experts to understand data and apply it to real-world problems.”
University of Iowa: Measurement and Early Detection of Third-Party Application Abuse on Twitter. This is a PDF. “Third-party applications present a convenient way for attackers to orchestrate a large number of fake and compromised accounts on popular online social networks. Despite recent high-profile reports of third-party application abuse on Twitter, Facebook, and Google, prior work lacks automated approaches for accurate and early detection of abusive applications. In this paper, we perform a longitudinal study of abusive third-party applications on Twitter that perform a variety of malicious and spam activities in violation of Twitter’s terms of service. Our measurements over a period of 16 months demonstrate an ongoing arms race between attackers continuously registering and abusing new applications and Twitter trying to detect them. We find that hundreds of thousands of abusive applications remain undetected by Twitter for several months while posting tens of millions of tweets. To this end, we propose a machine learning approach for accurate and early detection of abusive Twitter applications by analyzing their first few tweets.”
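The paper’s idea of classifying an application from its first few tweets can be sketched in miniature. This is not the authors’ actual model — their features and classifier are described in the PDF — just a hypothetical illustration using two simple spam signals (link density and near-duplicate text) with made-up weights and threshold:

```python
# Illustrative sketch only (not the paper's method): score a third-party
# application's first few tweets with two simple spam signals and flag the
# app if the combined score crosses a threshold. The features, weights,
# and threshold below are all hypothetical.

def spam_score(tweets):
    """tweets: list of tweet texts posted via one third-party application."""
    if not tweets:
        return 0.0
    # Fraction of tweets containing a link -- spam apps tend to push URLs.
    url_ratio = sum("http" in t for t in tweets) / len(tweets)
    # Fraction of tweets that are exact duplicates of an earlier tweet.
    dup_ratio = 1 - len(set(tweets)) / len(tweets)
    return 0.6 * url_ratio + 0.4 * dup_ratio

def looks_abusive(tweets, threshold=0.5):
    """Flag an application based on its first few tweets."""
    return spam_score(tweets) >= threshold

# Repetitive, link-heavy first tweets score high; varied organic tweets do not.
spammy = ["Win a prize! http://x.example"] * 4 + ["Win prizes! http://x.example"]
organic = ["good morning", "at the game", "new blog post", "lunch again"]
```

A real detector would learn its weights from labeled data rather than hard-coding them; the point here is only the shape of the approach: extract per-application features from the earliest tweets, then score before the app has done months of damage.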
CNET: WhatsApp boots 2M accounts a month to fight misinformation. “Ahead of India’s national elections later this year, WhatsApp is trying to wrangle bulk messaging and fake accounts. Over the last three months, the Facebook-owned messaging service has banned more than 2 million accounts each month for bulk or automated behavior.”
MIT Technology Review: We analyzed 16,625 papers to figure out where AI is headed next. “…though deep learning has singlehandedly thrust AI into the public eye, it represents just a small blip in the history of humanity’s quest to replicate our own intelligence. It’s been at the forefront of that effort for less than 10 years. When you zoom out on the whole history of the field, it’s easy to realize that it could soon be on its way out.”