VentureBeat: Jigsaw’s Assembler helps media organizations spot deepfake images

VentureBeat: Jigsaw’s Assembler helps media organizations spot deepfake images. “In an age of pervasive deepfakes, how can anyone know if an image they’re viewing is an AI-generated fabrication? Jigsaw — the organization working under Google parent company Alphabet to tackle cyberbullying, censorship, disinformation, and other digital issues — is prototyping a tool called Assembler to address this concern. Jigsaw CEO Jared Cohen revealed in a blog post that the tool is being piloted with media organizations to help fact-checkers and journalists pinpoint and analyze manipulated media.”

EurekAlert: New research uses physiological cues to distinguish computer-generated faces from human ones

EurekAlert: New research uses physiological cues to distinguish computer-generated faces from human ones. “‘Digital human face detection in video sequences via a physiological signal analysis,’ a paper published today in the Journal of Electronic Imaging (JEI), presents a viable, innovative way to discern between natural humans (NAT) and CG faces within the context of multimedia forensics, using individuals’ heart rate as the discriminating feature.”
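
The excerpt doesn't spell out the paper's pipeline, so here is a minimal sketch of the general idea, assuming a photoplethysmography-style signal: track the average green-channel intensity of the detected face across frames and check whether a plausible pulse frequency (roughly 0.7-4 Hz) dominates the spectrum. The video path, face detector, and 0.3 cutoff are placeholders, not values from the paper.

```python
# Sketch: does a face video contain a plausible pulse signal?
# A natural face should show a spectral peak in the heart-rate band;
# a purely computer-generated face typically will not.
import cv2
import numpy as np

def pulse_band_ratio(video_path, heart_band=(0.7, 4.0)):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    samples = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        faces = detector.detectMultiScale(
            cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 1.3, 5)
        if len(faces) == 0:
            continue  # frames without a detected face are skipped in this rough sketch
        x, y, w, h = faces[0]
        # Mean green-channel intensity of the face region; the green channel
        # carries most of the subtle color change caused by blood flow.
        samples.append(frame[y:y + h, x:x + w, 1].mean())
    cap.release()
    sig = np.asarray(samples, dtype=np.float64)
    if sig.size < int(fps * 5):  # need at least a few seconds of face frames
        return 0.0
    sig -= sig.mean()
    power = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fps)
    band = (freqs >= heart_band[0]) & (freqs <= heart_band[1])
    return power[band].sum() / (power.sum() + 1e-12)

if __name__ == "__main__":
    ratio = pulse_band_ratio("face_clip.mp4")  # placeholder file name
    # 0.3 is an arbitrary illustrative cutoff, not a calibrated threshold.
    print("likely natural" if ratio > 0.3 else "possibly computer-generated")
```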

Phys.org: Researchers identify seven types of fake news, aiding better detection

Phys.org: Researchers identify seven types of fake news, aiding better detection. “To help people spot fake news, or create technology that can automatically detect misleading content, scholars first need to know exactly what fake news is, according to a team of Penn State researchers. However, they add, that’s not as simple as it sounds.”

Engadget: Adobe, Twitter and the New York Times team up to fight digital fakes

Engadget: Adobe, Twitter and the New York Times team up to fight digital fakes. “Adobe, Twitter and the New York Times are tired of seeing fake media propagate, and they’re teaming up to do something about it. The trio has launched a Content Authenticity Initiative that aims to create a standard for digital media attribution. Ideally, you’d know whether or not a picture or video is legitimate simply by examining the file — you’d know if it had been manipulated.”
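
The initiative’s actual specification isn’t described in the excerpt, so the sketch below only illustrates the general mechanism that file-level attribution can rest on: the publisher signs a hash of the media bytes, and anyone with the publisher’s public key can check that the file hasn’t been altered since signing. Key distribution and how the signature is embedded in the file are left out; the file name is a placeholder.

```python
# Sketch: sign a content hash at publication time, verify it later.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_media(path, private_key):
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    return private_key.sign(digest)  # signature travels alongside the file

def verify_media(path, public_key, signature):
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    try:
        public_key.verify(signature, digest)
        return True   # bytes match what the publisher signed
    except InvalidSignature:
        return False  # file was edited after signing

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()  # stands in for a publisher's key pair
    sig = sign_media("photo.jpg", key)
    print(verify_media("photo.jpg", key.public_key(), sig))
```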

Poynter: Meet Forensia, a software ready to debunk fake WhatsApp audio files

Poynter: Meet Forensia, a software ready to debunk fake WhatsApp audio files. “Fact-checkers usually roll their eyes when they need to verify an audio file extracted from WhatsApp. They know it’s a time-consuming task and there is a lack of tools to help them reach a verdict about the voice they hear. This scenario, however, has just changed. Forensia is up and running in Buenos Aires, and ready to work in Saxon and Romance languages — but not for free.”

Reuters: Facebook, Microsoft launch contest to detect deepfake videos

Reuters: Facebook, Microsoft launch contest to detect deepfake videos. “Facebook Inc is teaming up with Microsoft Corp, the Partnership on AI coalition and academics from several universities to launch a contest to better detect deepfakes, the company said in a blog post here on Thursday.”

Bloomberg: U.S. Unleashes Military to Fight Fake News, Disinformation

Bloomberg: U.S. Unleashes Military to Fight Fake News, Disinformation. “The Defense Advanced Research Projects Agency wants custom software that can unearth fakes hidden among more than 500,000 stories, photos, video and audio clips. If successful, the system after four years of trials may expand to detect malicious intent and prevent viral fake news from polarizing society.”

CNET: Researchers propose detecting deepfakes with surprising new tool: Mice

CNET: Researchers propose detecting deepfakes with surprising new tool: Mice. “Decades after Terminator’s Skynet first taught us to fear the apocalyptic potential of artificial intelligence, deepfakes represent a less deadly but very real threat from AI. Some researchers are now using a surprising and definitively analog tool to detect AI-manipulated audio: mice.”

CNET: Adobe AI can spot facial manipulations in Photoshop

CNET: Adobe AI can spot facial manipulations in Photoshop. “In a world filled with manipulated photos, deepfakes and even totally fake human faces, Adobe says it’s working on an artificial intelligence tool to spot fake images. Citing ‘the ethical implications’ of Photoshop, Adobe partnered with researchers from the University of California at Berkeley to work on the issue.”

CNET: New tool debunks deepfakes of Trump and other world leaders

CNET: New tool debunks deepfakes of Trump and other world leaders. “Deepfakes of world leaders may be easier to debunk using a new detection method, according to an academic paper Wednesday. Researchers created profiles of the unique expressions and head movements made by powerful people — like Donald Trump, Hillary Clinton, Barack Obama and US presidential hopeful Elizabeth Warren — when they talk. That ‘soft biometric model’ helped detect a range of deepfakes, the kind of manipulated videos powered by artificial intelligence that have sprung up lately featuring Mark Zuckerberg and others.”
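
The article only names the ingredients of the “soft biometric model” (per-person profiles of expressions and head movements), so the sketch below is an assumed reconstruction of that kind of check: summarize each clip by the correlations among its facial-motion features, fit a one-class model on authentic footage of the person, and flag clips that fall outside the profile. The feature extraction step (e.g. facial action units from a face tracker) is assumed to have happened already; `real_clips` and `test_clip` are hypothetical arrays of shape (frames, features).

```python
# Sketch: per-person "motion signature" profiling with a one-class SVM.
import numpy as np
from sklearn.svm import OneClassSVM

def clip_signature(clip):
    # Upper triangle of the feature-correlation matrix as a fixed-size vector.
    corr = np.corrcoef(clip.T)
    return corr[np.triu_indices_from(corr, k=1)]

def fit_person_profile(real_clips):
    # Learn the range of motion signatures seen in authentic clips of one person.
    X = np.stack([clip_signature(c) for c in real_clips])
    return OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X)

def looks_authentic(profile, test_clip):
    # +1 = inside the learned profile, -1 = outlier (possible deepfake).
    return profile.predict(clip_signature(test_clip)[None, :])[0] == 1

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    real_clips = [rng.normal(size=(300, 20)) for _ in range(40)]  # placeholder data
    profile = fit_person_profile(real_clips)
    print(looks_authentic(profile, rng.normal(size=(300, 20))))
```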

University of Washington: Fake faces: UW’s ‘Calling BS’ duo opens new website asking ‘Which face is real?’

University of Washington: Fake faces: UW’s ‘Calling BS’ duo opens new website asking ‘Which face is real?’. “Which of these two realistic renderings of faces is real, and which is a computer-generated fake? Biology professor Carl Bergstrom and Information School professor Jevin West — creators of the ‘Calling BS’ class and site — now have a website to help you better discern between fake and real images online.” I got several right in a row before I got suspicious and deliberately guessed wrong, at which point the site duly noted my incorrect answer. Aside from the obvious glitches you might see, watch the ears, hair, and teeth to spot AI-generated fakes.