CNET: Your phone may help you fight off deepfakes before they’re even made

CNET: Your phone may help you fight off deepfakes before they’re even made. “Truepic, a San Diego startup, says it’s found a way to prevent deepfakes and doctored images before they can even show up online: by verifying the authenticity of videos and images at the time they’re captured. Now the company is working to put the technology, which it calls Truepic Foresight, in millions of smartphones around the globe by having it embedded in the Qualcomm processors that power the majority of the world’s Android phones.”

Fast Company: Fake video threatens to rewrite history. Here’s how to protect it

Fast Company: Fake video threatens to rewrite history. Here’s how to protect it. “In an age of very little institutional trust, without a firm historical context that future historians and the public can rely on to authenticate digital media events of the past, we may be looking at the dawn of a new era of civilization: post-history. We need to act now to ensure the continuity of history without stifling the creative potential of these new AI tools.”

Gizmodo: A New Tool for Detecting Deepfakes Looks for What Isn’t There: an Invisible Pulse

Gizmodo: A New Tool for Detecting Deepfakes Looks for What Isn’t There: an Invisible Pulse. “In the endlessly escalating war between those striving to create flawless deepfake videos and those developing automated tools that make them easy to spot, the latter camp has found a very clever way to expose videos that have been digitally modified by looking for literal signs of life: a person’s heartbeat.”
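The detection idea Gizmodo describes is a form of remote photoplethysmography (rPPG): a real face subtly changes color with each heartbeat, and deepfake generators tend not to reproduce that periodic signal. A minimal sketch of the core step, assuming we already have cropped face frames and simply track the mean green-channel intensity over time (real detectors are far more sophisticated about region selection and noise):

```python
import numpy as np

def estimate_pulse_bpm(frames, fps):
    """Crude rPPG-style pulse estimate: average the green channel of each
    frame, then find the dominant frequency in the plausible heart-rate
    band (0.7-4 Hz, i.e. ~42-240 BPM)."""
    # Mean green intensity per frame (frames: sequence of HxWx3 arrays).
    signal = np.array([f[..., 1].mean() for f in frames])
    signal = signal - signal.mean()  # drop the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0  # beats per minute

# Synthetic demo: a 1.2 Hz (72 BPM) brightness oscillation at 30 fps.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
frames = [np.full((8, 8, 3), 128.0) + 2.0 * np.sin(2 * np.pi * 1.2 * ti)
          for ti in t]
print(round(estimate_pulse_bpm(frames, fps)))  # → 72
```

A video with no coherent peak in that band (or a physiologically implausible one) is a candidate for having been synthesized; the function names here are illustrative, not from the tool the article covers.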

Lawfare: Thirty-Six Hours of Cheapfakes

Lawfare: Thirty-Six Hours of Cheapfakes. “In the last days of August, with the clock ticking down until Election Day, senior Republican officials pulled off a disinformation hat trick: Over the course of two short days, figures affiliated with the GOP published three different deceptively edited videos on social media.” Not familiar with the term “cheapfakes”? Here’s some background.

Mashable: Microsoft is launching new technology to fight deepfakes

Mashable: Microsoft is launching new technology to fight deepfakes. “When used in the context of movies and memes, deepfakes can occasionally be a source of entertainment. But they’re also a growing concern. In the age of fake news and misinformation, deepfakes — i.e. AI-generated, manipulated photos, videos, or audio files — could potentially be used to confuse and mislead people. Microsoft, however, has other ideas. On Tuesday, the company announced two new pieces of technology, both of which aim to give readers the necessary tools to filter out what’s real and what isn’t.”

Mashable: 13 of our favorite deepfakes that’ll seriously mess with your brain

Mashable: 13 of our favorite deepfakes that’ll seriously mess with your brain. “In a rudimentary sense, deepfakes can be a face-swap of sorts, but really it’s more complex. It makes something that wasn’t — swapping in a person for another, changing what they say, shapeshifting reality. That’s why it can be scary. Imagine the damage that could be done making someone say something they did not. But again… they can also be kind of fun. That in mind, we’ve collected some of our favorite amateur deepfake videos but, you know, not the kind that threaten democracy.”

MIT News: Tackling the misinformation epidemic with “In Event of Moon Disaster”

MIT News: Tackling the misinformation epidemic with “In Event of Moon Disaster”. “This provocative website showcases a ‘complete’ deepfake (manipulated audio and video) of U.S. President Richard M. Nixon delivering the real contingency speech written in 1969 for a scenario in which the Apollo 11 crew were unable to return from the moon. The team worked with a voice actor and a company called Respeecher to produce the synthetic speech using deep learning techniques. They also worked with the company Canny AI to use video dialogue replacement techniques to study and replicate the movement of Nixon’s mouth and lips. Through these sophisticated AI and machine learning technologies, the seven-minute film shows how thoroughly convincing deepfakes can be.”

EurekAlert: Recognising fake images using frequency analysis

EurekAlert: Recognising fake images using frequency analysis. “They look deceptively real, but they are made by computers: so-called deep-fake images are generated by machine learning algorithms, and humans are pretty much unable to distinguish them from real photos. Researchers at the Horst Görtz Institute for IT Security at Ruhr-Universität Bochum and the Cluster of Excellence ‘Cyber Security in the Age of Large-Scale Adversaries’ (Casa) have developed a new method for efficiently identifying deep-fake images. To this end, they analyse the objects in the frequency domain, an established signal processing technique.”
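The frequency-domain approach the Bochum researchers describe rests on a known quirk: the upsampling layers in GAN generators tend to leave periodic grid artifacts that show up as excess energy away from the low-frequency core of an image’s 2D Fourier spectrum. A minimal sketch of that signal, assuming a grayscale image and an arbitrary choice of core radius (this is an illustration of the general technique, not the group’s actual classifier):

```python
import numpy as np

def high_frequency_energy_ratio(image):
    """Fraction of spectral energy outside the low-frequency core of a
    grayscale image's 2D DFT. GAN upsampling artifacts inflate this ratio."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))  # DC moved to the center
    energy = np.abs(spectrum) ** 2
    h, w = image.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 8  # "low-frequency" core radius (arbitrary choice)
    low = energy[cy - r:cy + r, cx - r:cx + r].sum()
    total = energy.sum()
    return (total - low) / total

# A smooth gradient vs. a checkerboard (mimicking upsampling artifacts):
smooth = np.linspace(0, 1, 64)[None, :] * np.ones((64, 1))
checker = np.indices((64, 64)).sum(axis=0) % 2
print(high_frequency_energy_ratio(checker) > high_frequency_energy_ratio(smooth))  # → True
```

A natural photo concentrates its energy at low frequencies, so a suspiciously large high-frequency ratio is one cheap cue that an image may be machine-generated.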

BetaNews: In a world of deepfakes, who can you trust?

BetaNews: In a world of deepfakes, who can you trust? “Though they seem like something out of a futuristic sci-fi movie, deepfakes are very much a reality. In fact, developers have been experimenting with deepfake technology as far back as the late 1990s. Today, deepfakes have become so advanced and believable that they can cause some serious damage in the wrong hands.”

Motherboard: We’ve Just Seen the First Use of Deepfakes in an Indian Election Campaign

Motherboard: We’ve Just Seen the First Use of Deepfakes in an Indian Election Campaign. “With deepfake election campaigns though, we are crossing over into an era where it’s going to be impossible to trust what we see and hear. The video of Tiwari, seated in front of a green-coloured wall and talking to the camera, was used to reproduce a forged version where he says things he never actually said, in a language he doesn’t even know! In this case, the speech was scripted, vetted and approved by the BJP for the creation of the deepfakes. But it’s not difficult to imagine someone faking a video to issue threats or hate against a specific section of the population.”

CNN: How fake faces are being weaponized online

CNN: How fake faces are being weaponized online. “As an activist, Nandini Jammi has become accustomed to getting harassed online, often by faceless social media accounts. But this time was different: a menacing tweet was sent her way from an account with a profile picture of a woman with blonde hair and a beaming smile.”

The Next Web: Reuters built a prototype for automated news videos using Deepfakes tech

The Next Web: Reuters built a prototype for automated news videos using Deepfakes tech. “The Reuters news company and an AI startup named Synthesia today unveiled a new project they’ve partnered on that uses Deepfakes-style technology to generate automated news reports in real time.”

VentureBeat: Jigsaw’s Assembler helps media organizations spot deepfake images

VentureBeat: Jigsaw’s Assembler helps media organizations spot deepfake images. “In an age of pervasive deepfakes, how can anyone know if an image they’re viewing is an AI-generated fabrication? Jigsaw — the organization working under Google parent company Alphabet to tackle cyberbullying, censorship, disinformation, and other digital issues — is prototyping a tool called Assembler to address this concern. Jigsaw CEO Jared Cohen revealed in a blog post that the tool is being piloted with media organizations to help fact-checkers and journalists pinpoint and analyze manipulated media.”

The Verge: FTC says the tech behind audio deepfakes is getting better

The Verge: FTC says the tech behind audio deepfakes is getting better. “Rapid progress in voice cloning technology is making it harder to tell real voices from synthetic ones. But while audio deepfakes — which can trick people into giving up sensitive information — are a growing problem, there are some good and legitimate uses for the technology as well, a group of experts told an FTC workshop this week.”