BetaNews: In a world of deepfakes, who can you trust? “Though they seem like something out of a futuristic sci-fi movie, deepfakes are very much a reality. In fact, developers have been experimenting with deepfake technology as far back as the late 1990s. Today, deepfakes have become so advanced and believable that they can cause some serious damage in the wrong hands.”
The Next Web: Scientists figured out how to fool state-of-the-art Deepfake detectors. “A team of researchers from UC San Diego recently came up with a relatively simple method for convincing fake video-detectors that AI-generated fakes are the real deal.”
Motherboard: We’ve Just Seen the First Use of Deepfakes in an Indian Election Campaign. “With deepfake election campaigns though, we are crossing over into an era where it’s going to be impossible to trust what we see and hear. The video of Tiwari, seated in front of a green-coloured wall and talking to the camera, was used to reproduce a forged version where he says things he never actually said, in a language he doesn’t even know! In this case, the speech was scripted, vetted and approved by the BJP for the creation of the deepfakes. But it’s not difficult to imagine someone faking a video to issue threats or hate against a specific section of the population.”
The Next Web: Reuters built a prototype for automated news videos using Deepfakes tech. “The Reuters news company and an AI startup named Synthesia today unveiled a new project they’ve partnered on that uses Deepfakes-style technology to generate automated news reports in real time.”
VentureBeat: Jigsaw’s Assembler helps media organizations spot deepfake images. “In an age of pervasive deepfakes, how can anyone know if an image they’re viewing is an AI-generated fabrication? Jigsaw — the organization working under Google parent company Alphabet to tackle cyberbullying, censorship, disinformation, and other digital issues — is prototyping a tool called Assembler to address this concern. Jigsaw CEO Jared Cohen revealed in a blog post that the tool is being piloted with media organizations to help fact-checkers and journalists pinpoint and analyze manipulated media.”
Mashable: I deepfaked myself into a bunch of popular GIFs and the results are sincerely cursed. “A new app called Doublicat allows users to personalize GIFs by morphing their own faces onto them, commonly known online as a deepfake. I, a true trailblazer at heart, decided to take on the mission of trying out Doublicat, just so y’all can know what you’re getting into. You can thank me (or hate me) later.” #5 literally made me shriek out loud.
Mashable: Reddit bests Facebook by rolling out a superior deepfakes policy. “Basically, Reddit is quashing lies and disinformation on the site. Users cannot try to pass themselves off as another individual or entity. For example, a user cannot register the username of a celebrity and truly pretend to be that celebrity on the site. While that’s the most weaponized scenario, Reddit is also specific in pointing out forgery and fake articles, and links are covered under this policy too.”
BBC: Facebook to ban ‘deepfakes’. “Facebook said it would remove videos if it realised they had been edited in ways that weren’t obvious to an average person, or if they misled a viewer into thinking that a person in a video said words they did not actually say.” There has been a whole lot flying around about this. I suspect we’re going to hear more soon.
TechCrunch: ByteDance & TikTok have secretly built a deepfakes maker. “TikTok parent company ByteDance has built technology to let you insert your face into videos starring someone else. TechCrunch has learned that ByteDance has developed an unreleased feature using life-like deepfakes technology that the app’s code refers to as Face Swap. Code in both TikTok and its Chinese sister app Douyin asks users to take a multi-angle biometric scan of their face, then choose from a selection of videos they want to add their face to and share.”
CNET: Snap reportedly acquires a deepfake startup. “Snap has purchased AI Factory, an image and video recognition startup, according to Variety on Friday. Snapchat reportedly used AI Factory’s technology to launch its new Cameos feature, which allows users to insert selfies into a scene to send as a looping video and raises concerns about the possibility of creating deepfakes.”
CNN: Now fake Facebook accounts are using fake faces. “Artificially-generated faces of people who don’t exist are being used to front fake Facebook (FB) accounts in an attempt to trick users and game the company’s systems, the social media network said Friday. Experts who reviewed the accounts say it is the first time they have seen fake images like this being used at scale as part of a single social media campaign.”
MIT Technology Review: Making deepfake tools doesn’t have to be irresponsible. Here’s how. “Synthetic media technologies—popularly known as deepfakes—have real potential for positive impact. Voice synthesis, for example, will allow us to speak in hundreds of languages in our own voice. Video synthesis may help us simulate self-driving-car accidents to avoid mistakes in the future. And text synthesis can accelerate our ability to write both programs and prose. But these advances can come at a gargantuan cost if we aren’t careful: the same underlying technologies can also enable deception with global ramifications.”
Ars Technica: I created my own deepfake—it took two weeks and cost $552. “My Ars overlords gave me a few days to play around with deepfake software and a $1,000 cloud computing budget. A couple of weeks later, I have my result, which you can see above. I started with a video of Mark Zuckerberg testifying before Congress and replaced his face with that of Lieutenant Commander Data (Brent Spiner) from Star Trek: The Next Generation. Total spent: $552.”
AFP: China bans ‘fake news’ created with AI, bots. “The regulation published Friday by China’s cyberspace authority said that both providers and users of online video news and audio services are ‘not allowed’ to use new technologies such as deep learning and virtual reality to create, distribute and broadcast ‘fake news.'”
Ars Technica: Twitter wants your feedback on its proposed deepfakes policy. “A lie has always been able to travel faster than the truth, and that goes double on Twitter, where a combination of bad human choices and bad-faith bots amplifies false messaging almost instantly around the world. So what should a social media platform do about it? The question is not rhetorical. Twitter is trying to come up with a policy for handling ‘synthetic and manipulated media,’ the company said in a blog post today, and it wants your input.”