BetaNews: In a world of deepfakes, who can you trust? “Though they seem like something out of a futuristic sci-fi movie, deepfakes are very much a reality. In fact, developers have been experimenting with deepfake technology as far back as the late 1990s. Today, deepfakes have become so advanced and believable that they can cause some serious damage in the wrong hands.”
The Next Web: Scientists figured out how to fool state-of-the-art Deepfake detectors. “A team of researchers from UC San Diego recently came up with a relatively simple method for convincing fake video-detectors that AI-generated fakes are the real deal.”
Motherboard: We’ve Just Seen the First Use of Deepfakes in an Indian Election Campaign. “With deepfake election campaigns though, we are crossing over into an era where it’s going to be impossible to trust what we see and hear. The video of Tiwari, seated in front of a green-coloured wall and talking to the camera, was used to reproduce a forged version where he says things he never actually said, in a language he doesn’t even know! In this case, the speech was scripted, vetted and approved by the BJP for the creation of the deepfakes. But it’s not difficult to imagine someone faking a video to issue threats or hate against a specific section of the population.”
CNN: How fake faces are being weaponized online. “As an activist, Nandini Jammi has become accustomed to getting harassed online, often by faceless social media accounts. But this time was different: a menacing tweet was sent her way from an account with a profile picture of a woman with blonde hair and a beaming smile.”
The Next Web: Reuters built a prototype for automated news videos using Deepfakes tech. “The Reuters news company and an AI startup named Synthesia today unveiled a new project they’ve partnered on that uses Deepfakes-style technology to generate automated news reports in real time.”
VentureBeat: Jigsaw’s Assembler helps media organizations spot deepfake images. “In an age of pervasive deepfakes, how can anyone know if an image they’re viewing is an AI-generated fabrication? Jigsaw — the organization working under Google parent company Alphabet to tackle cyberbullying, censorship, disinformation, and other digital issues — is prototyping a tool called Assembler to address this concern. Jigsaw CEO Jared Cohen revealed in a blog post that the tool is being piloted with media organizations to help fact-checkers and journalists pinpoint and analyze manipulated media.”
The Verge: FTC says the tech behind audio deepfakes is getting better. “Rapid progress in voice cloning technology is making it harder to tell real voices from synthetic ones. But while audio deepfakes — which can trick people into giving up sensitive information — are a growing problem, there are some good and legitimate uses for the technology as well, a group of experts told an FTC workshop this week.”
Mashable: I deepfaked myself into a bunch of popular GIFs and the results are sincerely cursed. “A new app called Doublicat allows users to personalize GIFs by morphing their own faces onto them, commonly known online as a deepfake. I, a true trailblazer at heart, decided to take on the mission of trying out Doublicat, just so y’all can know what you’re getting into. You can thank me (or hate me) later.” #5 literally made me shriek out loud.
Berkeley Engineering: UC Berkeley professor influences Facebook’s efforts to combat deepfakes. “Hany Farid, a Berkeley professor of electrical engineering and computer sciences, was one of the researchers Facebook approached last year. The company ultimately invested $7.5 million with Berkeley, Cornell University and the University of Maryland to develop technology to spot the deepfakes. In a brief interview, Farid, who has a joint appointment at the School of Information, said manipulated videos, which often portray politicians and celebrities saying or doing things they didn’t do, pose a serious threat to society.”
Mashable: Reddit bests Facebook by rolling out a superior deepfakes policy. “Basically, Reddit is quashing lies and disinformation on the site. Users cannot try to legitimately pass off as another individual or entity. For example, a user cannot register the username of a celebrity and truly pretend to be that celebrity on the site. While that’s the most weaponized scenario, Reddit is also specific in pointing out forgery and fake articles, and links are covered under this policy too.”
BBC: Facebook to ban ‘deepfakes’. “Facebook said it would remove videos if it realised they had been edited in ways that weren’t obvious to an average person, or if they misled a viewer into thinking that a person in a video said words they did not actually say.” There has been a whole lot of commentary flying around about this. I suspect we’re going to hear more soon.
TechCrunch: ByteDance & TikTok have secretly built a deepfakes maker. “TikTok parent company ByteDance has built technology to let you insert your face into videos starring someone else. TechCrunch has learned that ByteDance has developed an unreleased feature using life-like deepfakes technology that the app’s code refers to as Face Swap. Code in both TikTok and its Chinese sister app Douyin asks users to take a multi-angle biometric scan of their face, then choose from a selection of videos they want to add their face to and share.”
CNET: Snap reportedly acquires a deepfake startup. “Snap has purchased AI Factory, an image and video recognition startup, according to Variety on Friday. Snapchat reportedly used AI Factory’s technology to launch its new Cameos feature, which allows users to insert selfies into a scene to send as a looping video and raises concerns about the possibility of creating deepfakes.”
CNN: Now fake Facebook accounts are using fake faces. “Artificially-generated faces of people who don’t exist are being used to front fake Facebook (FB) accounts in an attempt to trick users and game the company’s systems, the social media network said Friday. Experts who reviewed the accounts say it is the first time they have seen fake images like this being used at scale as part of a single social media campaign.”
MIT Technology Review: Making deepfake tools doesn’t have to be irresponsible. Here’s how. “Synthetic media technologies—popularly known as deepfakes—have real potential for positive impact. Voice synthesis, for example, will allow us to speak in hundreds of languages in our own voice. Video synthesis may help us simulate self-driving-car accidents to avoid mistakes in the future. And text synthesis can accelerate our ability to write both programs and prose. But these advances can come at a gargantuan cost if we aren’t careful: the same underlying technologies can also enable deception with global ramifications.”