Poynter: Why people still fall for fake screenshots

Poynter: Why people still fall for fake screenshots. “What is it about fake screenshots that makes people vulnerable to thinking they’re real? For one thing, people often use real screenshots to ‘preserve’ something, like a provocative or erroneous tweet, that might be later deleted. A screenshot can be a signal that something ‘real’ has been exposed. Hoaxers exploit that signal with a fake. And if an original can’t be found, people might just assume it was deleted.”

VentureBeat: Jigsaw’s Assembler helps media organizations spot deepfake images

VentureBeat: Jigsaw’s Assembler helps media organizations spot deepfake images. “In an age of pervasive deepfakes, how can anyone know if an image they’re viewing is an AI-generated fabrication? Jigsaw — the organization working under Google parent company Alphabet to tackle cyberbullying, censorship, disinformation, and other digital issues — is prototyping a tool called Assembler to address this concern. Jigsaw CEO Jared Cohen revealed in a blog post that the tool is being piloted with media organizations to help fact-checkers and journalists pinpoint and analyze manipulated media.”

Nieman Lab: Is this video “missing context,” “transformed,” or “edited”? This effort wants to standardize how we categorize visual misinformation

Nieman Lab: Is this video “missing context,” “transformed,” or “edited”? This effort wants to standardize how we categorize visual misinformation. “If a photo has been ’shopped, was it changed just a little or a lot? Did the editing harmlessly change the white balance or fundamentally alter the reality the photo is supposed to represent? Is a tight crop excluding important context or appropriately directing a viewer’s focus to something? Then apply all of that to videos. Where’s the line between a deepfake and a cheapfake? Your head starts to hurt. The unsung heroes of the Internet are the people who develop the standards by which information gets encoded into structured data, and said heroes are now turning their attention to this particular problem, visual misinformation.”

Ars Technica: Twitter wants your feedback on its proposed deepfakes policy

Ars Technica: Twitter wants your feedback on its proposed deepfakes policy. “A lie has always been able to travel faster than the truth, and that goes double on Twitter, where a combination of bad human choices and bad-faith bots amplifies false messaging almost instantly around the world. So what should a social media platform do about it? The question is not rhetorical. Twitter is trying to come up with a policy for handling ‘synthetic and manipulated media,’ the company said in a blog post today, and it wants your input.”

Interesting Engineering: A Database of 100,000 AI Generated Faces Is Changing the Way We Think about Stock Photos

Interesting Engineering: A Database of 100,000 AI Generated Faces Is Changing the Way We Think about Stock Photos. “Artificial intelligence can now give you a quality stock photo of a model… that does not exist. That’s right, AI can now generate imaginary faces for your next project. Dubbed Generated Photos, the collection of faces was created by Konstantin Zhabinskiy and his team.”

Nieman Journalism Lab: Can you spot a fake photo online? Your level of experience online matters a lot more than contextual clues

Nieman Journalism Lab: Can you spot a fake photo online? Your level of experience online matters a lot more than contextual clues. “My collaborators and I recently studied how people evaluate the credibility of images that accompany online stories and what elements figure into that evaluation. We found that you’re far less likely to fall for fake images if you’re more experienced with the internet, digital photography, and online media platforms — if you have what scholars call ‘digital media literacy.’”

TechCrunch: Mona Lisa frown: Machine learning brings old paintings and photos to life

TechCrunch: Mona Lisa frown: Machine learning brings old paintings and photos to life. “Machine learning researchers have produced a system that can recreate lifelike motion from just a single frame of a person’s face, opening up the possibility of animating not just photos but also paintings. It’s not perfect, but when it works, it is — like much AI work these days — eerie and fascinating.”

TechSpot: Security researchers fake cancerous nodes in CT scans with machine learning

TechSpot: Security researchers fake cancerous nodes in CT scans with machine learning. “We expect that when we have a CT or MRI scan, the results are accurate. After all, we are talking about equipment that can cost millions of dollars and radiologists with years of training and sometimes decades of experience. However, hospital security can be lax, and researchers have now shown they can fake CT and MRI scans using a generative adversarial network (GAN).”

University of Washington: Fake faces: UW’s ‘Calling BS’ duo opens new website asking ‘Which face is real?’

University of Washington: Fake faces: UW’s ‘Calling BS’ duo opens new website asking ‘Which face is real?’. “Which of these two realistic renderings of faces is real, and which is a computer-generated fake? Biology professor Carl Bergstrom and Information School professor Jevin West — creators of the ‘Calling BS’ class and site — now have a website to help you better discern between fake and real images online.” I got several right in a row before I got suspicious and deliberately guessed wrong, at which point the site duly noted my incorrect answer. Aside from the obvious glitches you might see, watch the ears, hair, and teeth to detect AI-generated fakes.

Poynter: ‘No image can be taken on face value’: Fake photos flood social media after a terrorist attack in India

Poynter: ‘No image can be taken on face value’: Fake photos flood social media after a terrorist attack in India. “Hoaxes on social media about violent attacks are one thing. But after last week’s suicide bombing, mainstream media outlets in India started publishing false photos, too. Several journalists tweeted a photo which purported to show the terrorist in a combat uniform. The Economic Times and India Today — which has its own fact-checking project — published the photo both in print and in a video.”

Futurism: A New AI Draws Cats, and They’re Utterly Grotesque

Okay, I promise I will calm down about these. But this one uses AI to generate CAT PICTURES. Seriously, how can I not? Futurism: A New AI Draws Cats, and They’re Utterly Grotesque. “GANs have been used for much more ambitious projects in the past. Researchers at NVIDIA harnessed the power of the technology to create uncanny faces that are almost completely indistinguishable from the real thing. But that doesn’t mean bored people on the internet shouldn’t be able to take advantage of the open-source technology for a bit of fun — that is, as long as real-world cats stay out of harm’s way.” I tested this. A fraction of the cats look something like real cats. The rest look like the dreams you have after a meal of spicy meatballs and eggnog.

Medium: How to recognize fake AI-generated images

Medium: How to recognize fake AI-generated images. “Here are some things you can look for when trying to recognize an image produced by a GAN [generative adversarial network]. We’ll focus on faces because they are a common testing ground for researchers, and many of the artifacts most visible in faces also appear in other kinds of images.”

Arizona State University: Fighting fake photos, one social stream at a time

Arizona State University: Fighting fake photos, one social stream at a time. “In 1855, an English photographer named Roger Fenton traveled to Crimea to document the war there. British troops dubbed one spot on the Sevastopol peninsula the ‘valley of death’ because it was under constant shelling. Fenton photographed the spot, a shallow defile littered with cannonballs. The photo (above), titled ‘Valley of the Shadow of Death,’ became famous as one of the first and most well-known images of war. The problem is it’s faked.”

Nieman Lab: A new study provides some dispiriting evidence for why people fall for stupid fake images online

Nieman Lab: A new study provides some dispiriting evidence for why people fall for stupid fake images online. “C’mon, guys, look at the source! So if you’re assessing the credibility of a possibly fake image online, you’re looking at stuff like the source, how many times it’s been shared, and what the image shows, right? Not so much, according to a new study out of UC Davis. Instead, what matters are digital media literacy skills, experience or skill in photography, and prior attitudes about the issue.”

Wired: This Browser Extension Is Like An Antivirus For Fake Photos

Wired: This Browser Extension Is Like An Antivirus For Fake Photos. “Doctored images are the scourge of the web-wide fight against fake news. Tech companies and researchers can analyze the behavior of a typical bot in order to sniff out new ones. They can limit the reach of news outlets that perpetually share stories flagged as false. They can see when accounts are coordinating their activity and wipe out whole networks at once. But determining whether a photo that’s been meme-ified and screenshotted a thousand times over depicts something real requires a different level of forensic analysis. Researchers are beginning to develop software that can detect altered images, but they’re locked in an arms race with increasingly skillful creators of fake images.”