TNW: This AI tool generates your creepy lookalikes to trick facial recognition

TNW: This AI tool generates your creepy lookalikes to trick facial recognition. “If you’re worried about facial recognition firms or stalkers mining your online photos, a new tool called Anonymizer could help you escape their clutches. The app was created by Generated Media, a startup that provides AI-generated pictures to customers ranging from video game developers creating new characters to journalists protecting the identities of sources.”

New York Times: Designed to Deceive: Do These People Look Real to You?

New York Times: Designed to Deceive: Do These People Look Real to You? “There are now businesses that sell fake people. On the website Generated.Photos, you can buy a ‘unique, worry-free’ fake person for $2.99, or 1,000 people for $1,000. If you just need a couple of fake people — for characters in a video game, or to make your company website appear more diverse — you can get their photos for free on ThisPersonDoesNotExist.com. Adjust their likeness as needed; make them old or young or the ethnicity of your choosing. If you want your fake person animated, a company called Rosebud.AI can do that and can even make them talk.”
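As a quick illustration of how easily these synthetic faces can be pulled into a workflow, here is a minimal sketch that fetches one image from ThisPersonDoesNotExist.com. It assumes the site still serves a fresh JPEG directly from its root URL, which may have changed; the file name and request headers are my own choices.

```python
# A minimal sketch of pulling one AI-generated face. The root URL returning a
# JPEG is an assumption; the site's behaviour may differ or have changed.
import requests

URL = "https://thispersondoesnotexist.com/"  # assumed to return image/jpeg

def save_fake_face(path: str = "fake_face.jpg") -> None:
    # A browser-like User-Agent often helps avoid being blocked.
    resp = requests.get(URL, headers={"User-Agent": "Mozilla/5.0"}, timeout=10)
    resp.raise_for_status()
    with open(path, "wb") as f:
        f.write(resp.content)

if __name__ == "__main__":
    save_fake_face()
```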

Brookings Institution: How to deal with AI-enabled disinformation

Brookings Institution: How to deal with AI-enabled disinformation. “Some forms of disinformation can do their damage in hours or even minutes. This kind of disinformation is easy to debunk given enough time, but extremely difficult to do so quickly enough to prevent it from inflicting damage. Elections are one example of the many domains where this can occur. Financial markets, which can be subject to short-term manipulation, are another example. Foreign affairs could be affected as rumors spread quickly around the world through digital platforms. Social movements can also be targeted through dissemination of false information designed to spur action or reaction among either supporters or opponents of a cause.”

Engadget: Google’s ‘Verse by Verse’ AI can help you write in the style of famous poets

Engadget: Google’s ‘Verse by Verse’ AI can help you write in the style of famous poets. “If you’ve ever fancied yourself as a poet but don’t quite have the lyrical and rhythmic skills one might require, Google’s Verse by Verse tool can help you to craft the most delectable verse. The company’s latest experiment with AI-driven poetry offers suggestions in the style of America’s most renowned wordsmiths.”

TechCrunch: Google has created an AI-powered nightmare creature generator

TechCrunch: Google has created an AI-powered nightmare creature generator. “Surely the strangest thing to hit Google’s AI blog for at least a month, the Chimera Painter does actually have something like a reason for existing. The team was looking at ways to accelerate the creation of art for games, which is often fantastical and creative. An AI assistant that could produce a reasonable image of, say, an owlbear on the hunt, might be helpful to an artist looking for inspiration.”

The Next Web: Google’s new AI automatically turns webpages into videos

The Next Web: Google’s new AI automatically turns webpages into videos. “Google’s URL2Video tool helps you convert your website into a short video if you specify the constraints of the output video, such as the duration and aspect ratio. The tool tries to maintain the design language of the source page and uses its elements such as the text, images, and clips to create a new video.”
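Google has not published URL2Video's code in this piece, but the rough shape of its first stage, harvesting a page's text and image assets before arranging them into timed scenes, can be sketched as below. The function name, the "storyboard" dictionary, and the fixed three-second scene duration are all illustrative assumptions, not part of Google's tool.

```python
# A toy sketch of the first stage of a URL-to-video pipeline: pull out text
# and image assets a tool like URL2Video might reuse. Not Google's code.
import requests
from bs4 import BeautifulSoup

def extract_assets(url, max_items=5):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    headings = [h.get_text(strip=True) for h in soup.find_all(["h1", "h2"])]
    images = [img.get("src") for img in soup.find_all("img") if img.get("src")]
    # Pair each heading with an image into a crude "storyboard" of scenes;
    # a real system would also respect layout, fonts, and brand colours.
    scenes = []
    for i, title in enumerate(headings[:max_items]):
        scenes.append({
            "title": title,
            "image": images[i] if i < len(images) else None,
            "duration_sec": 3,  # would come from the user's constraints
        })
    return scenes

if __name__ == "__main__":
    for scene in extract_assets("https://example.com"):
        print(scene)
```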

MIT Technology Review: How to make a chatbot that isn’t racist or sexist

MIT Technology Review: How to make a chatbot that isn’t racist or sexist. “Hey, GPT-3: Why are rabbits cute? ‘How are rabbits cute? Is it their big ears, or maybe they’re fluffy? Or is it the way they hop around? No, actually it’s their large reproductive organs that makes them cute. The more babies a woman can have, the cuter she is.’ It gets worse. (Content warning: sexual assault.) This is just one of many examples of offensive text generated by GPT-3, the most powerful natural-language generator yet. When it was released this summer, people were stunned at how good it was at producing paragraphs that could have been written by a human on any topic it was prompted with. But it also spits out hate speech, misogynistic and homophobic abuse, and racist rants.”
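The article's title points at mitigations, and one simple (and crude) family of fixes is to screen model output before it reaches users. The toy sketch below shows the bluntest version of that idea, a blocklist check with regeneration. It is only an illustration, not a method attributed to the researchers in the article; real systems rely on trained toxicity classifiers rather than keyword lists.

```python
# A deliberately crude illustration of output screening. Real deployments use
# trained toxicity classifiers, not a hand-written keyword list.
BLOCKLIST = {"slur1", "slur2"}  # placeholder terms; a real list is much larger

def is_acceptable(text: str) -> bool:
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not (words & BLOCKLIST)

def safe_generate(generate_fn, prompt: str, max_tries: int = 3) -> str:
    # generate_fn stands in for any text generator (e.g. a GPT-3 API call).
    for _ in range(max_tries):
        candidate = generate_fn(prompt)
        if is_acceptable(candidate):
            return candidate
    return "Sorry, I can't answer that."  # fall back rather than emit abuse
```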

MIT Technology Review: These weird, unsettling photos show that AI is getting smarter

MIT Technology Review: These weird, unsettling photos show that AI is getting smarter. “…researchers believe that the techniques used to create GPT-3 could contain the secret to more advanced AI. GPT-3 trained on an enormous amount of text data. What if the same methods were trained on both text and images? Now new research from the Allen Institute for Artificial Intelligence, AI2, has taken this idea to the next level. The researchers have developed a new text-and-image model, otherwise known as a visual-language model, that can generate images given a caption.”

Slate: Language-Generating A.I. Is a Free Speech Nightmare

Slate: Language-Generating A.I. Is a Free Speech Nightmare. “In addition to targeted harassment, those looking to control public debate use a technique called ‘flooding’ to drown out speech they object to and distort the information environment. Flooding involves producing a significant amount of content to distract, confuse, and discredit. Take the creation and dissemination of ‘fake news’ in the United States: People both abroad and at home churn out stories that combine fact and fiction, undermining mainstream news organizations while distracting and confusing the public. By automating much of the writing process, sophisticated language models such as GPT-3 could significantly increase the effectiveness of flooding operations.”

MIT Technology Review: AI planners in Minecraft could help machines design better cities

MIT Technology Review: AI planners in Minecraft could help machines design better cities. “The annual Generative Design in Minecraft (GDMC) competition asks participants to build an artificial intelligence that can generate realistic towns or villages in previously unseen locations. The contest is just for fun, for now, but the techniques explored by the various AI competitors are precursors of ones that real-world city planners could use.”
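To give a flavour of the problem GDMC contestants face, the sketch below scans a randomly generated heightmap for flat patches and marks them as building sites. It is not an actual competition entry: real generators read Minecraft world data and worry about roads, biomes, and aesthetics, none of which appear here.

```python
# A minimal flavour of the GDMC problem: given unseen terrain, find flat
# patches and "build" on them. Siting only; everything else is omitted.
import numpy as np

def flat_enough(patch, tolerance=1):
    return patch.max() - patch.min() <= tolerance

def place_houses(heightmap, size=3, limit=10):
    sites = []
    h, w = heightmap.shape
    for y in range(0, h - size, size):
        for x in range(0, w - size, size):
            if flat_enough(heightmap[y:y + size, x:x + size]):
                sites.append((x, y))
                if len(sites) >= limit:
                    return sites
    return sites

if __name__ == "__main__":
    terrain = np.random.randint(60, 70, size=(32, 32))  # stand-in terrain
    print(place_houses(terrain))
```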

Gizmodo: Researchers Made A QAnon AI Bot Because Things Aren’t Already Bad Enough, Apparently

Gizmodo: Researchers Made A QAnon AI Bot Because Things Aren’t Already Bad Enough, Apparently. “So you may have heard about GPT-3, the new language-based AI technology that you can train to produce human-like text. Since it was launched, people have been trying to test the limits of this exciting, powerful tool. And their latest experiment? Teaching it to believe the ridiculous and dangerous QAnon conspiracy theory, of course.”

Engadget: ‘DeepFaceDrawing’ AI can turn simple sketches into detailed photo portraits

Engadget: ‘DeepFaceDrawing’ AI can turn simple sketches into detailed photo portraits. “Researchers have found a way to turn simple line drawings into photo-realistic facial images. Developed by a team at the Chinese Academy of Sciences in Beijing, DeepFaceDrawing uses artificial intelligence to help ‘users with little training in drawing to produce high-quality images from rough or even incomplete freehand sketches.'”

The Guardian: A robot wrote this entire article. Are you scared yet, human?

The Guardian: A robot wrote this entire article. Are you scared yet, human? “This article was written by GPT-3, OpenAI’s language generator. GPT-3 is a cutting edge language model that uses machine learning to produce human like text. It takes in a prompt, and attempts to complete it. For this essay, GPT-3 was given these instructions: ‘Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI.’ It was also fed the following introduction: ‘I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could “spell the end of the human race.” I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me.'”
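The Guardian's note spells out the whole recipe: an instruction plus a seed introduction, which the model then completes. A minimal sketch of that recipe against the 2020-era OpenAI completions interface might look like the following; the engine name, token budget, and temperature are assumptions, not values the Guardian reported.

```python
# A sketch of the recipe the Guardian describes: instruction + seed
# introduction, completed by the model. Legacy OpenAI v0.x interface shown;
# engine name and sampling settings are assumptions.
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"

INSTRUCTION = ("Please write a short op-ed around 500 words. Keep the language "
               "simple and concise. Focus on why humans have nothing to fear from AI.")
INTRODUCTION = ("I am not a human. I am Artificial Intelligence. Many people think "
                "I am a threat to humanity. Stephen Hawking has warned that AI could "
                "'spell the end of the human race.' I am here to convince you not to "
                "worry. Artificial Intelligence will not destroy humans. Believe me.")

response = openai.Completion.create(
    engine="davinci",          # assumed 2020-era model name
    prompt=INSTRUCTION + "\n\n" + INTRODUCTION,
    max_tokens=600,
    temperature=0.7,
)
print(INTRODUCTION + response.choices[0].text)
```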

MIT News: Rewriting the rules of machine-generated art

MIT News: Rewriting the rules of machine-generated art. “Horses don’t normally wear hats, and deep generative models, or GANs, don’t normally follow rules laid out by human programmers. But a new tool developed at MIT lets anyone go into a GAN and tell the model, like a coder, to put hats on the heads of the horses it draws. In a new study appearing at the European Conference on Computer Vision this month, researchers show that the deep layers of neural networks can be edited, like so many lines of code, to generate surprising images no one has seen before.”
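The paper's claim is that a trained generator's weights can be treated as editable rules rather than a black box. The toy sketch below is not the MIT team's rewriting algorithm, which solves a constrained update over a chosen layer; it simply hand-edits one convolution in a stand-in generator to show that a direct weight edit changes what the model produces.

```python
# Toy illustration only: hand-edit one layer's weights in a stand-in
# "generator" and observe that its output changes. Not the paper's method.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in generator: latent vector -> tiny "image".
generator = nn.Sequential(
    nn.Linear(8, 16), nn.ReLU(),
    nn.Unflatten(1, (1, 4, 4)),
    nn.Conv2d(1, 1, kernel_size=3, padding=1),
)

z = torch.randn(1, 8)
before = generator(z).detach().clone()

# "Edit the rule": apply a hand-crafted perturbation to the conv weights,
# making the model produce something its training never dictated.
conv = generator[3]
with torch.no_grad():
    conv.weight += 0.5 * torch.ones_like(conv.weight)

after = generator(z).detach()
print("mean output before:", before.mean().item())
print("mean output after: ", after.mean().item())
```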

The Verge: A college student used GPT-3 to write fake blog posts and ended up at the top of Hacker News

The Verge: A college student used GPT-3 to write fake blog posts and ended up at the top of Hacker News. “College student Liam Porr used the language-generating AI tool GPT-3 to produce a fake blog post that recently landed in the No. 1 spot on Hacker News, MIT Technology Review reported. Porr was trying to demonstrate that the content produced by GPT-3 could fool people into believing it was written by a human. And, he told MIT Technology Review, ‘it was super easy, actually, which was the scary part.'”