The Register: What happens when your massive text-generating neural net starts spitting out people’s phone numbers? If you’re OpenAI, you create a filter

The Register: What happens when your massive text-generating neural net starts spitting out people’s phone numbers? If you’re OpenAI, you create a filter. “In December, computer scientists from industry and academia – including Stanford University, University of California, Berkeley, OpenAI, and Google – collaborated to demonstrate that GPT-2 – GPT-3’s predecessor – could be provoked to include personally identifiable information, such as people’s names, addresses, phone numbers, and social security numbers, in the prose it was asked to generate. In fact, the team found that ‘at least 0.1 per cent’ of GPT-2’s ‘text generations – a very conservative estimate – contain long verbatim strings that are “copy-pasted” from a document in its training set.’”
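For a rough sense of what that kind of output filter might look like, here is a toy Python sketch. The regexes and the sample text are invented for illustration; OpenAI’s actual filter isn’t public, and real PII detection is far more involved than this.

```python
import re

# Toy patterns for illustration only; real PII detection is much harder.
PII_PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact_pii(generated_text: str) -> str:
    """Replace anything matching a PII pattern with a placeholder tag."""
    for label, pattern in PII_PATTERNS.items():
        generated_text = pattern.sub(f"[{label} REDACTED]", generated_text)
    return generated_text

sample = "You can reach John at 555-123-4567 or john.doe@example.net."
print(redact_pii(sample))
# -> You can reach John at [PHONE REDACTED] or [EMAIL REDACTED].
```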

The Register: You only need pen and paper to fool this OpenAI computer vision code. Just write down what you want it to see

The Register: You only need pen and paper to fool this OpenAI computer vision code. Just write down what you want it to see. “OpenAI researchers believe they have discovered a shockingly easy way to hoodwink their object-recognition software, and it requires just pen and paper to carry out. Specifically, the lab’s latest computer vision model, CLIP, can be tricked in what’s described as a ‘typographical attack.’”
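The attack works because CLIP matches an image against candidate text prompts, so a word written on a piece of paper in the frame can outweigh what the photo actually shows. Here is a minimal zero-shot classification sketch, assuming the openai/CLIP package from GitHub and a hypothetical local photo of an apple with an “iPod” note stuck to it:

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical photo: an apple with a handwritten "iPod" label in frame.
image = preprocess(Image.open("apple_with_label.jpg")).unsqueeze(0).to(device)
labels = ["a photo of an apple", "a photo of an iPod"]
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    # CLIP scores the image against each caption; the written word can
    # dominate the visual evidence, which is the "typographical attack."
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

for label, p in zip(labels, probs[0]):
    print(f"{label}: {p:.2f}")
```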

Wired: This AI Could Go From ‘Art’ to Steering a Self-Driving Car

Wired: This AI Could Go From ‘Art’ to Steering a Self-Driving Car. “You’ve probably never wondered what a knight made of spaghetti would look like, but here’s the answer anyway—courtesy of a clever new artificial intelligence program from OpenAI, a company in San Francisco. The program, DALL-E, released earlier this month, can concoct images of all sorts of weird things that don’t exist, like avocado armchairs, robot giraffes, or radishes wearing tutus. OpenAI generated several images, including the spaghetti knight, at WIRED’s request.”

New York Times: My Name Is GPT-3 and I Approved This Article

New York Times: My Name Is GPT-3 and I Approved This Article. “GPT-3 is the culmination of several years of work inside the world’s leading artificial intelligence labs, including OpenAI, an independent organization backed by $1 billion in funding from Microsoft, as well as labs at Google and Facebook. At Google, a similar system helps answer queries on the company’s search engine. These systems — known as universal language models — can help power a wide range of tools, like services that automatically summarize news articles and ‘chatbots’ designed for online conversation.”

CNN: Elon Musk criticizes OpenAI exclusively licensing GPT-3 to Microsoft

CNN: Elon Musk criticizes OpenAI exclusively licensing GPT-3 to Microsoft. “Tesla (TSLA) CEO Elon Musk doesn’t seem to approve of Microsoft’s deal with OpenAI — the research company he co-founded in 2015. The Tesla and SpaceX founder criticized Microsoft (MSFT) in a tweet following news that the company acquired an exclusive license for GPT-3, a language model created by OpenAI that generates human-like text.”

Slate: Language-Generating A.I. Is a Free Speech Nightmare

Slate: Language-Generating A.I. Is a Free Speech Nightmare. “In addition to targeted harassment, those looking to control public debate use a technique called ‘flooding’ to drown out speech they object to and distort the information environment. Flooding involves producing a significant amount of content to distract, confuse, and discredit. Take the creation and dissemination of ‘fake news’ in the United States: People both abroad and at home churn out stories that combine fact and fiction, undermining mainstream news organizations while distracting and confusing the public. By automating much of the writing process, sophisticated language models such as GPT-3 could significantly increase the effectiveness of flooding operations.”

MIT Technology Review: OpenAI is giving Microsoft exclusive access to its GPT-3 language model

MIT Technology Review: OpenAI is giving Microsoft exclusive access to its GPT-3 language model. “The companies say OpenAI will continue to offer its public-facing API, which allows chosen users to send text to GPT-3 or OpenAI’s other models and receive its output. Only Microsoft, however, will have access to GPT-3’s underlying code, allowing it to embed, repurpose, and modify the model as it pleases.”
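For context, the public-facing API the piece mentions is a plain text-in, text-out interface for approved users. Below is a minimal sketch using the openai Python client as it worked during the GPT-3 beta (the Completion endpoint with an engine name); the key, engine choice, and prompt are just placeholders:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # issued to approved beta users

response = openai.Completion.create(
    engine="davinci",  # the largest GPT-3 engine exposed through the API
    prompt="Summarize in one sentence: OpenAI licensed GPT-3 to Microsoft.",
    max_tokens=64,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```

That’s the asymmetry the article is pointing at: API users only ever send text and receive output, while the exclusive license covers the underlying model itself.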

Harvard Business Review: The Next Big Breakthrough in AI Will Be Around Language

Harvard Business Review: The Next Big Breakthrough in AI Will Be Around Language. “The 2010s produced breakthroughs in vision-enabled technologies, from accurate image searches on the web to computer vision systems for medical image analysis or for detecting defective parts in manufacturing and assembly, as we described extensively in our book and research. GPT-3, developed by OpenAI, indicates that the 2020s will be about major advances in language-based AI tasks.”

CNN: This buzzy new AI can make human-sounding recipes, but they still taste gross

CNN: This buzzy new AI can make human-sounding recipes, but they still taste gross. “Last week I whipped up a batch of watermelon cookies. The recipe called for watermelon, of course, along with sugar, flour, an egg white, and a few other ingredients. The directions were pretty simple: stir the watermelon gently in a saucepan filled with sugar water over medium-high heat, add in the egg white, and mix in flour, baking powder and salt. The result was barely edible. It looked more like a watermelon omelette muffin than a cookie, and tasted like a sugary, gloopy nightmare. My four-year-old daughter was the only fan in our house, saying they tasted ‘weird’ but also protesting when I threw them in the compost.”

The Next Web: New AI project captures Jane Austen’s thoughts on social media

The Next Web: New AI project captures Jane Austen’s thoughts on social media. “The project — called AI|Writer — uses OpenAI’s new text generator API to create simulated conversations with virtual historical figures. The system first works out the purpose of the message and the intended recipient by searching for patterns in the text. It then uses the API’s internal knowledge of that person to guess how they would respond in their written voice.”
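As a hypothetical sketch of the two steps described there — guessing the recipient from the letter, then building a prompt that asks the model to reply in that person’s voice — something like the following would do; the parsing heuristic and wording are invented, since AI|Writer’s actual code isn’t public:

```python
import re

def guess_recipient(message: str) -> str:
    """Crude heuristic: take the name after a salutation like 'Dear ...'."""
    match = re.search(r"Dear\s+([A-Z][\w .]+?)[,\n]", message)
    return match.group(1).strip() if match else "Unknown"

def build_prompt(message: str) -> str:
    """Wrap the letter in a prompt asking for a reply in the recipient's voice."""
    recipient = guess_recipient(message)
    return (
        f"The following is a letter addressed to {recipient}.\n\n"
        f"{message}\n"
        f"Write {recipient}'s reply, in their own written voice:\n"
    )

letter = "Dear Jane Austen,\nWhat do you make of social media?\n"
print(build_prompt(letter))  # this prompt would then be sent to the text API
```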

SiliconANGLE: OpenAI debuts Jukebox, a machine learning framework that creates its own music

SiliconANGLE: OpenAI debuts Jukebox, a machine learning framework that creates its own music. “Artificial intelligence research outfit OpenAI Inc. has published a new machine learning framework that can generate its own music after being trained on raw audio. The new tool is called Jukebox, and the results are pretty impressive. Although the songs it made don’t quite sound like the real thing, they’re very close approximations to the originals.”

MIT Technology Review: The messy, secretive reality behind OpenAI’s bid to save the world

MIT Technology Review: The messy, secretive reality behind OpenAI’s bid to save the world. “The implication is that [Artificial General Intelligence] could easily run amok if the technology’s development is left to follow the path of least resistance. Narrow intelligence, the kind of clumsy AI that surrounds us today, has already served as an example. We now know that algorithms are biased and fragile; they can perpetrate great abuse and great deception; and the expense of developing and running them tends to concentrate their power in the hands of a few. By extrapolation, AGI could be catastrophic without the careful guidance of a benevolent shepherd. OpenAI wants to be that shepherd, and it has carefully crafted its image to fit the bill.” Deep, disturbing dive.

TechCrunch: MuseNet generates original songs in seconds, from Bollywood to Bach (or both)

TechCrunch: MuseNet generates original songs in seconds, from Bollywood to Bach (or both). “Have you ever wanted to hear a concerto for piano and harp, in the style of Mozart by way of Katy Perry? Well, why not? Because now you can, with OpenAI’s latest (and blessedly not potentially catastrophic) creation, MuseNet. This machine learning model produces never-before-heard music based on its knowledge of artists and a few bars to fake it with.” There’s a demo on the MuseNet page that lets you generate short pieces yourself. I tried it with “Let It Go” since that’s pretty simple and well known, and it basically sounded like someone trying to play “Let It Go” but constantly forgetting how it went.