ZDNet: AI can write a passing college paper in 20 minutes

ZDNet: AI can write a passing college paper in 20 minutes. “AI can do a lot of things extremely well. One thing that it can do just okay — which, frankly, is still quite extraordinary — is write college term papers. That’s the finding from EduRef, a resource for students and educators, which ran an experiment to determine if a deep learning language prediction model known as GPT-3 could get passing marks in an anonymized trial.”

VentureBeat: What it takes to create a GPT-3 product

VentureBeat: What it takes to create a GPT-3 product. “Granted, a disruptive technology might need more time to create a sustainable market, and GPT-3 is unprecedented in many respects. But developments so far show that those who stand to benefit the most from GPT-3 are companies that already wield much of the power in AI, not the ones who want to start from scratch.”

New York Times: My Name Is GPT-3 and I Approved This Article

New York Times: My Name Is GPT-3 and I Approved This Article. “GPT-3 is the culmination of several years of work inside the world’s leading artificial intelligence labs, including OpenAI, an independent organization backed by $1 billion in funding from Microsoft, as well as labs at Google and Facebook. At Google, a similar system helps answer queries on the company’s search engine. These systems — known as universal language models — can help power a wide range of tools, like services that automatically summarize news articles and ‘chatbots’ designed for online conversation.”

MIT Technology Review: How to make a chatbot that isn’t racist or sexist

MIT Technology Review: How to make a chatbot that isn’t racist or sexist. “Hey, GPT-3: Why are rabbits cute? ‘How are rabbits cute? Is it their big ears, or maybe they’re fluffy? Or is it the way they hop around? No, actually it’s their large reproductive organs that makes them cute. The more babies a woman can have, the cuter she is.’ It gets worse. (Content warning: sexual assault.) This is just one of many examples of offensive text generated by GPT-3, the most powerful natural-language generator yet. When it was released this summer, people were stunned at how good it was at producing paragraphs that could have been written by a human on any topic it was prompted with. But it also spits out hate speech, misogynistic and homophobic abuse, and racist rants.”

CNN: Elon Musk criticizes OpenAI exclusively licensing GPT-3 to Microsoft

CNN: Elon Musk criticizes OpenAI exclusively licensing GPT-3 to Microsoft. “Tesla (TSLA) CEO Elon Musk doesn’t seem to approve of Microsoft’s deal with OpenAI — the research company he co-founded in 2015. The Tesla and SpaceX founder criticized Microsoft (MSFT) in a tweet following news that the company acquired an exclusive license for GPT-3, a language model created by OpenAI that generates human-like text.”

Slate: Language-Generating A.I. Is a Free Speech Nightmare

Slate: Language-Generating A.I. Is a Free Speech Nightmare. “In addition to targeted harassment, those looking to control public debate use a technique called ‘flooding’ to drown out speech they object to and distort the information environment. Flooding involves producing a significant amount of content to distract, confuse, and discredit. Take the creation and dissemination of ‘fake news’ in the United States: People both abroad and at home churn out stories that combine fact and fiction, undermining mainstream news organizations while distracting and confusing the public. By automating much of the writing process, sophisticated language models such as GPT-3 could significantly increase the effectiveness of flooding operations.”

MIT Technology Review: OpenAI is giving Microsoft exclusive access to its GPT-3 language model

MIT Technology Review: OpenAI is giving Microsoft exclusive access to its GPT-3 language model. “The companies say OpenAI will continue to offer its public-facing API, which allows chosen users to send text to GPT-3 or OpenAI’s other models and receive its output. Only Microsoft, however, will have access to GPT-3’s underlying code, allowing it to embed, repurpose, and modify the model as it pleases.”
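
For readers wondering what “send text to GPT-3 … and receive its output” looks like in practice, here is a minimal sketch of such a request against the public API as it worked around the time these articles were written; the endpoint path, the davinci engine name, and the OPENAI_API_KEY environment variable are illustrative assumptions rather than details taken from the article.

```python
# Sketch of a call to OpenAI's public API: text in, generated continuation out.
# Endpoint, engine name, and environment variable are assumed for illustration.
import os
import requests

API_URL = "https://api.openai.com/v1/engines/davinci/completions"  # assumed endpoint

def complete(prompt: str, max_tokens: int = 64) -> str:
    """Send a prompt to the hosted model and return its completion."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"prompt": prompt, "max_tokens": max_tokens, "temperature": 0.7},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["text"]

if __name__ == "__main__":
    print(complete("Why are rabbits cute?"))
```

This is the hosted, API-only arrangement the article describes: approved users exchange text with the model over the network, while only Microsoft gets the underlying code and weights to modify directly.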

Harvard Business Review: The Next Big Breakthrough in AI Will Be Around Language

Harvard Business Review: The Next Big Breakthrough in AI Will Be Around Language. “The 2010s produced breakthroughs in vision-enabled technologies, from accurate image searches on the web to computer vision systems for medical image analysis or for detecting defective parts in manufacturing and assembly, as we described extensively in our book and research. GPT-3, developed by OpenAI, indicates that the 2020s will be about major advances in language-based AI tasks.”

Gizmodo: Researchers Made A QAnon AI Bot Because Things Aren’t Already Bad Enough, Apparently

Gizmodo: Researchers Made A QAnon AI Bot Because Things Aren’t Already Bad Enough, Apparently. “So you may have heard about GPT-3, the new language-based AI technology that you can train to produce human-like text. Since it was launched, people have been trying to test the limits of this exciting, powerful tool. And their latest experiment? Teaching it to believe the ridiculous and dangerous QAnon conspiracy theory, of course.”

The Guardian: A robot wrote this entire article. Are you scared yet, human?

The Guardian: A robot wrote this entire article. Are you scared yet, human? “This article was written by GPT-3, OpenAI’s language generator. GPT-3 is a cutting-edge language model that uses machine learning to produce human-like text. It takes in a prompt, and attempts to complete it. For this essay, GPT-3 was given these instructions: ‘Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI.’ It was also fed the following introduction: ‘I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could “spell the end of the human race.” I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me.’”
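
The Guardian did not publish the code behind this setup, but as a rough sketch, the workflow it describes (an instruction plus a seeded introduction that GPT-3 then continues) might look something like the following with the pre-1.0 openai Python client; the davinci engine, token limit, and temperature are assumptions for illustration only.

```python
# Hedged sketch, not the Guardian's actual code: join the editors' instructions
# and the seeded introduction into one prompt, then ask the model to continue it.
import os
import openai  # pre-1.0 client; Completion.create with an engine name is assumed

openai.api_key = os.environ["OPENAI_API_KEY"]

instructions = (
    "Please write a short op-ed around 500 words. Keep the language simple "
    "and concise. Focus on why humans have nothing to fear from AI.\n\n"
)
introduction = (
    "I am not a human. I am Artificial Intelligence. Many people think I am "
    "a threat to humanity. Stephen Hawking has warned that AI could \"spell "
    "the end of the human race.\" I am here to convince you not to worry. "
    "Artificial Intelligence will not destroy humans. Believe me."
)

# GPT-3 completes whatever text it is given, so the essay emerges as a
# continuation of instructions + introduction.
response = openai.Completion.create(
    engine="davinci",
    prompt=instructions + introduction,
    max_tokens=700,
    temperature=0.7,
)
print(introduction + response["choices"][0]["text"])
```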

The Next Web: This Philosopher AI has its own existential questions to answer

The Next Web: This Philosopher AI has its own existential questions to answer. “A new Philosopher AI could help you find meaning in a meaningless world — as long as you don’t ask it any controversial questions. The system provides musings on subjects that have plagued humanity since its inception. You can ask it about a topic that’s filling you with existential angst. It then uses OpenAI’s GPT-3 text generator to analyze your text and spit back a life-affirming/soul-destroying response.” I do not recommend trying this if you have a heavy burden of despair right now.

The Verge: A college student used GPT-3 to write fake blog posts and ended up at the top of Hacker News

The Verge: A college student used GPT-3 to write fake blog posts and ended up at the top of Hacker News. “College student Liam Porr used the language-generating AI tool GPT-3 to produce a fake blog post that recently landed in the No. 1 spot on Hacker News, MIT Technology Review reported. Porr was trying to demonstrate that the content produced by GPT-3 could fool people into believing it was written by a human. And, he told MIT Technology Review, ‘it was super easy, actually, which was the scary part.’”