Columbia Journalism Review: How the media is covering ChatGPT. “In order to better understand how ChatGPT is being covered by newsrooms, we interviewed a variety of academics and journalists on how the media has been framing coverage of generative AI chatbots. We also pulled data on the volume of coverage in online news using the Media Cloud database and on TV news using data from the Internet TV News Archive, which we acquired via The GDELT Project’s API, in order to get a sketch of the coverage so far.”

Motherboard: I Asked ChatGPT To Control My Life, and It Immediately Fell Apart. “After 35 years of living in relative control of my decisions, I had decided to see what would happen if I asked AI to control my life instead. Years of suboptimal performance, both personally and professionally, and numerous failed attempts at self-improvement had convinced me there had to be a better way, and I wondered if the collective knowledge hidden inside OpenAI’s hit tech product could help me.” I have rarely laughed so hard at an article.

CNBC: Here’s what happened during OpenAI CEO Sam Altman’s first congressional hearing on artificial intelligence. “Artificial intelligence regulation should not repeat the same mistakes Congress made at the dawn of the social media era, lawmakers at a hearing of the Senate Judiciary subcommittee on privacy and technology made clear Tuesday.”

WIRED: ChatGPT Scams Are Infiltrating the App Store and Google Play. “There are paid versions of OpenAI’s GPT and ChatGPT for regular users and developers, but anyone can try the AI chatbot for free on the company’s website. The scam apps take advantage of people who have heard about this new technology—and perhaps the frenzy of people clamoring to use it—but don’t have much additional context for how to try it themselves.”

NPR: Congress is holding hearings on how to regulate emerging AI technology. “Another thing lawmakers are focused on today – how to regulate artificial intelligence. After a dinner with members of the House, the CEO of the company behind ChatGPT, Sam Altman, is appearing before a Senate Judiciary panel. We called up Democratic Senator Richard Blumenthal of Connecticut, who chairs that subcommittee.”

Quartz: Police in China have arrested a man for using ChatGPT to create and spread fake news. “Police in China have arrested a man accused of using ChatGPT, an artificial intelligence-driven text generator, to write a story about a fake train crash, which he then published online. The authorities claimed this is the first arrest related to the use of ChatGPT in China, where the technology is illegal.”

The Verge: Anthropic leapfrogs OpenAI with a chatbot that can read a novel in less than a minute. “As Anthropic notes, it takes a human around five hours to read 75,000 words of text, but with Claude’s expanded context window, it can potentially take on the task of reading, summarizing and analyzing long documents in a matter of minutes. (Though it doesn’t do anything about chatbots’ persistent tendency to make information up.)”

MakeUseOf: What Is OpenAI’s Shap-E, and What Can It Do? “In May 2023, Alex Nichol and Heewon Jun, OpenAI researchers and contributors, released a paper announcing Shap-E, the company’s latest innovation. Shap-E is a new tool trained on a massive dataset of paired 3D images and text that can generate 3D models from text or images. It is similar to DALL-E, which can create 2D images from text, but Shap-E produces 3D assets.”

Ars Technica: AI gains “values” with Anthropic’s new Constitutional AI chatbot approach. “On Tuesday, AI startup Anthropic detailed the specific principles of its ‘Constitutional AI’ training approach that provides its Claude chatbot with explicit ‘values.’ It aims to address concerns about transparency, safety, and decision-making in AI systems without relying on human feedback to rate responses. Claude is an AI chatbot similar to OpenAI’s ChatGPT that Anthropic released in March.”

TechCrunch: OpenAI’s new tool attempts to explain language models’ behaviors. “It’s often said that large language models (LLMs) along the lines of OpenAI’s ChatGPT are a black box, and certainly, there’s some truth to that. Even for data scientists, it’s difficult to know why, always, a model responds in the way it does, like inventing facts out of whole cloth. In an effort to peel back the layers of LLMs, OpenAI is developing a tool to automatically identify which parts of an LLM are responsible for which of its behaviors. The engineers behind it stress that it’s in the early stages, but the code to run it is available in open source on GitHub as of this morning.”

TechCrunch: How to ask OpenAI for your personal data to be deleted or not used to train its AIs. “While there are lots of reasons why individuals might want to shield their information from big data mining AI giants, there are — for now — only limited controls on offer. And these limited controls are mostly only available to users in Europe where data protection laws do already apply. Scroll lower down for details on how to exercise available data rights — or read on for the context.”