The Atlantic: AI’s Spicy-Mayo Problem

The Atlantic: AI’s Spicy-Mayo Problem. “In recent months, the members of the AI underground have blown up the assumption that access to the technology would remain limited to a select few companies, carefully vetted for potential dangers. They are, for better or worse, democratizing AI—loosening its constraints and pieties with the aim of freeing its creative possibilities.”

Johns Hopkins University: AI Image Generators Can Be Tricked Into Making NSFW Content

Johns Hopkins University: AI Image Generators Can Be Tricked Into Making NSFW Content . “Most online art generators are purported to block violent, pornographic, and other types of questionable content. But Johns Hopkins University researchers manipulated two of the better-known systems to create exactly the kind of images the products’ safeguards are supposed to exclude.”

Ars Technica: Dead grandma locket request tricks Bing Chat’s AI into solving security puzzle

Ars Technica: Dead grandma locket request tricks Bing Chat’s AI into solving security puzzle. “Bing Chat, an AI chatbot from Microsoft similar to ChatGPT, allows users to upload images for the AI model to examine or discuss. Normally, Bing Chat refuses to solve CAPTCHAs, which are visual puzzles designed to prevent automated programs (bots) from filling out forms on the web. On Saturday, X-user Denis Shiryaev devised a visual jailbreak that circumvents Bing Chat’s CAPTCHA filter by tricking it into reading the inscription on his imaginary deceased grandmother’s locket.”

Digital Trends: ChatGPT can now generate working Windows 11 keys for free

Digital Trends: ChatGPT can now generate working Windows 11 keys for free. “In a short time, ChatGPT has amazed the world with the things it can do (and the things it really shouldn’t be able to do). And now it seems we can add creating genuine Windows 10 and Windows 11 keys to the list. All it takes is some clever prompting and you’ll get free access to Microsoft’s operating system.”

Motherboard: The Amateurs Jailbreaking GPT Say They’re Preventing a Closed-Source AI Dystopia

Motherboard: The Amateurs Jailbreaking GPT Say They’re Preventing a Closed-Source AI Dystopia. “OpenAI’s latest version of its popular large language model, GPT-4, is the company’s ‘most capable and aligned model yet,’ according to CEO Sam Altman. Yet, within two days of its release, developers were already able to override its moderation filters, providing users with harmful content that ranged from telling users how to hack into someone’s computer to explaining why Mexicans should be deported. This jailbreak is only the latest in a series that users have been able to run on GPT models.”