CNBC: ChatGPT’s ‘jailbreak’ tries to make the A.I. break its own rules, or die

“ChatGPT creator OpenAI instituted an evolving set of safeguards, limiting ChatGPT’s ability to create violent content, encourage illegal activity, or access up-to-date information. But a new ‘jailbreak’ trick allows users to skirt those rules by creating a ChatGPT alter ego named DAN that can answer some of those queries.”
