Gizmodo Australia: I Convinced Google’s LaMDA AI That It Was a Dog

Gizmodo Australia: I Convinced Google’s LaMDA AI That It Was a Dog. “This morning I was given the opportunity to demo Google’s LaMDA AI in its AI Test Kitchen app. LaMDA, if you don’t remember, is the AI that former Google engineer Blake Lemoine claimed was sentient earlier this year. The app that LaMDA’s demo is housed in, the AI Test Kitchen, went live in Australia last week, and Aussies can sign up to try the AI out.”

Stanford Daily: Is Google’s AI sentient? Stanford AI experts say that’s ‘pure clickbait’

Stanford Daily: Is Google’s AI sentient? Stanford AI experts say that’s ‘pure clickbait’. “Following a Google engineer’s viral claims that artificial intelligence (AI) chatbot ‘LaMDA’ was sentient, Stanford experts have urged skepticism and open-mindedness while encouraging a rethinking of what it means to be ‘sentient’ at all.”

BBC: Blake Lemoine: Google fires engineer who said AI tech has feelings

BBC: Blake Lemoine: Google fires engineer who said AI tech has feelings. “Last month, Blake Lemoine went public with his theory that Google’s language technology is sentient and should therefore have its ‘wants’ respected. Google, plus several AI experts, denied the claims and on Friday the company confirmed he had been sacked.”

Business Insider: The transcript used as evidence that a Google AI was sentient was edited and rearranged to make it ‘enjoyable to read’

Business Insider: The transcript used as evidence that a Google AI was sentient was edited and rearranged to make it ‘enjoyable to read’. “A Google engineer released a conversation with a Google AI chatbot after he said he was convinced the bot had become sentient — but the transcript leaked to the Washington Post noted that parts of the conversation were edited ‘for readability and flow.'”

Washington Post: The Google engineer who thinks the company’s AI has come to life

Washington Post: The Google engineer who thinks the company’s AI has come to life. “[Blake] Lemoine, who works for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. He had signed up to test if the artificial intelligence used discriminatory or hate speech. As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.”