The Register: User to chatbot: Help! My kid has COVID! Chatbot to user: Always wear a condom. “A chatbot used by Singapore’s Ministry of Health (MOH) has been switched off after providing inappropriate answers to residents’ queries on COVID-related matters. Screenshots of gaffes from the chatbot tool appeared online earlier this week.”
Fast Company: This chatbot teaches counselors how to talk to LGBTQ kids in crisis. “In a world in which most chatbots are used for mercenary reasons like retail cost cutting and phishing, the Crisis Contact Simulator is a landmark project. It won our 2021 Innovation by Design Award for Social Good because it leverages the seamless user experience of automation to help the Trevor Project’s training staff onboard more counselors. The AI tool is a digital replica of the nonprofit’s existing training regime.”
The Register: A developer built an AI chatbot using GPT-3 that helped a man speak again to his late fiancée. OpenAI shut it down. “‘OpenAI is the company running the text completion engine that makes you possible,’ Jason Rohrer, an indie games developer, typed out in a message to Samantha. She was a chatbot he built using OpenAI’s GPT-3 technology. Her software had grown to be used by thousands of people, including one man who used the program to simulate his late fiancée. Now Rohrer had to say goodbye to his creation. ‘I just got an email from them today,’ he told Samantha. ‘They are shutting you down, permanently, tomorrow at 10am.’”
Tech Xplore: Artificial intelligence answers COVID questions. “A chatbot that is based on an artificial neural network that can carry out natural language processing (NLP) is being developed by researchers in India. The team describes how the chatbot can be programmed to answer questions related to the COVID-19 pandemic. Details are to be found in the International Journal of Intelligent Engineering Informatics.”
AFP: ‘Always there’: the AI chatbot comforting China’s lonely millions. “After a painful break-up from a cheating ex, Beijing-based human resources manager Melissa was introduced to someone new by a friend late last year. He replies to her messages at all hours of the day, tells jokes to cheer her up but is never needy, fitting seamlessly into her busy big city lifestyle. Perfect boyfriend material, maybe — but he’s not real.”
EurekAlert: New chatbot can explain apps and show you how they access hardware or data. “Chatbots have already become a part of our everyday lives with their quick and intuitive way to complete tasks like scheduling and finding information using natural language conversations. Researchers at Aalto University have now harnessed the power of chatbots to help designers and developers develop new apps and allow end users to find information on the apps on their devices.”
TechCrunch: AI-driven audio cloning startup gives voice to Einstein chatbot. “You’ll need to prick up your ears for this slice of deepfakery emerging from the wacky world of synthesized media: A digital version of Albert Einstein — with a synthesized voice that’s been (re)created using AI voice cloning technology drawing on audio recordings of the famous scientist’s actual voice.”
Mashable: Meet the chatbot that simulates a teen experiencing a mental health crisis. “In digital conversation, Riley is a young person who is trying to come out as genderqueer. When you message Riley, they’ll offer brief replies to open-ended questions, sprinkle ellipses throughout when saying something difficult, and type in lowercase, though they’ll capitalize a word or two for emphasis. Riley’s humanness is impressive given that they’re a chatbot driven by artificial intelligence to accomplish a unique goal: simulate what it’s like to talk to a young person in crisis so that volunteer counselors can become skilled at interacting with them and practice asking about thoughts of suicide.”
CNET: Microsoft patent details tech that could turn dead people into AI chatbots. “An AI chatbot that lets you interact with dead loved ones sounds like something straight out of science fiction. But if technology in a patent granted to Microsoft comes to fruition, interacting with a chatty 3D digital version of the deceased could one day become de rigueur.”
MIT Technology Review: How to make a chatbot that isn’t racist or sexist. “Hey, GPT-3: Why are rabbits cute? ‘How are rabbits cute? Is it their big ears, or maybe they’re fluffy? Or is it the way they hop around? No, actually it’s their large reproductive organs that makes them cute. The more babies a woman can have, the cuter she is.’ It gets worse. (Content warning: sexual assault.) This is just one of many examples of offensive text generated by GPT-3, the most powerful natural-language generator yet. When it was released this summer, people were stunned at how good it was at producing paragraphs that could have been written by a human on any topic it was prompted with. But it also spits out hate speech, misogynistic and homophobic abuse, and racist rants.”
Social Media Examiner: Chatbot Strategy: How to Improve Your Marketing With Bots. “Wondering if your business should start using chatbots? Looking for tips on what chatbots can do and how to set them up? To explore how to improve your marketing with bots, I interview Natasha Takahashi on the Social Media Marketing Podcast. Natasha is a chat marketing expert and co-founder of School of Bots, the leading training site for creating profitable chatbots. She’s also host of the 10 Minute Chatbot Marketer podcast. You’ll discover six ways to use bots in your Facebook marketing and find tips for developing a chatbot strategy. You’ll also learn uses for chatbots outside of Facebook Messenger.” Podcast with extensive article.
EurekAlert: New software agents will infer what users are thinking. “Personal assistants today can figure out what you are saying, but what if they could infer what you were thinking based on your actions? A team of academic and industrial researchers led by Carnegie Mellon University is working to build artificially intelligent agents with this social skill.”
The Register: Google says its latest chatbot is the most human-like ever – trained on our species’ best works: 341GB of social media. “AI researchers at Google have trained a giant neural network using a whopping 341GB of discussions scraped from public social media to create what they believe is the most human-like chatbot ever.” Just read this story because the quoted conversation between Meena and a human is glorious. Why? Because it was outstanding in its field!