The Guardian: ‘There was all sorts of toxic behaviour’: Timnit Gebru on her sacking by Google, AI’s dangers and big tech’s biases

The Guardian: ‘There was all sorts of toxic behaviour’: Timnit Gebru on her sacking by Google, AI’s dangers and big tech’s biases. “The Ethiopian-born computer scientist lost her job after pointing out the inequalities built into AI. But after decades working with technology companies, she knows all too much about discrimination.”

The Globe and Mail: The downside of AI: Former Google scientist Timnit Gebru warns of the technology’s built-in biases

The Globe and Mail: The downside of AI: Former Google scientist Timnit Gebru warns of the technology’s built-in biases. “When Ms. Gebru – who’s 39 and holds a bachelor’s degree in electrical engineering and a PhD in computer vision from Stanford University – started her career, she stood out. She’s Black, a woman and works in an industry famously lacking in diversity. She moved to the U.S. as a teenager to escape the 1998-2000 Eritrean-Ethiopian War. The discrimination she faced after moving and throughout her career has left a lasting mark.”

Ars Technica: AI gains “values” with Anthropic’s new Constitutional AI chatbot approach

Ars Technica: AI gains “values” with Anthropic’s new Constitutional AI chatbot approach. “On Tuesday, AI startup Anthropic detailed the specific principles of its ‘Constitutional AI’ training approach that provides its Claude chatbot with explicit ‘values.’ It aims to address concerns about transparency, safety, and decision-making in AI systems without relying on human feedback to rate responses. Claude is an AI chatbot similar to OpenAI’s ChatGPT that Anthropic released in March.”

CTV News Toronto: Toronto’s pioneer of artificial intelligence quits Google to openly discuss dangers of AI

CTV News Toronto: Toronto’s pioneer of artificial intelligence quits Google to openly discuss dangers of AI. “Geoffrey Hinton, professor at the University of Toronto and the ‘godfather’ of deep learning – a field of artificial intelligence that mimics the human brain – announced his departure from the company on Monday citing the desire to freely discuss the implications of deep learning and artificial intelligence, and the possible consequences if it were utilized by ‘bad actors.’”

Bloomberg: Google’s Rush to Win in AI Led to Ethical Lapses, Employees Say

Bloomberg: Google’s Rush to Win in AI Led to Ethical Lapses, Employees Say. “Shortly before Google introduced Bard, its AI chatbot, to the public in March, it asked employees to test the tool. One worker’s conclusion: Bard was ‘a pathological liar,’ according to screenshots of the internal discussion. Another called it ‘cringe-worthy.’… Google launched Bard anyway.”

Mozilla: Introducing Mozilla.ai: Investing in trustworthy AI

Mozilla: Introducing Mozilla.ai: Investing in trustworthy AI. “The vision for Mozilla.ai is to make it easy to develop trustworthy AI products. We will build things and hire / collaborate with people that share our vision: AI that has agency, accountability, transparency and openness at its core. Mozilla.ai will be a space outside big tech and academia for like-minded founders, developers, scientists, product managers and builders to gather.”

University of Notre Dame: ND TEC launches series of animated videos explaining tech ethics concepts

University of Notre Dame: ND TEC launches series of animated videos explaining tech ethics concepts. “Tech Ethics Animated is a series of short animated videos unpacking central concepts and concerns in the field in a manner intended for a broad audience without an extensive background in technology ethics.” There are six videos; the first was released March 1, and the remaining five will follow weekly.

New-to-me: A University of Calgary Research Guide on Artificial Intelligence

Thanks to the Distant Librarian for pointing me toward this new-to-me research guide on artificial intelligence. From the front page: “This guide has been created for students and instructors to explore how to responsibly and ethically use AI in their work. There is information about how to critically engage with AI tools, examples and further reading on how students and instructors can use AI tools in their work, and information about current AI news, such as Chat GPT.”

Ars Technica: Responsible use of AI in the military? US publishes declaration outlining principles

Ars Technica: Responsible use of AI in the military? US publishes declaration outlining principles. “On Thursday, the US State Department issued a ‘Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,’ calling for ethical and responsible deployment of AI in military operations among nations that develop them. The document sets out 12 best practices for the development of military AI capabilities and emphasizes human accountability.”

Stanford University: Designing Ethical Self-Driving Cars

Stanford University: Designing Ethical Self-Driving Cars. “Ford has a corporate policy that says: Always follow the law. And this project grew out of a few simple questions: Does that policy apply to automated driving? And when, if ever, is it ethical for an AV to violate the traffic laws? As we researched these questions, we realized that in addition to the traffic code, there are appellate decisions and jury instructions that help flesh out the social contract that has developed during the hundred-plus years we’ve been driving cars.”

Futurism: CNET’s AI Journalist Appears to Have Committed Extensive Plagiarism

Futurism: CNET’s AI Journalist Appears to Have Committed Extensive Plagiarism. “The prominent tech news site CNET’s attempt to pass off AI-written work keeps getting worse. First, the site was caught quietly publishing the machine learning-generated stories in the first place. Then the AI-generated content was found to be riddled with factual errors. Now, CNET’s AI also appears to have been a serial plagiarist — of actual humans’ work.” Needless to say, my CNET links will be very basic things, like announcements of Google Easter eggs, and probably not even much of that.

Business Wire: IEEE Introduces New Program for Free Access to AI Ethics and Governance Standards (PRESS RELEASE)

Business Wire: IEEE Introduces New Program for Free Access to AI Ethics and Governance Standards (PRESS RELEASE). “IEEE, the world’s largest technical professional organization dedicated to advancing technology for humanity, and the IEEE Standards Association (IEEE SA) announce the availability of a program that provides free access to global socio-technical standards in AI Ethics and Governance that provide guidance and considerations towards trustworthy AI. Under the IEEE GET Program, selected standards are made available, free of charge, to encourage adoption and use of standards that contribute to advancing technology for humanity in key areas.”

University of York: Study reveals online fake reviewers suffer from pangs of conscience

University of York: Study reveals online fake reviewers suffer from pangs of conscience. “The study, led by the University of York, found individuals to be quite competent in writing compelling fake reviews in unpredictable ways, but it caused a moral dilemma for some. The researchers say the findings of the study could be used by websites to put in place better systems to detect fake reviews, which could appeal to the contributor’s moral obligation to be truthful.”