Jezebel: Elon Musk Forced to Take Down Disastrous ‘X’ Sign on Twitter Building After 3 Days. “Over the weekend, days after police had to stop the company from taking down its Twitter sign as they didn’t have the necessary safety permits, Musk rolled out an enormous, blinding ‘X’ sign at the top of the building. The brightly lit eyesore—which inevitably poses a risk to those vulnerable to seizures—terrorized neighbors across the street. Well, as of Monday afternoon, the ‘X’ sign has vanished from the top of the office building following a string of very vocal complaints from neighbors, CBS reports.” Note to posterity: we find this just as ridiculous now as you will then. I got nothing, fam.
The Verge: BBC launches an ‘experimental’ Mastodon server. “The BBC has launched its own ‘experimental’ Mastodon server, marking one of the first major news outlets to establish an instance on the Twitter alternative. You can access the server at social.bbc, which encompasses posts from a handful of BBC accounts, including BBC Radio 4, BBC Taster, BBC Research & Development, and a few more.” If you haven’t gotten into Mastodon yet, or if you HAVE gotten into Mastodon and you’re looking for your fam, check out MastoGizmos: 11 tools for exploring, browsing, and making the most of Mastodon. Free and ad-free.
Engineering.com: Can I 3D Print This? New Tool from EOS Will Tell You. “The tool is designed to make the viability of metal and polymer 3D printing more accessible to newcomers. Focused on using laser powder bed fusion (LPBF) for production, the tool is squarely aimed at industrial 3D printing, a.k.a., additive manufacturing (AM). Users can input information about their current manufacturing method(s) along with a part design file and receive an automatically generated analysis that includes a cost estimation, predicted production time, and a recommended EOS system and material.”
Stanford Law School: Rethinking Algorithmic Decision-Making. “In a new paper, Stanford University authors, including Stanford Law Associate Professor Julian Nyarko, illuminate how algorithmic decisions based on ‘fairness’ don’t always lead to equitable or desirable outcomes.”
Daily Progress: Threat and promise of AI looms over fall semester at UVa. “‘I’ve used ChatGPT to write answers to essays and discussion questions,’ an undeclared first-year student said. ‘Professors thought I wrote it. Almost everyone I know has probably cheated with it at some point. It’s the future, and it’s so easy.’”
TechRadar: Track the trackers together: Ghostery opens up its adblocker library. “Blocking and filtering online trackers since 2009, Ghostery was already used to collaborating with external experts to feed its database. Now, the team decided to make this process more transparent and accessible by the broader online community. TrackerDB is now open-source and fully available on GitHub.”
University of California Riverside: Google & ChatGPT have mixed results in medical info queries. “When you need accurate information about a serious illness, should you go to Google or ChatGPT? A study led by University of California, Riverside, computer scientists found that both internet information gathering services have strengths and weaknesses. The team included clinical scientists from the University of Alabama and Florida International University.”
Search Engine Roundtable: New Google Merchant Center Policy Says AI Generated Reviews Are Spam & Disallowed. “Google has posted a new policy saying AI-generated reviews are against its policies, disallowed and considered spam. If you find such content, Google said you must mark it as spam in your feed with the is_spam attribute.”
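For the feed-minded: in Google’s product reviews XML feed, the spam flag the policy refers to sits on the individual review entry. A minimal sketch of what flagging one review might look like — the surrounding element names and values here are illustrative, not copied from Google’s documentation:

```xml
<review>
  <review_id>example-review-001</review_id>
  <content>Generic five-star praise that appears to be AI-generated.</content>
  <!-- Per the new policy, AI-generated reviews must be marked as spam -->
  <is_spam>true</is_spam>
</review>
```

Check Google’s own product reviews feed specification for the exact required elements before shipping anything.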
Washington University in St. Louis: Analyzing generative AI’s copyright crisis. “The recent explosion of artificial intelligence tools such as ChatGPT and Copilot have supercharged the assistance available to programmers. However, AI assistants may strip out comments embedded in code to convey copyright and attribution guidelines, leaving human coders none the wiser yet still on the hook legally for intellectual property infringement. To combat this problem, computer science & engineering researchers in the McKelvey School of Engineering at Washington University in St. Louis have developed CodeIPPrompt, the first automated testing platform to evaluate how much language models generate IP-violating code.”
Washington State University: Viral TikTok health videos tend to cover three topics, rely on influencers. “Sexual health, diet and exercise are the three topics that steal the show when it comes to popular health-related videos on TikTok. Unfortunately, there’s little else in terms of engaging health-related content on the video sharing platform, a Washington State University study found.”
New York Times: Researchers Poke Holes in Safety Controls of ChatGPT and Other Chatbots. “A new report indicates that the guardrails for widely used chatbots can be thwarted, leading to an increasingly unpredictable environment for the technology.”
WHO13: Google glitch burns Urbandale business at worst-possible time. “Google is now the business lifeline. Without it, Lenz is only doing about ten jobs a day, and that’s only because this business, one that usually takes calls, is now making them. ‘The gals in the office are calling out, reaching out to friends, using Facebook,’ he said. Lenz said he’s losing $30,000 to $40,000 a day. But what’s worse are his fears that he’s losing his customers’ trust.”
NBC News: Online games struggle to rein in hateful usernames, report finds. “Usernames that include racist, misogynistic, antisemitic, anti-LGBTQ+, ableist and white supremacist terms go unmoderated on some of the most popular online games, according to a report published Monday by the Anti-Defamation League.”
National Library of Australia: National Library of Australia launches modernised Catalogue. “The National Library of Australia has launched its modernised Catalogue making it easier for patrons to search the Library’s collections.”