University of Michigan: Biases in large image-text AI model favor wealthier, Western perspectives. “AI model that pairs text, images performs poorly on lower-income or non-Western images, potentially increasing inequality in digital technology representation.”
WIRED: The Generative AI Copyright Fight Is Just Getting Started. “The biggest fight of the generative AI revolution is headed to the courtroom—and no, it’s not about the latest boardroom drama at OpenAI. Book authors, artists, and coders are challenging the practice of teaching AI models to replicate their skills using their own work as a training manual. The debate centers on the billions of works underpinning the impressive wordsmithery of tools like ChatGPT, the coding prowess of Github’s Copilot, and artistic flair of image generators like that of startup Midjourney.”
Search Engine Journal: OpenAI Investigates ‘Lazy’ GPT-4 Complaints On Google Reviews, X. “OpenAI, the company that launched ChatGPT a little over a year ago, has recently taken to social media to address concerns regarding the ‘lazy’ performance of GPT-4 on social media and Google Reviews.”
404 Media: Civitai and OctoML Introduce Radical New Measures to Stop Abuse After 404 Media Investigation. “Civitai, a text-to-image AI model sharing platform, is seeking a new cloud computing provider and instructing its millions of users to complain to its current provider, OctoML, after OctoML decided to end its business relationship with Civitai entirely, after a 404 Media investigation.”
University of California Davis: Google Weed View? Professor Trains Computer to Spot Invasive Weed. “Using photos from Google’s Street View database, UC Davis researchers have tracked down over 2,000 cases of johnsongrass in the Western United States for a fraction of the cost and time that it would take to do drive-by or other in-person surveys. They call their tool Google Weed View.”
Ohio State University: ChatGPT often won’t defend its answers – even when it is right. “A team at The Ohio State University challenged large language models (LLMs) like ChatGPT to a variety of debate-like conversations in which a user pushed back when the chatbot presented a correct answer. Through experimenting with a broad range of reasoning puzzles including math, common sense and logic, the study found that when presented with a challenge, the model was often unable to defend its correct beliefs, and instead blindly believed invalid arguments made by the user.”
TechCrunch: Early impressions of Google’s Gemini aren’t great. “This week, Google took the wraps off of Gemini, its new flagship generative AI model meant to power a range of products and services including Bard, Google’s ChatGPT competitor. In blog posts and press materials, Google touted Gemini’s superior architecture and capabilities, claiming that the model meets or exceeds the performance of other leading gen AI models like OpenAI’s GPT-4. But the anecdotal evidence suggests otherwise.”
NiemanLab: The press adopts a new level of transparency around images. “The press has often been light on contextual information and details about the images they use. Typically, publications only provide the reader with a tiny gray caption, perhaps with a name and maybe some context related to its use or production method or where it was found, such as ‘illustration,’ ‘archival photo,’ ‘photo,’ or ‘social media.’ … A newfound level of transparency around images could be vital in educating the press and the public about images and their credibility.”
Tulane University: Tulane showcases AI expertise through new online hub. “The website also includes guidelines for ethical and responsible use of AI, a news section highlighting AI research at Tulane and a section where the Tulane community can learn about upcoming workshops and training opportunities. In the coming months, the site will feature Tulane’s latest findings on how artificial intelligence can better support its research and teaching missions as well as its students and faculty. It will also spotlight how documentation and proposals are prepared via AI and how data and other scholarly materials are accessed and organized.”
North Carolina State University: New HS Curriculum Teaches Color Chemistry and AI Simultaneously. “North Carolina State University researchers have developed a weeklong high school curriculum that helps students quickly grasp concepts in both color chemistry and artificial intelligence – while sparking their curiosity about science and the world around them.”
404 Media: a16z Funded AI Platform Generated Images That “Could Be Categorized as Child Pornography,” Leaked Documents Show. “OctoML, a Seattle-based startup that helps companies optimize and deploy their machine learning models, debated internally whether it was ethical and legally risky for it to generate images for Civitai, an AI model sharing and image generating platform backed by venture capital firm Andreessen Horowitz, after it discovered Civitai generated content that OctoML co-founder Thierry Moreau said ‘could be categorized as child pornography,’ according to internal OctoML Slack messages and documents viewed by 404 Media.”
TechCrunch: Google’s best Gemini demo was faked. “Google’s new Gemini AI model is getting a mixed reception after its big debut yesterday, but users may have less confidence in the company’s tech or integrity after finding out that the most impressive demo of Gemini was pretty much faked.” I’m getting Google Duo vibes.
University of Southern California: New report combines social work and artificial intelligence to address racial bias in housing for people experiencing homelessness. “Racial inequities and the impacts of systemic bias are starkly evident in the population of people experiencing homelessness in Los Angeles, but a new report details a proposed method of collaboration between human and technological systems that could eliminate racial bias in housing allocation.”
VentureBeat: Runway ML is partnering with Getty Images on new AI video models for Hollywood and advertising. “Runway ML, the New York City video AI startup backed by Google and Nvidia, continues to entrench itself in the marketplace of the future. Today, the company announced it is partnering with Getty Images, one of the largest repositories of paid stock imagery and editorial imagery in the world, to develop a new generative AI video model: Runway <> Getty Images Model (RGM).”
Cornell University: Newly released open-source platform cuts costs for running AI. “Cornell researchers have released a new, open-source platform called Cascade that can run artificial intelligence (AI) models in a way that slashes expenses and energy costs while dramatically improving performance. Cascade is designed for settings like smart traffic intersections, medical diagnostics, equipment servicing using augmented reality, digital agriculture, smart power grids and automatic product inspection during manufacturing – situations where AI models must react within a fraction of a second.”