PetaPixel: Getty Images Bans AI-Generated Pictures, Shutterstock Following Suit. “Getty Images has announced it will not accept submissions that were created with AI-image generators and will remove all such artworks. The world’s largest repository of images shared with PetaPixel the note sent to contributors stating that images generated from artificially intelligent (AI) image generators such as Stable Diffusion, DALL-E, and Midjourney will not be allowed on the site.”

Ars Technica: AI model from OpenAI automatically recognizes speech and translates it to English. “On Wednesday, OpenAI released a new open source AI model called Whisper that recognizes and translates audio at a level that approaches human recognition ability. It can transcribe interviews, podcasts, conversations, and more.”

Ars Technica: AI software helps bust image fraud in academic papers. “During a trial that ran from January 2021 to May 2022, [American Association for Cancer Research] used Proofig to screen 1,367 papers accepted for publication, according to The Register. Of those, 208 papers required author contact to clear up issues such as mistaken duplications, and four papers were withdrawn.”

Vanity Fair: Darth Vader’s Voice Emanated From War-Torn Ukraine. “Bogdan Belyaev was working from home when the air raid sirens went off. They hadn’t been heard in the city of Lviv since World War II, but it was February 24, and Russia had just invaded Ukraine…. But for Belyaev, work carried on because he needed it to. People on the other side of the world were relying on him, and the project was the culmination of a passion he’d had since childhood: Star Wars.”

Maryland Today: How AI Could Help Writers Spot Stereotypes. “Studious Asians, sassy yet helpless women and greedy shopkeepers: These tired stereotypes of literature and film not only often offend the people they caricature, but can drag down what might otherwise have been a compelling narrative. Researchers at the University of Maryland’s Human-Computer Interaction Lab are working to combat these clichés with the creation of DramatVis Personae (DVP), a web-based visual analytics system powered by artificial intelligence that helps writers identify stereotypes they might be unwittingly giving fictional form among their cast of characters (or dramatis personae).”

The Verge: Here’s Krafton’s virtual human Ana in action. “Earlier this year, Krafton — the company best known for the battle royale shooter PUBG — unveiled what it described as a ‘hyper-realistic’ virtual human. Alongside those first images and details were some big plans to turn Ana, as she’s known, into a virtual star. Now we can see what that looks like with a brand-new music video.”

Nation Thailand: DITP launches new AI tool to evaluate Thailand’s trade prospects. “Phusit Rattanakul Seriroengrit, [Department of International Trade Promotion]’s director general, said on Friday that the DITP Business AI tool can analyse products in five categories, including agriculture, food, lifestyle and fashion, health and beauty, and industrial sectors. He said the system features a global trade analytics option which predicts export trends in the short term (three months) and long term (12 months).”

KnowTechie: It’s not me, it’s you: Why I’m breaking up with Otter.ai. “Otter.ai is an automated service. Unlike other industries, it isn’t battling rampant wage inflation. The biggest variable that influences the cost-per-transaction is computing power, which is unbelievably cheap. Sure, the big three cloud providers (Microsoft, Google, and Amazon) have all recently hiked their prices in light of supply chain woes and soaring energy costs. But not by that much.”

UGA Today: Did my computer say it best? “With autocorrect and auto-generated email responses, algorithms offer plenty of assistance to help people express themselves. But new research from the University of Georgia shows people who rely on computer algorithms for assistance with language-related, creative tasks didn’t improve their performance and were more likely to trust low-quality advice.”

Mint: India is building a database for companies to train AI models: Rajeev Chandrasekhar. “India is building a large database of anonymized non-personal data for Indian companies and startups that are using artificial intelligence (AI), said Rajeev Chandrasekhar, minister of state (MoS) for Electronics and Information Technology, at the Global Fintech Fest (GFF), an industry event, held in Mumbai on Wednesday.”

NewsWise: Even smartest AI models don’t match human visual processing. “Deep convolutional neural networks (DCNNs) don’t see objects the way humans do – using configural shape perception – and that could be dangerous in real-world AI applications, says Professor James Elder, co-author of a York University study published today.”

Cornell Chronicle: Do trucks mean Trump? AI shows how humans misjudge images. “Researchers from Cornell and partner institutions analyzed more than 16 million human predictions of whether a neighborhood voted for Joe Biden or Donald Trump in the 2020 presidential election based on a single Google Street View image. They found that humans as a group performed well at the task, but a computer algorithm was better at distinguishing between Trump and Biden country. The study also classified common ways that people mess up, and identified objects – such as pickup trucks and American flags – that led people astray.”

TechCrunch: OpenAI begins allowing users to edit faces with DALL-E 2. “After initially disabling the capability, OpenAI today announced that customers with access to DALL-E 2 can upload people’s faces to edit them using the AI-powered image-generating system. Previously, OpenAI only allowed users to work with and share photorealistic faces and banned the uploading of any photo that might depict a real person, including photos of prominent celebrities and public figures.”

Stanford Medicine: Training physicians and algorithms in dermatology diversity. “There’s a long-standing challenge in dermatology: Textbooks, databases, journals and lectures are largely bereft of images that feature darker skin. Their absence can cause gaps in clinical expertise and in diagnosis, as symptoms of a disease don’t necessarily appear the same on all skin tones. Physicians trained to identify signs of illness on lighter shades can overlook them in people with a darker complexion, and algorithms trained on a sea of beige pictures may miss signs of disease when evaluating images from a patient with brown skin.”
