Penn State: Deepfakes expose vulnerabilities in certain facial recognition technology. “Mobile devices use facial recognition technology to help users quickly and securely unlock their phones, make a financial transaction or access medical records. But facial recognition technologies that employ a specific user-detection method are highly vulnerable to deepfake-based attacks that could lead to significant security concerns for users and applications, according to new research involving the Penn State College of Information Sciences and Technology.”

Times of Israel: Google engineer identifies anonymous faces in WWII photos with AI facial recognition. “Walking past the countless photos of Holocaust survivors and victims at Warsaw’s POLIN Museum of the History of Polish Jews in 2016, New York-native Daniel Patt was haunted by the possibility that he was passing the faces of his own relatives without even knowing it…. he set to work creating and developing From Numbers to Names (N2N), an artificial intelligence-driven facial recognition platform that can scan through photos from prewar Europe and the Holocaust, linking them to people living today.”

CNET: Microsoft Restricts Its Facial Recognition Tools, Citing the Need for ‘Responsible AI’. “Microsoft is restricting access to its facial recognition tools, citing risks to society that the artificial intelligence systems could pose. The tech company released a 27-page ‘Responsible AI Standard’ on Tuesday that details the company’s goals toward equitable and trustworthy AI.”

New York Times: Accused of Cheating by an Algorithm, and a Professor She Had Never Met. “A Florida teenager taking a biology class at a community college got an upsetting note this year. A start-up called Honorlock had flagged her as acting suspiciously during an exam in February. She was, she said in an email to The New York Times, a Black woman who had been ‘wrongfully accused of academic dishonesty by an algorithm.’ What happened, however, was more complicated than a simple algorithmic mistake. It involved several humans, academic bureaucracy and an automated facial detection tool from Amazon called Rekognition.”

CNN: House lawmakers voice ‘serious concerns’ about facial recognition used by contractor ID.me. “Two top House lawmakers on Thursday began probing ID.me, a company that uses facial recognition technology to verify identities for many state and federal agencies, over the ‘efficacy, privacy and security’ of its services and products. The move, which comes months after the IRS halted a plan to require taxpayers to use ID.me when logging onto their accounts amid a privacy backlash, further ratchets up scrutiny of the service in Washington.”

Washington Post: Ukraine is scanning faces of dead Russians, then contacting the mothers. “Ukrainian officials have run more than 8,600 facial recognition searches on dead or captured Russian soldiers in the 50 days since Moscow’s invasion began, using the scans to identify bodies and contact hundreds of their families in what may be one of the most gruesome applications of the technology to date.”

SecurityWeek: The Art Exhibition That Fools Facial Recognition Systems. “The most boring art exhibition in the world has been launched online. It comprises just 100 images of the same painting: 100 copies of the Mona Lisa. But all is not what it seems – and that’s the whole point. Humans see 100 identical Mona Lisa images; but facial recognition systems see 100 different celebrities.”

Biometric Update: Clearview facial recognition app up to 20B images. “Clearview 2.0 is described as having a database of more than 20 billion publicly available facial images, and matches photos of suspects, persons of interest and potential victims through AI-powered facial recognition. Notable customers include the FBI, Department of Homeland Security, and hundreds of local agencies, totaling 3,100 law enforcement clients across the U.S.”

The Register: Ukraine using Clearview AI facial recognition technology. “The Ukrainian government is using facial recognition technology from startup Clearview AI to help them identify the dead, reveal Russian assailants, and combat misinformation from the Russian government and its allies. Reuters reported yesterday that the country’s Ministry of Defense began using Clearview’s search engine for faces over the weekend.”

Courthouse News: Italy fines US facial recognition firm Clearview AI. “Italy’s data privacy watchdog on Wednesday fined U.S.-based firm Clearview AI 20 million euros (almost $22 million) over its controversial facial recognition software. The watchdog ordered the company to delete data relating to people in Italy and banned it from further collection and processing of information there.”