Science Magazine: What a massive database of retracted papers reveals about science publishing’s ‘death penalty’. “Nearly a decade ago, headlines highlighted a disturbing trend in science: The number of articles retracted by journals had increased 10-fold during the previous 10 years. Fraud accounted for some 60% of those retractions; one offender, anesthesiologist Joachim Boldt, had racked up almost 90 retractions after investigators concluded he had fabricated data and committed other ethical violations. Boldt may have even harmed patients by encouraging the adoption of an unproven surgical treatment. Science, it seemed, faced a mushrooming crisis. The alarming news came with some caveats.”
Motherboard: Scientist Published Papers Based on ‘Rick and Morty’ to Expose Predatory Academic Journals. “Scientists have discovered a way to use magnets to fight back against intergalactic parasites. The trick is that it only works in the Zyrgion simulation. In a paper published in several scientific journals, ‘Newer Tools to Fight Inter-Galactic Parasites and Their Transmissibility in Zyrgion Simulation,’ leading scientist Beth Smith laid out research describing a new method to fight the terrible parasites that live by implanting false memories in their hosts. That is, of course, bullshit.”
Syracuse University: ORI Grant Funds Automated Tool to Detect Potential Fraud in Scientific Papers. “The Office of Research Integrity in the U.S. Department of Health and Human Services has awarded funding to a School of Information Studies (iSchool) professor to further automate the detection of fraudulent material in scientific papers. A grant of $149,310 has been awarded to Daniel Acuna, assistant professor. His project aims to advance the detection process by developing tools and systems, including scalable software and infrastructure and statistical feedback, to be used by integrity investigators. The award was presented for his project, ‘Methods and Tools for Scalable Figure Reuse Detection with Statistical Certainty Reporting.’ Acuna plans to develop a data-searching tool that will boost the scale at which articles are automatically searched to detect figure reuse, thus finding cases of potential inauthenticity and inappropriate reuses much more quickly and across broader repositories of information.”
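The article doesn’t describe how Acuna’s tool works, but one common technique for figure-reuse detection at scale is perceptual hashing: reduce each image to a short fingerprint that survives recompression, then compare fingerprints by Hamming distance. A toy sketch (assuming images are already decoded to grayscale pixel grids; `average_hash` and the sample data are illustrative, not part of Acuna’s system):

```python
# Toy perceptual-hash sketch for figure-reuse detection.
# Each "image" is a grayscale pixel grid (list of rows of ints).
# A real pipeline would first decode and downscale the image files.

def average_hash(pixels):
    """One bit per pixel: set when the pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Count of differing bits; small distance suggests a reused figure."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200], [30, 220]]
recompressed = [[12, 198], [29, 223]]  # same figure, slight JPEG-style noise
different = [[200, 10], [220, 30]]     # unrelated figure

# The recompressed copy hashes identically; the unrelated one does not.
assert hamming(average_hash(original), average_hash(recompressed)) == 0
assert hamming(average_hash(original), average_hash(different)) > 0
```

The appeal of hashing at repository scale is that fingerprints can be indexed once and compared cheaply, rather than re-running pairwise image comparisons.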
Piekniewski’s Blog: Autopsy Of A Deep Learning Paper. “I read a lot of deep learning papers, typically a few/week. I’ve read probably several thousands of papers. My general problem with papers in machine learning or deep learning is that often they sit in some strange no man’s land between science and engineering, I call it ‘academic engineering’. Let me describe what I mean…”
Washington Post: Russia is building a new Napster — but for academic research. “What will future historians see as the major Russian contribution to early 21st-century Internet culture? It might not be troll farms and other strategies for poisoning public conversation — but rather, the democratization of access to scientific and scholarly knowledge. Over the last decade, Russian academics and activists have built free, remarkably comprehensive online archives of scholarly works. What Napster was to music, the Russian shadow libraries are to knowledge.”
Inside Higher Ed: New Tool for Open-Access Research. “Get the Research will connect the public with 20 million open-access scholarly articles. The site will be built by Impactstory — the nonprofit behind browser extension tool Unpaywall — in conjunction with the Internet Archive and the British Library.” I’ve signed up to try to get early access.
LSE Impact Blog: The academic papers researchers regard as significant are not those that are highly cited. “For many years, academia has relied on citation count as the main way to measure the impact or importance of research, informing metrics such as the Impact Factor and the h-index. But how well do these metrics actually align with researchers’ subjective evaluation of impact and significance? Rachel Borchardt and Matthew R. Hartings report on a study that compares researchers’ perceptions of significance, importance, and what is highly cited with actual citation data. The results reveal a strikingly large discrepancy between perceptions of impact and the metric we currently use to measure it.”
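For readers unfamiliar with the h-index mentioned in the quote: a researcher has index h if h of their papers have at least h citations each. A minimal sketch of the computation (the function name and sample citation counts are illustrative):

```python
# Minimal h-index calculation: the largest h such that the author
# has h papers with at least h citations each.

def h_index(citations):
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4: four papers with >= 4 citations
print(h_index([25, 8, 5, 3, 3]))  # -> 3
```

Note that the metric ignores *which* papers are cited, which is exactly the kind of gap the study highlights: citation-derived numbers need not track what researchers themselves judge significant.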