University of Arizona College of Science: Lum. AI (Tucson)

University of Arizona College of Science: Lum. AI (Tucson). “Researchers worldwide publish 2.5 million journal articles each year, adding to the tens of millions of scholarly articles in circulation. For a researcher or clinician, developing a holistic understanding of a field — for example, the systematic matching of genomic alterations in a tumor with proper drug treatments — is an immense task. Now imagine that those researchers, faced with trying to understand the various mechanisms and cellular processes involved in a specific tumor type, had a new tool: an automated system that could review all that literature — analyzing each academic paper in seconds — and extract key information that could help them generate easily interpretable answers and conclusions.”

Washington Post: Step aside Edison, Tesla and Bell. New measurement shows when U.S. inventors were most influential.

Washington Post: Step aside Edison, Tesla and Bell. New measurement shows when U.S. inventors were most influential. “The U.S. patent office has stockpiled the text to more than 10 million patents. But that’s often all they have: an enormous amount of text. Many early patents lack any form of citation or industry specification, which researchers could use to understand the history of American invention. Now a team of economists has created a clever algorithm that processes that text — often the only consistent data we have for many of the country’s most famous inventions — to create a measure of the influential inventors and industries of the past 180 years.”

Virtual Victorians: Using 21st-century technology to evaluate 19th-century texts (Princeton University)

Princeton University: Virtual Victorians: Using 21st-century technology to evaluate 19th-century texts. “In the 19th century, printing technology changed the way readers experienced texts. Today, students and researchers are using digital technology to access historical literary texts in new ways and finding surprising echoes of the past in their own lives.”

Revisiting the Disputed Federalist Papers: Historical Forensics with the Chaos Game Representation and AI (Wolfram Blog)

Wolfram Blog: Revisiting the Disputed Federalist Papers: Historical Forensics with the Chaos Game Representation and AI. “In 1944 Douglass Adair published ‘The Authorship of the Disputed Federalist Papers,’ wherein he proposed that [James] Madison had been the author of all 12. It was not until 1963, however, that a statistical analysis was performed. In ‘Inference in an Authorship Problem,’ Frederick Mosteller and David Wallace concurred that Madison had indeed been the author of all of them. An excellent account of their work, written much later, is Mosteller’s ‘Who Wrote the Disputed Federalist Papers, Hamilton or Madison?.’ His work on this had its beginnings also in the 1940s, but it was not until the era of ‘modern’ computers that the statistical computations needed could realistically be carried out.”
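The blog post’s approach maps each text onto a chaos game representation (CGR) image and then classifies the images. A minimal sketch of the core CGR walk, in Python rather than the Wolfram Language the post uses: the four-way character bucketing (vowel, other letter, whitespace, other) and the distance comparison are illustrative assumptions, not the post’s exact scheme.

```python
# Minimal chaos game representation (CGR) sketch for text.
# Each character class is assigned a corner of the unit square; the
# walk halves the distance to the current character's corner, and a
# 2-D histogram of visited points serves as a fixed-length signature.
import numpy as np

CORNERS = {0: (0.0, 0.0), 1: (0.0, 1.0), 2: (1.0, 0.0), 3: (1.0, 1.0)}

def char_class(c):
    """Bucket a character into one of four classes (an assumption
    made for this sketch, not the post's encoding)."""
    if c.lower() in "aeiou":
        return 0
    if c.isalpha():
        return 1
    if c.isspace():
        return 2
    return 3

def cgr_signature(text, bins=8):
    """Run the chaos game over the text and histogram the visited
    points into a bins x bins grid, normalized to sum to 1."""
    x, y = 0.5, 0.5
    grid = np.zeros((bins, bins))
    for c in text:
        cx, cy = CORNERS[char_class(c)]
        x, y = (x + cx) / 2, (y + cy) / 2
        grid[min(int(y * bins), bins - 1), min(int(x * bins), bins - 1)] += 1
    return (grid / grid.sum()).ravel()

# Texts by one author should yield nearer signatures; a plain
# Euclidean distance stands in for the post's neural classifier.
sig_a = cgr_signature("The powers delegated by the proposed Constitution are few")
sig_b = cgr_signature("The accumulation of all powers in the same hands may be")
print(np.linalg.norm(sig_a - sig_b))
```

In the post the CGR images themselves are fed to an image classifier; the histogram signature above is just the simplest vector form of the same idea.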

Kaylin Walker: Tidy Text Mining Beer Reviews

Kaylin Walker: Tidy Text Mining Beer Reviews. “BeerAdvocate.com was scraped for a sample of beer reviews, resulting in a dataset of 31,550 beers and their brewery, beer style, ABV, total numerical ratings, number of text reviews, and a sample of review text. Review text was gathered only for beers with at least 5 text reviews. A minimum of 2000 characters of review text was collected for those beers, with total length ranging from 2000 to 5000 characters.”
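The “tidy text” workflow the post follows (in R’s tidytext) boils down to one token per row, stop words removed, then counts by group. A minimal Python sketch of the same pattern, using made-up stand-in review text rather than the scraped dataset:

```python
# Tidy-text-style word counts per beer style: tokenize, drop stop
# words, count. The reviews dict and stop word list are illustrative
# stand-ins, not the post's data or lexicon.
import re
from collections import Counter

STOP_WORDS = {"a", "the", "and", "of", "is", "with", "this", "it"}

reviews = {  # hypothetical sample review text per style
    "IPA": "citrus hops and a piney bitter finish",
    "Stout": "roasted malt with coffee and chocolate notes",
}

def tidy_tokens(text):
    """Lowercase, split on non-letter characters, drop stop words."""
    return [w for w in re.findall(r"[a-z']+", text.lower())
            if w not in STOP_WORDS]

counts = {style: Counter(tidy_tokens(text))
          for style, text in reviews.items()}
print(counts["IPA"].most_common(3))
```

On the real dataset, the same per-style counts feed the post’s comparisons of characteristic vocabulary across beer styles.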

Digital Scholarship Resource Guide: Text analysis (part 4 of 7) (Library of Congress)

Library of Congress: Digital Scholarship Resource Guide: Text analysis (part 4 of 7). “Clean OCR, good metadata, and richly encoded text open up the possibility for different kinds of computer-assisted text analysis. With instructions from humans (“code”), computers can identify information and patterns across large sets of texts that human researchers would be hard-pressed to discover unaided. For example, computers can find out which words in a corpus are used most and least frequently, which words occur near each other often, what linguistic features are typical of a particular author or genre, or how the mood of a plot changes throughout a novel. Franco Moretti describes this kind of analysis as ‘distant reading’, a play on the traditional critical method ‘close reading’. Distant reading implies not the page-by-page study of a few texts, but the aggregation and analysis of large amounts of data.”
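Two of the distant-reading measures the guide names, corpus-wide word frequency and words occurring near each other, can be sketched in a few lines of Python. The two-line corpus here is a toy stand-in for a cleaned, OCR’d collection:

```python
# Distant-reading basics: word frequency across a corpus, and
# co-occurrence of words within a small sliding window.
from collections import Counter
from itertools import combinations

corpus = ["the whale pursued the ship",
          "the ship pursued the white whale"]
tokens = [doc.split() for doc in corpus]

# Which words are used most and least frequently across the corpus.
freq = Counter(w for doc in tokens for w in doc)

def cooccurrences(docs, window=3):
    """Count unordered word pairs that appear within the same
    sliding window of `window` tokens (pairs in overlapping windows
    are counted each time they appear together)."""
    pairs = Counter()
    for doc in docs:
        for i in range(len(doc)):
            for a, b in combinations(doc[i:i + window], 2):
                if a != b:
                    pairs[tuple(sorted((a, b)))] += 1
    return pairs

print(freq.most_common(2))
print(cooccurrences(tokens).most_common(3))
```

The guide’s other examples, authorial style markers and plot-mood trajectories, build on the same token-counting foundation with richer features.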

Science: Want to analyze millions of scientific papers all at once? Here’s the best way to do it

Science: Want to analyze millions of scientific papers all at once? Here’s the best way to do it. “There is a long-standing debate among text and data miners: whether sifting through full research papers, rather than the much shorter and simpler research summaries, or abstracts, is worth the extra effort. Though it may seem obvious that full papers would give better results, some researchers say that a lot of information they contain is redundant, and that abstracts contain all that’s needed. Given the challenges of obtaining and formatting full papers for mining, stick with abstracts, they say. In an attempt to settle the debate, Søren Brunak, a bioinformatician at the Technical University of Denmark in Kongens Lyngby, and colleagues analyzed more than 15 million scientific articles published in English from 1823 to 2016.”