What We’re Reading
“Trust in US Federal, State, and Local Public Health Agencies During COVID-19”
“Scientific expertise was a more commonly reported reason for ‘a great deal’ of trust at the federal level, whereas perceptions of hard work, compassionate policy, and direct services were emphasized more at the state and local levels.…It may be especially helpful to identify opportunities for creating complementary communication strategies at the federal, state, and local levels, with more emphasis on scientific expertise at the federal level and more emphasis on compassionate direct services at the state and local levels.”
“Noam Chomsky: The False Promise of ChatGPT”
New York Times
“The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.”
“Using Population Descriptors in Genetics and Genomics Research”
“In response to a request from the National Institutes of Health, the National Academies assembled an interdisciplinary committee of expert volunteers to conduct a study to review and assess existing methodologies, benefits, and challenges in using race, ethnicity, ancestry, and other population descriptors in genomics research. The resulting report focuses on understanding the current use of population descriptors in genomics research, examining best practices for researchers, and identifying processes for adopting best practices within the biomedical and scientific communities.”
“As Scientists Explore AI-Written Text, Journals Hammer Out Policies”
February 22, 2023
“So far, scientists report playing around with ChatGPT to explore its capabilities, and a few have listed ChatGPT as a co-author on manuscripts. Publishing experts worry such limited use could morph into a spike of manuscripts containing substantial chunks of AI-written text.
One concern for journal managers is accuracy. If the software hasn’t been exposed to enough training data to generate a correct response, it will often fabricate an answer, computer scientists have found.
Many journals’ new policies require that authors disclose use of text-generating tools and ban listing a large language model such as ChatGPT as a co-author, to underscore the human author’s responsibility for ensuring the text’s accuracy.
That is the case for Nature and all Springer Nature journals, the JAMA Network, and groups that advise on best practices in publishing, such as the Committee on Publication Ethics and the World Association of Medical Editors. But at least one publisher has taken a tougher line: The Science family of journals announced a complete ban on generated text last month.
The journals may loosen the policy in the future depending on what the scientific community decides is acceptable use of the text generators, Editor-in-Chief Holden Thorp says. ‘It’s a lot easier to loosen our criteria than it is to tighten them.’”
“The Fauci Phenomenon, Part 2”
New England Journal of Medicine
March 23, 2023
“In this episode of ‘Intention to Treat,’ Anthony Fauci sits down with host Rachel Gotbaum to discuss his long career in infectious disease and public health, what has motivated him, and the lessons he has learned and taught along the way.”
This page was last updated on Thursday, May 4, 2023