
Thursday 18 July 2024

On separating the wheat from the chaff re: science.

Three Genuine Tells of Junk Science


Capital Research Center reports on non-profit organizations. Its managing editor, Jon Rodeback, identifies three tells of junk science: “settled,” “consensus,” and “scientific study.” On that last topic, he notes:
While scientific studies are essential to scientific research, a single study by itself is far from definitive, and not all scientific studies are created equal. The findings of a single study need to be tested and retested, no matter how promising they seem. In fact, the most promising findings probably need more rigorous testing to ensure that a bias toward a desired outcome did not influence the research.

In addition, the more a study or report is entangled with politics and government funding, the less scientific and less reliable its results will likely be. I have personally witnessed how a government report was vetted by the various offices in a federal department and offending passages were removed or rewritten so as to not cast a particular federal office in a bad light—usually not to correct any inaccuracy in the report, but to obscure inconvenient data and conclusions. 

JON RODEBACK, “THREE TELLS OF JUNK SCIENCE,” CAPITAL RESEARCH CENTER, JUNE 26, 2024

This comes to us hard on the heels of philosopher Massimo Pigliucci’s effort to identify “pseudoscience,” in which he suggested that the solution is to rely on him and on sites he approves of. That’s certainly not an answer for everyone.

How Desired Results Are Obtained

Perhaps the main thing to see here is that the many problems that have surfaced in peer-reviewed science in recent years have weakened the case for simply trusting it. Business prof Gary Smith wrote late last year about the methods used to achieve a desired (but not necessarily natural) result:

One consequence of the pressure to publish is the temptation researchers have to p-hack or HARK. P-hacking occurs when a researcher tortures the data in order to support a desired conclusion. For example, a researcher might look at subsets of the data, discard inconvenient data, or try different model specifications until the desired results are obtained and deemed statistically significant — and therefore publishable. HARKing (Hypothesizing After the Results are Known) occurs when a researcher looks for statistical patterns in a set of data without any well-defined purpose in mind beyond trying to find a pattern that is statistically significant — and therefore publishable. P-hacking and HARKing both lead to the publication of dodgy results that are exposed as dodgy when they are tested with fresh data. This failure to replicate undermines the credibility of published research (and the value of publications in assessing scientific accomplishments).

Even worse than p-hacking and HARKing is complete fabrication. Why torture data or rummage through large databases when you can simply make stuff up? An extreme example is SCIgen, a random-word generation program created by three MIT graduate students. Hundreds of papers written entirely or in part by SCIgen have been published in reputable journals that claim they only publish papers that pass rigorous peer review.

More sophisticated cons are the “editing services” (a.k.a. “paper mills”) that some researchers use to buy publishable papers or to buy co-authorship on publishable papers. These fake papers are not created from randomly generated words, but they may be entirely fabricated or else plagiarized, in whole or in part, from other papers. It has been estimated that thousands of such papers have been published; it is known that hundreds have been retracted after being identified by research-integrity sleuths.
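Smith’s description of p-hacking is easy to demonstrate. Here is a minimal sketch in Python (my illustration, not Smith’s code): two groups are drawn from the same distribution, so there is no real effect to find, yet testing enough arbitrary subgroups almost always turns up a “statistically significant” difference. The helper function and all parameters are illustrative assumptions.

```python
# A minimal sketch of the p-hacking pattern Smith describes (illustrative,
# not from the article): pure-noise data, sliced repeatedly until something
# looks "statistically significant."
import math
import random
import statistics

random.seed(1)

def two_sample_p(a, b):
    """Approximate two-sided p-value for a two-sample t-test,
    using a normal approximation to keep the sketch dependency-free."""
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    t = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

# "Treatment" and "control" drawn from the SAME distribution: no real effect.
treatment = [random.gauss(0, 1) for _ in range(200)]
control = [random.gauss(0, 1) for _ in range(200)]

print(f"Honest test on all the data: p = {two_sample_p(treatment, control):.3f}")

# The p-hack: test many arbitrary subgroups and keep the best-looking result.
best_p = 1.0
for _ in range(100):
    idx = random.sample(range(200), 50)  # an arbitrary "subgroup"
    p = two_sample_p([treatment[i] for i in idx],
                     [control[i] for i in idx])
    best_p = min(best_p, p)

print(f"Best p after 100 subgroup analyses: p = {best_p:.3f}")
# The second number routinely falls below 0.05 even though no effect exists.
```

Run it a few times: the honest test on the full data usually lands well above 0.05, while the best of the hundred subgroup tests usually slips under it. Results found that way are exactly the ones that fail when tested with fresh data.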

If anything, Smith notes, the problems will likely get worse, because chatbots (large language models, or LLMs), introduced only about two years ago, can generate rubbish research papers with far greater efficiency and quality than the methods people complained about five years ago.

That science is becoming less trustworthy is not an opinion; judging by the floods of computer-written junk papers and the problems Smith identifies, it is an everyday fact. The public’s deepening loss of trust in science is a fact as well.

Grounds for Hope

Historically, interest and investment in science, and reliance on it, have waxed and waned. They have increased when people could see an actual benefit. But if, over time, “studies show” mainly amounts to a publicity campaign for some project approved by powerful interests, with no practical benefits to recommend it, we can expect public trust to decline further. And blaming the public for not believing what’s not believable is hardly a useful response.

Take heart! There have been periods when science stagnated and then underwent major reforms, usually when it was in a rut. That is where many disciplines are now.

