Monday, October 1, 2007

Wall Street Journal Article

Hey everyone. This is an interesting Wall Street Journal article I came across; it serves as a bit of a wake-up call for people who analyze data and journal articles. Honestly, though, this information doesn't really surprise those of us in research!


Most Science Studies Appear to Be Tainted By Sloppy Analysis
September 14, 2007; Page B1

We all make mistakes and, if you believe medical scholar John Ioannidis, scientists make more than their fair share. By his calculations, most published research findings are wrong.

Dr. Ioannidis is an epidemiologist who studies research methods at the University of Ioannina School of Medicine in Greece and Tufts University in Medford, Mass. In a series of influential analytical reports, he has documented how, in the thousands of peer-reviewed research papers published every year, there may be far less than meets the eye.

These flawed findings, for the most part, stem not from fraud or formal misconduct, but from more mundane misbehavior: miscalculation, poor study design or self-serving data analysis. "There is an increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims," Dr. Ioannidis said. "A new claim about a research finding is more likely to be false than true."
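Dr. Ioannidis's claim rests on a simple piece of arithmetic: if only a small fraction of the hypotheses researchers test are actually true, then even well-run studies will produce more false positives than true discoveries. The sketch below illustrates that logic with made-up but conventional numbers (a 10% prior, 80% power, a 5% significance threshold); none of these figures come from the article itself.

```python
def false_finding_rate(prior_true, power, alpha):
    """Fraction of statistically significant results that are false.

    prior_true -- fraction of tested hypotheses that are really true
    power      -- probability a true effect reaches significance
    alpha      -- false-positive rate for a null (no-effect) hypothesis
    """
    true_hits = prior_true * power          # true effects that get detected
    false_hits = (1 - prior_true) * alpha   # nulls that cross the threshold
    return false_hits / (true_hits + false_hits)

# With 1 in 10 tested hypotheses true, 80% power, and the usual 5% threshold:
rate = false_finding_rate(prior_true=0.10, power=0.80, alpha=0.05)
print(f"{rate:.0%} of significant findings are false")  # prints "36% ..."
```

Even with these fairly generous assumptions, more than a third of "significant" findings are false; with lower power or a smaller prior, false findings can easily become the majority, which is the heart of the argument.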

The hotter the field of research, the more skeptically its published findings should be viewed, he determined.

Take the discovery that the risk of disease may vary between men and women, depending on their genes. Studies have prominently reported such sex differences for hypertension, schizophrenia and multiple sclerosis, as well as lung cancer and heart attacks. In research published last month in the Journal of the American Medical Association, Dr. Ioannidis and his colleagues analyzed 432 published research claims concerning gender and genes.

Upon closer scrutiny, almost none of them held up. Only one was replicated.

Statistically speaking, science suffers from an excess of significance. Overeager researchers often tinker too much with the statistical variables of their analysis to coax any meaningful insight from their data sets. "People are messing around with the data to find anything that seems significant, to show they have found something that is new and unusual," Dr. Ioannidis said.
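The "messing around with the data" problem can be seen in a small simulation (my own illustration, not from the article): run enough tests against pure noise and some will clear the significance bar by chance alone, because under a true null hypothesis p-values are uniformly distributed.

```python
# Simulate testing many hypotheses where no real effect exists.
import random

random.seed(0)
ALPHA = 0.05    # conventional significance threshold
N_TESTS = 200   # number of comparisons tried against the same data

# Under the null, each test's p-value is uniform on [0, 1).
p_values = [random.random() for _ in range(N_TESTS)]
spurious = sum(p < ALPHA for p in p_values)

print(f"{spurious} of {N_TESTS} null hypotheses look 'significant'")
# On average ALPHA * N_TESTS = 10 false positives are expected.
```

A researcher who tries 200 comparisons and reports only the "significant" ones will almost always have something new and unusual to show, whether or not any effect is real.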

In the U.S., research is a $55-billion-a-year enterprise that stakes its credibility on the reliability of evidence, and the work of Dr. Ioannidis strikes a raw nerve. In fact, his 2005 essay "Why Most Published Research Findings Are False" remains the most downloaded technical paper that the journal PLoS Medicine has ever published.

"He has done systematic looks at the published literature and empirically shown us what we know deep inside our hearts," said Muin Khoury, director of the National Office of Public Health Genomics at the U.S. Centers for Disease Control and Prevention. "We need to pay more attention to the replication of published scientific results."

Every new fact discovered through experiment represents a foothold in the unknown. In a wilderness of knowledge, it can be difficult to distinguish error from fraud, sloppiness from deception, eagerness from greed or, increasingly, scientific conviction from partisan passion. As scientific findings become fodder for political policy wars over matters from stem-cell research to global warming, even trivial errors and corrections can have larger consequences.

Still, other researchers warn not to fear all mistakes. Error is as much a part of science as discovery. It is the inevitable byproduct of a search for truth that must proceed by trial and error. "Where you have new areas of knowledge developing, then the science is going to be disputed, subject to errors arising from inadequate data or the failure to recognize new matters," said Yale University science historian Daniel Kevles. Conflicting data and differences of interpretation are common.

To root out mistakes, scientists rely on each other to be vigilant. Even so, findings too rarely are checked by others or independently replicated. Retractions, while more common, are still relatively infrequent. Findings that have been refuted can linger in the scientific literature for years to be cited unwittingly by other researchers, compounding the errors.

Stung by frauds in physics, biology and medicine, research journals recently adopted more stringent safeguards to protect at least against deliberate fabrication of data. But it is hard to admit even honest error. Last month, the Chinese government proposed a new law to allow its scientists to admit failures without penalty. Next week, the first world conference on research integrity convenes in Lisbon.

Overall, technical reviewers are hard-pressed to detect every anomaly. On average, researchers submit about 12,000 papers annually just to the weekly peer-reviewed journal Science. Last year, four papers in Science were retracted. A dozen others were corrected.

No one actually knows how many incorrect research reports remain unchallenged.

Earlier this year, informatics expert Murat Cokol and his colleagues at Columbia University sorted through 9.4 million research papers indexed at the U.S. National Library of Medicine, published from 1950 through 2004 in 4,000 journals. By raw count, just 596 had been formally retracted, Dr. Cokol reported.

"The correction isn't the ultimate truth either," Prof. Kevles said.

Email me at ScienceJournal@wsj.com.