To paraphrase Elizabeth Barrett Browning, How do I rig thee? Let me count the ways.
Even research statistics are all too often rigged, according to a commentary in this month’s Journal of the A.M.A. These rigged statistics are being applied to clinical studies of new drugs, devices, and treatments to put them just far enough over the line of “significance” to win Food and Drug Administration approval.
And to win big dollar profits for research companies and the researchers themselves – my claim, not the Journal’s.
This goes beyond what Mark Twain called “lies, damned lies, and statistics.” Twain was referring to “spinning” legitimate statistics to show results in a favorable light.
But Stanford’s John P. A. Ioannidis MD, ScD, calls out statisticians-for-hire for actually cherry-picking, distorting, and manipulating the statistical analyses themselves post hoc in scientific publications, in the service of Big Pharma.
Here are some of his observations:
- Some policy makers have an exaggerated sense of certainty about research results based simply on a P-value less than 0.05. (A P-value estimates the probability of seeing a difference at least as large as the one observed between the study group and control group if there were no true difference at all – that is, if chance alone were at work. It does not measure the probability that the observed difference is real.)
- Some policy makers hype results based on statistical differences that are technically correct but weak at best.
- Some policy makers focus on “statistical significance” only and fail to consider “clinical significance” as well as other practical considerations when interpreting study results.
- “Some fields that claim to work with large, actionable effects (eg, nutritional epidemiology) may simply have larger, uncontrolled biases.” That is, just because a study appears to have a robust statistical effect does not mean the conclusion is iron-clad. An observed difference might have another hidden explanation that contradicts the study conclusion.
- “Absent pre-specified rules, most research designs and analyses have enough leeway to manipulate the data and hack the results to claim important signals.”
- “Studies have shown that unless an analysis is prespecified, analytical choice (eg, different adjustments for covariates in nonrandomized studies) may allow obtaining a wide range of results.”
- “In a recent survey completed by 390 consulting statisticians, a large percentage perceived that they had received inappropriate requests from investigators to analyze data in ways that obtain desirable results.”
- “Passing the threshold of ‘statistical significance’ … such as P < .05 is typically too easy…”
- “Clinical, monetary, and other considerations may often have more importance than statistical findings.”
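The arithmetic behind that last pair of quotes is easy to see for yourself. Under the null hypothesis (no real effect), a P-value is uniformly distributed, so each additional unplanned analysis is roughly an independent 5% shot at a spurious “significant” result. The short simulation below is a minimal sketch of that idea – it assumes the unplanned analyses are independent, which real covariate-adjustment tweaks are not, but the direction of the effect is the same:

```python
import random

random.seed(42)

ALPHA = 0.05          # the conventional "significance" threshold
N_STUDIES = 100_000   # simulated null studies (no real effect exists)

def false_positive_rate(analyses_per_study: int) -> float:
    """Fraction of null studies where at least one analysis 'wins'.

    Under the null hypothesis a P-value is uniform on [0, 1], so each
    unplanned analysis is modeled as an independent 5% chance of a
    spurious hit below ALPHA.
    """
    hits = 0
    for _ in range(N_STUDIES):
        if any(random.random() < ALPHA for _ in range(analyses_per_study)):
            hits += 1
    return hits / N_STUDIES

for k in (1, 5, 20):
    print(f"{k:2d} analyses -> ~{false_positive_rate(k):.0%} chance of p < .05")
```

With one pre-specified analysis the false-positive rate stays near the advertised 5%, but with twenty analytical choices in play it climbs above 60% – which is why Ioannidis insists the analysis plan be fixed before the data are seen.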
Dr. Ioannidis offers a solution to keep honest statisticians honest: Require researchers to post in advance, such as at ClinicalTrials.gov, not only the overall research design but also detailed descriptions of
- numbers of subjects to be studied (since cohort size affects the “power” of the statistical analysis)
- which statistical methodologies will be used
- advance definition of subgroups designated for separate analysis
- specification of the threshold for statistical significance (choice of P value)
- criteria for altering statistical methods in the face of unexpected problems occurring during the course of a study
- plans to post raw data for all to see and analyze.
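The checklist above amounts to a frozen record that a journal or registry could check a final paper against. As a sketch only – the field names below are illustrative and are not the actual ClinicalTrials.gov schema – it might look like this:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the plan cannot be edited after filing
class PreRegistration:
    """Illustrative pre-registration record mirroring the checklist above.

    These field names are hypothetical, not a real registry schema.
    """
    planned_sample_size: int           # cohort size fixes statistical power
    statistical_methods: list[str]     # analyses named before data collection
    prespecified_subgroups: list[str]  # subgroups designated for separate analysis
    significance_threshold: float      # the chosen P-value cutoff
    amendment_criteria: str            # when methods may change mid-study
    raw_data_sharing_plan: str         # where raw data will be posted

reg = PreRegistration(
    planned_sample_size=500,
    statistical_methods=["two-sided t-test on the primary endpoint"],
    prespecified_subgroups=["age >= 65"],
    significance_threshold=0.05,
    amendment_criteria="Only for documented data-quality failures, posted publicly",
    raw_data_sharing_plan="Deposit de-identified raw data at study close",
)
```

The point of the `frozen=True` flag is the whole point of pre-registration: once the record is filed, any departure from it in the published analysis is visible, not deniable.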
Prestigious medical journals could adopt Ioannidis’s solutions without waiting for comprehensive reform of the whole health system. But the Journal’s surfacing of issues around abuse of research statistics illustrates the extent to which that system has fallen under the pall of profits, the depth to which the system has been rigged, and the degree to which Hippocratically-pledged professionals have been coopted. And this means that the full weight of our society, government and nation will be needed to fix it.
Now, take action.