Publication bias. Perhaps unsurprisingly, the scientific literature is full of positive results. Null or negative results have traditionally been hard to publish and are often relegated to abandoned hard drives, even though they represent valuable knowledge claims. Research users looking for evidence get a distorted view, which — like Joh…
To show how this might work, let’s take multiple sclerosis as an example. It is a devastating condition in need of better therapies. When testing new drugs in animal studies, researchers measure and report a number of different metrics, including the effects of drug candidates on inflammation, on damage to nerve fibres (axon loss), on damage to the…
Small sample sizes. We also need to know that the study was big enough to justify the claims made. For example, it’s pretty obvious that men are on average taller than women. But to have a good chance of detecting this difference in an experiment, you’d need at least 20 people to find a statistically significant result. Unfortunately, such huge eff…
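The link between sample size and the chance of detecting a real difference can be sketched with a quick simulation. The numbers used here (a mean height difference of about 13 cm and a standard deviation of about 7 cm within each sex) are illustrative assumptions, as is the simple two-sample z-test; the point is only how detection probability ("power") grows with sample size.

```python
import math
import random

def estimated_power(n_per_group, mean_diff, sd, n_sims=5000, seed=0):
    """Estimate the power of a two-sample z-test by simulation:
    the fraction of simulated experiments in which a real group
    difference comes out statistically significant (two-sided,
    alpha = 0.05). Illustrative sketch, not a lab-grade power tool."""
    rng = random.Random(seed)
    crit = 1.96  # critical value for two-sided alpha = 0.05
    se = sd * math.sqrt(2 / n_per_group)  # standard error of mean difference
    hits = 0
    for _ in range(n_sims):
        a = [rng.gauss(0, sd) for _ in range(n_per_group)]
        b = [rng.gauss(mean_diff, sd) for _ in range(n_per_group)]
        z = (sum(b) / n_per_group - sum(a) / n_per_group) / se
        if abs(z) > crit:
            hits += 1
    return hits / n_sims

# Hypothetical height numbers: ~13 cm mean difference, ~7 cm SD per sex.
for n in (2, 5, 10):
    print(n, "per group:", estimated_power(n, mean_diff=13, sd=7))
```

Even for an effect as large as the sex difference in height, tiny groups miss it regularly; for the subtler effects typical of drug studies, far larger samples are needed.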
No blinding. Experiments must have a test and a control arm. “Blinding” is when the researcher doesn’t know whether the sample they are analyzing (be it proteins, cells, mice, or human subjects) belongs to the test or the control arm. Results are more reliable when the researcher is blinded; this is a standard part of clinical trials, but is often not practic…
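A minimal sketch of how blinding might be implemented for lab samples, assuming a simple workflow in which the analyst receives only coded labels and someone else holds the unblinding key until the analysis is locked. The helper function and labels are hypothetical.

```python
import random

def blind_samples(sample_ids, seed=None):
    """Replace real sample IDs with random codes so the analyst
    cannot tell test from control. Returns the coded labels (for
    the analyst) and the key (kept by a colleague) for unblinding
    afterwards. Illustrative sketch only."""
    rng = random.Random(seed)
    codes = rng.sample(range(1000, 10000), len(sample_ids))
    shuffled = sample_ids[:]
    rng.shuffle(shuffled)
    key = {f"S{c}": sid for c, sid in zip(codes, shuffled)}
    return sorted(key), key

labels, key = blind_samples(
    ["drug_mouse_1", "drug_mouse_2", "ctrl_mouse_1", "ctrl_mouse_2"],
    seed=42,
)
print(labels)  # the analyst sees only coded labels like "S1234"
```

The design point is separation of knowledge: the person scoring the outcome never sees the key, so their expectations cannot leak into the measurements.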
Critically, we need to be much more strategic in the way we use published research findings (from the lab) to inform what we do next (e.g. clinical trials). We need to combine and integrate — systematically — the full granularity of research claims and their provenance, giving our understanding a real richness of detail. This process of integratin…
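One standard way to combine findings across studies systematically is a meta-analysis. The sketch below shows the classic inverse-variance (fixed-effect) pooling formula; it is a generic statistical technique, offered here as an illustration rather than the specific integration process the passage describes. The study numbers are made up.

```python
import math

def fixed_effect_meta(effects, variances):
    """Pool per-study effect sizes by inverse-variance weighting,
    so more precise studies count more. Returns the pooled effect
    and its standard error (fixed-effect model)."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Hypothetical effect sizes (e.g. standardized mean differences)
# from three animal studies, with their variances.
pooled, se = fixed_effect_meta([0.8, 0.5, 1.1], [0.10, 0.05, 0.20])
print(round(pooled, 3), round(se, 3))
```

Pooling like this only gives a trustworthy answer when the inputs are unbiased, which is exactly why publication bias, small samples, and missing blinding matter so much upstream.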