Questions of power
15 July 2013. When researchers discuss “power”, they usually mean the likelihood that a study will detect a suspected effect of a given magnitude. If the study is large and uses sensitive methods, it will have the power to detect even a small change with a high degree of certainty. A small study, on the other hand, will pick up only a large effect; anything smaller is hidden by background noise. Many more small studies are carried out than large ones, and there is therefore a risk that scientists publish only those that are newsworthy. If that happens, the small-scale attention-getters may be “false positives”: they claim to represent new discoveries, but they may be nothing more than the statistical outliers that inevitably occur when a large number of studies are carried out.
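For readers who like numbers, here is a minimal sketch of this trade-off, using the common normal approximation for a comparison of two groups. The group sizes and effect sizes are illustrative assumptions, not values from any actual study.

```python
# Sketch: how statistical power depends on sample size and effect size,
# for a two-sided two-sample comparison (normal approximation).
# All numbers below are illustrative, not data from any real study.
from math import sqrt
from scipy.stats import norm

def power_two_sample(effect_size: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided two-sample z-test.

    effect_size: standardized mean difference (Cohen's d).
    n_per_group: participants in each of the two groups.
    """
    z_crit = norm.ppf(1 - alpha / 2)            # critical value for the two-sided test
    noncentrality = effect_size * sqrt(n_per_group / 2)
    # Probability of exceeding the critical value (the far tail is negligible)
    return 1 - norm.cdf(z_crit - noncentrality)

for n in (10, 50, 500):
    for d in (0.2, 0.5, 0.8):                   # small / medium / large effects
        print(f"n={n:4d} per group, d={d}: power = {power_two_sample(d, n):.2f}")
```

With 10 participants per group, even a large effect (d = 0.8) is detected less than half the time; with 500 per group, even a small effect (d = 0.2) is picked up almost 90% of the time.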
A recent publication has generated much discussion of how large this problem is in neuroscience. There appears to be a tendency for the magnitude of the reported effect, e.g., of a drug treatment, to decrease with study size. How can that be? The simple answer is that, to be published in a prominent journal, a study needs to show a reliable, statistically significant effect. The smaller the study, the larger the effect must be to reach significance and subsequent publication. But what about all the other studies that were not significant? They are not published, but put away in a drawer. So a large number of small studies never make it outside the file drawers of the research laboratories. Or so the argument goes.
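The same arithmetic, turned around, shows how large an effect a small study must observe before it can reach significance at all. Again a rough sketch under the normal approximation, with illustrative group sizes:

```python
# Sketch: the smallest standardized effect that can reach two-sided p < 0.05
# in a two-sample comparison shrinks roughly as 1/sqrt(n). Small studies can
# therefore only publish large effects. Group sizes are illustrative.
from math import sqrt
from scipy.stats import norm

alpha = 0.05
z_crit = norm.ppf(1 - alpha / 2)   # 1.96 for a two-sided test

for n in (10, 25, 100, 400):       # participants per group
    # Just-significant effect: the observed d must exceed z_crit times the
    # standard error of d, which is about sqrt(2/n) under this approximation.
    d_min = z_crit * sqrt(2 / n)
    print(f"n={n:4d} per group: smallest significant effect d ≈ {d_min:.2f}")
```

With 10 per group, only effects around d ≈ 0.9 clear the significance bar; with 400 per group, effects as small as d ≈ 0.14 do.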
The risk of publication bias has been recognized for a long time, and statistical methods have been developed to detect it. The question now is whether this problem exists in regard to chemical brain drain. Unfortunately, there is not much evidence to rely upon. When a large number of reports on IQ deficits associated with fluoride exposure were analyzed, no publication bias was detected.
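One widely used method is Egger’s regression test for funnel-plot asymmetry, which asks whether small, imprecise studies systematically report larger effects than big ones. The sketch below applies it to five made-up studies purely for illustration; it is not a re-analysis of the fluoride data.

```python
# Sketch of Egger's regression test for funnel-plot asymmetry, a standard
# check for publication bias. Effect estimates and standard errors are
# hypothetical; here the smaller (less precise) studies report larger effects.
import numpy as np
import statsmodels.api as sm

effects = np.array([0.45, 0.30, 0.25, 0.20, 0.18])   # hypothetical study effects
se = np.array([0.20, 0.15, 0.12, 0.08, 0.05])        # their standard errors

precision = 1 / se          # high precision = large study
z_scores = effects / se     # standardized effects

# Egger's test: regress each study's z-score on its precision. An intercept
# far from zero indicates funnel-plot asymmetry, the classic signature of
# publication bias.
fit = sm.OLS(z_scores, sm.add_constant(precision)).fit()
print(f"Egger intercept = {fit.params[0]:.2f} (p = {fit.pvalues[0]:.3f})")
```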
Some years ago, when brain toxicity due to lead was under discussion, calculations showed that the probability that several independent studies would show similar effects purely by accident was vanishingly small. Professor Herbert Needleman famously remarked that, to counterbalance the published reports on lead toxicity, there would have to be a huge number of reports showing no effect hidden away in some very large file drawers.
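Needleman’s point is simple arithmetic. Assuming the studies are independent and each truly null study has a 5% chance of producing a false positive, the probability that many of them would all point the same way by accident collapses geometrically; the study counts below are illustrative.

```python
# Back-of-the-envelope version of Needleman's argument. Assumptions: the
# studies are independent and a truly null study has a 5% false-positive
# rate; the study counts are illustrative.
alpha = 0.05  # false-positive rate of a single null study

for k in (2, 5, 10):
    print(f"{k} concordant studies by chance alone: p = {alpha ** k:.1e}")

# If the effect were not real, each published false positive would imply,
# on average, (1 - alpha) / alpha unpublished null studies in file drawers.
print(f"Expected null studies per false positive: ~{(1 - alpha) / alpha:.0f}")
```

Even five concordant false positives would require odds of roughly one in three million, quite apart from the nineteen-to-one ratio of unpublished null results each false positive would imply.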
In regard to chemical brain drain, only a handful of substances are so far known to damage brain development in children. A couple of hundred substances are suspected of being brain drainers (see the appendix in “Only One Chance”). But the real problem is that hundreds, perhaps thousands, of chemicals may be causing adverse effects on brain development while we act as if all of them were safe. Many of these compounds may be “false negatives”: they are treated as harmless, yet there is no evidence that they are innocuous.
Thus, as appears to be the case with neuroscience reports, an industrial chemical may on occasion be falsely accused of being toxic. But this problem of false positives is likely to be vanishingly small compared with the enormous background of chemicals that have never even been considered as hazards to brain development and are currently considered safe.
If there is a “power” problem in regard to neurotoxic chemicals, it is that power is needed to make responsible decisions: to test all relevant chemicals for brain toxicity and to protect the brains of the next generation against chemical brain drain.