Last night, a fellow editor emailed me a link to yet another study claiming to show that cell phone use may be linked to cancer. This one seemed worth a closer look, though, because it reported an increase in a specific cancer: the same type that had increased in a problematic US government study.
A quick look at the study identified major issues with its primary conclusion. Normally, at this point, we would decide to skip coverage unless the study started getting unwarranted attention from the rest of the media. (See: Scott Kelly's DNA.) But in this case, we thought we'd describe how we evaluated the paper, since it might help more people identify similar problems in the future.
The first step in evaluating a scientific paper is getting a copy of it. Fortunately, this one was put online by an organization that consistently promotes the idea that cell phones pose health risks. The Environmental Health Trust's involvement shouldn't be viewed as either positive or negative; the group has promoted very low-quality material in the past, but it would no doubt promote higher-quality studies as well, provided they agreed with its stance.
The research itself has been accepted for publication, meaning it has undergone peer review. It will appear in a journal called Environmental Research, and the quality of the journal matters here. While plenty of significant research ends up published in lower-profile journals, the rise of online journals has spawned a myriad of predatory publishers who will publish almost anything, as long as the authors pay a publishing fee.
Environmental Research is not one of those; it is published by Elsevier, a large publishing house that has been producing scientific journals for decades, and this particular journal has been around since the 1960s. A journal's "impact factor" is an imperfect measure of whether the articles it publishes ultimately influence other research. Environmental Research's score is typical of a decent-quality journal serving a specialized audience.
All of this suggests that this cell phone paper has probably undergone a fair amount of peer review, so it shouldn’t be dismissed out of hand.
The next step in our evaluation process is usually a quick look at the author list. Something a little unusual here is that every author comes from the same institution, Italy's Istituto Ramazzini. An author list this large usually reflects a collaboration among several research centers, but the single affiliation makes it relatively easy to research the Ramazzini's background. As it turns out, the institute is widely recognized for its cancer studies, which it has been performing for decades. There has been some controversy over a few of the organization's conclusions, as well as arguments in Congress over whether US government funds should go to a foreign institution. But in at least one case where the US government called in outside experts, they found that at least some of the Ramazzini's work was scientifically sound.
When you add all this up, this looks like a paper that should be taken seriously. So that's what we're going to do.
Statistics vs Numbers
Like other studies of its kind, this new study involves long-term exposure of rats to cell phone signals. As with the US government study, it involves unusually long exposures (19 hours a day), but it uses much lower doses, similar to what a person might actually experience. It uses a very large number of animals (almost 2500 in total), which should provide good statistical power. So far, so good.
But things go wrong in the abstract. There, the paper's authors describe three increases in cancer incidence in animals exposed to cell phone radiation. Two of these, however, weren't statistically significant, meaning there's a greater than five percent chance that the difference arose randomly. If we allowed non-significant changes to drive conclusions, the data would just as readily support reporting that cell phones reduce the risk of cancer in some experimental groups.
That's bad. But there is still one significant increase in cancer in their data, so let's take a closer look at it: "A statistically significant increase in the incidence of cardiac schwannomas was observed in treated male rats at the highest dose." For this type of cancer, the control group of 817 rats developed four tumors. Critically, though, all of those tumors occurred in females; none in males. This apparent sex bias necessarily magnifies the impact of every tumor in any of the male experimental groups.
And that's exactly what appears to be happening. Among females, 2.2 percent of one experimental group developed this type of tumor, yet that result was not statistically significant. In contrast, in the male group with the significant difference, only 1.5 percent of the animals developed these tumors. A male group at a lower dose had the same number of tumors, but that group was larger, so the result fell short of significance.
These figures suggest that the lone statistically significant effect in this study is driven by the unusually low tumor incidence among control males, not by any specific effect of cell phone radiation.
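The arithmetic behind this argument can be sketched with a one-sided Fisher's exact test. The group sizes below are hypothetical (the figures above give percentages, not every denominator), but they illustrate the mechanism: against a control group that happens to have zero tumors, a 1.5 percent incidence in a treated group clears the significance threshold, while against a control with even a couple of tumors it would not.

```python
from math import comb

def fisher_one_sided(treated_tumors, treated_n, control_tumors, control_n):
    """One-sided Fisher's exact test: the probability of seeing at least
    `treated_tumors` tumors in the treated group if the observed tumors
    were distributed at random between the two groups (hypergeometric)."""
    total_n = treated_n + control_n
    total_tumors = treated_tumors + control_tumors
    p = 0.0
    for x in range(treated_tumors, min(total_tumors, treated_n) + 1):
        p += (comb(total_tumors, x)
              * comb(total_n - total_tumors, treated_n - x)
              / comb(total_n, treated_n))
    return p

# Hypothetical sizes: 3 tumors among 200 treated males (1.5 percent),
# compared first against a tumor-free control group of 400 males...
p_zero_control = fisher_one_sided(3, 200, 0, 400)
# ...and then against the same control group with two tumors in it.
p_some_control = fisher_one_sided(3, 200, 2, 400)

print(f"control with 0 tumors: p = {p_zero_control:.3f}")  # below 0.05
print(f"control with 2 tumors: p = {p_some_control:.3f}")  # above 0.05
```

The exact same count of tumors in the treated animals flips from "significant" to "not significant" purely on the basis of what the control group happened to do, which is the problem with leaning on a control group that produced no tumors at all.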
As we mentioned above, the normal response to a study like this would be to simply ignore it unless it starts getting widespread attention. But walking through the process we used to decide to ignore it should give you a sense of how we determine what to cover when it comes to scientific studies at Ars. And if you decide to try this method at home, it can also help you figure out which results deserve your attention.