Nothing in Biology Makes Sense: Pseudoscience in scientific clothing

A snake in the literature? Photo via Wikimedia Commons.

This week at Nothing in Biology Makes Sense!, guest contributor Chris Smith finds something a bit odd in his Google Scholar results:

I recently gave a lecture on the Miller-Urey experiment, and I wanted to pull up the original citation. So, glancing at the clock to make sure I still had five minutes before showtime, I headed over to Google Scholar and entered in the search terms “Miller Urey.” When I started browsing the results I was surprised to find, on the first page, a link to an article titled “Why the Miller–Urey research argues against abiogenesis” published in The Journal of Creation, a product of Creation Ministries International.

To learn what Chris thinks is going on—and how it resembles a phenomenon in evolutionary biology—go read the whole thing.◼

Re: Cite more papers, get more citations?

Courtesy Zen Faulkes’s Twitter feed: Philip Davis of the Scholarly Kitchen shows that the study I discussed earlier, purporting to show that journal articles that cite more sources are themselves more likely to be cited, is, um, quite probably bunk. Davis was skeptical of the cite-more-be-cited-more (henceforth, CMBCM) correlation, so he did what any good scientist would on reading a result he didn’t believe: he tried to replicate it, collecting his own data set from articles published in the journal Science in 2007.

Davis replicated the CMBCM result with his own data set, but then he started looking for other correlations in the data. It turns out that longer papers are also more likely to be cited, and when Davis statistically controlled for that effect, the CMBCM result not only disappeared, it reversed. That is, among Science papers of similar length, those with longer reference lists were slightly less likely to be cited than those with shorter ones. Building a still more complicated statistical model that incorporates each paper’s length, subject area, and number of authors, Davis totally eradicated the effect of variation in the length of the Works Cited list.

Controlling now for pages, authors, and section, reference list length is no longer statistically significant. In fact, it looks pretty much like a random variable (p=0.77) and adds no information to the rest of the regression model.
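This kind of sign flip is a classic consequence of an omitted confounder. The toy simulation below (my own illustrative sketch, not Davis’s data or code, with made-up parameters) shows the mechanism: reference-list length and citation count both track article length, so the raw correlation between them is strong, but the regression coefficient on references vanishes once length enters the model.

```python
# Illustrative sketch only: simulate papers where citations depend on
# article length but NOT on reference-list length, and show that the
# raw refs-vs-cites correlation is still strongly positive until we
# control for length. All parameters here are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 1000
pages = rng.gamma(shape=4.0, scale=3.0, size=n)   # article length (confounder)
refs = 2.0 * pages + rng.normal(0, 3, n)          # longer papers cite more
cites = 1.5 * pages + rng.normal(0, 4, n)         # longer papers get cited more
# Note: cites has no direct dependence on refs at all.

# Raw correlation: refs looks strongly predictive of cites.
raw_r = np.corrcoef(refs, cites)[0, 1]

# Multiple regression (ordinary least squares via np.linalg.lstsq)
# controlling for pages: the refs coefficient collapses toward zero.
X = np.column_stack([np.ones(n), refs, pages])
beta, *_ = np.linalg.lstsq(X, cites, rcond=None)

print(f"raw correlation, refs ~ cites: {raw_r:.2f}")           # strongly positive
print(f"refs coefficient, controlling for pages: {beta[1]:.3f}")  # near zero
```

In the simulation the raw correlation comes out around 0.9, while the partial coefficient on references hovers near zero, which is the same qualitative pattern Davis reports.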

Davis’s analysis looks convincing to me. It’s hard to say, however, whether it conclusively refutes the result reported in Nature News. That’s partly because the CMBCM analysis is derived from a much larger data set than Davis’s; but more importantly, it was presented at a conference, not in a published article.

Conference papers often present preliminary results, and in the absence of a published Methods section, the News article doesn’t tell us whether the coauthors controlled for the effects of the confounding factors Davis identifies or not. (Although it seems logical to conclude from the News piece that they didn’t.) If the CMBCM data set is going to make it through peer review at a journal, however, its authors will have to account for confounding factors.

Cite more papers, get more citations?

Update, 18 August 2010: An attempt to replicate the result discussed here finds serious issues with the statistics.

Nature News is reporting some interesting results presented as a paper at a meeting of the International Society for the Psychology of Science & Technology last week: articles published in the journal Science with longer “Works Cited” sections are themselves more frequently cited.

A plot of the number of references listed in each article against the number of citations it eventually received reveals that almost half of the variation in citation rates among the Science papers can be attributed to the number of references that they include. And — contrary to what people might predict — the relationship is not driven by review articles, which could be expected, on average, to be heavier on references and to garner more citations than standard papers.
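For a sense of scale: in a simple one-predictor regression, the fraction of variance explained is the squared correlation coefficient, so “almost half of the variation” corresponds to a raw correlation of roughly 0.7 between reference count and citation count.

```python
# In simple linear regression, variance explained (R^2) is the square of
# the correlation coefficient r. "Almost half the variation" thus implies
# a correlation of about 0.7 between reference count and citations.
r = 0.7
print(f"R^2 = {r**2:.2f}")  # prints "R^2 = 0.49", i.e. ~half the variance
```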

The same authors did a similar analysis of papers published in the journal Evolution and Human Behavior over 30 years, and found similar results [PDF]. Here’s the relevant figure from that paper:

Cite more, be cited more. Figure 2 from Webster et al. (2009) [PDF].

The lack of a “review effect” is surprising, but I don’t think this overall result is. Academia, as much as we might describe it as cutthroat, also runs on reciprocal altruism. Authors notice when their papers are cited, and are more likely to cite papers that build on or relate to their own work. I’d be interested to see the network of citation underlying the pattern Webster et al. have found—I suspect that there’s a lot of clustering around disciplines and sub-disciplines and sub-sub-sub-disciplines that contributes to all this mutual back-scratching citing.

Updated, 15 August 2010, 2126h: Fixed the link to the original Nature News article, which turns out not to be access-restricted.


Webster, G.D., Jonason, P.K., & Schember, T.O. (2009). Hot topics and popular papers in evolutionary psychology: Analyses of title words and citation counts in Evolution and Human Behavior, 1979-2008. Evolutionary Psychology, 7(3), 348.