Scientific methods in the genomic age

Nature Methods has a good editorial considering the issues around defining what science is in the age of exploratory genomics [$-a].

As schoolchildren we are taught that the scientific method involves a question and suggested explanation (hypothesis) based on observation, followed by the careful design and execution of controlled experiments, and finally validation, refinement or rejection of this hypothesis. … Scientists’ defense of this methodology has often been vigorous, likely owing to the historic success of predictive hypothesis-driven mechanistic theories in physics, the dangers inherent in ‘fishing expeditions’ and the likelihood of false correlations based on data from improperly designed experiments.

Their conclusion is that hypothesis-driven science will absorb the current flood of genomic data as the basis for new hypotheses to direct future large-scale data collection:

But ‘omics’ data can provide information on the size and composition of biological entities and thus determine the boundaries of the problem at hand. Biologists can then proceed to investigate function using classical hypothesis-driven experiments. It is still unclear whether even this marriage of the two methods will deliver a complete understanding of biology, but it arguably has a better chance than either method on its own.

As I’ve said before, massive genomic datasets change science mainly through their quantity, not their quality. On the one hand, science has always involved undirected observation – Darwin didn’t have any strong hypotheses in mind when he hopped aboard the Beagle. Classical natural history is a discipline devoted to almost nothing but undirected data collection, and it’s been the grist for evolution and ecology research since the beginning of time. On the other, it seems to me that genomic “fishing expeditions” are more hypothesis-driven than we realize, even if the only hypothesis is “Neanderthal genomes will be different from modern humans.”

On the Media on Science 2.0: Sounds good to us!

[Rant alert – I’m starting to get real tired of this nonsense. Although it is proving to be good blog fodder, and it got me published in the letters column of Science. Maybe it’s not so bad. And but so …]

Wired editor Chris Anderson is on this week’s On the Media, talking up the Petabyte Age. And OTM pretty much swallows it whole.


The Petabyte Age, as Anderson describes it, is the present moment, in which massive volumes of data (petabytes of it, in fact) supposedly mark the end of the scientific method. If you actually read the Wired story, you’ll discover that Anderson has a pretty shaky grasp of what the scientific method actually is, and apparently thinks that “statistical analysis” is not hypothesis testing. As it turns out, it is.
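To make that concrete: even the most routine statistical analysis, like fitting a straight line to some data, is already framed as a test of a null hypothesis. Here’s a minimal sketch in Python, with made-up numbers (nothing to do with Anderson’s examples), just to show where the hypothesis test lives:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

# Made-up data: is some predictor x related to an outcome y at all?
x = rng.normal(size=500)
y = 0.3 * x + rng.normal(size=500)

result = linregress(x, y)
print(f"Estimated slope: {result.slope:.2f}")
# The p-value reported by any off-the-shelf regression is the probability of
# seeing a trend this strong if the null hypothesis (true slope = 0) were true.
print(f"p-value for 'no relationship': {result.pvalue:.2g}")
```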

In the OTM interview, Anderson recants the sensationalist headline, possibly in response to the long string of critical comments it drew on Wired.com. But he repeats all of the mistakes and nonsense that generated the criticism: Craig Venter sequenced some seawater without a prior hypothesis, and Google summarizes lots of data to look for patterns without prior hypotheses; ipso facto, no one needs hypotheses anymore. (Anderson insists on talking about “theories” rather than hypotheses, which only highlights his unfamiliarity with basic philosophy of science.) The interviewer, Brooke Gladstone, pretty much lets him have his say. Does she then consult an actual working scientist, or, better yet, a philosopher of science? Not so much.

This is not the sort of coverage I’ve come to expect from OTM, which is basically in the running with RadioLab for the title of My Favorite Public Radio Show. Normally, OTM specializes in pointing out exactly this sort of failing in other news shows – interviewing pundits without actually talking to people who work in the fields in question. But it would seem that they don’t feel the scientific freaking method is important enough to cover properly.

Climate change: bad for native plants

[Correction/clarification appended]

This is how I can justify blogging as a scientific activity: once in a while, I find something really useful. Case in point is this post on the blog of Pamela Ronald, the chair of the University of California Davis plant genomics program, which points to a new paper in the latest volume of PLoS ONE predicting (perhaps not surprisingly) that climate change is going to be bad for rare plants in California.

The effect of climate change on plant communities is a major concern for me, because the range of my favorite woody monocot, the Joshua tree, may have to change quite a bit to compensate for a warmer climate. (For reference, see the photo of me setting up a pollination experiment on a Joshua tree in front of the Yucca Valley United Methodist church.) Previous projections have suggested that Joshua trees are going to be in trouble under a warming climate. Back in 2006, Science ran a cover article suggesting that climate change may make wildfires more frequent [$-a]. That’s a very real problem for Joshua tree’s range in the Mojave Desert – my lab has already lost field sites to brush fires in only about half a dozen years of focusing on Joshua trees. Another, more recent study has suggested that climate change is going to make the southwest U.S. even more arid [$-a], which is also, obviously, a bad thing for plants (and people) in the region.

Earlier work of this sort usually modeled how climate change might increase or decrease the distribution of individual plant species – big, showy things like Joshua tree, Saguaro cactus, giant Sequoias. Loarie et al. improve on this by projecting changes in whole plant communities across the California floristic province. And they predict that up to 66% of plants endemic to California will lose more than 80% of their ranges. That’s a lot of diversity – more than just my study organism – at stake.
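For readers who haven’t run into them, the species distribution (or “climate envelope”) models behind projections like this share a basic logic: characterize the climate where a species occurs today, then ask how much of the landscape still offers that climate under a projected future. What follows is a deliberately crude, hypothetical sketch of that logic in Python; it is not Loarie et al.’s actual method, which uses real climate layers and far more sophisticated models:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical occurrence records for an imaginary endemic plant: mean annual
# temperature (degrees C) and annual precipitation (mm) at occupied sites.
occ_temp = rng.normal(15, 1.5, size=200)
occ_precip = rng.normal(500, 80, size=200)

# The simplest possible "envelope": the 5th to 95th percentile of climate
# observed where the species grows.
t_lo, t_hi = np.percentile(occ_temp, [5, 95])
p_lo, p_hi = np.percentile(occ_precip, [5, 95])

def suitable(temp, precip):
    """Which grid cells fall inside the climate envelope?"""
    return (temp >= t_lo) & (temp <= t_hi) & (precip >= p_lo) & (precip <= p_hi)

# A toy landscape of grid cells; the "future" is 2 degrees warmer and 10%
# drier, standing in for a real general circulation model projection.
cur_temp = rng.normal(15, 3, size=10_000)
cur_precip = rng.normal(500, 150, size=10_000)
fut_temp = cur_temp + 2.0
fut_precip = cur_precip * 0.9

now = suitable(cur_temp, cur_precip).sum()
later = suitable(fut_temp, fut_precip).sum()
print(f"Suitable cells now: {now}, under the future climate: {later}")
print(f"Projected range change: {100 * (later - now) / now:.1f}%")
```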

Correction:
In the original version of this post, I conflated the state of California, which does include a lot of Joshua tree’s range, with the California floristic province, which doesn’t. So Loarie et al.’s new paper doesn’t directly impact Joshua trees. But it’s still cool/alarming, and decidedly post-worthy. In making that correction, I’ve also inserted a more recent study of climate change in the U.S. southwest, by Seager et al.

References:
Loarie SR, BE Carter, K Hayhoe, S McMahon, R Moe, CA Knight, and DD Ackerly. 2008. Climate change and the future of California’s endemic flora. PLoS ONE 3:e2502.

Seager R, M Ting, I Held, Y Kushnir, J Lu, G Vecchi, H-P Huang, N Harnik, A Leetmaa, N-C Lau, C Li, J Velez, and N Naik. 2007. Model projections of an imminent transition to a more arid climate in southwestern North America. Science 316:1181-4.

Westerling AL, HG Hidalgo, DR Cayan, and TW Swetnam. 2006. Warming and earlier spring increase western U.S. forest wildfire activity. Science 313:940-3.

Wired drinks the Science 2.0 kool-aid

Just a few months after computer scientist Ben Shneiderman heralded a new kind of science based on, well, nothing particularly new, Wired magazine hops on the bandwagon with a cover article announcing the end of science. This is an historic event, indeed – but only in the Henry Ford sense. Which is to say, mostly bunk.

Basically, Wired‘s argument, as laid out in the introductory article by Chris Anderson, is similar to Shneiderman’s – that a deluge of newly-available data (the “petabyte age”) will somehow make the longstanding scientific method of observation-hypothesis-experiment obsolete:

This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. … With enough data, the numbers speak for themselves.

Doesn’t anyone read Popper any more? Just walking through some of the case studies listed by Wired makes it clear that, although the petabyte age will allow us to ask and answer new questions about life, the universe, and everything, we’ll still have to use good old-fashioned hypothesis testing to do it.

Predicting agricultural yields. Agricultural consultants Lansworth predict crop yields better than the USDA by using new data on weather and soil conditions. They crunch a lot of numbers – but they’re still testing hypotheses, by fitting predictive models to all that data and determining which ones explain more of the variation. Or so I surmise, since we’re never actually told anything about their methods.

Micro-targeting political markets. Since 2004, the amount of data collected on who votes for whom has ballooned. And, from what I understand of the description here, political consultants are having a field day looking for trends in the data – which is to say, mining the data to develop and test hypotheses about voter behavior.

Guessing when air fares will rise. The website Farecast looks for trends in flight data and ticket prices to predict whether fares will change in the near future. This is – you guessed it – just another way to say that they’re testing hypotheses.
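In all three cases, the number-crunching boils down to fitting competing predictive models and asking which one better predicts data it hasn’t seen, which is hypothesis testing whether or not anyone uses the word. Here’s a minimal, entirely hypothetical sketch of that kind of comparison (fake crop-yield data, plain least squares), not anything these companies actually run:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake data: predict crop yield from rainfall and soil nitrogen.
n = 1_000
rain = rng.normal(60, 10, n)
nitrogen = rng.normal(30, 5, n)
crop_yield = 2.0 * rain + 1.5 * nitrogen + rng.normal(0, 15, n)

# Hold out the last 200 observations so models are judged on prediction, not fit.
train = np.arange(n) < 800
test = ~train

def test_error(X):
    """Least-squares fit on the training rows; mean squared error on the test rows."""
    beta, *_ = np.linalg.lstsq(X[train], crop_yield[train], rcond=None)
    residuals = crop_yield[test] - X[test] @ beta
    return np.mean(residuals ** 2)

ones = np.ones(n)
h1 = np.column_stack([ones, rain])            # hypothesis 1: rainfall alone
h2 = np.column_stack([ones, rain, nitrogen])  # hypothesis 2: rainfall plus nitrogen

print("Rainfall-only model, test error:", round(test_error(h1), 1))
print("Rainfall + nitrogen model, test error:", round(test_error(h2), 1))
# Whichever model predicts better wins; bigger datasets just make the
# comparison sharper, they don't make it something other than hypothesis testing.
```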

It’s true that most of the examples Wired cites don’t require active formulation of hypotheses by the people overseeing analysis of big data sets; instead, they let computers find the best-supported hypothesis based on the available data. And that is new – kinda.

Biologists use a similar approach, called Markov chain Monte Carlo, or MCMC, for reconstructing evolutionary trees and solving other computationally challenging problems. In MCMC, you feed the computer a dataset and tell it how to judge whether one possible explanatory model (or hypothesis) is better than another. Then you sit back and let the computer explore a whole bunch of slightly different models, and see which ones score better. Far from making hypothesis testing obsolete, this is hypothesis testing on steroids. And it is, at least for the moment, the future of science.
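A bare-bones illustration of that explore-and-score loop, boiled down to estimating a single parameter (the mean of some noisy measurements) with a Metropolis-Hastings sampler, might look like the following. This is a hypothetical toy, not how any real phylogenetics package is implemented, but the moving parts are the same: propose a slightly different model, score it against the data, keep it or discard it.

```python
import numpy as np

rng = np.random.default_rng(1)

# The "dataset": noisy measurements whose true mean we want to infer.
data = rng.normal(loc=4.2, scale=1.0, size=50)

def log_score(mu):
    """How well a candidate model (here, just a mean mu) explains the data:
    a Gaussian log-likelihood plus a broad Gaussian prior on mu."""
    log_likelihood = -0.5 * np.sum((data - mu) ** 2)
    log_prior = -0.5 * (mu / 10.0) ** 2
    return log_likelihood + log_prior

# Metropolis-Hastings: wander through model space, preferring models that
# score better but occasionally accepting slightly worse ones.
mu = 0.0
samples = []
for _ in range(20_000):
    proposal = mu + rng.normal(scale=0.5)
    if np.log(rng.uniform()) < log_score(proposal) - log_score(mu):
        mu = proposal
    samples.append(mu)

kept = np.array(samples[5_000:])  # discard the burn-in
print(f"Posterior mean estimate: {kept.mean():.2f}")
print(f"95% credible interval: {np.percentile(kept, [2.5, 97.5])}")
```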

Science 2.0 revisited

Back in March, Science ran a Perspectives piece in which computer scientist Ben Shneiderman suggested that the wealth of new data on human interactions provided by the Internet (Facebook, Amazon.com customer records, &c.) would require a new approach to science, which he called “Science 2.0” [subscription]:

… the Science 2.0 challenges cannot be studied adequately in laboratory conditions because controlled experiments do not capture the rich context of Web 2.0 collaboration, where the interaction among variables undermines the validity of reductionist methods (7). Moreover, in Science 2.0 the mix of people and technology means that data must be collected in real settings … Amazon and Netflix became commercial successes in part because of their frequent evaluations of incremental changes to their Web site design as they monitored user activity and purchases.

Science 2.0 sounded, to me, a lot like what ecologists and evolutionary biologists often do – hypothesis testing based on observations, manipulations of whole natural systems in the field, and the clever use of “natural experiments” sensu Diamond [subscription]. I said as much in a post shortly after Shneiderman’s article ran, and also wrote a brief letter to Science.

And now it turns out they’ve published it! My letter, along with a response from Shneiderman, is in the 6 June issue [subscription]. You can read it in PDF format here. In very short form, I say:

… what Shneiderman calls Science 1.0 has always included methods beyond simple controlled experiments, such as inference from observation of integrated natural systems and the careful use of “natural experiments” (1) to test and eliminate competing hypotheses.

Shneiderman’s response concedes the point on natural experiments, but says he was actually talking about manipulative experiments conducted on large online social networks:

Amazon and NetFlix designers conduct many studies to improve their user interfaces by making changes in a fraction of accounts to measure how user behaviors change. Their goal is to improve business practices, but similar interventional studies on a massive scale could develop better understanding of human collaboration in the designed (as opposed to natural) world …

That still sounds to me like ecological experimentation, but with people’s Facebook accounts instead of (to pick an organism at random) yucca moths. Maybe I’m just not getting it, but I don’t see anything in Shneiderman’s description that qualifies as a new kind of science.
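For what it’s worth, the intervention Shneiderman describes is what the web-design world calls an A/B test, and the analysis at the end of it is a thoroughly conventional hypothesis test. A hypothetical sketch, with invented numbers rather than anything Amazon or Netflix has published:

```python
from scipy.stats import chi2_contingency

# Invented A/B test: 10,000 accounts see the old recommendation layout and
# 10,000 see a new one; we count how many users in each group buy something.
bought_old, passed_old = 812, 9_188
bought_new, passed_new = 871, 9_129

table = [[bought_old, passed_old],
         [bought_new, passed_new]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"Old layout conversion: {bought_old / 10_000:.1%}")
print(f"New layout conversion: {bought_new / 10_000:.1%}")
print(f"p-value for 'the two layouts convert equally well': {p_value:.3f}")
# A small p-value rejects the null hypothesis that the design change made no
# difference, i.e., a manipulative experiment analyzed with Science 1.0 tools.
```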

References
Shneiderman B. 2008. Science 2.0. Science 319:1349-50.

Diamond J. 2001. Dammed experiments! Science 294:1847-8.

Yoder JB, and B Shneiderman. 2008. Science 2.0: Not So New? Science 320:1290-1.

Science 2.0? New data, but not new methods

In a Perspectives piece in this week’s Science, Ben Shneiderman argues that we need a new kind of science [subscription] to deal with human interactions via the Internet. He calls this “Science 2.0”:

The guiding strategies of Science 1.0 are still needed for Science 2.0: hypothesis testing, predictive models, and the need for validity, replicability, and generalizability. However, the Science 2.0 challenges cannot be studied adequately in laboratory conditions because controlled experiments do not capture the rich context of Web 2.0 collaboration, where the interaction among variables undermines the validity of reductionist methods (7). Moreover, in Science 2.0 the mix of people and technology means that data must be collected in real settings (see the figure). Amazon and Netflix became commercial successes in part because of their frequent evaluations of incremental changes to their Web site design as they monitored user activity and purchases.

Good evolutionary ecologist that I am, I read this and said to myself, “Science 2.0 sounds like what I already do.” Biologists have been using methods beyond controlled laboratory experiments and collecting data in “real settings” to test hypotheses since Darwin’s day and before (see Jared Diamond’s discussion of natural experiments found in “real settings” [subscription]).

As an example of Science 2.0 methods, Shneiderman shows a chart mapping collaborations between U.S. Senators, a version of which is available here. It’s an informative picture – you can see immediately that “independents” Joe Lieberman and Bernie Sanders are a lot more connected to the Democrats than the Republicans, and that a relatively small number of senators act as “bridges” between the two parties. But it’s not clear to me why this represents a new method (apart from the visualization technology behind it) – couldn’t the same graphic have been compiled from paper voting records in 1920? It might be easier to produce now, but I don’t think the diagram represents a new scientific method. (An analogy: it might be really easy for me to do ANOVAs now, but these statistics pre-date my laptop and R.)
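To underline the point, here’s a hypothetical sketch of how a collaboration graph like that one could be built from nothing fancier than a table of roll-call votes, the kind of records that have existed on paper for the better part of a century. The senators and votes below are invented; the networkx library just handles the graph bookkeeping:

```python
import itertools
import networkx as nx

# A tiny, made-up roll-call table: senator -> votes on five bills (1 = yea, 0 = nay).
votes = {
    "Senator A": [1, 1, 0, 1, 0],
    "Senator B": [1, 1, 0, 1, 1],
    "Senator C": [0, 0, 1, 0, 1],
    "Senator D": [0, 0, 1, 0, 0],
    "Senator E": [1, 0, 1, 1, 0],
}

# Connect two senators whenever they voted together often enough, weighting
# the edge by their rate of agreement.
G = nx.Graph()
for (s1, v1), (s2, v2) in itertools.combinations(votes.items(), 2):
    agreement = sum(a == b for a, b in zip(v1, v2)) / len(v1)
    if agreement >= 0.6:  # arbitrary threshold for drawing an edge
        G.add_edge(s1, s2, weight=agreement)

# "Bridges" show up as nodes with high betweenness centrality: the senators
# that connect otherwise separate clusters.
for senator, centrality in sorted(nx.betweenness_centrality(G).items()):
    print(f"{senator}: betweenness = {centrality:.2f}")
```

Nothing in that snippet depends on Web 2.0; only the ease of collecting the underlying data does.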

Shneiderman also suggests that Science 2.0 will be interested in different kinds of things than hoary old Science 1.0:

Science 1.0 heroes such as Galileo, Newton, and Einstein produced key equations that describe the relationships among gravity, electricity, magnetism, and light. By contrast, Science 2.0 leaders are studying trust, empathy, responsibility, and privacy.

He cites a “fivefold growth of research on privacy and trust,” based on a literature search, but doesn’t elaborate on how these topics require truly new methods. Again, I’d suggest that Science 1.0 was interested in human interactions, too (just ask a Sociobiologist), but it didn’t have the data provided by the Internet until, well, about 10 years ago. I’d wager that none of the studies turned up by Shneiderman’s lit search do anything radically new, methods-wise.

It’s certainly true that the growth of social networking through the Internet allows scientists access to data that can answer questions we weren’t able to deal with before. For instance, we have real-time records of people interacting with their friends thanks to Facebook (momentarily pretend this doesn’t creep you out). But the actual methods we’ll use to analyze those data are nothing radically new. On that count, Science 2.0 looks a lot like a Microsoft product upgrade – a new interface “skin” on top of the same basic mechanism.

References:
Shneiderman B. 2008. Science 2.0. Science 319:1349-50.

Diamond J. 2001. Dammed experiments! Science 294:1847-8.