How big data has created a big crisis in science

  • Written by Kai Zhang, Associate Professor of Statistics and Operations Research, University of North Carolina at Chapel Hill

There’s an increasing concern among scholars that, in many areas of science, famous published results tend to be impossible to reproduce.

This crisis can be severe. For example, in 2011, Bayer HealthCare reviewed 67 in-house projects[1] and found that it could replicate fewer than 25 percent of them. Furthermore, over two-thirds of the projects had major inconsistencies. More recently[2], in November 2018, an investigation of 28 major psychology papers found that only half could be replicated.

Similar findings are reported across other fields, including medicine[3] and economics[4]. These striking results call the credibility of scientists as a whole into question.

What is causing this big problem? There are many contributing factors. As a statistician, I see huge issues with the way science is done in the era of big data. The reproducibility crisis is driven in part by invalid statistical analyses that stem from data-driven hypotheses – the opposite of how things are traditionally done.

Scientific method

In a classical experiment, the statistician and the scientist first frame a hypothesis together. Then scientists conduct experiments to collect data, which are subsequently analyzed by statisticians.

A famous example of this process is the “lady tasting tea” story.[5] Back in the 1920s, at a party of academics, a woman claimed to be able to tell, by taste, whether the tea or the milk had been poured into a cup first. Statistician Ronald Fisher doubted that she had any such talent. He hypothesized that, out of eight cups of tea, prepared such that four cups had milk added first and the other four had tea added first, the number of correct guesses would follow a probability model called the hypergeometric distribution[6].

Such an experiment was done with eight cups of tea sent to the lady in a random order – and, according to legend, she categorized all eight correctly. This was strong evidence against Fisher’s hypothesis: the chance that the lady could have gotten every answer right through random guessing was an extremely low 1.4 percent.
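
To make the arithmetic concrete, here is a minimal check of that figure in Python (my own illustration, not part of the original story): under Fisher’s hypothesis the lady is simply guessing which four of the eight cups had milk first, and only one of the 70 equally likely choices is entirely correct.

    from math import comb

    # Under the guessing hypothesis the lady picks 4 of the 8 cups at random as
    # the "milk first" cups.  All comb(8, 4) = 70 choices are equally likely,
    # and exactly one of them matches the true preparation.
    p_all_correct = 1 / comb(8, 4)
    print(f"P(all eight cups correct by guessing) = {p_all_correct:.3f}")  # about 0.014, i.e. 1.4 percent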

That process – hypothesize, then gather data, then analyze – is rare in the big data era. Today’s technology can collect huge amounts of data[7], on the order of 2.5 exabytes a day.

While this is a good thing, science often develops at a much slower speed, and so researchers may not know how to specify the right hypothesis before analyzing the data. For example, scientists can now collect tens of thousands of gene expressions from people, but it is very hard to decide whether a particular gene should be included in or excluded from the hypothesis. In this case, it is appealing to form the hypothesis based on the data. While such hypotheses may appear compelling, conventional inferences drawn from them are generally invalid. This is because, in contrast to the “lady tasting tea” process, the order of building the hypothesis and seeing the data has been reversed.

Data problems

Why can this reversal cause a big problem? Let’s consider a big data version of the tea lady – a “100 ladies tasting tea” example.

What’s in the cup? Alex Yuzhakov/shutterstock.com[8]

Suppose there are 100 ladies who cannot tell the difference between the tea, but take a guess after tasting all eight cups. There’s actually a 75.6 percent chance that at least one lady would luckily guess all of the orders correctly.
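
To see roughly where a number like this comes from, assume each lady independently guesses one of the 70 equally likely ways to pick four “milk first” cups, so each has a 1-in-70 chance of a perfect score, and take the complement of the event that every one of them fails. A minimal sketch under that simplifying assumption (the exact figure depends on the modeling details, but it lands at roughly three chances in four):

    from math import comb

    p_one = 1 / comb(8, 4)        # one guessing lady gets all eight cups right: 1/70
    n_ladies = 100
    # P(at least one perfect score) = 1 - P(every lady fails),
    # assuming the ladies guess independently of one another.
    p_at_least_one = 1 - (1 - p_one) ** n_ladies
    print(f"P(at least one perfect score among {n_ladies} guessers) = {p_at_least_one:.2f}")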

Now, if a scientist saw one lady with the surprising outcome of all correct cups and ran a statistical analysis for her using the same hypergeometric distribution above, he might conclude that this lady had the ability to tell the difference between the cups. But this result isn’t reproducible. If the same lady did the experiment again, she would very likely sort the cups wrongly – not getting as lucky as her first time – since she couldn’t really tell the difference between them.

This small example illustrates how scientists can “luckily” see interesting but spurious signals from a dataset. They may formulate hypotheses after seeing these signals, then use the same dataset to draw conclusions, claiming the signals are real. It may be a while before they discover that their conclusions are not reproducible. This problem is particularly common in big data analysis[9]: because the datasets are so large, some spurious signals are bound to “luckily” occur just by chance.
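
A small simulation makes the point concrete. The sketch below (my own illustration, with made-up dimensions, not data from any study) screens 1,000 pure-noise “gene expression” features against a group label that is unrelated to all of them; at the usual 5 percent significance threshold, dozens of features still look “significant” purely by chance.

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)

    # Pure noise: 1,000 fake "gene expression" features for 50 subjects,
    # plus a group label that has no real relationship to any of them.
    n_subjects, n_features = 50, 1000
    data = rng.normal(size=(n_subjects, n_features))
    group = rng.integers(0, 2, size=n_subjects).astype(bool)

    # Naive two-sample t-tests on every feature, one at a time.
    _, p_values = ttest_ind(data[group], data[~group], axis=0)

    # Roughly 5 percent of the pure-noise features clear the 0.05 threshold anyway.
    print(f"{(p_values < 0.05).sum()} of {n_features} noise features have p < 0.05")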

What’s worse, this process may allow scientists to manipulate the data[10] to produce the most publishable result. Statisticians joke[11] about such a practice: “If we torture the data hard enough, they will tell us something.” However, is this “something” valid and reproducible? Probably not.

Stronger analyses

How can scientists avoid the above problem and achieve reproducible results in big data analysis? The answer is simple: Be more careful.

If scientists want reproducible results from data-driven hypotheses, then they need to carefully take the data-driven process into account in the analysis. Statisticians need to design new procedures that provide valid inferences. There are a few already underway[12].
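
One long-standing safeguard – not necessarily one of the new procedures referenced above – is sample splitting: use one part of the data to search for a promising hypothesis, and a held-out part that played no role in the search to test it. A minimal sketch, reusing the pure-noise setup from the earlier simulation:

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(1)
    data = rng.normal(size=(50, 1000))            # pure noise again
    group = rng.integers(0, 2, size=50).astype(bool)

    # First half of the subjects: search for the most promising feature.
    explore = slice(0, 25)
    _, p_explore = ttest_ind(data[explore][group[explore]],
                             data[explore][~group[explore]], axis=0)
    best = int(np.argmin(p_explore))              # the data-driven hypothesis

    # Second half, untouched during the search: test that single hypothesis.
    confirm = slice(25, 50)
    _, p_confirm = ttest_ind(data[confirm][group[confirm], best],
                             data[confirm][~group[confirm], best])
    print(f"feature {best}: exploratory p = {p_explore[best]:.4f}, "
          f"confirmatory p = {p_confirm:.4f}")    # the confirmation usually looks unimpressive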

Statistics is about the optimal way to extract information from data. By nature, it is a field that evolves as data evolve. The problems of the big data era are just one example of such evolution. I think that scientists should embrace these changes, as they will lead to opportunities to develop novel statistical techniques, which will in turn provide valid and interesting scientific discoveries.

References

  1. ^ Bayer HealthCare reviewed 67 in-house projects (doi.org)
  2. ^ More recently (www.theatlantic.com)
  3. ^ medicine (doi.org)
  4. ^ economics (doi.org)
  5. ^ “lady tasting tea” story. (io9.gizmodo.com)
  6. ^ hypergeometric distribution (mathworld.wolfram.com)
  7. ^ huge amounts of data (public.dhe.ibm.com)
  8. ^ Alex Yuzhakov/shutterstock.com (www.shutterstock.com)
  9. ^ particularly common in big data analysis (doi.org)
  10. ^ to manipulate the data (doi.org)
  11. ^ Statisticians joke (books.wwnorton.com)
  12. ^ a few already underway (projecteuclid.org)

Read more http://theconversation.com/how-big-data-has-created-a-big-crisis-in-science-102835

Metropolitan republishes selected articles from The Conversation USA with permission

Visit The Conversation to see more