Is the Internet Age Redefining Science?
To a middle school student, science is a clear category; it’s a subject you take, along with history, language, or P.E. You have a science teacher; you read a science textbook. You learn about the scientific method. In the real world, though, categories are not always so clearly delineated. In fact, the leading science journal, Nature, seems to be asking some fundamental questions about the methods and materials of its very reason for being.
This week, Nature presented a debate between two cancer researchers on whether scientific research should proceed “hypothesis first” or “data first.” The controversy stems, in part, from the technology now available: large-scale genomic surveys are possible, and funds are being redirected from traditional methods toward building vast databases of genetic information. Robert Weinberg, alarmed at the trend, argued that mere data collection without understanding is pointless and that the funding shifts are discouraging the small research projects from which major insights have traditionally come.1 Todd Golub countered that patterns in complex phenomena become apparent only when sufficient data are available:2 it takes a lot of data to separate signal from noise, so data collection must precede the generation of new hypotheses. The interesting thing about these articles is not who won the debate, but that so basic a question about the scientific method needs to be asked nearly 400 years after Francis Bacon. To what extent is the question a consequence of the sheer volume of data that can now be accumulated and stored? The scientific method was devised when data were written with a quill on parchment.
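Golub’s signal-from-noise point can be made quantitative. As a rough sketch (the effect size and noise level below are invented for illustration, not taken from either article): the standard error of a sample mean shrinks as 1/√n, so the smaller the effect you hope to detect against a noisy background, the more observations you need before it stands out at all.

```python
import math

def min_samples(effect, sigma, z=1.96):
    """Smallest n at which an effect of the given size exceeds
    z standard errors (sigma / sqrt(n)) of pure noise."""
    # Require effect > z * sigma / sqrt(n)  =>  n > (z * sigma / effect)**2
    return math.ceil((z * sigma / effect) ** 2)

# A subtle signal (0.1 units) buried in unit-variance noise:
print(min_samples(effect=0.1, sigma=1.0))   # 385 observations needed
# A strong signal of the same size as the noise:
print(min_samples(effect=1.0, sigma=1.0))   # only 4
```

A tenfold-smaller effect demands a hundredfold more data, which is the arithmetic behind the “data first” camp’s case for large surveys.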
Peer review is another focal point of dispute. Last week, Nature applauded a British research council – the Engineering and Physical Sciences Research Council (EPSRC) – for cracking down on the practice of flooding review agencies with grant applications.3 Because the odds of winning a grant are low, “low success rates lead researchers to submit more applications in the hope of securing at least some funding, overburdening peer reviewers,” the editors explained. “The system ends up rewarding safe, short-term research proposals that meet everyone’s approval, at the cost of the innovative suggestions it should be supporting.” The council now says that if you don’t secure funding, you are limited to one application the following year. The editors feel the council’s new “‘blacklisting’ rule is a radical, unpopular but courageous effort to address a crisis in the peer-review system.” But will the cure be worse than the disease?
The consequences of the revised policy are uncertain. Thanks to other peer-review changes, applications have already been cut by about a third since last year, and success rates are up. But the new policy’s threat of exclusion may further discourage adventurous funding bids. The EPSRC also runs the risk of alienating its community, making it harder to find peer reviewers – who are in increasingly scarce supply.
The rule has already generated inequities and complaints. Nature still thinks it was a good move that requires fine-tuning; no one is sure at this point what will happen. Could luck play a role in who gets in the game? “Other scientists have worried that an application is marked ‘unsuccessful’ if it falls below the halfway point on a list of proposals ranked by panels of peer reviewers – a criterion that not only seems arbitrary, but also risks taking out good researchers who are simply unlucky.” Imagine if the loser in this process had been a young Isaac Newton. The editors left open whether the council’s “gutsy gamble” will work, and noted that other councils are watching what happens.
Letters to the editor are often interesting to read. Three biologists from three widely respected scientific institutions wrote to Nature last week in a huff, challenging the editors’ definition of science. As a follow-up to the Human Genome Project, now 10 years old, Nature’s editors had written that it is “Time for the epigenome” project.4 The three scientists were “astonished” at that editorial,5 claiming that it seemed to “disregard principles of gene regulation and of evolutionary and developmental biology that have been established during the past 50 years.” Their complaint was not just a disagreement over traditional practices, but a challenge to Nature’s acceptance of the idea that the epigenome project has a “scientific basis” at all. Undoubtedly the editors would take umbrage at challenges to their ability to judge what constitutes science.
The internet age is shifting the dynamics of scientific practice. However comfortable the world was with the peer-reviewed publishing paradigm, times have changed. Instant internet access is democratizing science in many ways. Nature has read the tea leaves and is adjusting. In a dramatic move, Nature’s editors are opening up their once-impregnable editorial fortress and letting the peasants in. “Nature’s new online commenting facility opens up the entire magazine for discussion,” the Editorial announced this week.6 They have some concerns about signal to noise; comments will be vetted and monitored to weed out libel, obscenity or unjustified accusations – but not trivia. They will review their approach after a few months. Nevertheless, the popularity of internet blogs has not been lost on Nature and they are seeing the value of interesting and lively dialogue. It appears from the comments to this editorial that many think it’s a great idea.
Perhaps the best way to evaluate good science is with some form of measurement. Alas, another paper in Nature pointed out serious failings in that regard. In an Opinion piece last week,7 Julia Lane proposed, “Let’s make science metrics more scientific.” She wasn’t discussing better ohmmeters or ammeters – the subtitle explained, “To capture the essence of good science, stakeholders must combine forces to create an open, sound and consistent system for measuring all the activities that make up academic productivity, says Julia Lane.” She described the problem starkly:
Measuring and assessing academic performance is now a fact of scientific life. Decisions ranging from tenure to the ranking and funding of universities depend on metrics. Yet current systems of measurement are inadequate. Widely used metrics, from the newly-fashionable Hirsch index to the 50-year-old citation index, are of limited use. Their well-known flaws include favouring older researchers, capturing few aspects of scientists’ jobs and lumping together verified and discredited science. Many funding agencies use these metrics to evaluate institutional performance, compounding the problems. Existing metrics do not capture the full range of activities that support and transmit scientific ideas, which can be as varied as mentoring, blogging or creating industrial prototypes.
The dangers of poor metrics are well known – and science should learn lessons from the experiences of other fields, such as business. The management literature is rich in sad examples of rewards tied to ill-conceived measures, resulting in perverse outcomes. When the Heinz food company rewarded employees for divisional earnings increases, for instance, managers played the system by manipulating the timing of shipments and pre-payments. Similarly, narrow or biased measures of scientific achievement can lead to narrow and biased science.
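To make one of the metrics in the passage above concrete: the Hirsch index is trivially easy to compute, which is part of its appeal and part of its limitation. A minimal sketch (the citation counts here are invented for illustration):

```python
def h_index(citations):
    """Largest h such that h of the papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i   # the i-th best paper still has >= i citations
        else:
            break
    return h

# Two very different careers can score identically:
print(h_index([10, 8, 5, 4, 3]))    # 4 -- steady, solid output
print(h_index([100, 4, 4, 4, 0]))   # 4 -- one landmark paper, little else
```

The index cannot see the landmark paper in the second career, and it counts discredited citations the same as verified ones – exactly the kind of flaw the quotation describes.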
Whether Lane’s suggestions will solve these problems is another question. The fact that she opened them up for discussion in Nature should be enough to raise eyebrows among those who think of science as an unbiased enterprise. Lane’s paper did more to elaborate the problems than to solve them. Moreover, her solutions sound like an internet-age Web 3.0 pipe dream:
How can we best bring all this theory and practice together? An international data platform supported by funding agencies could include a virtual ‘collaboratory’, in which ideas and potential solutions can be posited and discussed. This would bring social scientists together with working natural scientists to develop metrics and test their validity through wikis, blogs and discussion groups, thus building a community of practice. Such a discussion should be open to all ideas and theories and not restricted to traditional bibliometric approaches.
Something “should” be done, she ended: “Some fifty years after the first quantitative attempts at citation indexing, it should be feasible to create more reliable, more transparent and more flexible metrics of scientific performance.” She claimed “The foundations have been laid” but it’s evident that little is being done yet. That means all the problems she listed are today’s risks and realities. Someday, over the rainbow, “Far-sighted action can ensure that metrics goes beyond identifying ‘star’ researchers, nations or ideas, to capturing the essence of what it means to be a good scientist.”
It’s clear that science is evolving, as it always has. But what is it evolving from, and what is it evolving toward? If science itself is not stable, has it ever been – or will it ever be – a reliable method of gaining understanding?8
1. Robert Weinberg, “Point: Hypotheses first,” Nature 464, 678 (1 April 2010) | doi:10.1038/464678a; Published online 31 March 2010.
2. Todd Golub, “Counterpoint: Data first,” Nature 464, 679 (1 April 2010) | doi:10.1038/464679a; Published online 31 March 2010.
3. Editorial, “Tough love,” Nature 464, 465 (25 March 2010) | doi:10.1038/464465a; Published online 24 March 2010.
4. Editorial, “Time for the epigenome,” Nature 463, 587 (4 February 2010) | doi:10.1038/463587a; Published online 3 February 2010.
5. Ptashne, Hobert and Davidson, “Questions over the scientific basis of epigenome project,” Nature 464, 487 (25 March 2010) | doi:10.1038/464487c.
6. Editorial, “Content rules,” Nature 464, 466 (25 March 2010) | doi:10.1038/464466a; Published online 24 March 2010.
7. Julia Lane, “Let’s make science metrics more scientific,” Nature 464, 488-489 (25 March 2010) | doi:10.1038/464488a; Published online 24 March 2010.
8. “Understanding” is not the same thing as explanation, prediction, and control. Scientific theories can provide those things and still be wrong or lacking in understanding of reality. See the 3/17/2010 commentary.
Science is mediated through fallible human beings. It is not “out there” in the world, to be retrieved in some unbiased way. Human beings have to figure out not only what nature is showing us – they have to figure out what nature is, and what science is. At every step there are decisions to be made by creatures who don’t know everything and who weren’t there at the beginning. We must divest our minds of the notion that science is an unbiased method that obtains incontrovertible truth. That is certainly not the case to an evolutionist. If blind processes produced human beings, we have no necessary or certain access to external reality. Some philosophers have tried to defend “evolutionary epistemology” – a notion that if evolution had not put us in touch with reality, we would not have survived. That’s a self-referential fallacy that assumes reality is real and that evolution is capable of addressing philosophical questions.
Science is supposed to be a systematic attempt to discern and understand the natural world, but all attempts to define science in ways that keep the good stuff in and the bad stuff out have failed. Take any definition of science and you will find counterexamples: Is science methodologically rigorous? So is astrology. Is science restricted to repeatable observation? Better not talk about dark energy or black holes. Does it make predictions? Some sportscasters’ forecasts beat the 5% significance level considered statistically significant in scientific experiments. Is it the consensus of the learned? Astrology, alchemy and Ptolemaic astronomy had long and established credentials. Is it restricted to explanations based on natural law? So much for chaos theory, probability and any explanation invoking contingency, like evolution. Is it restricted to natural explanations for natural phenomena? Read creationist journals and you will find much of this, yet the scientific establishment routinely excludes their views. Consistent philosophers of science have had to agree that, by any normal definition, creation science is scientific – or else you wind up excluding other approaches the establishment doesn’t want to give up.
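The 5% threshold the sportscaster jab alludes to can be sketched with an exact binomial test (the forecaster’s record below is invented): a pundit who calls 16 of 20 games correctly clears the p < 0.05 bar handily, yet few would call the feat science.

```python
from math import comb

def binomial_p_value(successes, trials, chance=0.5):
    """One-sided exact p-value: probability of at least this many
    successes if every call were a pure guess."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(successes, trials + 1))

# 16 correct calls out of 20 against a coin-flip baseline:
p = binomial_p_value(16, 20)
print(f"{p:.4f}")   # 0.0059 -- comfortably "statistically significant"
```

Statistical significance is a property of the numbers, not of the enterprise that produced them, which is the point of the counterexample.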
No two philosophers of science agree completely on what science is, let alone what scientists should be doing. Philosophers differ wildly on the nature of scientific discovery, the nature of scientific evidence, and the nature and propriety of scientific explanation. The whole field is riddled with deep and unresolved questions. If you resort to an operational definition, it becomes circular: What is science? Science is what scientists do. What do scientists do? Science. In practice, “science” is often defined as whatever those in power take it to mean. As shown by the letter to Nature above, they sometimes can’t agree among themselves.
The practice of science has changed considerably over the centuries. In the 19th century, interested amateurs like James Joule worked independently and discussed their findings at local scientific societies that were little more than clubs. Today there is rapid, instantaneous conversation via the internet – some good, some bad, some ugly. Science has become a human social phenomenon wielding immense political and economic power. Many individual scientists do their work honestly: they really want to figure out the truth about some phenomenon, find a cure, bring clarity to a question about nature, or organize our accumulating data in a useful way. At every level, though, human frailty is an intrinsic factor. Consider these very practical issues, each of which requires decisions based on fallible human opinions:
- Who gets funding.
- How one increases the odds of getting funding.
- How much funding is needed (meat over gravy).
- How much one has to go along to get along.
- What school one goes to, and how it affects prestige.
- How one’s work is perceived by one’s peers.
- The availability of peer reviewers.
- Whether the peer reviewers are unbiased or potential rivals.
- How many peer reviewers are enough.
- Whether a glass ceiling exists for women researchers.
- Whether the good-old-boys club keeps out young or female entrants.
- Whether a consensus represents confidence or inertia.
- To what extent a consensus muscles out the mavericks.
- Whether a maverick has a view worth hearing (and who decides?).
- The effect of tenure or the lack of it on objectivity.
- Whether corporate funding biases the findings.
- Whether government funding biases the findings.
- Whether individual hubris biases the findings (think Mesmer).
- The influence of one or more strong personalities in a field (think Freud).
- Whether quantity of research activity correlates with significance.
- Whether number of published papers correlates with understanding.
- Whether volume of writing on a subject correlates with its value.
- The extent to which references reinforce dogma (see 03/17/2006).
- How long it takes for new knowledge, or falsified theories, to become generally known (01/15/2010).
- Whether public comments provide signal or noise.
- Whether an expensive project provides value.
- How a project’s perceived value is to be measured.
- How the quality of scientific activity or results is to be measured.
- At what point a project outlives its usefulness.
- Whether the issue being investigated is a scientific question.
These and other issues raise an interesting thought: is a kid doing a science project she loves, or a citizen scientist pursuing a question out of personal interest and curiosity, closer to the pure scientific ideal? But if so, how would they ever afford to build a Large Hadron Collider? The expense of large scientific research programs has created a monstrosity of institutions, political processes and disputes about what science is trying to do and why. It might be compared to the way San Francisco became a boom town to support the gold miners: a lot of ancillary activity emerged (including crime and saloons) whose relevance to the activity of mining was questionable. Nevertheless, we’re stuck with Big Science. Whether more openness to public visibility via the internet will keep it honest (or make it honest) remains to be seen.
Exercise: Add to our list of non-epistemic factors that must be considered in evaluating the nature and results of science.