Does Microevolution Add Up?
Do numerous small changes add up to big ones, like Darwin thought? In the Jan. 15 issue of Nature,1 New Zealand kiwi David Penny (Allan Wilson Center for Molecular Ecology and Evolution, Massey University) is hopeful that the new chimp genome will prove it so:
The fundamental issue here is Darwin’s bold claim that “numerous, successive, slight modifications” are sufficient for all of evolution (Fig. 1 [a photo of a group of chimpanzees]). This can be paraphrased, in later terms, as “microevolution is sufficient to explain macroevolution”. The historical context is that evolutionary biology can be divided into two phases: first, the acceptance in the 1860s that evolution (macroevolution) had indeed occurred; second, the realization in the mid-1900s that the processes of microevolution (natural selection working through genetics) were necessary for evolution to occur.
Although the chimp genome is still in a preliminary draft stage, Penny points to some early results by A.G. Clark that suggest there has been some “positive selection” between ape and human genes, compared to the mouse genome:
Use of the mouse genome as an outgroup allows estimates of the number of synonymous (silent) mutations and non-synonymous (replacement) mutations. The ratio of the two permits the potential identification of genes that have been under positive selection in humans as opposed to chimpanzees, and vice versa.
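The comparison Penny describes — classifying codon differences as silent or amino-acid-changing and taking their ratio — can be sketched in a few lines. Below is a simplified, hypothetical illustration: real analyses (e.g. the Nei–Gojobori method) also normalize by the number of possible synonymous and non-synonymous sites, and the toy sequences and partial codon table here are invented for the example.

```python
# Partial standard genetic code table, covering only the codons
# used in the toy sequences below (a full table has 64 entries).
CODON_TABLE = {
    "TTA": "Leu", "TTG": "Leu",
    "GCT": "Ala", "GCC": "Ala",
    "AAA": "Lys", "AGA": "Arg",
}

def classify_substitutions(seq_a, seq_b):
    """Count synonymous (silent) and non-synonymous (replacement)
    codon differences between two aligned coding sequences of
    equal length."""
    syn = nonsyn = 0
    for i in range(0, len(seq_a), 3):
        codon_a, codon_b = seq_a[i:i + 3], seq_b[i:i + 3]
        if codon_a == codon_b:
            continue  # identical codons: no substitution
        if CODON_TABLE[codon_a] == CODON_TABLE[codon_b]:
            syn += 1      # silent: same amino acid encoded
        else:
            nonsyn += 1   # replacement: amino acid changes
    return syn, nonsyn

# Hypothetical aligned codons: Leu-Ala-Lys vs. Leu-Ala-Arg
human = "TTAGCTAAA"
chimp = "TTGGCCAGA"

syn, nonsyn = classify_substitutions(human, chimp)
print(syn, nonsyn)  # prints: 2 1
```

An elevated proportion of replacement changes relative to silent ones in a gene is what analyses like Clark's take as a signal of possible positive selection.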
“Not surprisingly,” he then says, “selective changes occur in both the human and chimpanzee lineages (our common ancestor was neither chimp nor human).” Like what? He points to one example Clark found: enzymes for amino acid breakdown appear to have been under positive selection. What does it mean?
This is concordant with the generally high proportion of meat (and thus protein) in the human diet, at least in comparison with the more herbivorous chimpanzee and gorilla. The increased capacity to break down amino acids is not surprising in another respect. For example, failure to catabolize phenylalanine has several adverse effects, including brain damage. Overall, the finding lends support to theories that an increased proportion of meat in the diet of early humans was important for an increase in brain size. Regardless of that, there could also be ethical implications. If early humans ate meat ‘naturally’, then for example being vegetarian could be considered a personal choice rather than a universal ethical decision. But all that can be claimed here is that scientific knowledge will be necessary, even if not sufficient, for solving such ethical questions.
The only other case of possible positive selection involves differences in smell genes between apes and men. Some genes seem to be under positive selection, while others seem to be becoming inactive as pseudogenes. This work is an example, he thinks, of how evolutionary comparative genomics can stimulate research:
These results illustrate how genome-wide information will stimulate new experiments, both at the level of gene expression and with the aim of making physiological comparisons. What, for instance, is the comparative sensitivity of humans and chimps to a range of olfactory stimuli? Do humans have an improved receptivity to odours from the increased proportion of meat and/or cooked foods in our diet? Such tests will allow us to see how genetic differences manifest themselves at the level of the organism, and we can expect a burst of experiments to that end.
He touches on other questions: how important is neutral evolution? How can one detect differences in rates of gene expression? Do differences in gene expression occur more with conserved genes, or with those undergoing positive selection? Will these studies help nail down mutation rates? What is the sustainable reproduction rate? “If the proportion of deleterious and slightly deleterious mutations is significant,” he notes, “then exact replacement reproductive rates might lead to eventual genetic decline.” (That doesn’t worry him as much as “the planet’s ecological sustainability,” a “much more immediate worry.”)
These are tastes of the “plenty of food for thought” that he thinks Clark’s initial studies provide. The chimp genome project marches along. Penny concludes,
The full sequence will be available later this year, and further comparative analyses should lead to a definite answer as to whether there is anything in the human genome that is not accounted for by the normal microevolutionary processes. Is there a genetic continuum between us and our ancestors and the great apes? If there is, then we can say that these processes are genetically sufficient to fully account for human uniqueness – and that would be my candidate for the top scientific problem solved in the first decade of the new millennium.
1David Penny, “Evolutionary biology: Our relative genetics,” Nature 427, 208 (15 January 2004); doi:10.1038/427208a.
Ahem. Why are you asking this question? Here we are, 150 years after Darwin remade the scientific world, and you still don’t know whether numerous, successive slight modifications could make a man out of an ape?
Notice the wild, unconstrained, bluffing imagination of David Penny. He takes a few genes that seem to differ in their ability to break down some amino acids, and suddenly we have scientific information on (1) diet, (2) evolution of the brain, and (3) ethics. Incredible. In any other field of inquiry, such unjustified extrapolation from meager data would be scorned. But since Penny is an evolutionary biologist, he gets away with it and his tall tale gets published in all seriousness in Nature, the most esteemed scientific journal in the world. Why?
It’s important to know something about the Darwinian Revolution. What happened in 1859 was not just the announcement of a new scientific theory. It was a fundamental change in the way science is done. Prior to Darwin, scientists (mostly theists and creationists) believed strongly in proof. “Nothing on mere authority” was the motto of the Royal Society. Scientists were careful to distinguish between speculations and facts that were demonstrable through experiment.
But with Darwin, the standards were lowered. It became permissible to just speculate about a natural phenomenon. Proof was no longer required: Darwin elevated the esteem of the hypothesis in science.
Despite the outrage of many scientists of the period, Darwin’s admirers, including John Stuart Mill, Karl Marx and Herbert Spencer, recognized the essence of what Darwin had done: he had opened up a new framework for storytelling. They freely admitted that Darwin had not proved his case. Darwin himself understood that his theory lacked an explanation for variation, inheritance, and speciation, and that he had not actually demonstrated any transformations. Most of the alleged evidence adduced in The Origin in support of his hypothesis of natural selection was (1) circumstantial, (2) based on analogy with domestic breeding, or (3) negative theological argument (i.e., “a Designer would not have done it this way”) – an example of the either-or fallacy. But none of this lack of scientific proof mattered, because Darwin had changed the rules of science to incorporate mere speculation. Notice what John Stuart Mill said:
Mr. Darwin’s remarkable speculation on the origin of species is another unimpeachable example of a legitimate hypothesis. . . . It is unreasonable to accuse Mr. Darwin (as has been done) of violating the rules of induction. The rules of induction are concerned with the condition of proof. Mr. Darwin has never pretended that his doctrine was proved. He was not bound by the rules of induction but by those of hypothesis. And these last have seldom been more completely fulfilled. He has opened a path of inquiry full of promise, the results of which none can foresee. (cited in Janet Browne, Charles Darwin: The Power of Place [Princeton, 2002], p. 186.)
Some of those results we see with hindsight: eugenics, communism, and Nazism. Wake up, people! Don’t you see what has happened? The rules of science were changed! Proof was out. Hypothesis (read: speculation, imagination) was in.
Let’s get something straight. Hypothesis is not science. Hypothesis comes before science. A hypothesis is merely the hunch, the guess, the heuristic device a scientist uses to begin his experiments that, hopefully, will prove or disprove the hypothesis. Yes, it takes a good hypothesis to produce good science; Faraday might not have achieved such success without the hunch that the forces of electricity and magnetism were related. But the hypothesis is not blessed as science until it is proven. What Darwin did was to create the open-ended hypothesis that no longer required proof. As emphatically and clearly stated above by John Stuart Mill (the economist whose theories found support in Darwin’s view of a competitive, dog-eat-dog world), Darwin had opened a path of inquiry. The means was now the end.
Darwin’s hypothesis broke the writer’s block of storytellers in many other fields. Economists like Mill and Marx (for different reasons) were attracted to the image of cutthroat competition. Politicians found justification for colonialism and expansion of the British Empire. Racists liked the idea of survival of the fittest (themselves being, of course, the fittest). It gave artists new themes for sweeping dramatic landscapes. It gave poets like Tennyson and Browning new themes for probing the human condition. Composers, psychologists, industrialists, comedians, novelists, journalists, cartoonists and the man on the street all began to look at the world with this new framework. It didn’t matter whether the framework was supported on a solid foundation of fact. The play was the thing.
The problem with Darwin’s anti-Baconian New Atlantis was that hypotheses and speculations can be infinitely varied, endless yarns that, unless nailed down with factual proof, are no better than dreams and myths. They may be dressed in scientific terms, but can be 180 degrees wrong. Science was supposed to be a reliable methodology for obtaining truth about the natural world. It was supposed to require not just a hypothesis, but a large accumulation of facts that actually supported the hypothesis, not just might support it. But Darwinism brought in grand, sweeping glittering generalities incapable of proof. It is also the reason most Darwinian stories in the journals are futureware: empty promises that the proof is out there, still waiting to be discovered, someday over the rainbow, once we find water on Mars, or once the chimp genome is finished, or whatever. When the promised data are not helpful, no matter; just push the envelope a little farther out, and the story goes on.
Now clearly, Darwin’s redefinition of science did spawn a lot of experimentation. Darwin himself was almost obsessive-compulsive in his observations of orchids, barnacles and pigeons, as Janet Browne describes in her highly-acclaimed biography (recommended reading). But most of this was a hunt for tidbits of data that might lend support to his hypothesis of natural selection. None of it was proof. Nothing he found demonstrated great transformations between any plants or animals: it just might have. He was an advocate looking for support, no better than a cultist trying to proof-text his heresy with snippets of Bible verses that might be consistent with his preconceived hunch, whether or not the context justifies it.
For instance, when Darwin eagerly sought evidence that honeybees had varied (Browne, p. 203), the evidence was negligible. Did this falsify his hypothesis? No way. “In desperation,” Browne writes, “Darwin turned the question on its head: if there were no physical differences, perhaps there might be variations in behaviour?” (Ibid.) Darwin set the example of having a hypothesis that was so flexible and imaginative, no amount of negative evidence could ever falsify it. He opened the scientific world to storytellers, providing them welfare and job security (see 12/22/2003).
That is why David Penny can write nonsense and get it published in a scientific journal. That is why after 150 years, Darwin’s “path of inquiry” is still asking fundamental questions you thought Darwin had already answered. This explains why Eugenie Scott, Michael Ruse and all the other Darwin Party advocates can bluff about the “rules of science” that guarantee perpetuation of Darwinism and exclude alternatives, no matter how much evidence contradicts it. That is why most debaters against creationists focus so much time on (1) circumstantial evidence, (2) analogies, and (3) negative theological arguments, instead of trying to prove Darwin’s hypothesis that numerous slight, successive modifications have indeed added up to major changes, from bacteria to man. And that is why Darwin Party members are gainfully employed in their endless quest for tiny pieces of data that might be consistent with what has become the reigning mythology of our time. Evolutionism itself evolves. Like an animal presumably varying without purpose or design, evolutionary storytelling proceeds by mutation and selection (selective evidence, that is), wandering aimlessly in storyland.
Unless the scientific community raises its standards and gets back to the requirement that a hypothesis is not science till proven, then this welfare state of storytellers will persist. The public has been hoodwinked. The charlatans have become the shamans.