June 8, 2007 | David F. Coppedge

Origin of Life Made Simple: Stochastic Innovation Answers I.D.

A press release from UC San Francisco teases:

Before life emerged on earth, either a primitive kind of metabolism or an RNA-like duplicating machinery must have set the stage – so experts believe.  But what preceded these pre-life steps?
    A pair of UCSF scientists has developed a model explaining how simple chemical and physical processes may have laid the foundation for life.  Like all useful models, theirs can be tested, and they describe how this can be done.  Their model is based on simple, well-known chemical and physical laws.

Stochastic innovation can be considered a euphemism for “chance invention.”  A stochastic process is one where chance and natural law interact.  Justin Bradford and Ken Dill came up with a model that they believe bridges the gap between primordial ingredients and working machinery – at least conceptually.  Their model was published in PNAS June 4.1  The press release gives the upshot, which focuses on the interactions between simple chemical catalysts, such as the surfaces of clay minerals:

The basic idea is that simple principles of chemical interactions allow for a kind of natural selection on a micro scale: enzymes can cooperate and compete with each other in simple ways, leading to arrangements that can become stable, or “locked in,” says Ken Dill, PhD, senior author of the paper and professor of pharmaceutical chemistry at UCSF.

But is it really possible to extend natural selection to an abiotic environment?2  The press release compares this chemical selection to natural selection in living things: neurons, ant communication, and Darwin’s theory in general.  “Like these more obvious processes, the chemical interactions in the model involve competition, cooperation, innovation and a preference for consistency, they say.”  Yet these examples are all under the control of a genetic code.  And teleological terms like competition, cooperation, innovation and preferences can hardly be ascribed to molecules.
    Thus far, this would be an argument from analogy – a logical fallacy.  The authors attempted to bring this back to reality:

In its simplest form, the model shows how two catalysts in a solution, A and B, each acting to catalyze a different reaction, could end up forming what the scientists call a complex, AB.  The deciding factor is the relative concentration of their desired partners.  The process could go like this: Catalyst A produces a chemical that catalyst B uses.  Now, since B normally seeks out this chemical, sometimes B will be attracted to A — if its desired chemical is not otherwise available nearby.  As a result, A and B will come into proximity, forming a complex.
    The word “complex” is key because it shows how simple chemical interactions, with few players, and following basic chemical laws, can lead to a novel combination of molecules of greater complexity.
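
To see what this “complex” formation amounts to once the language of desire and attraction is stripped away, here is a minimal toy sketch of the kind of scheme the press release describes.  It is not the authors’ published model; every rate constant, population size, and update rule below is invented for illustration.  All it contains is a pool of A, a pool of B, a shared resource that A produces and B consumes, and an association step whose probability rises as the free resource runs low:

# A toy sketch only, not the Bradford-Dill model itself: every number below is
# an assumed, made-up rate constant.  Catalyst A produces a resource R that
# catalyst B consumes; when free R is scarce, B is more likely to bind A,
# forming an AB "complex" that dissociates at a fixed off-rate.

import random

def simulate(resource_supply, steps=20000, seed=0):
    """Return the final fraction of B tied up in AB complexes."""
    rng = random.Random(seed)
    free_A, free_B, AB = 100, 100, 0
    R = 0.0                                   # freely available resource
    k_on_max, k_off, k_consume = 0.01, 0.002, 0.05
    for _ in range(steps):
        R += resource_supply                  # resource arriving from outside
        R += 0.01 * (free_A + AB)             # A keeps producing R either way
        R -= min(R, k_consume * free_B)       # free B consumes what it can find
        scarcity = 1.0 / (1.0 + R)            # binding is likelier when R is scarce
        for _ in range(min(free_A, free_B)):  # association attempts
            if rng.random() < k_on_max * scarcity:
                free_A -= 1; free_B -= 1; AB += 1
        for _ in range(AB):                   # complexes fall apart at a fixed rate
            if rng.random() < k_off:
                AB -= 1; free_A += 1; free_B += 1
    return AB / (AB + free_B)

for supply in (5.0, 0.5, 0.05):
    print(f"external resource supply {supply}: AB fraction ~{simulate(supply):.2f}")

    Run with these invented numbers, the loop does yield more AB “complexes” when the outside supply of the resource dries up, which is also the testable prediction the authors offer later on.  Nothing in it wants, seeks, or prefers anything; it is bookkeeping over assumed rate constants.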

The more concrete description is an improvement, but it still casts molecules as players desiring one another and hoping to improve.  Did the journal paper, where higher standards of accuracy are expected, do any better?  A quick look shows the authors starting with the same analogies and teleological language:

There are several examples of what might be called “stochastic innovation,” whereby a biological, physical, or sociological system: (i) searches among viable options, then (ii) selects one or more of those options that is “best” by some metric, then (iii) locks in that selection for the future.  In biology, the best-known example is Darwinian evolution, where variation is the term that describes the search step, and natural selection is the term that describes steps (ii) and (iii).

They proceeded to apply this reasoning to neurons and ant colonies and humans: “Human beings, businesses, and social organizations evolve through decision-making: they search among the options available to them, make self-serving choices, then remember and act on those decisions in the future.”  It is still unclear, however, how molecules can do these things.
    At some point, the authors are going to have to restrict their vocabulary to the non-teleological, impersonal language of chemical laws and chance:

Our interest here is in whether stochastic innovation might also be achievable in chemistry and biochemistry.  Can chemical and biochemical reactions be chained together in complex and innovative ways, driven only by simple physicochemical search and selection processes?  If so, it may be useful, not only as a tool in chemistry and biochemistry, but also for giving insights into the processes of chemical organization that may have occurred during prebiotic evolution.

This is where the rubber must meet the road.  Surprisingly, as they were putting on their gloves to announce their model, they made some embarrassing admissions about the state of their field:

Our goal is not to explain some existing body of data, because we know of none that pertains.  Rather, our goal here is to propose a type of organizing principle that has not been explored before, as far as we know, but that is based on well established physicochemical principles and that can be tested by experiments.  Our initial motivation for this work was to understand some puzzles of prebiotic chemistry, where, it could be argued, the field is just as limited by a lack of specific testable models at the moment as it is by a lack of experiments.

Leaving that little revelation behind, let’s examine the nuts and bolts of their model.  They laid down a few assumptions and boundary conditions, and spoke of conceptual catalysts A and B that interact according to simple rules.  (A and B are imaginary molecules in a computer, not real chemicals off the shelf).  Out of their ground rules, the following phenomena emerge: cooperation, competition, consistency, and innovation – or so they claim.
    As an example of competition, they introduced a super-A player that does a better job of catalyzing.  The result?  The rich get richer: “Within our simple chemical model system, this competition resembles Darwinian selection, except that our metric of ‘success’ is AB complex formation, whereas the metric of success in biological systems is survival.”3
    But is this really an escape from teleology?  So far it appears that these “AB complexes” are little more than gunk, if they have no function or goal.  They would merely accumulate by the chemical laws of mass action and diffusion until some limiting factor brings the system to equilibrium.
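    Their “competition” can be given the same deflationary treatment.  What follows is only a toy sketch under invented parameters, not the published model: an ordinary A and a “super-A” with a higher on-rate compete for a limited pool of B, and after enough steps the faster binder simply holds most of the B.

# Hypothetical parameters throughout; this only illustrates the rich-get-richer
# effect described above, not the authors' actual equations.
import random

def compete(steps=50000, seed=1):
    rng = random.Random(seed)
    free = {"A": 50, "superA": 50}        # unbound catalysts of each kind
    bound = {"A": 0, "superA": 0}         # B molecules locked into complexes
    free_B = 60                           # B is the scarce, contested partner
    k_on = {"A": 0.002, "superA": 0.006}  # super-A binds B three times faster
    k_off = 0.0005                        # both complexes fall apart equally often
    for _ in range(steps):
        for kind in ("A", "superA"):
            for _ in range(min(free[kind], free_B)):   # association attempts
                if free_B > 0 and rng.random() < k_on[kind]:
                    free[kind] -= 1; free_B -= 1; bound[kind] += 1
            for _ in range(bound[kind]):               # dissociation
                if rng.random() < k_off:
                    bound[kind] -= 1; free[kind] += 1; free_B += 1
    return bound

print(compete())   # super-A ends up holding roughly three times as much B as A

    Calling that outcome “success” or likening it to Darwinian selection adds nothing to the arithmetic: a larger assumed on-rate captures a larger share of a finite pool, and that is all.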
    “Our model shows that consistency has value, exhibiting ‘tortoise and hare’ behavior,” they boasted, but who would be there to ascribe values?  What judge would be watching the race and deciding the winner?  What crowd in the stands would be cheering on their champion?  Their analogical language continued, showing that the tortoise wins, just like in the fable: “Thus, sustained consistency is more effective for complex formation than high-activity-burst behavior.”
    Next, they added more catalytic pairs to the model, and got what they called “functional hierarchies.”  They said, “Thus, multiple catalysts can be driven together, potentially into a variety of topological arrangements, including metabolic chains, networks, and cycles.”  Somehow, metabolism got snuck into the picture.  Metabolism presupposes the harnessing of energy for function.
    Then, the authors threw in another surprise: they claimed their model dispenses with a famous intelligent design argument:

These results bear on an idea that has been called “irreducible complexity.”  It has been argued that complex biological and prebiotic chemical systems could not have arisen by simple physicochemical processes, because there would have been no selective advantage for each of the putative incremental changes along the way.  In that view, what good is half an eye?  An organism would not be served by anything less than a full eye, so intermediate structures would not have imparted enough value to survive natural selection.  In that view, “irreducible” refers to a system that would fail to function if any one component is removed, and irreducible complexity refers to the idea that such systems require design and could not be developed by stochastic innovation.  The counterargument, seen in computer simulations, for example, has been that stochastic innovation works differently: evolution doesn’t “know” the final end-goal in advance, but finds it through a random search in indirect, incremental steps.

Sure enough, the first reference is to Michael Behe’s book, Darwin’s Black Box, and the counterargument refers to Lenski and Adami’s digital organisms (see 07/04/2004 entry).  But in that case, critics on the ID side charged the evolutionists with investigator interference by imposing their mental powers and decisions on the players (see ISCID).  These two authors are now claiming that an irreducible system can emerge “by blind physicochemical forces.”
    The ending discussion asserted that this model has another beneficial by-product: it explains gaps in the fossil record.  They explained:

In short, the intermediate states are unstable.  The steps are downhill.  One evolutionary step leads to the next, quickly followed by the next, and so on, without pausing.  In the evolutionary metaphor, half an eye never appears as a stable state because such a state is quickly driven by even stronger evolutionary forces to form a complete eye, maybe for a different purpose than the half-eye.  Such two-state transitions are also common in protein folding, for example, where the denatured state is followed in time by a partly structured state that is immediately followed by an even more structured state, etc., until the molecule becomes fully folded into the native structure.  At the earliest stages of folding, the protein does not know that it is headed toward the native state; it is just seeking a situation that is marginally better than its previous state.

Unpacking that paragraph, we find more personification and teleology: we have steps leading to states that have purpose and are better than previous states.  We have proteins that are “seeking” a marginally better situation at each incremental step.4
    Winding up, the authors explained why their model supports the view of a growing minority of origin-of-life researchers, that metabolism preceded genetics (06/12/2006).  Metabolic chains and cycles of reactions emerge in their computer model without a genetic code.  “Of course,” they conceded, “an important virtue of ultimately having a genetic system is that it provides much longer term ‘memory’ for the ‘lock-in’ step (step 3 in the Introduction) than does nongenetic propagation, where memory is merely provided by a ratio of off-rates to resource fluctuation times.”  Yet virtue seems limited to moral agents, not chemicals.
    They explained that search, selection, and lock-in are the only mechanistic processes required.  Search occurs through chemical attraction, selection occurs through the formation of AB complexes, and lock-in occurs when the complexes are robust against depletion disasters.  This, they claimed, is the virtue of their model: it supplies a mechanistic answer to the advocates of intelligent design:

A key distinction between stochastic innovation, explored here, and design-based innovation, in which a complex system is engineered and constructed by a designer, is that stochastic innovation involves no implicit “goals” and no guidance toward a particular purpose.  The Darwinian paradigm shows how increasing complexity and order can arise from processes that do not involve guidance through intelligence or design.

They even offered a way to test their model: measure whether AB complexes become more concentrated when the common resource is depleted.  It was not clear, however, whether anything would happen at all without an intelligent lab worker present.  They assumed a tester would select the catalysts and control the concentrations.
    The final paragraphs discussed weaknesses of previous self-organization models.  Theirs, they boasted, requires no genetics, no designer, and only the laws of thermodynamics and chemical attraction.  Yet the conclusion relies heavily on the word function:

A well known process in chemistry is the binding and association of molecules, driven by thermodynamic forces.  Here, we consider whether catalyst molecules might be driven to associate with each other, through typical binding forces, but based on their molecular functions.  Functional driving forces are well-known in biology, through the principles of evolution, but are not yet much studied in chemistry.  We propose a model for how different Michaelis-Menten enzymes or catalysts might tend to associate, driven by the production or depletion of common resources.  The agents do not associate if the common resource is plentiful.  We call this the shielding principle.  In this way, agents organize adaptively, and complexity can form from simpler systems.  In our model, “function dictates structure,” a reversal of the paradigm in which “structure dictates function.”

The unanswered question is: can function exist without biology?  If chemical “function” has no meaning apart from biology, then they have assumed the very biological concept they needed to derive.


1Justin A. Bradford and Ken A. Dill, “Stochastic innovation as a mechanism by which catalysts might self-assemble into chemical reaction networks,” Proceedings of the National Academy of Sciences USA 104:24 (June 12, 2007), pp. 10098–10103; doi:10.1073/pnas.0703522104.
2See online book, esp. page 90.
3Recall that if evolutionary success is measured in terms of survival, the result is a tautology: the fittest survive because the survivors are the fittest by definition.  See 10/29/2002 entry.
4In the protein analogy, the authors are ignoring the fact that protein folding is assisted by chaperones, and is under the control of the genetic code.  If not folded correctly within the time allowed, the protein is taken to the recycling bin (proteasome).

If you took the time to labor through their tortured thought processes, you found the same old dirty tricks the Darwin Party uses every time: personification, begging the question, analogy, glittering generalities and bluffing.  It was seasoned with the usual wishful-thinking words: might, may, perhaps, coulda-woulda-shoulda.  Michael Behe could make quick confetti of this paper were he given a chance to respond.  He just released a new book, by the way: The Edge of Evolution (Free Press, 2007).  You know it must be good: the Darwinists quickly trashed it the day it went into print.
    We took the time to explore this paper because it was important.  It was a direct attempt to defend Darwinism from three of the most damaging attacks: irreducible complexity, gaps, and the failures of chemical evolution (e.g., 02/15/2007).  Were you impressed?
    Like the evolution-via-computer charlatans, these guys only got their lab hands dirty with a keyboard.  Chemicals are less forgiving.  So let’s call their bluff and ask them to perform their proposed test with real molecules.  Ask them to throw some unselected chemicals off the shelf into a tub with some blocks of clay, sit back two billion years and watch what happens.  No interference allowed.  The chemicals must find their own “function” without half an eye to see it, and without any intelligent lab assistant to tell them what this handy word “function” means.
