January 12, 2005 | David F. Coppedge

We Were Wrong About Isochrons, Geologists Say

An isochron (a word meaning “equal time”) is supposed to be a line connecting points on a graph that represent the same age, or the same age difference.  If your rain gutter barrel fills fast and the bucket in your garage fills slowly, for instance, you can figure the time the rain started if you know their individual fill rates; the line connecting those two points on a graph would be an isochron.  If you found another bucket and its fill rate also fell on the line, it would imply it started filling at the same time.  As straightforward as this method is at home or in the lab, can it be misleading when extrapolated millions of years into the past, when the initial and intervening conditions were not subject to observation?
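The bucket analogy above can be reduced to a few lines of arithmetic. Here is a minimal sketch in Python; the fill rates and water levels are invented for illustration only:

```python
# Back-calculating a start time from containers filling at known, constant
# rates.  All numbers are hypothetical.
def start_time(level, rate):
    """Hours ago the container started filling, given its current
    level (liters) and constant fill rate (liters/hour)."""
    return level / rate

barrel = start_time(level=12.0, rate=3.0)   # rain barrel: 12 L at 3 L/h
bucket = start_time(level=2.0, rate=0.5)    # garage bucket: 2 L at 0.5 L/h

# Both points fall on the same "isochron": each implies the rain
# started 4 hours ago.
print(barrel, bucket)  # 4.0 4.0
```

The logic depends entirely on the assumption that both containers started empty and filled at constant rates; change either assumption and the agreement becomes coincidence.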
    When geologists date rocks, they seek minerals that crystallized at the same time (isochronous), even though those minerals took in different amounts of the parent isotope.  A hidden assumption was that the initial isotope ratios of all the minerals were identical at the time the rock formed.  Not so fast, say four geologists from the UK, Wisconsin and California, writing in Geology:1

The determination of accurate and precise isochron ages for igneous rocks requires that the initial isotope ratios of the analyzed minerals are identical at the time of eruption or emplacement.  Studies of young volcanic rocks at the mineral scale have shown this assumption to be invalid in many instances.  Variations in initial isotope ratios can result in erroneous or imprecise ages.    (Emphasis added in all quotes.)

This realization “questions a fundamental tenet in isochron geochronology—that the initial isotope composition of the analyzed phases is identical.”  Since variation in these ratios is now a known phenomenon, it creates the “possibility that it may compromise geochronological interpretations.”
    Sources of variation in initial ratios include: (1) rocks forming over multiple stages instead of one, (2) crustal contamination, (3) partial assimilation of parent isotopes, and (4) magma recharge.  These sources of error “can, in principle, be identified,” they claim, by examining cross-sections of the rock from core to rim, “provided that the components involved are isotopically distinct.”  But unless identified via independent checks, isochron ages can be “fictitious,” they warn.  They give an example of how rubidium-strontium data points “may result in a good isochron fit, even though the age obtained is meaningless.”  One case produced an order-of-magnitude difference between the argon age and the rubidium-strontium age, even with a valid isochron.
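To see why a “good fit” is not the same as a good age, consider how a rubidium-strontium isochron age is actually extracted: the slope of the line through the mineral data points is converted to time. The sketch below (not from the paper; the mineral ratios are synthetic) shows the calculation working as advertised when the hidden assumption, identical initial 87Sr/86Sr in every mineral, happens to hold:

```python
import math

LAMBDA_RB87 = 1.42e-11  # Rb-87 decay constant, per year

def isochron_age(rb_sr, sr_sr):
    """Least-squares slope of 87Sr/86Sr vs 87Rb/86Sr, converted to an age.
    Valid ONLY if every mineral began with the same initial 87Sr/86Sr."""
    n = len(rb_sr)
    mx = sum(rb_sr) / n
    my = sum(sr_sr) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(rb_sr, sr_sr)) \
            / sum((x - mx) ** 2 for x in rb_sr)
    return math.log(1.0 + slope) / LAMBDA_RB87

# Synthetic minerals that crystallized 1 Gyr ago with an identical
# (hypothetical) initial 87Sr/86Sr of 0.703:
t = 1.0e9
rb_sr = [0.1, 0.5, 1.0, 2.0]
sr_sr = [0.703 + x * (math.exp(LAMBDA_RB87 * t) - 1.0) for x in rb_sr]
print(round(isochron_age(rb_sr, sr_sr) / 1e9, 3))  # 1.0 (the input age, Gyr)
```

The entire inference hangs on that one assumption about initial ratios, which is exactly the assumption the Geology authors say fails “in many instances.”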
    To date a rock via isochrons, the geologist has to know that the rock had (1) slow diffusion and (2) rapid cooling.  But then, “The cooling history will depend on the volume of magma involved and its starting temperature, which in turn is a function of its composition.”  They give examples where it is evident that “open-system processes during crystallization must be invoked to impart isotopic heterogeneity to the mineral population”; i.e., to explain away differences in age between two methods by claiming the rock was open to the environment during its lifetime.  They admit, though, that “if the initial variation is systematic (e.g., due to open-system mixing or contamination), then isochrons are generated that can be very good” based on their fit to the graph, “but the ages are geologically meaningless.”
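The “very good but geologically meaningless” case they describe can be sketched numerically. In this deliberately simplified hypothetical, two isotopically distinct magma components are mixed in varying proportions (assuming equal strontium contents, so the ratios mix linearly). The fresh mixtures fall on a perfectly straight line in isochron space, yet the slope implies an age of hundreds of millions of years for a rock with zero true age:

```python
import math

LAMBDA_RB87 = 1.42e-11  # Rb-87 decay constant, per year

def fitted_slope(xs, ys):
    """Least-squares slope of ys versus xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
           / sum((x - mx) ** 2 for x in xs)

# Two isotopically distinct end-members (hypothetical ratios), mixed at
# eruption -- no radioactive decay has occurred at all.
A = (0.2, 0.704)   # (87Rb/86Sr, 87Sr/86Sr) of component A
B = (3.0, 0.720)   # component B
mixes = [(f * A[0] + (1 - f) * B[0],
          f * A[1] + (1 - f) * B[1]) for f in (0.0, 0.25, 0.5, 0.75, 1.0)]

rb = [m[0] for m in mixes]
sr = [m[1] for m in mixes]
fake_age = math.log(1.0 + fitted_slope(rb, sr)) / LAMBDA_RB87
print(round(fake_age / 1e6))  # roughly 400 Myr, from a brand-new mixture
```

A mixing line and a true isochron are geometrically identical; nothing in the fit statistics distinguishes them, which is why the authors insist on independent checks.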
       The authors assert that these sources of error might still be put to use.  They describe ways in which fictitious isochrons might, if true ages are known from other methods, lead to interpretations of how the rocks were formed.  They claim it is highly unlikely that open-system processes affecting the isochrons of one method would affect another method in the same way.  Their summary, however, consists primarily of cautions:

  1. The “assumption of a constant [initial strontium isotope] ratio in isochron analysis of ancient rocks may not be valid in many instances.”  In fact, significant variation is a “common observation,” they say.
  2. “Statistical methods may not be able to distinguish between constant or variable [initial strontium isotope] ratios,” particularly as the rocks get older or if they were subject to open-system processes during their formation.
  3. “Independent ages are needed to evaluate rock-component isochrons.”  Disagreements “may constrain differentiation mechanisms such as contamination and mixing,” if they can be corrected by independent means.

1Davidson, Charlier, Hora, and Perlroth, “Mineral isochrons and isotopic fingerprinting: Pitfalls and promises,” Geology, Vol. 33, No. 1, pp. 29–32, doi: 10.1130/G21063.1.

Now, wait just a rockhounding minute.  Isochrons have been touted by the uniformitarians as a fail-safe method for dating rocks, because the data points are supposed to be self-checking (Darwin-lover Ken Miller used this argument in a debate against Henry Morris years ago).  Now, these geologists, publishing in the premier geological journal in the world, are telling us that isochrons can look perfect on paper yet give meaningless ages, off by orders of magnitude, if the initial conditions are not known, or if the rocks were open systems at some time in the past.  That sounds like what young-earth creationists have been complaining about all along.  But then, these geologists put a happy face on the situation.  It’s not all bad news, they say, because if the geologist can know the true age by another method, he can glean some useful information out of the errors.  But if they were wrong about the isochron method, what faulty assumptions are going to turn up some day about other methods, in a future issue of Geology?  Their confidence that they can know anything about what happened 200 million years ago is about as reassuring as the surgeon who told his patient, “I have good news and bad news.  The bad news is that we removed the wrong kidney.  The good news is that your other kidney is doing just fine.”
