July 23, 2006 | David F. Coppedge

Theory Battles Observations in Near-Field Cosmology

Which is more important in science: a consistent model, or a good fit with observations?  Clearly both would be the ideal.  A report in Science1 this week revealed that astronomers are having trouble holding the two together.  The problem is especially acute for near-field cosmology that deals with nearby galaxies.
    It may seem odd that astronomers feel more comfortable talking about the large-scale structure of the universe instead of our nearest neighbors, but that’s essentially what Joss Bland-Hawthorn and veteran cosmologist P.J.E. Peebles said: “These are exciting times for astronomy and cosmology,” they crowed.  “On the one hand, we find that the main predictions of Big Bang inflationary cosmology are confirmed by observations of distant objects.”  One hand usually implies another is coming: “On the other hand, nearby galaxies continue to surprise and inform us.”  The gloating over Big Bang certainty must be tempered by later admissions that 96% of the universe needs to be made up of unobservable stuff for the models to work: “The evidence for the existence of these dark components is strong, but their properties are only loosely understood.”
    The pair reported on meetings in Aspen, Colorado in February where problems were aired and data shared, including findings from the largest simulation of galaxy evolution ever made, the Millennium Simulation.  In the spirit of the hunch that the most interesting parts of science are not the successes but the puzzles, let’s look at some of the problems Bland-Hawthorn and Peebles listed in their article, where theory and observation didn’t quite match up:

  • Iron Age I:  New stars are mixed in with old ones, both at the edges of galaxies and in their centers.  Although “nuclear burning in stars is forever increasing the amount of mass in heavy elements,” two stars in the halo of the Milky Way were found to have only 1/200,000 of the iron-to-hydrogen abundance ratio found in the sun.  The authors conclude that this means these stars are ancient, and assume that “their very unusual mix of chemical elements provides vital information about the nature of the earliest generations of stars.”  Yet which came first, the model or the observations? 
  • Iron Age II:  Not only are these halo stars surprising, but they confess that “Other ancient stars may be hiding in the centers of galaxies, where the mass density is high and conditions likely first favored star formation….”  Yet it would seem this is the place where heavy element production would be the highest.  Searching for these ancient stars in the dense cores of galaxies is “a project for the future.”
  • Figure Fudging:  The only illustration in the article shows a remarkably good fit between simulations and an actual survey of galaxy distributions from the Sloan Digital Sky Survey.  Both show a web-like structure of filaments and voids.  The fine print reveals a problem: “But close examination of the nearby galaxies shows discrepancies with what the simulations might lead one to expect.  For example, our Local Group is expected to have a thousand small mass concentrations, but we infer the presence of fewer than 50 from the number of visible galaxies.”  This twenty-fold-or-worse mismatch is quickly brought into conformity with model tweaking and assumption addition – or by shoving the problem into The Future:

    It is plausible that when the universe was ionized, the heating of the gas in the smallest of the dark matter concentrations was sufficient to prevent the formation of any stars, leaving dark galaxies.  But dwarf galaxies are observed.  Consistent with that knowledge, the simulations indicate that some stars formed in small mass concentrations before or shortly after the disruption by ionization (as discussed by Andrey Kravtsov and Oleg Gnedin), producing almost dark galaxies.  The challenge is to reconcile the large number of low-mass dark matter concentrations with the smaller number of observed dwarf galaxies.  Ideas are being tested by ongoing searches for the faintest nearby galaxies and the study of their properties.

  • Merger Mania:  The models also show that mergers should continue to the present day.  Mergers are observed, but…

    But the patterns of heavy element abundances indicate that no major component of the Milky Way could have been assembled largely by accretion of dwarfs of the kind observed today (discussed by Eline Tolstoy).  The two large galaxies in the Local Group certainly could have formed by merging of dwarfs in the early universe; the curious thing is that the dwarfs that were left behind have to be substantially different.

  • Globular 4-D Puzzle:  Observed globular clusters are not cooperating with the models, either.  Astronomers infer a great deal from the color of starlight.  For decades, the spectra of globulars led to the common conception that they are among the oldest objects in the universe (but see 10/05/2003 entry).

    Another aspect of the merging issue concerns the tight concentrations of stars known as globular clusters.  The color of a globular cluster—and likely its heavy element abundance—correlates with the luminosity of the host galaxy.  Because globular clusters generally are old, this indicates either that the globulars became attached to the present host galaxy a long time ago—which does not naturally agree with the substantial recent merging in the simulations—or that the globulars were recently attached to the host galaxy but “knew” the luminosity of the host, which seems strange (discussed by Jean Brodie).

  • Local Gangsters:  Our local group of galaxies has two large spirals and many small ones.  Is this the norm?  The Millennium Simulation, one of the largest ever carried out, produced more points for debate at the meeting:

    But because the theory predicts substantial merging and accretion in nearby galaxies, which tend to destroy thin disks, a pressing issue is whether disk-dominated systems that contain old stars as well as young are as common in the simulations as they are observed to be nearby.
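For readers unfamiliar with how astronomers quantify the “Iron Age I” figure above, the 1/200,000 ratio can be restated in the conventional metallicity notation [Fe/H], which compares a star’s iron-to-hydrogen ratio with the sun’s on a logarithmic scale (a back-of-envelope sketch using the standard definition; the specific symbols are the astronomical convention, not notation from the article itself):

```latex
% Standard metallicity index: log of the star's Fe/H ratio
% relative to the solar value.
[\mathrm{Fe}/\mathrm{H}]
  = \log_{10}\!\left(\frac{N_\mathrm{Fe}/N_\mathrm{H}}{(N_\mathrm{Fe}/N_\mathrm{H})_\odot}\right)

% For the halo stars described in the article:
[\mathrm{Fe}/\mathrm{H}]
  = \log_{10}\!\left(\frac{1}{200{,}000}\right)
  \approx -5.3
```

In other words, these are stars some five orders of magnitude more iron-poor than the sun, which is why the authors treat them as relics of the earliest stellar generations.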

Time to sum up.  “In short,” they confess, “present-day cosmological simulations do not give a very complete account of the finer details of the nearby universe.”  This is tough work, after all.  The gas dynamics are extremely difficult to understand (how stellar winds and explosions stir things up and affect star formation), and the limited capabilities of computers provide room for excuses.  “But we have observations of forming stars to teach us what happens, and what we are learning is being applied to increasingly detailed simulations of this complex process.”
    Yet, a discontented bystander might ask, which is the cart, and which is the horse?  That question becomes especially apt when, as admitted in their last paragraph, the sheer size of the fudge factors in the models is revealed:

Also to be borne in mind is that the problems with the simulations may be highlighting the need for improved physics.  After all, the simulations invoke many parameters to describe the 4% of the universe that is made of baryonic matter, while using only a few to describe the remaining 96% in dark matter and dark energy.  It was surprising to find that we must postulate dark matter.  Dark energy was another surprise, and the dark sector may surprise us yet again.

Maybe the biggest surprise of all will be to someday look back and realize that there was less darkness in the real observational universe than there was in the models.


1Joss Bland-Hawthorn and P.J.E. Peebles, “Astronomy: Near-Field Cosmology,” Science, 21 July 2006: Vol. 313, No. 5785, pp. 311–312, DOI: 10.1126/science.1127183.

Always be wary when a scientist assures you he has the big picture all wrapped up and tidy, with just a few pesky details to sweep up.  Physicists boasted in the late 19th century that all the big questions were solved, and the only work left to do was improving the measurements to the sixth decimal place.  Then came general relativity and quantum mechanics, and the universe changed.  This article should be read with that in mind.  They spoke glibly about how well the large-scale models fit with the WMAP results (something we have reported earlier is far from certain: see 03/20/2006, 09/13/2005 links), but then this list of problems in near-field cosmology should have struck fear in their minds.  It seems really ominous to say that new physics is going to have to be invented to figure out the most basic objects right around us.  Then, at the end, to admit that even the large-scale model involves 96% fudge factor – and growing – well, now you understand the difference between what astronomers know and what they claim they know.  You have the observations, and you have their models.  Take your pick.
    The best early astronomers were driven by observations.  William Herschel and his son John Herschel spent incredibly long periods of time gazing into the eyepieces of their own home-made telescopes.  In more recent times, Halton Arp, Margaret Geller and John Huchra have had less use for armchair theorizing than for the hard work of observing.  It’s so much nicer to sit at a desk in the daytime and push a pencil, or punch imaginary worlds in the mind’s eye into the keyboard.  Laplace introduced a trend with his nebular hypothesis of modeling the origins of things and putting observations in the back seat.  Einstein is said to have stated that no observation can be trusted until confirmed by theory.  For shame.  Theories are man-made; observations belong to the Lord.  Choose you this day whom you will serve.
    Models can be helpful.  They have become essential tools in research.  Important questions must be raised, however, about the assumptions that go into them, and to what extent they inform us about reality.  Even in the most famous example, the physics of Newton, the classic of hard science that fueled the enthusiasm of the Enlightenment, the great Newton assumed things he could not possibly have known: that space was flat and infinite, that it was unaffected by matter, that time was constant, and that matter travels in a straight line unless acted upon by an outside force.  These are idealized definitions that he stipulated in advance.  There is no piece of matter anywhere that is not acted on by an outside force, nor could he have known with certainty that his definitions held true everywhere.  Worse, Newton assumed things called “forces” that acted mysteriously at a distance, an issue that horrified the Cartesians at the time.  Yet it worked, and worked extremely well, so Newton prevailed.  Now, of course, we know that Newtonian physics had to be replaced in the 20th century by a different model, Einstein’s, that, of course, we now know has the complete and final answer to everything.  (Scratch that.)
    Whether the finite human mind is capable of modeling this enormous universe must be constantly challenged, not merely assumed.  Just because a model works does not mean it is true.  It is a long-standing philosophical debate whether our experience, which deals only with particulars, is capable of establishing knowledge beyond our experience that is timeless, universal, necessary and certain.  It seems an inherent limitation on us that we cannot validate the system in which we are embedded without reference to a standard outside the system.  At best, models are human playthings that must always be the slaves of the observations.  If they help improve our lives, if they help make better observations, if they seem self-consistent, let us be content with that rather than claim we “know” how the universe is put together and where it came from.  The arrogance of many modern astronomers is a character flaw that dishonors the leadership of Kepler, Herschel and many others who followed the data wherever it led, and pursued science as an attempt, however feeble, to fathom the mind of God.  It’s time to put the observations back in control and walk humbly in line.
