Counting Craters: Bad Assumptions Undermine Reliability
A new chronology of Earth/moon history reaches conclusions that are so assumption-ridden as to be worthless.
Watch a six-second video at the top of a Phys.org article. It shows a flash of light on the moon. The European Space Agency has a large telescope aimed at dark parts of the moon when it’s above the horizon and not full, such as at first quarter and last quarter. A program named NELIOTA tallies these light flashes.
For at least a thousand years, people claim to have spotted flashes lighting up regions of the moon, yet only recently have we had telescopes and cameras powerful enough to characterise the size, speed, and frequency of these events.
While our planet has lived with the risk, and reality, of bombardment from objects in space for as long as it has been in existence, we are now able to monitor our skies with more accuracy than ever before.
Using data gathered since March 2017, the research team has begun estimating the rate of impacts hitting the moon:
To date, in the 90 hours of possible observation time that these factors allowed, 55 lunar impact events have been observed. Extrapolating from this data, scientists estimate that there are, on average, almost 8 flashes per hour across the entire surface of the moon.
They will be able to refine this number as the program continues through 2020, but let’s do the math on that number, 8 flashes per hour. (These, remember, are only the impacts large enough to cause a flash.) 8/hour x 24 hours/day x 365 days/year gives us a back-of-the-envelope number of 70,080 impacts per year. Multiplying that by the assumed age of the moon, 4.5 billion years, yields roughly 315 trillion impacts, not counting the ones too small to leave a flash (which are likely far more common). Does anybody see a problem with assuming the moon is billions of years old?
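The arithmetic can be checked in a few lines of Python (the 8-per-hour figure is the researchers’ whole-moon extrapolation from 55 events in 90 hours; everything else follows from it):

```python
# Back-of-the-envelope check of the NELIOTA extrapolation.
flashes_per_hour = 8                      # extrapolated whole-moon rate
per_year = flashes_per_hour * 24 * 365    # impacts per year
assumed_age_years = 4.5e9                 # conventional age of the moon
total = per_year * assumed_age_years      # lifetime impact tally

print(f"{per_year:,} impacts/year")       # 70,080 impacts/year
print(f"{total:.3g} total impacts")       # 3.15e+14, i.e. ~315 trillion
```

And again, this counts only impactors energetic enough to produce a visible flash.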
Admittedly, the estimate assumes a constant rate of incoming projectiles, but planetary scientists generally are reluctant to invoke special epochs. They would prefer a steady state. Assuming a constant rate also makes the work easier. And yet scientists in another paper had to go out on a limb and explain away discordant findings:
Forgetting Bad Assumptions of Past Research
In Science, Sara Mazrouei, William Bottke and their team had to hypothesize a non-steady impact rate for the Earth and moon when the data conflicted with their assumptions. In their paper, “Earth and Moon impact flux increased at the end of the Paleozoic,” they concluded from crater counts that there must have been a surge of impacts on both bodies 290 million years ago that was 2.6 times higher than normal. Summaries of the findings can be read on Science Daily and on Space.com. What’s instructive for now is how they arrived at their conclusions and, more important, whether those conclusions are reliable.
The data are messy. Impactors are hard to characterize and predict. We can see them happening, as on the NELIOTA experiment. We can see their marks as craters. We have some historic data on fireballs that struck the Earth. Now, we have 90 hours of data on moon flashes. We also know that lunar transients have been reported for at least a millennium. But for hard, observable science, that’s about it. Other kinds of evidence are indirect, as we shall see, requiring interpretation and modeling. The more detail you try to put into a historical account, the more assumptions you need. Here are some this team made, and possible problems with each.
Assumption Mongering
Heat: From Lunar Reconnaissance Orbiter (LRO) measurements of crater temperatures, the team estimated the ages of moon craters. How? Knowing that rocks erode over time, and that rocks hold their heat longer than dust during the 14-day lunar night (i.e., they have higher thermal inertia), they inferred from temperature profiles that younger craters have more rocks, and older ones more dust. They then fit the resulting relative ages into the standard moyboy geological timeline.
Possible problems: Is the temperature decrease with time linear or exponential? Space.com reports that veteran planetary scientist Jay Melosh of Purdue is not convinced: “he’s not sold on the boulder-disintegration model they used — he thinks it doesn’t properly account for how that process speeds up as rocks get smaller.”
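A toy calculation (all numbers hypothetical, not from the paper) shows how much rides on the assumed decay law: the very same measured rock abundance yields substantially different ages under a linear versus an exponential model calibrated to the same reference point.

```python
import math

# Hypothetical illustration: suppose a crater's measured rock abundance
# is 25% of its initial value, and both models are calibrated so that
# abundance falls to 50% at 1 billion years.
calibration_age = 1.0e9   # assumed calibration point (years)
measured_fraction = 0.25  # hypothetical measurement

# Linear decay: fraction = 1 - k*t, with k set by the calibration point.
k_lin = 0.5 / calibration_age
age_linear = (1 - measured_fraction) / k_lin             # 1.5 Gyr

# Exponential decay: fraction = exp(-lam*t), same calibration point.
lam = math.log(2) / calibration_age
age_exponential = math.log(1 / measured_fraction) / lam  # 2.0 Gyr

print(f"linear model:      {age_linear / 1e9:.1f} Gyr")
print(f"exponential model: {age_exponential / 1e9:.1f} Gyr")
```

Same data, same calibration, a half-billion-year disagreement: the inferred age is a function of the modeler’s assumption, not just the measurement.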
Sample Size: The authors tallied craters on the Earth, but admit that many parts of the Earth, for various reasons, are inaccessible and not well explored.
Possible problems: In the Space.com article, Melosh “doesn’t see enough Earth craters to support solid statistical analyses; he worries that they’re working from too small a sample size.”
Incomparable Objects: The authors realize that the Earth erodes its craters, whereas the moon does not. Earth has an atmosphere, rain, glaciers, earthquakes, plate tectonics, and other processes that can make quick work of craters. Only a few spots on Earth are safe enough to preserve their craters, and so these became their index craters for anchoring dates. On the moon, Apollo astronauts brought back samples that could be analyzed with radiometric dating.
Possible problems: All dating methods assume steady decay rates, but we have only observed radiometric decay for about a century, not billions of years. Radiocarbon, the one method that can sometimes be cross-checked against historical records, doesn’t help here, because there is no life on the moon. The scientists applied the same radiometric methods to incomparable objects and tried to fit the results onto the same geologic timeline. And the Apollo samples are not representative of the whole moon.
Incomparable Processes: Recognizing that the moon is a smaller target than the Earth, the authors calculated that, per unit area, the Earth receives 1.6 times as many impacts as the moon.
Possible problems: The Earth also has a steeper gravity well, which they do not appear to have taken into account. The Earth also has an atmosphere that burns up impactors below a threshold of size and speed. The authors assumed that the impactors came from the asteroid belt. A number of other assumptions went into their ratio, and that ratio was then fed into downstream conclusions, which therefore carry all those embedded assumptions.
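The gravity-well point can be illustrated with the standard gravitational-focusing factor, 1 + (v_esc/v_inf)², which enhances a body’s effective collision cross-section. The escape velocities are well-known values; the encounter speed below is a hypothetical choice, not a figure from the paper:

```python
# Illustrative sketch of gravitational focusing (not the paper's calculation).
V_ESC_EARTH = 11.2   # km/s, Earth escape velocity
V_ESC_MOON = 2.4     # km/s, lunar escape velocity
v_inf = 15.0         # km/s, hypothetical encounter speed of an impactor

# Cross-section enhancement factor: 1 + (v_esc / v_inf)^2
focus_earth = 1 + (V_ESC_EARTH / v_inf) ** 2
focus_moon = 1 + (V_ESC_MOON / v_inf) ** 2

print(f"Earth enhancement: {focus_earth:.2f}x")
print(f"Moon enhancement:  {focus_moon:.2f}x")
print(f"Earth/moon focusing ratio: {focus_earth / focus_moon:.2f}")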

Analysis of crater Bruno caused new worries about crater-count dating (30 Nov 2018)
Diamonds: As a check against their data, the authors compared locations with impact scars to the record of kimberlite pipes, the deep conduits left by fast-erupting volcanoes where diamonds are usually found (see 1 June 2012). Since the pipes are intrusive features extending downward 1.5 km, they should provide a cross-check on erosion in those areas, to compare with erosion rates at impact craters.
Possible problems: It’s great to cross-check one data set against another, but the authors assume moyboy dates for both the kimberlite eruptions and the impact craters. Those assumed dates become constants on which their variables depend. If neither is reliable, the conclusions cannot be more reliable.
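The shared-assumption problem can be sketched with the standard radiometric age equation, t = ln(1 + D/P)/λ. The daughter/parent ratio below is hypothetical, chosen only for illustration; the point is that two dates computed under the same assumed constant agree with each other no matter what that constant is.

```python
import math

def radiometric_age(daughter_parent_ratio, decay_const):
    """Standard age equation: t = ln(1 + D/P) / lambda."""
    return math.log(1 + daughter_parent_ratio) / decay_const

lam = 1.55e-10                 # assumed decay constant (1/yr), U-238-like
ratio = 0.046                  # hypothetical daughter/parent ratio

kimberlite_age = radiometric_age(ratio, lam)   # ~290 Myr
crater_age = radiometric_age(ratio, lam)       # ~290 Myr -- they "agree"

# Halve the assumed constant: both dates double, yet the cross-check
# still passes, because the assumption is common to both data sets.
kimberlite_alt = radiometric_age(ratio, lam / 2)
crater_alt = radiometric_age(ratio, lam / 2)
assert kimberlite_alt == crater_alt

print(f"agreed age: {kimberlite_age / 1e6:.0f} Myr")
print(f"agreed age under altered assumption: {kimberlite_alt / 1e6:.0f} Myr")
```

Agreement between the two records tests their consistency, not the correctness of the assumption they share.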
Crater Count Uncertainties: Problems with crater count dating were almost completely ignored in this paper (see 17 Sept 2010, 12 Oct 2016).
Possible problems: Previous work has shown that crater-count dating is so plagued with uncertainties that it is virtually useless for drawing any conclusions about surface ages. One large impact on Mars can produce a million secondary craters! To reduce the uncertainty in crater counts due to self-secondaries, the authors used methods from a 2017 paper that analyzed the ejecta blanket of crater Aristarchus. They did not, however, cite a more recent 2018 paper about crater Bruno that complicates the problem of self-secondaries further (see 30 Oct 2018). And what about those 315 trillion impacts, if the moon is so old?
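A toy calculation (all numbers made up for illustration) shows why secondaries matter so much: under the method’s own assumption of a constant impact flux, the inferred age scales linearly with counted crater density, so every secondary mistaken for a primary inflates the age directly.

```python
# Hypothetical illustration of secondary-crater contamination.
primaries = 200          # true primary craters on a surface
secondaries = 600        # secondaries mistakenly counted as primaries
flux = 100               # assumed primaries per unit area per Myr (made up)

true_age = primaries / flux                      # what the count should give
inferred_age = (primaries + secondaries) / flux  # what it actually gives

print(f"true age:     {true_age:.1f} Myr")
print(f"inferred age: {inferred_age:.1f} Myr (4x too old)")
```

With a single large impact able to scatter a million secondaries, the contamination need not be subtle.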
Justified Guesswork?
These are just some of the assumptions that cast doubt on the reliability of the paper. If the assumptions are wrong, then the conclusions based on those assumptions are also wrong (or can only be right by coincidence, like the proverbial stopped clock that is right twice a day). One of the authors justified the work, tentative as it is, in this way, according to Space.com:
“We’re doing the best we can with what we have now,” Zellner said. “This is science, right? We put ideas out there, and then we find ways to test those ideas, and the idea either stands the test of time or it doesn’t.”
While this sounds noble, it is not realistic. Journals “put ideas out there” and they become set in stone. Other scientists start citing the paper as authoritative. Corrections are sometimes issued, but how many scientists see them? Retractions are rarer still.

This “set-in-stone” effect of publishing is exacerbated by this paper’s references. How many of those 73 references did any of the authors actually read? How many of those references had been corrected, retracted, or superseded by newer discoveries in the meantime? As we saw, the authors relied on a 2017 paper about self-secondary craters but missed a 2018 paper that undermined it. And how many of the cited papers rest on the same faulty assumptions as this one?

All these sources of error can perpetuate unreliable science in print. Institutions are not likely to help, because the goal of their press releases is to exalt their scientists, not criticize them. The press releases then get picked up by aggregators like EurekAlert, Science Daily and Phys.org, which multiply the errors around the world until the claims become ‘accepted truth’ and the basis for the next unreliable paper. The system of science dissemination works against mavericks and independent thinkers outside the reigning consensus paradigm.
The authors of this paper also think the work is justified because it helps us understand the risk Earth faces with another large impact, which could cause mass extinction (not that we humans could do anything about it, except worry). Given the unknowns, though, (and not even considering the unknown unknowns), perhaps the best face that can be put on this paper is to call it organized ignorance.
We do not wish to be overly critical of this paper, because in many ways, it provides an insight into modern scientific reasoning. Readers can enjoy following the logic of the authors, as they encounter anomalies and work their way through them with Bayesian models, testing one set of assumptions against another, cross-checking data, reasoning through which scenario appears more plausible. They use math and draw graphs. They reference previous work. What’s there to criticize? Isn’t this better than doing nothing? You can hear the atheists sneering, “Isn’t this better than taking the word of some ancient pre-scientific text about the Earth and the moon?” Science may not be perfect, they will argue, but it’s the best method we have (see Best-in-Field Fallacy).
In response, we like to point out that such folks, despite their materialism and scientism, betray their belief in the supernatural. One cannot judge something to be good, better or best without acknowledging a moral standard (which is supernatural), and logic (ditto). Even aside from that comeback—which effectually pulls the rug out from under their mocking—an imperfect science is only as good as its reliability. A poor map may be better than no map at all, but a dozen poor maps floating in la-la-land are worse than no map at all. If the blind lead the blind, both fall into the ditch, Jesus said. What in this paper gives anyone confidence that its conclusions are reliable? We see a lot of busy work. That can be good. We see lots of sciency-looking trappings. That can be good. We see detailed observations of nature, which is very good. But overall, the paper is riddled with assumptions and unknowns, and it is supported by circular reasoning: i.e., the assumption of billions of years is used to prove billions of years. Its conclusions look like a house of cards on quicksand. Are you satisfied with organized ignorance? Is that “science” (knowledge)?
So now, we ask if it is helpful to put organized ignorance out there. We respond that it can be, as long as ALL the possible answers are up for consideration. We humans don’t know everything, obviously. We struggle to understand what we see. These folks, though, have pre-rejected Eyewitness testimony from the outset, and so they would not even consider the option that God actually told us how (and when) He created the Earth and the moon. The authors have also willfully ignored abundant evidence of a young Earth, moon and solar system. If indeed the Truth is in Genesis, then every alternative that rejects Truth is not only false, but evil. It’s doubly evil to use one’s gifts of rationality to turn other people away from the Truth.
This is not to say that Bible-believing scientists understand everything and don’t have problems of their own. The hero of Pilgrim’s Progress had LOTS of problems! But starting on the right road is the only way to get to the right place. Investigating nature on the right road is a gift of God that brings joy and awe. Since no human has exhaustive knowledge, it’s good to learn more. Psalm 111:2, a motto for many great scientists, says, “Great are the works of the Lord, studied by all who delight in them.” Daniel prophesied that in the last days, many would move from place to place, and knowledge would increase (Daniel 12:4). The Bible promotes learning, asking questions, and testing things. It only disdains willful unbelief.
This paper could be rescued as an exercise in logical analysis if it had started with this sentence: “If we were to assume for the sake of rational exercise that the Earth and the moon came into being by blind chance billions of years ago, here are some of the problems we would have to solve” and if it ended with “As you can see, scientism provides no certainty; at best, tentative models riddled with contradictions; at worst, a rejection of Eyewitness testimony. This exercise in rigorous storytelling reminds us of the gift of rationality that, while granting us the liberty to go on intellectual detours, brings us back to what we know with increased appreciation and gratitude.” We could say Amen to that.