Can a Robot Build Itself?
The news media got a load of Joseph Jacobson’s toy robots that could make copies of themselves. Ker Than on LiveScience, for instance, called these “biological” robots:
Inspired by biological systems, scientists have developed miniature robots that can self-assemble using parts that float randomly in their environments. The robots also know when something is amiss and can correct their own mistakes. (Emphasis added in all quotes.)
(See also MSNBC News). Calling these things “robots” requires a little stretch of imagination. They don’t walk or clean the carpet. They only have two parts. The parts line up in sequences five parts long. If extra parts are floating around, new copies of the 5-element sequence will form automatically because of the way they are designed to fit together.
Jacobson (MIT) made the parts latch onto each other in specific ways. The work was inspired by DNA, according to Stefan Lovgren in National Geographic, who said the goal was to illustrate the fundamental aspects of biological replication. Self-assembly had been demonstrated before:
But the new robots mark the first time a mechanical system has been created that can self-replicate from random parts using the same principles as biological systems, which assemble structures from disordered building blocks using error correction.
“We identified two ingredients about the biological process,” Jacobson said. “One is that it can make these copies from random parts that are distributed throughout the environment, and second is that it can do so with very high fidelity [accuracy].”
Jacobson also said, “The analogy really is that of biology. Biology is exquisitely good at building highly complex, well-ordered structures from disordered parts.” The paper was published in Nature.1
Does this new work bear at all on the question of the origin of this high-fidelity self-replication? None of the articles speculated about it explicitly, but the paper did state that attempts by robotics experts “have yet to acquire the sophistication of biological systems.” The authors also noted that without error correction, the yield for replicating an n-bit string becomes exponentially small, the longer the string.2
1Griffith, Goldwater and Jacobson, “Robotics: Self-replication from random parts,” Nature 437, 636 (29 September 2005) | doi: 10.1038/437636a.
2(1 – e)^n, where e is the error rate per part. For a string of length 5 with two kinds of parts, as in this experiment, the yield would be just 3% if e = 0.5. For a string of length 10, the yield drops to about 0.1%. For a string of length 100, the size of a small protein, the yield is about 8 x 10^-31, and that assumes only two kinds of parts. Since proteins are made up of 20 different kinds of amino acids, the error rate is correspondingly higher, and the yield much, much lower.
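The yields quoted in note 2 follow directly from the formula: with a per-part error rate e, the chance that every one of n parts is correct is (1 – e)^n. A minimal sketch (the function name `yield_fraction` is ours, not from the paper):

```python
# Yield formula from note 2: with per-part error rate e, the probability
# of assembling an n-part string with zero errors is (1 - e)**n.
# The example values assume e = 0.5, i.e. two part types picked at random.

def yield_fraction(n: int, e: float = 0.5) -> float:
    """Probability that all n parts in a string are correct."""
    return (1 - e) ** n

print(yield_fraction(5))    # 5-part string, as in the experiment: 0.03125 (~3%)
print(yield_fraction(10))   # ~0.001 (~0.1%)
print(yield_fraction(100))  # ~7.9e-31, the length of a small protein
```

The exponential decay is the whole point: each added part multiplies the yield by (1 – e), so without error correction longer strings become vanishingly rare.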
One wonders if anti-ID apostle Ker Than leapt onto this story during the week of the Dover trial to show that the problem of the origin of life may not be that bad. He could show pictures of “self-replicating robots,” just like DNA. The devil is in the details.
This experiment supports ID and defeats chemical evolution theory in many important ways. (1) It illustrates the extreme differences in complexity between Jacobson’s simple 2-part, 5-length strings of nonsense and the luxuriously ordered forms of DNA and proteins. (2) It shows that intelligent guidance is required to make the parts fit together according to rules. (3) It overlooks the problem of left- and right-handed forms. (4) It requires a suitable environment for the parts to come together (here, a frictionless surface with ample spare parts). (5) The error correction derives from the parts themselves. In the cell, DNA errors are corrected by multiple proofreading machines. (6) It makes the yield for lengthier strings of more parts appear hopeless. (7) It demonstrates that no language convention arises by the attractive forces of components. Jacobson got strings of GGYYG and YGGYY. What does that spell? What function or meaning does it convey? Nothing.
In living cells, the DNA is a code that specifies parts that have function. These codes are translated by machines into another code. Multiple machines and pathways exist to maintain and correct the DNA language. Any resemblance, therefore, of these so-called “error-correcting robots” to DNA is as superficial as bits (0 and 1) are to an encyclopedia. Don’t allow such things to be used as propaganda for evolution when they are really strong arguments for intelligent design. According to Dembski’s no free lunch principle, any semblance of complex information achieved by this “evolutionary” algorithm was only made possible by the insertion of intelligent design on the front end. Naturalism can permit no such luxury.