Turing Test Stands: Your Brain Outperforms Computers
What is the speed of thought? Computer speeds are measured in megahertz and gigahertz, but that’s only part of the story. The ability to compute an answer to a problem depends on the programming, too. How does the brain compare with our best computers? A scientist from UC San Francisco and one from the Salk Institute teamed up to address that question in Current Biology.1 “Our goal here,” they wrote, “is to compare the capabilities and speeds of the brain with those of modern-day computers.”
To start with, the brain packs over three times as many synapses per microliter (a billion) as a modern computer chip packs transistors (300 million per microliter). Brains also have wires (neurons) that are, on average, about one-tenth as long as those on computer chips. They explained the main difference (at least as far as structure is concerned):
The difference between brains and computers arises not so much in the size of the elementary computer elements as in their numbers: where a modern microprocessor chip has 10^9 transistors, the human brain contains about 10^14 [100 trillion] synapses (and a brain uses about as much power as a microprocessor). A state-of-the-art microprocessor could have close to 30 km of total wire connecting its transistors, where the brain has 3 to 4 x 10^5 km of wire (most of which is axons). The brain’s total wire, then, is about the same as the mean distance from the earth to the moon (a little less than 4 x 10^5 km). Clearly, although the sizes of the basic computer elements are not so different between brains and computers, what is vastly (a million fold) different is the number of elements.
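If you want to check those numbers yourself, here is a quick back-of-envelope in Python (our sketch, not the authors’; all figures are the round, order-of-magnitude estimates quoted above):

```python
# Back-of-envelope check of the component counts quoted above.
synapses = 1e14          # human brain: ~100 trillion synapses
transistors = 1e9        # modern microprocessor: ~10^9 transistors
brain_wire_km = 3.5e5    # brain wiring: 3-4 x 10^5 km, mostly axons
chip_wire_km = 30.0      # state-of-the-art chip: ~30 km of wire
earth_moon_km = 3.84e5   # mean Earth-Moon distance

print(f"element count ratio: {synapses / transistors:.0e}")        # ~1e+05
print(f"wire length ratio:   {brain_wire_km / chip_wire_km:.0e}")  # ~1e+04
print(f"brain wire / Earth-Moon distance: {brain_wire_km / earth_moon_km:.2f}")
```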
Now to processing speed. If a pulse in a neuron is assumed to represent an instruction, the brain wins again: 10^11 (100 billion) instructions per second, a hundredfold more than a computer with multiple cores. (The computer’s gigahertz clock speed makes up some of what the brain gains in sheer numbers of synapses.)
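Here is a rough sanity check on that figure. The assumptions are ours, for illustration only: roughly 10^11 neurons averaging about one spike per second, with each pulse counted as one instruction.

```python
# Rough sanity check on the "speed of thought" estimate above.
# Assumptions are ours, for illustration only.
neurons = 1e11               # ~100 billion neurons
mean_firing_rate_hz = 1.0    # ~1 spike/s average (assumed)
brain_ips = neurons * mean_firing_rate_hz   # ~1e11 "instructions"/s

# A circa-2008 multi-core machine, crudely rounded to ~1e9
# instructions/s to match the article's hundredfold figure; real
# throughput depends on cores, pipelining, and instructions per cycle.
computer_ips = 1e9

print(f"brain:    {brain_ips:.0e} instructions/s")
print(f"computer: {computer_ips:.0e} instructions/s")
print(f"ratio:    {brain_ips / computer_ips:.0f}x")  # ~100x, as claimed
```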
So far, they’ve only been talking about components. Major differences between the brain and a computer appear when the architecture is considered. Computers use a von Neumann architecture, in which the central processing unit (CPU) is segregated from the memory. This forces computer CPUs into a one-instruction-at-a-time straitjacket: each instruction requires calls to memory, and each component has to keep in sync with the master clock. The brain, by contrast, is the master of flexible multiprocessing. They explain:
Neural circuits have no need for a central clock to keep actions exactly synchronized because any neural circuit in the brain has its own instructions embedded in the circuit itself: whenever it is presented with information, the circuit knows just what to do with it. Because the brain is not bound by the Von Neumann architecture, exactly what a particular neural circuit computes can be modified on the fly without reference to other circuits (as when we shift our focus of attention from one thing to another) and can also remember things for a lifetime (how to ride a bicycle).
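To make the contrast concrete, here is a toy caricature (ours, not the authors’) of the von Neumann cycle the brain is not bound by: one instruction at a time, each a round-trip to shared memory, everything gated by a single clock.

```python
# Minimal caricature of the von Neumann cycle: one instruction at a
# time, each step a round-trip to a shared memory, all gated by a
# master clock. Illustrative only.
memory = {
    0: ("LOAD", "x"), 1: ("ADD", "y"), 2: ("STORE", "z"), 3: ("HALT", None),
    "x": 2, "y": 3, "z": 0,
}

pc, acc, clock = 0, 0, 0
while True:
    op, arg = memory[pc]          # fetch: a trip to memory
    clock += 1                    # every step waits on the master clock
    if op == "LOAD":
        acc = memory[arg]         # another memory round-trip
    elif op == "ADD":
        acc += memory[arg]
    elif op == "STORE":
        memory[arg] = acc
    elif op == "HALT":
        break
    pc += 1

print(f"z = {memory['z']} after {clock} clock ticks")  # z = 5 after 4 ticks
```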
Another major difference is that the brain is massively parallel and computers are not. Look at a shape – like a star. The computer has to examine each pixel and calculate its relationship to the neighboring pixels (or to standard shapes) before figuring out what it is; a little girl can see it all at once and know what it is instantly. Humans can handle enormous quantities of data all at once.
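A toy illustration of the difference (our sketch; a real vision system is far more involved): the serial version trudges through pixels one at a time, while the vectorized version performs the whole comparison as one parallel operation.

```python
import numpy as np

rng = np.random.default_rng(0)
template = rng.integers(0, 2, size=(64, 64))   # stand-in binary 'star' shape
image = template.copy()                        # a scene containing the star

def match_serial(img, tmpl):
    # One pixel at a time, like a von Neumann machine.
    hits = 0
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            hits += img[i, j] == tmpl[i, j]
    return hits / img.size

def match_parallel(img, tmpl):
    # The whole comparison at once, closer to the brain's style.
    return (img == tmpl).mean()

assert match_serial(image, template) == match_parallel(image, template) == 1.0
```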
Nagarajan and Stevens admit that computer designers have been making enormous improvements. Parallel processing is all the rage these days. In time, they may catch up with the brain; but for now, “The problem with emulating the brain’s massive parallelism, however, is that we are not even close to being able to use the increased hardware power efficiently,” they said.
This is not to disparage computers. They are better at some things: they can perform repetitive tasks with high reliability, for instance, while brains, working probabilistically, get distracted, bored, or make errors. Yet the brain’s probabilistic operation turns out to be good for another reason: redundancy.
The four times out of five that information about a nerve impulse arriving at a synapse is not relayed on to the target cell by a synapse could be viewed as errors, but in fact synapses are designed this way. Neural circuits are highly redundant, with the same information arriving simultaneously at many synapses on different neurons so that, on average, neural components are predictable, in the same sense that a fair coin is predictable: you never know on a given flip whether heads or tails will turn up, but you can be sure that there will be very close to 500 heads out of a thousand flips.
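The coin-flip logic is easy to simulate (a minimal sketch of our own, using the article’s one-in-five relay figure):

```python
import random

random.seed(1)

# Each event is unpredictable; the aggregate is highly predictable.
heads = sum(random.random() < 0.5 for _ in range(1000))
print(f"heads in 1000 flips: {heads}")  # very close to 500

# A single synapse relays a spike only ~1 time in 5, but the same
# message arrives at many redundant synapses on different neurons.
p_release, n_synapses, trials = 0.2, 10, 10_000
delivered = sum(
    any(random.random() < p_release for _ in range(n_synapses))
    for _ in range(trials)
)
print(f"message delivered: {delivered / trials:.3f}")  # ~1 - 0.8**10 = 0.89
```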
An added benefit of this redundancy is high fault tolerance. We don’t have to worry when one neuron goes bad. One bad transistor in a computer, though, can be catastrophic. You can lose neurons as you age and still “function at a high level.” Here’s another “design” feature in your head: “Because of another brain design principle, the fact that neurons with the same function are located close to one another in the brain (this is called the doctrine of localization of function), the brain is much more tolerant to random death of neurons than it is to focal injury (such as a bullet wound or a stroke).”
They’re not done yet. Neurons can alter the strength of their signals without a hardware upgrade. They can do this by changing the probability of transmission through a particular neuron on the fly. And here’s another big brain bonus: scalability:
There is a final big difference between the designs of computers and brains considered here. Every time the performance of a computer circuit is improved, major design changes are necessary. Even modest alterations, like modifying the thickness of the wires on the computer chip, mean the computing components on the chip must be rearranged (a very difficult process). For evolution to work, however, neural circuits must have what is called a scalable architecture. This means that the computing performance can be improved by simply increasing the number of components and enlarging the circuit in accordance with the original design. Brain circuits generally have scalable architectures so that, for example, we are not even aware of the usual two to three fold differences in the size of brain areas from one brain to the next.
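Here is a small sketch of what a scalable architecture means in practice (our illustration, not from the paper): the same averaging “circuit” runs unchanged whether it is built from 100 units or 300, and enlarging it simply improves precision.

```python
import random

random.seed(2)

def circuit_estimate(signal, n_units):
    # Each unit reports the signal corrupted by its own noise; the
    # circuit's output is the average. Enlarging the circuit just means
    # raising n_units -- no redesign of the circuit itself.
    readings = [signal + random.gauss(0, 1) for _ in range(n_units)]
    return sum(readings) / n_units

for n in (100, 300):  # a three-fold enlargement, same design
    errors = [abs(circuit_estimate(5.0, n) - 5.0) for _ in range(2000)]
    print(f"{n:4d} units: mean error {sum(errors) / len(errors):.3f}")
```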
That fleeting reference to evolution seemed out of place amidst all the talk of design principles.
In conclusion, they calculated the speed of thought. Unfortunately, as discussed above, comparing computers to brains is like comparing apples to oranges (or Macintoshes to geniuses). One way to compare them is to have them run the same benchmark test. Here’s one: identifying faces. The human brain shines here: we can outperform computers at face recognition by an order of magnitude, and with higher accuracy. But remember – that’s with a brain that is also doing many other things simultaneously. Giving a special-purpose computer the same task is cheating. Realistically, “One of the most difficult things for a computer to do is to extract objects from a visual scene, but we do this so rapidly and effortlessly that we are not even aware that it is hard.”
Another benchmark is the famous Turing test: the “thinking” test. If a questioner cannot tell the difference between a human’s answer and a computer’s, the computer passes: it becomes indistinguishable from a thinking person. The authors point here to a Turing test computers have so far been unable to beat: the CAPTCHA. You’ve probably logged into secure websites where you were presented with a distorted word on a scrambled background that you had to identify and retype. Why do site designers do that to you? Because they know that computers have a terrible time getting the answer right. “The ease with which CAPTCHAs can be developed exposes obvious gaps between capabilities of computers and the brain.”
Speaking of CAPTCHA (which stands for “Completely Automated Public Turing test to tell Computers and Humans Apart”), Science had a triumphant-sounding paper about computer scientists who have figured out how to harness the collective power of millions of human brains.2 Since we humans are so good at CAPTCHA, the team decided to “re-CAPTCHA” some of our spare resources. Humans are performing about 100 million CAPTCHA operations a day anyway, so why not take advantage of all that processing power? It’s kind of like how SETI@home uses idle cycles from millions of computers. By adding a second word needing recognition to each CAPTCHA challenge, reCAPTCHA takes just a moment more of your time to help digitize books. How? There are massive projects underway to digitize libraries. These efforts employ optical character recognition (OCR) to convert scanned pages into computer-coded characters, so they can enjoy all the benefits of search engines and cross-references. Unfortunately, OCR often has trouble recognizing words. These unknown words are farmed out to reCAPTCHA sites for humans to interpret, and a clever cross-checking mechanism makes sure the answer is not bogus. The human-deciphered answer is correct over 99% of the time, compared to OCR’s success rate of around 80%. It’s almost funny how these designers speak of the brain’s “wasted human processing power” being put to good use. But they really do have human benefit in mind: “We hope that reCAPTCHA continues to have a positive impact on modern society by helping to digitize human knowledge,” they said in conclusion.
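The cross-check is easy to sketch. The names and rules below are ours, hypothetical and for illustration only; the deployed system’s exact policies differ. The idea: pair the OCR-defeating word with a control word whose answer is known, count a user’s guess only if the control word was typed correctly, and accept the unknown word once enough independent users agree.

```python
from collections import Counter

def grade_response(control_word, typed_control, typed_unknown):
    # Count the guess for the unknown word only if the user proved
    # themselves human by getting the known control word right.
    if typed_control.lower() == control_word.lower():
        return typed_unknown.lower()
    return None

def decipher(guesses, quorum=3):
    # Accept the unknown word once enough validated users agree.
    votes = Counter(g for g in guesses if g is not None)
    if not votes:
        return None
    word, count = votes.most_common(1)[0]
    return word if count >= quorum else None

responses = [
    grade_response("upon", "upon", "morrow"),
    grade_response("upon", "opon", "marrow"),  # failed control: ignored
    grade_response("upon", "upon", "morrow"),
    grade_response("upon", "upon", "morrow"),
]
print(decipher(responses))  # morrow
```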
And so we return to Nagarajan and Stevens, who concluded by speculating on whether computers will ever catch up with the human brain. Progress in computer design has certainly been impressive (Moore’s Law and all), “but we believe the problem is not computer power and ability to program parallel machines, but rather our nearly total ignorance about what computations are actually carried out by the brain,” they said. The last word: “Our view is that computers will never equal our best abilities until we can understand the brain’s design principles and the mathematical operations employed by neural circuits well enough to build machines that incorporate them.”
1. Naveen Nagarajan and Charles F. Stevens, “How does the speed of thought compare for brains and digital computers?”, Current Biology, 9 September 2008: Vol. 18, pp. R756–R758.
2. von Ahn, Maurer, McMillen, Abraham, and Blum, “reCAPTCHA: Human-Based Character Recognition via Web Security Measures,” Science, 12 September 2008: Vol. 321, No. 5895, pp. 1465–1468, DOI: 10.1126/science.1160379.
Now wasn’t that an absolutely satisfying, fascinating, thrilling journey into your head, and a classic look at intelligent design science at work? Don’t show this to Eugenie Scott or she’s likely to have a cerebral hemorrhage. How many mutations did that take?
Wow! Scientists admit it: the brain is built on design principles. It is a massively parallel, robust, fault-tolerant machine that has kept the Turing Test challenge intact throughout decades of rapid, phenomenal computer design. Its design can be compared with computers that we design – and it is far superior. How on earth can anyone believe for a millisecond that this wonder just happened? And consider: they didn’t tell the half of it. The scientists vastly oversimplified things. Your brain is handling millions of subconscious operations at the same time you are thinking about this article, or identifying a face or a CAPTCHA word. Give a supercomputer all that simultaneous work and it would melt down. A computer has to be plugged in or recharged, but you can wander the globe. Your computer takes several minutes to boot up, but you can be awake and aware instantly. It has to be protected from liquids, but you can dive into a pool without your CPU shorting out. And it runs on potatoes! (a favorite quip of A. E. Wilder-Smith, our Scientist of the Month).
Think! If “design principles” are required to understand the brain’s operation, of what purpose or value (or credibility) is evolutionary theory? While you’re at it, think about the conundrum of a brain thinking about itself—or a brain thinking about the conundrum of a brain thinking about itself. If some day a computer can fool a judge into thinking it is human, will that computer really be self-aware? Will it experience love, worship, beauty, or truth? Robots in Star Trek probe these questions, but the fact is, we design computers, and computers don’t design us (except in science fiction).
Get real. Think. If design principles were active in our creation, then there was a Designer who employed those principles. This includes the hardware and the software. You would not be able to consciously think about anything without an embedded BiOS (Bible Input-Output System) that the Designer built in, which gives you the preconditions for intelligibility of the world. Everyone has it. You couldn’t run the thinking application without it. But just as a good computer can be tricked into running malware (malicious software), a created being can be tricked into thinking its brain is a product of evolution. That is necessarily false. The brain could not even run that malware without the BiOS.
When you’re infected with this deep-seated, entrenched virus, or any of the other malware that information terrorists inserted into the global shipment, the only solution is to recognize that fact, then wipe, reinstall, and patch. Fortunately, outstanding technical support is just a call away (Isaiah 55:6-9) – and it’s free, straight from the Designer himself. Operations Manuals are also freely available by request (see BlueLetterBible and Bible Gateway, or that book in your hotel room drawer).
Got cycles? Think, then thank. Worship. CAPTCHA the thrill of the Psalmist who exclaimed, “I will praise Thee, for I am fearfully and wonderfully made. Wonderful are Your works, and my soul knows it very well” (Psalm 139:14, italics added).