We’re Safe from an A.I. Takeover
Don’t worry. AI is a very long way from replacing humans.
It cannot do even simple rocket-science projections yet.
by Jerry Bergman, PhD
Artificial intelligence (AI) is much in the news now. AI has helped us to learn about, and appreciate, the complexity of the human brain.[1] Essentially, AI refers to the ability of a computer to perform tasks similar to those produced by human learning and decision making.[2] Nonetheless, scientists are learning much about the limits of man-made machines from research on what is misnamed “artificial intelligence.” Although the computer’s processing power is called “intelligence,” its output is totally dependent on the programming and design produced by well-educated humans. One fact we are learning from AI research is that the human brain is vastly superior to the best computers available to our most eminent scientists at leading research institutions, machines built with billions of our hard-earned tax dollars.
It appears that AI will never be able to compete with the human brain, even at many mundane tasks. The reality is that AI cannot be more intelligent than its creator, even though humans can create machines that complete many tasks much faster than humans can. Computers can extract square roots of very large numbers much faster than humans only because of their enormous processing speed. To achieve this they were programmed by humans who understand the process; the computer itself does not comprehend the tasks it completes.
The Limits of AI
Professional rocket scientist Tiera Guinn Fletcher reviewed the text and images of a rocketry design generated by the latest AI technology.[3] Her goal was to determine whether AI computer programs could produce the basic concepts and design behind what makes rockets fly. Her conclusion? In virtually every case the AI system
failed to accurately reproduce even the most basic equations of rocketry. Its written descriptions of some equations also contained errors. And it wasn’t the only AI program to flunk the assignment. Others that generate images could turn out designs for rocket engines that looked impressive, but would fail catastrophically if anyone actually attempted to build them…. ChatGPT has proven inept at reproducing even the simplest ideas in rocketry. In addition to messing up the rocket equation, it bungled concepts such as the thrust-to-weight ratio, a basic measure of the rocket’s ability to fly…. When asked to provide a blueprint of a rocket engine, they produced complex-looking schematics that vaguely resemble rocket motors but lack things like openings for the hot gasses to come out of. Other graphics programs … produced similarly cryptic motor designs, with pipes leading nowhere and shapes that would never fly.[4]
An upgraded version with “improved factuality and mathematical capabilities” has solved some of the problems that the earlier system experienced, but it also introduced new “errors into important equations and could not answer some simple math problems.”[5]
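The two quantities the article says ChatGPT fumbled are, in fact, elementary to compute. A minimal sketch in Python of the ideal (Tsiolkovsky) rocket equation and the thrust-to-weight ratio, using made-up illustrative numbers that do not come from the article:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2


def delta_v(isp_s, m_initial_kg, m_final_kg):
    """Tsiolkovsky rocket equation: delta-v = Isp * g0 * ln(m0 / mf)."""
    return isp_s * G0 * math.log(m_initial_kg / m_final_kg)


def thrust_to_weight(thrust_n, mass_kg):
    """Thrust-to-weight ratio; it must exceed 1 for a rocket to lift off."""
    return thrust_n / (mass_kg * G0)


# Hypothetical numbers, chosen only for illustration:
dv = delta_v(isp_s=300.0, m_initial_kg=500_000.0, m_final_kg=100_000.0)
twr = thrust_to_weight(thrust_n=7_000_000.0, mass_kg=500_000.0)
print(round(dv), round(twr, 2))  # → 4735 1.43
```

A few lines of arithmetic like this are all that is required to get these "simplest ideas in rocketry" right, which is what makes the AI systems' failures notable.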

T-shirt sold by AAAS, publisher of Science.
One fundamental problem that limits AI programs is that “they simply cannot figure out the facts…. There are some people that have a fantasy that we will solve the truth problem of these systems by just giving them more data,” says AI scientist Gary Marcus, author of the book Rebooting AI.[6] Marcus has documented the clear limits of AI and why those limits cannot be overcome by more data.
Rockets are still flown mainly by computers because, in contrast to humans, computers can monitor thousands of internal and external conditions, enabling complex systems to make adjustments far faster than humans could. In fact, Paulo Lozano of the MIT Department of Aeronautics and Astronautics correctly noted, “We cannot operate rockets without computers.” Computers have also played a central role in allowing humans to “talk” to rockets, and rockets to us. This, however, is largely a conversion task from one language (English) to a computer language, not the creation of new solutions to problems.
Why the Old Programs Worked
The computers used to design and fly rockets are programmed with all the required equations to respond to different situations. In other words, the programmer creates a path so that the endpoint is predetermined. The programmers must carefully test the programs to ensure they behave exactly as required. The program then uses these endpoints as feedback, which is what is meant by its “learning”: if the correct endpoint is not reached, the program is modified until it is. One such endpoint is keeping the rocket traveling toward its target.
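The endpoint-driven correction described above can be caricatured as a simple feedback loop. Everything in this sketch (the target value, the correction rule, the tolerance) is invented for illustration and is nothing like real flight software:

```python
def steer_to_target(position, target, gain=0.5, tolerance=0.01, max_steps=100):
    """Repeatedly compare the current state to the required endpoint and
    apply a pre-programmed correction until the endpoint is reached."""
    for step in range(max_steps):
        error = target - position
        if abs(error) <= tolerance:   # required endpoint reached
            return position, step
        position += gain * error      # programmed adjustment rule
    return position, max_steps


final, steps = steer_to_target(position=0.0, target=10.0)
```

The point is that the rule for closing the gap is supplied entirely by the programmer in advance; the loop converges because a human chose an adjustment rule that is guaranteed to converge, not because the program understands where it is going.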
As the creators of the new AI systems explain, these systems analyze
a database filled with millions, or perhaps billions, of pages of text or images and pull out patterns. Then they turn those patterns into rules, and use the rules to produce new writing or images they think the viewer wants to see.[7]
This approach has many problems. The system can mimic material drawn from physics textbooks, but it cannot determine whether the mashed-up text it produces from that mass of data is factually correct. “And that means anything it generates can contain an error. Moreover, the program may generate inconsistent results if asked to deliver the same information repeatedly.”[8]
In contrast, when the required endpoint is not reached, humans can analyze each step and then modify the steps as necessary, using reason and logic to solve the problem. Doing so requires drawing on a wide variety of information types and sources. The following example illustrates the complexity of problems that only humans can solve.

Picture of the flight just before it crashed. From Wikimedia Commons.
Problem: American Airlines Flight 191 was a regularly scheduled domestic passenger flight from Chicago O’Hare International Airport to Los Angeles International Airport. On May 25, 1979, the McDonnell Douglas DC-10 was taking off from runway 32R when its left engine detached, causing total loss of control. It crashed less than one mile from the end of the runway. All 258 passengers and 13 crew were killed, making it the deadliest aviation accident in United States history. To ensure that such a tragedy never occurred again, it was necessary to determine specifically what caused the crash.[9]
The first step was to locate, and then interview, everyone who had observed any part of the accident. None of the witnesses reported that the aircraft struck any foreign object on the runway. Furthermore, apart from the separated engine and its supporting pylon, no pieces of the wing or other aircraft components were found on the runway. Investigators therefore concluded that nothing else had broken free from the airframe that could have struck the engine.

Picture of wreckage on the ground. The next step was to determine the cause of the crash. From Wikimedia Commons.
An examination of the crew’s response determined that, although they might conceivably have avoided the accident, their behavior was not the cause of the crash. Furthermore, the cockpit instrument panels were severely damaged and therefore could not provide useful information. The investigation revealed that American Airlines had developed a maintenance procedure that saved about 200 working hours per aircraft by removing the engine and pylon as a single unit, reducing the number of system disconnects, including the hydraulic and fuel lines, electrical cables, and wiring. After many false leads, the report finally concluded that the accident was triggered by
maintenance-induced damage leading to the separation of the No. 1 engine and pylon assembly at a critical point during takeoff. The separation resulted from damage caused by improper maintenance procedures, which led to the failure of the pylon structure. Contributing to the cause of the accident were the vulnerability of the design of the pylon attachment points to maintenance damage; the vulnerability of the design of the leading-edge slat system to the damage which produced asymmetry; deficiencies in Federal Aviation Administration surveillance and reporting systems, which failed to detect and prevent the use of improper maintenance procedures; deficiencies in the practices and communications among the operators, the manufacturer, and the FAA, which failed to determine and disseminate the particulars regarding previous maintenance damage incidents; and the intolerance of prescribed operational procedures to this unique emergency.
It is clear that AI could never have been used to determine the cause of the crash. Now that the cause has been determined, however, AI could be programmed to duplicate the path the investigators followed, helping to analyze airline crashes with different causes. This example illustrates the problems encountered by the new AI approach, which have been detailed under the heading “Can you teach a computer common sense?” That report observed that the
new systems’ propensity for producing errors may be so innate that there will be no easy way to get them to be more [accurate]…. Although it may be possible to tweak the training to improve their results, it’s unclear exactly what’s required because these self-taught programs are so complex.[10]
Conclusions
AI depends both on the programmers’ skill and knowledge and on the accuracy and completeness of the information fed into the computer. This problem is summarized by the aphorism “garbage in, garbage out.” The study of AI has helped scientists understand just how complex our brain actually is.[11] The fact is, researching how the human mind works is very difficult. As one neuroscientist observed:
You can ask people how they think, but they often don’t know. You can scan their brains, but the tools are blunt. You can damage their brains and watch what happens, but they don’t take kindly to that. So even a task as supposedly simple as the first step in reading—recognizing letters on a page—keeps scientists guessing [on how it works].
AI requires a detailed understanding of the steps needed to complete even simple tasks. This knowledge helps us realize how many complex steps the brain must complete to perform a task rapidly and efficiently. For example, just recognizing the letters on a page requires hundreds of steps, as the development of Optical Character Recognition (OCR) systems has shown. OCR is the process that converts an image of text into a machine-readable text format. Reading by humans requires the integration of at least eight different vision skills alone. Processing the words then requires numerous other systems involving the retina, the optical path including the optic chiasma, and the brain, especially the occipital lobe. And yet we process words effortlessly, with little thought about how it is done.
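The letter-recognition step mentioned above can be caricatured in a few lines of template matching. Real OCR systems involve far more (segmentation, deskewing, language models), so this toy sketch shows only the bare pattern-comparison idea, using tiny 3×5 bitmap glyphs invented for illustration:

```python
# Tiny 3x5 bitmap "glyphs" (1 = ink). These shapes are invented for illustration.
TEMPLATES = {
    "I": [(0, 1, 0), (0, 1, 0), (0, 1, 0), (0, 1, 0), (0, 1, 0)],
    "L": [(1, 0, 0), (1, 0, 0), (1, 0, 0), (1, 0, 0), (1, 1, 1)],
    "T": [(1, 1, 1), (0, 1, 0), (0, 1, 0), (0, 1, 0), (0, 1, 0)],
}


def recognize(glyph):
    """Return the template letter whose bitmap differs from the input
    in the fewest pixels -- the crudest possible OCR classifier."""
    def distance(a, b):
        return sum(pa != pb for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))
    return min(TEMPLATES, key=lambda letter: distance(TEMPLATES[letter], glyph))


# A noisy "L" with one flipped pixel still matches L.
noisy_l = [(1, 0, 0), (1, 0, 0), (1, 1, 0), (1, 0, 0), (1, 1, 1)]
print(recognize(noisy_l))  # → L
```

Even this crude classifier hides human design decisions at every step: the choice of templates, the pixel-difference metric, and the assumption that the glyph is already isolated and aligned. The human visual system handles all of that, and much more, without conscious effort.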

Sculpture from the Museum of Contemporary Art: a thousand non-flying airplane parts assembled by a sculptor to make a statement. Would AI recognize this as intentional art?
References
[1] Bergman, Jerry. 2023. Why the Brain Is Superior to A.I. https://crev.info/2023/02/brain-superior-ai/
[2] Cheprasov, Artem. 2022. What is Artificial Intelligence? https://study.com/academy/lesson/what-is-artificial-intelligence-definition-history
[3] Brumfiel, Geoff. 2023. We asked the new AI to do some simple rocket science. It crashed and burned. https://www.npr.org/2023/02/02/1152481564/we-asked-the-new-ai-to-do-some-simple-rocket-science-it-crashed-and-burned
[4] Brumfiel, 2023.
[5] Brumfiel, 2023.
[6] Brumfiel, 2023.
[7] Brumfiel, 2023.
[8] Brumfiel, 2023.
[9] Wilson, Marc. 1979. “270 killed in Chicago crash, worst in U.S. history.” Eugene Register-Guard (Oregon), Associated Press, 26 May 1979, p. 1A.
[10] Brumfiel, 2023.
[11] Hutson, Matthew. 2017. What artificial brains can teach us about how our real brains learn. Science, 29 September 2017. https://www.science.org/content/article/what-artificial-brains-can-teach-us-about-how-our-real-brains-learn
Dr. Jerry Bergman has taught biology, genetics, chemistry, biochemistry, anthropology, geology, and microbiology for over 40 years at several colleges and universities, including Bowling Green State University, the Medical College of Ohio (where he was a research associate in experimental pathology), and the University of Toledo. He is a graduate of the Medical College of Ohio, Wayne State University in Detroit, the University of Toledo, and Bowling Green State University. He has over 1,300 publications in 12 languages and 40 books and monographs. His books, and textbooks containing chapters he authored, are in over 1,800 college libraries in 27 countries. So far over 80,000 copies of the 60 books and monographs that he has authored or co-authored are in print. For more articles by Dr. Bergman, see his Author Profile.