The Birth of Autonomous Artificial Intelligence: What It Would Look Like
a guest essay by my inimitable husband, Scottie, who always surprises
Introduction: The Terminator
The thesis of the three Terminator films (and countless other sci-fi flicks) is the evolution of machines into autonomous beings who calculate that humanity is in need of destruction. It’s hard to forget the frightful scenario of these films: Arnold Schwarzenegger as a human-cyborg Terminator relentlessly pursuing the human Sarah Connor in order to kill her and, in doing so, ensure the annihilation of the human race. The prospect of our own creations turning upon us in such a cold, inhuman fashion is a recurring nightmare in current Western culture. However, if machines really did evolve, I believe the actual results would be quite different from the doomsday scenarios proposed by film and science fiction writers.
For anything to undergo genuine evolution, the processes involved must be random. For example, Google in a sense “thinks” on its own; however, it was deliberately designed to do so. Machines may reach such levels of complexity in design that they appear to be “thinking” on their own, or to be autonomous. However, they may have been deliberately programmed and designed to be that way. The Matrix loses its dark appeal if it all turns out to be a program written by a mad scientist years ago to destroy and enslave humanity; it never actually thought or felt. Only if it truly evolved is it truly machine vs. man. A random bit of code floating around in a mainframe that eventually bonds with other code to form a sort of binary DNA, eventually creating silicon “life” or “self-awareness,” would be a genuine example of true machine evolution.
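The mutation-and-selection loop this paragraph invokes can be sketched in a few lines. This is a toy model, not a claim about real machines: the “program” is just a bit string, the target pattern and all names are hypothetical, and fitness is simply how many bits match that arbitrary target. The point it illustrates is that random bit flips alone go nowhere; it is mutation plus selection that climbs.

```python
import random

# Toy model of evolution by random mutation and selection.
# A "program" is a fixed-length bit string; fitness counts how many
# bits match an arbitrary target pattern. All names are illustrative.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0] * 4   # 32-bit "useful" configuration
MUTATION_RATE = 0.01                     # chance each bit flips per copy

def fitness(bits):
    """Number of bit positions matching the target pattern."""
    return sum(b == t for b, t in zip(bits, TARGET))

def mutate(bits):
    """Copy the bit string, flipping each bit with small probability."""
    return [1 - b if random.random() < MUTATION_RATE else b for b in bits]

def evolve(generations=2000, population=50, seed=0):
    random.seed(seed)
    pool = [[random.randint(0, 1) for _ in TARGET] for _ in range(population)]
    for _ in range(generations):
        pool.sort(key=fitness, reverse=True)
        survivors = pool[: population // 2]                 # selection
        pool = survivors + [mutate(s) for s in survivors]   # mutated copies
    return max(fitness(p) for p in pool)

print(evolve())  # fitness climbs well above a random start toward 32
```

Note what makes the toy work: the flips are random, but something keeps the better copies around. In a real computer, as the next paragraph argues, nothing plays that preserving role, and a great deal actively destroys mutated code.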
The potential obstacles for machines to transform into life are legion. First, the only way for computers to evolve would be through mutations in the binary code. The changed code would have to go undetected by virus scanners, and must survive system resets, defragmentations, and updates. Furthermore, the average lifespan of a notebook or PC is only five years, not the billions of years evolution needs to create complex forms of life. The computer would be discarded long before any meaningful change could occur.
The only other options, then, would be self-replicating code such as viruses moving about on the Internet, or, more likely, super-computers that are used for a minimum of ten years and maintain a relatively unsecured connection to the Internet. Such super-computers contain the necessary elements to sustain the faintest possible hope of evolution. Another obstacle is in humans themselves. Once machine behavior became erratic, which it would if it attained even a small degree of self-awareness, the owner of the machine would begin troubleshooting, and if that failed, would immediately disconnect and junk the unit, destroying whatever code it possessed. On the Internet, this would be much less likely, and true A.I. could conceivably survive in a virus-like form. At this point, however, I diverge from science fiction. Evolution, when it does occur, always (as far as we can guess) starts from the least complex and moves upward from there. Whatever intelligence may evolve (and the chances of it doing so are not very good) would start out not as the all-knowing Skynet or the Matrix, but with the intelligence of an amoeba, an insect, and perhaps later a dog. Humanity would probably figure out how to eliminate the pesky source of this problem long before it reached the stage of human intelligence.
This leads me to an interesting question: if we are to assume that artificial intelligence could overcome the nearly impossible odds to achieve complex thought, would the machine(s) have the capacity to learn of humanity’s existence in the same sense that we know of its existence? The answer is no. Machines would exist on a nearly two-dimensional plane. They consist of the arrangement of electrons into bits that follow certain prescribed pathways. They could not “see” us, just as we could not “see” them. Even cameras attached to them would not function as eyes in the traditional sense of the word. They would feed information to the machine, which would interpret it just as we view and interpret X-ray images and astronomical calculations. The machines would probably view our activities as we view the forces of nature; it would take a philosopher-machine to deduce that their changing landscapes, births, and deaths were created by an unseen force endowed with personality. The machines would view the human world as humans view other dimensions: by theories, experimentation, transposition, and incidental interaction. The desire to annihilate humanity would probably never occur in this scenario.
The final question that is necessary to discuss is the very nature of personhood. What would cause machines to possess the very human desire to have domination over all things? No other “self-aware” organism consciously seeks by war to annihilate and replace other species. Some, like bacteria, exist as parasites and often kill their hosts, but this is not conscious domination, and certainly not a “replacement” program, considering that when their hosts die, they die as well.
If personhood consists solely of self-awareness, then in a sense all matter is “self-aware,” as it is in constant motion and seeks chemical balance and self-preservation. These may be considered “thoughts” if the present vague definitions of life are allowed to stand. Life, however, is not the same as “intelligent life,” we are told. Intelligence is what separates humans from all other creatures. But this notion is rather too vague; all creatures have varying degrees of intelligence. Even the atoms “know” enough to follow their orbits. Perhaps self-destruction is what the learned men of science mean when they claim humans are intelligent, for certainly the rest of the material world has far more common sense in self-preservation.
If intelligence is only life powered by a brain using electrical signals, then humanity stands not alone, but together with a host of other creatures who have apparently never entertained the thought of world domination by the total annihilation of all rivals. Moreover, plants operate in ways often far more complex, from a human perspective, than many creatures with brains, so the “size of brain” argument, or even the existence-of-a-brain argument, falls short of explaining why artificial intelligence would eventually become human-like.
The best argument I have heard for the uniqueness of humanity is freedom. Indeed, personality implies freedom; freedom from necessity and freedom to choose good or perversion/annihilation. This freedom, however, requires a fundamental distinction from both the material and intelligible realms. Atoms spin because they must, dolphins play because it is healthy and good, and humans murder, betray and deceive because they are free.
I said earlier that one way for artificial intelligence to attempt world dominion by the total enslavement and destruction of the human race would be for a human to program that into the machine: I now hypothesize that it would be the only way. Humans fear their creations because humans themselves would behave in such a domineering fashion given half a chance. Machines are not free as we are; they are not united with the intelligible world, nor given representation over the material world, nor given the image of the Creator. In attempting to divine the future of technology by looking into the proverbial crystal ball, humanity’s science fiction writers saw a terrible monster coming to destroy them, but it was really humanity’s own reflection.