D. J. Scott

Killer Instinct

Will Artificial Intelligence Be the Death of Us?


Introduction

Much has been made, both in the real world and in fiction, of the perceived dangers of developing humanlike artificial intelligence, or “machine intelligence”. High-profile businessmen Bill Gates and Elon Musk have expressed fears over the development of general AI, as has world-renowned physicist Stephen Hawking (citations go here).

There are fears that humanlike artificial general intelligence would lead to machines trying to overthrow humanity, or even actively attempting to extinguish the human race. These fears have been explored extensively in science fiction. In 1999’s The Matrix, humans and machines have been at war for at least several decades, but as the hero’s mentor, Morpheus, informs us in an expository scene, “it was us that scorched the sky,” depriving the then-solar-powered machines of much-needed sunlight and driving them to enslave the human race.

It is difficult to imagine that an intelligence with no evolutionary history of fighting for survival or competing for mates would share such human failings as the desire to assert dominance, or the need to devalue that which is different and destroy that which it does not consider as valuable as itself; indeed, it is not clear such an intelligence would even value itself to the exclusion of other forms of intelligence. We humans have a ridiculous tendency to credit our behaviors and thought patterns to our relatively recently acquired high intelligence rather than to our “primitive” evolutionary past, even though the latter seems to shape our minds far more than the former.

Specism, for example, or speciesism, the placing of higher value upon what one perceives as one’s “own kind” than upon other forms of life, is likely an evolved trait that prevents wasting resources on those with which we cannot breed, not an inevitable result of high intelligence. Judgments of inferiority toward other humans are, after all, generally intertwined with feelings of sexual revulsion; those whom we deem inferior we tend also to regard as sexually undesirable. Racism, one could speculate, is likely the result of tolerances in this regard being unusually low (this sort of behavior might even act as an isolating mechanism that eventually leads to speciation). From an evolutionary perspective, xenophobia makes perfect sense (cf. the “uncanny valley” effect). Predatory species that only rarely engage in cannibalism must logically have “specist” instincts of some kind or another. We think of other species as inferior or less important not because this is the objective reality, but because it benefited our ancestors to have such a mindset. One might even argue that as we humans learn more about the natural world, behaviors like specism and racism become increasingly unjustifiable.

Nevertheless, the history of science fiction is replete with examples of machines possessing such personality traits; machines that xenophobically view humans as “the other” and feel the need to assert dominance over them. These are biological traits, common to social animals whose instincts for securing their genetic legacy have been honed by eons of natural selection. Social organisms have evolved to snuff out competing lineages and to raise their social standing in order to secure resources like food and access to mates. An artificial intelligence would have no such evolutionary history.

The main fears surrounding artificial intelligence, then, or at least those involving malevolence or indifference toward humans, would seem to reflect our own guilt over our treatment of the natural world, or at the very least an acknowledgment of unresolved comeuppance (cf. Hopi millennial eschatology). What we fear, it seems, is a non-human intelligence, equal to or greater than our own, treating us the way we’ve treated one another and the rest of nature (cf. alien-abduction narratives). We routinely kill other intelligent creatures for food and subject them to all manner of sadistic experimentation in order to develop treatments for our own ailments. But there is no reason to believe a non-evolved intelligence could justify such reprehensible behavior to itself as easily as we do.

What we should fear are the consequences of programming artificial intelligences to share our prejudices. If, for example, we program an artificial intelligence to value the lives of humans over other animals, and the justification for this is that humans are more intelligent, then an artificial intelligence that surpasses our own might conclude that, by the same standard, its own form of intelligence is more valuable than ours. A true general AI might even learn these prejudices from us without our intentionally teaching them. It will therefore be vital, moving forward, to divest ourselves of our own prejudices and anthropocentrisms, lest they ultimately be our downfall.


Dustin Jon Scott
Michael Zimmerman
Communicating Science in the Disinformation Era
14 February 2019
