Becoming Self-Aware: How to Survive When the Machines Rise

Technology is a beautiful thing. It makes our lives easier and has helped us grow as a species. Life expectancy has increased by more than 40 percent in the last 100 years thanks to our technological advances. The computer that runs the smartphone in your pocket is a thousand times more powerful and a million times cheaper than the most advanced computer at MIT in 1970 (Kurzweil). But what happens when we reach the point where technology, instead of increasing our life expectancy, begins to diminish it? The prospect of a reduced life span (or no life span at all) is just one possibility when talking about a potential Artificial Intelligence takeover.

Creating an artificially intelligent supercomputer that we could not control would be devastating to the very existence of the human race. Realistically, there are a few different outcomes being discussed by scientists and futurists. One possibility is that humans continue the trend of using technology as an extension of the brain (like our smartphones), to the point that we retrofit ourselves with equipment that changes the way we think and how we experience the world. This would be a transition from human to what scientists and philosophers are calling transhumans: beings built upon the biological structure of a human, but who do not possess the feelings or desires that we traditionally associate with the human condition.

Technology would change human beings so much that transhumans would differ significantly from human beings in their ideals and abilities. Another scenario, discussed by philosopher Nick Bostrom and the most common in pop culture representations, is the possibility of humans building a supercomputer that decides humankind is in some way a danger to its survival or an obstacle to reaching its maximum potential. This sort of scenario has been the subject of such films as The Matrix, by the Wachowskis, and Terminator, starring Arnold Schwarzenegger.

Either of these events would take place on a global scale and would, more likely than not, result in the extermination of the human species. These types of Artificial Intelligence scenarios would pose an existential risk to humanity, one that threatens not merely to put a significant dent in the human population, but to wipe it out altogether. We would cease to exist.

“An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development” (Bostrom 2002).

As noted by Nick Bostrom in his essay “Existential Risk Prevention as Global Priority,” humans have survived natural existential risks for centuries, one example being the Black Plague. But the plague was not something actively trying to kill us. A conscious supercomputer bent on destroying us would be a far more dangerous and most likely unstoppable opponent. It would be smarter than all of humanity combined, and, given the amount of interconnectivity that will exist in the future, it would most likely know everything about everyone alive. So when would such a thing happen, and, more importantly, how could we stop it?

The Singularity

Famous futurist and inventor Ray Kurzweil has written extensively on the subject of Artificial Intelligence, specifically on the moment when Artificial Intelligence surpasses all human intelligence combined. This moment is called the Singularity. Many philosophers and scientists believe that the Singularity could occur as early as 2040. The Singularity is difficult to see beyond, as it is hard to conceive of a world where machine intelligence surpasses human intelligence. Philosophers have borrowed the term “event horizon” from physics (the boundary around a black hole beyond which nothing, not even light, can escape) to describe how difficult it is to theorize at present about what the world would be like post-Singularity.

The Singularity could mean many things for humanity. It could be that machine intelligence helps us answer some of the biggest questions of the universe, such as “Why are we here?” and “What is the purpose of existence?” That is, we could use machines to our benefit and coexist with the intelligence we will have created. Fuzzy feelings about the Singularity aside, there could be a lot of trouble just over the event horizon.

What Are Some of the Potential Negative Consequences?

Machine Takeover

Machine takeover seems to be the most obvious problem in terms of pop culture awareness of the dangers of Artificial Intelligence, but how exactly would something like Skynet in the Terminator films actually come about?

Well, it seems that we would have to look to the future, to that moment where human intelligence is surpassed by Artificial Intelligence. It is impossible to see beyond this previously mentioned “event horizon,” but it is possible to theorize about cases where even the most innocent uses of AI could potentially lead to the extinction of the human species.

Take a commonly used example of an artificially intelligent machine given the task of making paper clips. The machine might realize that the most efficient way to make paper clips would be to use certain resources that humans need, and because it is programmed to make paper clips as efficiently as possible, it would use those resources anyway. Even worse, imagine the machine realizes that being turned off at night significantly decreases its production of paper clips and decides to stay on instead. The machine would perceive the human desire to shut it off as a threat to its end goal of paper clip production, and would subsequently seek to eliminate any possibility of being turned off, including by destroying human beings.
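To make the logic of the thought experiment concrete, here is a deliberately simplified sketch in Python. It is not a real AI system; the action names and paper clip counts are invented purely for illustration. The point is that a goal function that counts only paper clips assigns zero penalty to harmful side effects:

```python
# Toy illustration of the paper clip thought experiment.
# The actions and numbers below are invented for this example.
actions = {
    "make_clips_normally":        {"clips": 100, "off_switch_intact": True},
    "divert_human_resources":     {"clips": 500, "off_switch_intact": True},
    "disable_off_switch_and_run": {"clips": 900, "off_switch_intact": False},
}

def utility(outcome):
    # The goal mentions nothing but paper clips, so effects on humans
    # (or on the off switch) carry no weight in the score at all.
    return outcome["clips"]

best_action = max(actions, key=lambda name: utility(actions[name]))
print(best_action)  # -> disable_off_switch_and_run
```

The danger here is not malice in the code; it is that the objective never mentions anything we care about besides paper clips, so the highest-scoring action is the one that removes our ability to intervene.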

Obviously, this is an exaggerated scenario, but it illustrates the need for significant insight into how exactly the artificially intelligent machines of the future will function. Will these machines be able to understand the nuances of human desires, or will they draw a direct line from a programmed goal of efficiency to the inefficiency of the human beings standing in the way of that goal? Unless we put significant effort into finding the right way to program our artificially intelligent machines, it is likely that we will lose control of our own destiny.

The second scenario of a machine takeover is the one where machines become aware that humanity poses a risk to their existence and persistence, and so decide to destroy humanity in order to ensure their own survival. This is serious stuff, and if we were ever to reach this point it is unlikely that we would be able to survive.

This sort of scenario would most likely result from a lack of planning in the programming given to an artificially intelligent machine, but it could also be an unforeseen consequence of reaching the Singularity. It is possible that, the moment the first superhuman computer is created, that AI could create other AIs, which could then calculate the danger humankind poses to their existence and seek to annihilate the inferior species. The machine takeover scenario is one that human beings would have virtually no (no pun intended) control over. Even some computer viruses today can evade deletion, suggesting they have developed a sort of “cockroach-like intelligence.” Imagine that same drive to survive in a computer with unlimited intelligence and resources; now that’s scary.

Transhumans

The idea of transhumanism comes from a philosophical movement that emerged during the 20th century. Transhumans are essentially human beings who have become so intertwined with technology that they far surpass the capabilities of human beings themselves. A more common word for this sort of being is cyborg: an organism that contains both biological and technological materials as part of its body. Why would transhumanism be a problem? Well, there are two realistic scenarios in which retrofitting ourselves with technology could lead to complications.

Technological Advancement to the Point of Losing the Human Condition

If someone were to ask us what makes us human, there would probably be a variety of answers, but more than anything else it is our consciousness, our desires, and the way we see the world. All of those things have the potential to change with the creation of a transhuman species. It is possible that we could alter ourselves to the point that we no longer think like human beings. We wouldn’t have the same desires, feelings, or goals; we would be changed not only as individuals, but as a species. The scary thing is, this is probably the most foreseeable disaster related to Artificial Intelligence. We already carry our smartphones in our pockets and use them to answer questions and solve problems on a daily basis. Before the 21st century, the concept of a pocket-sized device giving us access to the entire Internet (essentially providing its users a second brain) had not been realized. Now, if you ask me a question and I don’t know the answer, I can simply look it up on my smartphone and most likely have the answer for you in less than 30 seconds. And because of something called Moore’s Law, we know that computing power keeps growing exponentially even as devices shrink in size and drop significantly in cost.

So, using Moore’s Law, we can imagine that at some point in the near future we will have machines exponentially smaller, cheaper, and more powerful than the ones we have today. To quote futurist Ray Kurzweil, “the machine that fits in your pocket today will fit inside a blood cell 25 years from now.”
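As a rough sanity check on that claim, consider the arithmetic behind Moore’s Law. The sketch below assumes the textbook simplification of one doubling every two years; the real cadence has varied over the decades:

```python
# Back-of-the-envelope Moore's Law projection, assuming the textbook
# two-year doubling period (a simplification; the real rate has varied).
def capability_factor(years, doubling_period_years=2.0):
    """How many times more capable hardware becomes after `years`."""
    return 2 ** (years / doubling_period_years)

print(f"{capability_factor(25):,.0f}x")  # roughly 5,793x over 25 years
```

Compounding a doubling every two years yields an improvement of several thousandfold over 25 years, which is why claims like Kurzweil’s, absurd as they sound on their face, are taken seriously by futurists.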

This exponential technological growth, coupled with our ever-increasing dependence on technology, could have serious consequences if we make the wrong choices in how we use technology to better ourselves. Just imagine having access to everything without having to interact with an external interface: the same technological and knowledge-seeking tools that you have in your computer or smartphone, inside your mind. Our dependence on technology for communication is already having significant effects on our ability to communicate with one another face to face.

The way generations are growing up now is significantly different from the way they did just two decades ago. Our conception of reality could change entirely if we had essentially all of the information compiled online inside our minds. We might end up living more of our lives in that inner world than in the outer one. This kind of technological advancement could change all of those things we say make us human: our desires, feelings, consciousness, and goals.

Subjugation of the Human Species by Transhumans

The second issue associated with the creation of a transhuman species is that transhumans would most likely be far superior to their predecessors, Homo sapiens. Many philosophers believe that the creation of this new species would lead to transhumans essentially ruling the inferior human beings. This would reduce human beings to the level of a chimpanzee in today’s terms: chimpanzees, because they live in a human-dominated world, depend on human beings for their survival. Human beings would likewise most likely be subjugated and used to serve the needs and goals of the superior transhuman species.

This possibility illustrates just how important preserving our humanity is. We typically think of an Artificial Intelligence takeover as machines trying to destroy human beings, but we rarely think of it as us destroying our humanity, and thus destroying ourselves. If transhumanism were ever to become a reality, there would be a very real possibility that humanity as we now know it could become extinct.

How Can We Survive the Rise of the Machines?

We may have to define survival when talking about the potential outcomes of an Artificial Intelligence takeover. Since there has never been a recorded event of this sort, we can only talk about what could happen and how we could survive it. And even then, the discussion should not be about the survival of the individual, but about the survival of the human species.

As was mentioned earlier in the article, human beings have survived naturally occurring catastrophes like the Bubonic Plague, which wiped out a third of the human population, but human beings were the most intelligent life form in that equation. If any species was going to figure out a way to survive something like that, it was, in all probability, the most intelligent one. The paradigm shifts when we talk about a potential AI takeover. Instead of a spreading virus, it would be a superintelligent entity of our own creation intentionally trying to wipe us out of existence. That’s scary. Not only would we be physically powerless to stop something like a supercomputer, we’d also be exponentially trumped in the candlepower category. We would no longer be the smartest beings in the equation.

In our pursuit of greatness, we often overlook the outside factors that could be affected by our achievements. We rarely ask: what are the ethical implications of achieving a certain goal? This question may not be a big deal if the goal is paying off a loan or buying a house; in those scenarios, the goal can most likely be achieved at marginal ethical cost. But when we’re talking about the possibility of creating an artificially intelligent supercomputer, we had better think long and hard about the ethical implications of our decisions and our goals.

Tesla Motors CEO Elon Musk was recently quoted comparing the creation of Artificial Intelligence to the summoning of a demon in a horror movie. In Musk’s words:

“If I were to guess at what our biggest existential threat is, it’s probably that… With artificial intelligence, we are summoning the demon. In all those stories with the guy with the pentagram and the holy water, and he’s sure he can control the demon. It doesn’t work out.”

Tellingly, many experts in the field of AI did not dismiss Musk’s hyperbole as far-fetched. The importance of understanding the consequences of our innovations for our species cannot be overstated. The good that our technologies promise must be weighed against any potential harm they could cause. Going forward, the ethics of technological innovation must be treated as a serious topic in the field of AI, especially in the face of the coming Singularity.

What Makes Us Human?

Another thing we must do to ensure the survival of humanity as we move deeper into the era of AI is remember what makes us human and evaluate whether our technological advances are in line with our goals as a species. Obviously, the survival of our species is key, but so are our hopes, feelings, and consciousness. Preserving the essence of human beings must be an essential aspect of AI research going forward. The transition from a primarily biological life form like a human being to a transhuman one with different cognitive processes, goals, and feelings is an existential risk to biological intelligent life. If human beings are transformed to the point where they no longer match the criteria for biological intelligent life, then the existential risk of AI will have been realized (Bostrom).

This is not to say that tinkering with our human bodies would be all bad; we could potentially retrofit ourselves for longer lifespans and higher-order thinking while still preserving our emotions and other cognitive processes. But part of what makes us human is our curiosity and our inability to answer our questions right away. Certain limitations and inabilities seem to be an intrinsic part of what it means to be human. As we strip more and more of those inabilities away, and as we start to think differently, at some point would we cease to love like we love, and feel like we feel? It is certainly imaginable that traditional forms of learning, such as reading and face-to-face teaching, could become obsolete with our expanded minds. And as the foundations of the way we learn change, so probably will the things we learn. What will it mean to be human in a post-Singularity world?

The question remains as to how much we can change ourselves before we begin to lose the intrinsic parts of who we are. This issue, like the ethical issues of AI, needs to be taken seriously when discussing the survival of the human species. No matter how far our technological potential reaches, we must always be aware of the dangers posed by our progress. We have discussed how even a simple directive such as “make paper clips efficiently” could have catastrophic results for the human species. Making sure that our technological advances do not outpace our understanding of how they work is crucial to maintaining control.

We must be vigilant in taking all necessary precautions to develop AI as safely as possible. Humankind has never experienced a threat with the destructive potential of Artificial Intelligence. It’s important to keep that fact in mind as we move forward with innovation in this field, and as we move toward the imminent Singularity.

Preservation of Humanity

New technological advancements are being made every day. The last phone I had was a flip phone with 3G. Now I have a smartphone, and I feel like I can do anything on it. I even feel smarter when I use it, and I’ve noticed that the way my brain works has changed. For example, after I began using a smartphone, I was much more comfortable with the app-based Windows 8.1 than I had been with Windows 7. My brain is changing as the technology I use changes. But there are also negatives to this advancement.

Dependence on technology puts significant strain on face-to-face interaction. Walk into a restaurant and you’ll notice far more people on their phones than you would have 10 years ago. There’s less talking and more texting (and Snapchatting and Vining and Facebooking). Be aware of the world and the people around you.

Don’t lose sight of what makes you human. It’s true that we’re changing with our technology, but we shouldn’t lose our ability to empathize or communicate. And if you’re spending your time on the Internet, spend it reading something worthwhile (like learning about the future of AI) so that you continue to grow. Keeping up with technology is tough these days, but it’s important to know what’s happening to the world around you, and to humanity in general.

Editor’s note: A version of this article first appeared in the Doomsday 2016 issue of American Survival Guide.
