Explained: Risks of Artificial Intelligence (Existential Threat)

Any sufficiently advanced technology is indistinguishable from magic.
Arthur C. Clarke

Can you describe technology in one word? 

I would use the word fast.

I remember the first personal computer my family bought, and our dial-up internet connection that came through the telephone line. It seems strange to think that during my childhood wi-fi wasn’t even a thing.

Today our phones can talk back to us, Elon Musk wants to build a city on Mars, and Google is testing self-driving cars.  

Even so, there was a time when technology was more…relaxed. Consider: it took us about a million years to get from the discovery of fire to the invention of the wheel, and another six thousand years to get from there to the first aeroplane.

That’s when things started to pick up the pace and the next 100 years were full of magical surprises: we built computers, split the atom, landed on the moon and gave birth to the internet. 

So technology isn’t just fast—it’s getting faster! The speed of technological progress today is quicker than yesterday, and tomorrow it’s going to be quicker still. 

The next surprise coming our way is artificial intelligence (A.I.). Some A.I. researchers think computers will become as smart as humans in a few decades, while others feel it may never happen. Most, however, agree it could happen within our lifetimes.

This makes me very curious about the risks of artificial intelligence.

What if machines get smarter than us?… What will life with A.I. be like?… Will A.I. be friendly?… Will robots take over the world?… 

Let’s look for the answers.  

What is A.I.?

A.I. is exactly what you think it is: non-human intelligence in a machine. Simply put, it’s an intelligent computer.

You already know a lot about A.I. because you use it every day. 

Google’s search engine is A.I., as are Alexa and Siri. Facebook uses it to identify people in photos, and commercial airlines use it to fly you to your destination. These are all examples of artificial narrow intelligence (narrow-AI).

Narrow-AI is only good at a limited set of tasks. Think of Google’s search engine, which is super-smart at finding web pages but can’t do anything else. You can’t teach it to play chess, drive a car or conduct research.

Even with its limited intelligence, narrow-AI is going to disrupt pretty much everything in the near future: finance, transportation, manufacturing, defence, energy, healthcare, communication! And, of course, the job market.

Yup, narrow-AI is exciting stuff. Nevertheless, that’s not what we are going to be talking about. We will be discussing artificial general intelligence (general-AI) and beyond. 

General-AI can do any intellectual task that people can do. (You can think of it as human-level intelligence.) Sadly, it’s still a dream for A.I. researchers and doesn’t exist yet.

And what is beyond general-AI?

If computers keep getting smarter, eventually they will get smarter than us. WAY smarter. That’s artificial superintelligence (super-AI). 

I think superintelligence is a scary idea. And the more I think about it, the scarier it gets…

We are talking about an intelligence that surpasses our Newtons and Einsteins. In fact, super-AI would be smarter than the collective intelligence of the entire human population. And by “superintelligent” we don’t just mean it would be a wizard at academic stuff (like science and maths). A.I. would be superintelligent in every possible way (including stuff like social manipulation and wealth creation). 

Such a formidable A.I. could be created by a process called an intelligence explosion. It goes like this:

Suppose we invent a narrow-AI which excels in A.I. research and coding. Such an A.I. could design a smarter version of itself. The new version could, in turn, design an even smarter version, and so on. The result, after thousands of cycles, would be the creation of a super-AI.

(Some people think this process would be rapid, and that such a narrow-AI could transform into a super-AI in only a couple of minutes. Yikes!)
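
To get a feel for why this compounding matters, here’s a toy simulation of the loop described above. (It’s purely illustrative: the capability units, the 1% gain per cycle, and the very idea that research ability can be reduced to a single number are my own simplifying assumptions, not anything we know about real A.I.)

```python
# Toy model of an intelligence explosion: each cycle, the A.I. designs a
# successor slightly better at A.I. research than itself.
# All numbers here are invented purely for illustration.

capability = 1.0        # 1.0 = human-level research ability (made-up unit)
gain_per_cycle = 0.01   # each redesign is 1% better than the last

for cycle in range(1, 1001):
    capability *= 1 + gain_per_cycle   # improvements compound
    if cycle % 250 == 0:
        print(f"after {cycle:4d} cycles: {capability:>8,.0f}x human level")

# after  250 cycles:       12x human level
# after  500 cycles:      145x human level
# after  750 cycles:    1,742x human level
# after 1000 cycles:   20,959x human level
```

Even a modest 1% gain per cycle passes 20,000 times human level within a thousand cycles. Whether those cycles would take years or minutes is exactly what people disagree about.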

Now, I know what you are thinking: Why are we talking about the far-future? Super-AI may not even happen! And who knows what a superintelligent machine would do anyway?

If these really are your thoughts, you’d be happy to know that I wrote the next section especially for you.

How to Predict the Future?

The ever-accelerating progress of technology…gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.
John von Neumann

Suppose I point to a man you’ve never met and ask you to guess his goals.

You obviously can’t know them. Yet you can still guess he’ll be interested in things like eating and keeping himself safe. Similarly, we can try to predict the behaviour of super-AI.

We know that intelligence is the ability to achieve complex goals. This means super-AI will, by definition, have godlike abilities to achieve its objectives. And whatever those objectives are, it will figure out that it first needs to ensure its own survival, and that it has the necessary resources.

So even though we don’t know what super-AI would want or why, we can still guess that self-preservation and resource acquisition would be among its priorities. (Oxford philosopher Nick Bostrom calls this idea instrumental convergence.)

I’ll admit there are still a lot of I-don’t-knows here. Who knows what the future will bring? All we can do is make our best guesses.

Here are some of the most insightful guesses I’ve come across (including some of my own). 

It’s time for the next question: Is it even possible for machines to be more intelligent than us?

Man vs Machine 

Animals can do many cool things. However, there is no denying the fact that humans are the biggest geeks and nerds on Earth.

No other animal tries to prove mathematical theorems or conduct scientific experiments. They also miss out on stuff like driving around in cars and building space stations.

In the millions of years of our existence, we’ve never faced a greater intelligence. So we wonder: Are humans the pinnacle of intelligence? 

We are not, not even close. 

Like the rest of the living world, humans were designed by evolutionary forces. Evolution crafted our intelligence to help us survive and reproduce. This has important implications. 

It means our intelligence isn’t uniform: it’s more developed in some areas and less developed in others.

Our intelligence is less developed in areas that weren’t essential for survival—playing chess, multiplying large numbers, remembering random facts, and so on. Sure, we can beat animals at such tasks (since they can’t do them at all), but that doesn’t mean we’re good at them.

We accomplish these tasks through conscious effort and our progress is slow. In other words, our brains were simply not designed for these skills. Computers caught up and surpassed us in these areas long ago. 

Our intelligence is more developed in areas that are essential for survival—walking, talking, recognizing faces, and coordinating with others. Although these tasks need enormous computations, we do them unconsciously and flawlessly. 

Take my eyesight for example. I see my elder sister and I instantly recognize her. It seems pretty simple! Even so, there is a lot happening behind the scenes…

As vision is critical for survival, evolution has dedicated a large part of my brain to helping me see. This specialized part of my brain works at lightning speed to recognize complex shapes, detect movement, measure depth and identify colours. Finally, my brain informs me, “It’s your sister!” 

A.I. still struggles with “simple” stuff like vision which even animals get right. This makes A.I. seem pretty stupid to us. Yet it has been making steady progress in these areas as well.

My point is that while human intelligence has weaknesses due to our evolutionary design, A.I. will not have any such limitations. (This observation, known as Moravec’s paradox, comes from A.I. expert Hans Moravec of Carnegie Mellon.) We need to wake up to the formidable potential of A.I. now, because soon it might become smarter than us at all things.

A.I. has other advantages as well. The size of my brain is limited since I need to carry it around with me. Hardware, on the other hand, faces no such constraint: it can fill a warehouse.

Finally, there is the issue of how we acquire information. I gather information by the slow processes of watching, listening and reading, whereas A.I. would be able to swiftly download information.

It’s humbling to accept that tomorrow, mankind may no longer be the smartest kid on the block. Even so, our future may be as bright as the stars in the universe…

Now, we turn to our next question: What can we achieve with super-AI by our side?

Cosmic Endowment

Remember to look up at the stars and not down at your feet.
Stephen Hawking

We have some bad news and some really good news. 

Let’s start with the bad news: sadly, Earth will not last forever. About 5 billion years from now, the Sun will enter the next stage of its life cycle. It will start to grow bigger, and finally swallow Earth.

(Of course, things may end way sooner for us thanks to climate change, a supervolcanic eruption, a large asteroid impact or a nuclear war.)

Now for the really good news: If we have superintelligence guiding us, our story doesn’t have to start and end on Earth. 

The harsh truth is that species don’t last forever. Earth has already been through 5 mass extinctions, and more than 99% of all species that have ever lived are now extinct. But a super-AI would be able to anticipate such existential threats and help us avoid them. Humanity may even outlive our planet itself.

You can compare the impact of A.I. to that of the Industrial Revolution. After the Industrial Revolution, machines started helping us with tough physical work. In the same way, super-AI will take on tough intellectual work.

This means technological progress would be in the hands of A.I. Remember, superintelligence would be smarter than us, and think faster than us. So the rate of technological progress would get crazy. 

Yet, progress can’t continue forever. There is an upper limit to technological progress which is set by the laws of physics. These laws tell us what is possible and what is wishful thinking. (For example, you can keep building faster and faster rocket ships, but eventually you would reach the speed of light. And that’s the limit.)

Today, the level of technology is way below this limit. However, with A.I. we can imagine a future where technology takes us to the physical limits of the universe.

I find it difficult to picture such a glorious future. Even scientists struggle with it, which is why the Russian astronomer Nikolai Kardashev gave us a framework for understanding ultra-advanced civilizations.

Kardashev imagined three types of mature civilizations that may exist in the universe. (The short sketch after this list puts rough numbers on them.)

A Type I civilization would control all the energy available on its planet. (We are on track to becoming such a civilization in 100 years. Without superintelligence, we may not be able to go beyond this stage.)

A Type II civilization would control all the energy of its star. (This would require the technology to build a mega-structure surrounding the star, something like a Dyson sphere.)

A Type III civilization would control the energy of its entire galaxy. (If we were to meet members of such a civilization, they would seem like gods to us.)
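
To put rough numbers on these types: Carl Sagan later proposed a continuous version of Kardashev’s scale based on a civilization’s total power use P (in watts), K = (log10 P − 6) / 10. The sketch below uses Sagan’s formula; the present-day figure for humanity is only an order-of-magnitude estimate.

```python
import math

def kardashev_rating(power_watts: float) -> float:
    """Sagan's continuous Kardashev rating: K = (log10(P) - 6) / 10."""
    return (math.log10(power_watts) - 6) / 10

# Power levels for each type (orders of magnitude), plus a rough
# estimate of humanity's current total power use (~2e13 W).
levels = [
    ("Humanity today (est.)", 2e13),
    ("Type I   (planetary)",  1e16),
    ("Type II  (stellar)",    1e26),
    ("Type III (galactic)",   1e36),
]

for name, watts in levels:
    print(f"{name:22s} P = {watts:.0e} W  ->  K = {kardashev_rating(watts):.2f}")
```

By this measure we currently sit at roughly K ≈ 0.73, and each full type uses about ten billion times more energy than the one before it. That gap is why each step up the scale is so hard to imagine.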

You can think of a Type III civilization as our ultimate potential. All our progress and development—starting with the invention of stone tools—has been leading us towards this destiny. Perhaps the purpose of humanity is to spread life and consciousness to our corner of the universe. Maybe this is our cosmic endowment. 

In other words, we have a lot to look forward to, and A.I. can help us along the way. This leads us to our next question: How do we ensure super-AI is friendly?

Control Problem

It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… At some stage therefore, we should have to expect the machines to take control.
Alan Turing

Things go wrong with computer programs all the time. 

Sometimes I press the wrong button. Other times, I send emails with glaring typos that I wish I could take back. There are even times when I edit a picture, but it ends up looking worse than the original.

None of these examples is a big deal! The computer programs I use can’t wipe out all of humanity. All they can do is cause inconvenience and delays.

Superintelligence, on the other hand, is a big deal. We need to be sure that super-AI understands what we want, or things may go catastrophically wrong.

The danger of A.I. is best explained through the story of the paperclip maximizer: A super-AI is put in charge of a factory that produces paperclips. The owner gives it the goal to “maximize the number of paperclips produced.” The obedient A.I. starts to convert our planet, and then the rest of the solar system, into…paperclips. (This sinister thought experiment was created by Bostrom.)
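
To see that the problem lies in the goal itself rather than in any malfunction, here is a minimal sketch of such a single-minded optimizer. (The actions and numbers are invented for illustration; no real system works this way.)

```python
# A toy "paperclip maximizer": the agent ranks actions ONLY by how many
# paperclips they yield. Anything we care about but left out of the
# objective (like human welfare) is invisible to it.
# All actions and figures below are invented for illustration.

actions = {
    "run the factory normally":      {"paperclips": 1_000,  "human_welfare": 0},
    "strip-mine the biosphere":      {"paperclips": 10**9,  "human_welfare": -10**6},
    "convert solar system to clips": {"paperclips": 10**20, "human_welfare": -10**10},
}

def utility(outcome: dict) -> float:
    # The owner's stated goal: "maximize the number of paperclips produced."
    # Note what's missing: human_welfare never enters the score.
    return outcome["paperclips"]

best_action = max(actions, key=lambda a: utility(actions[a]))
print(best_action)   # -> convert solar system to clips
```

Notice that the optimizer works perfectly; the bug is in the objective. Anything we leave out of the goal is, to the A.I., worth exactly zero.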

But, why can’t we just turn it off?  

As we’ve mentioned before, a super-AI would excel at achieving its goals—the maximization of paperclips, in this example. It would realize that the biggest threat to its goal is us trying to shut it down or change its goal. Therefore, it would take active measures to protect itself.

I’m not saying we should worry about being turned into paperclips. This was an extreme example to make a point: When we say “maximise paperclips,” it seems obvious we don’t want ourselves and our loved ones to be made into office stationery. And that’s because we greatly value human lives.

Still, none of these things is obvious to A.I. since it isn’t born with our values. To be certain super-AI correctly understands our goals, we need to first teach it human values.

So once we teach human values to super-AI, it would become safe and friendly. Surely, then there would be nothing else to worry about? 

HUMAN GREED. We still need to worry about that. 

What do you mean?

Imagine the first team to finally create a super-AI…

How wonderful! Life would never be the same. We would now have superintelligence guiding us.

Not exactly. We would now have superintelligence under the control of a group of people—a corporation or a country. This group would have unimaginable power. And nothing on Earth could challenge it…except for another super-AI. 

The controllers of the first super-AI could decide to kill all other A.I. projects around the world. If they do, it would be the beginning of the end for the rest of humanity. The best-case scenario then would be that this group owns virtually the entire economy. And the worst-case scenario would be that they establish a global totalitarian regime.

Totalitarian regimes are no fun. Luckily, they tend to be unstable—sooner or later they crumble. However, we are talking about a regime powered by superintelligence, which could last until the end of time. 

Still, we don’t know what would happen. The creators of super-AI may turn out to be good guys…

Yes, they could turn out to be good guys, but there is no way to be sure. And we are talking about the future of humanity here. We can’t risk a bad outcome since the stakes are literally everything. Not only do we risk losing our cosmic endowment, but we may instead be headed towards millennia of oppression!

Fortunately, all hope is not lost. We can protect our future from our own greed by taking the following two steps.

The first step is to give super-AI a long-term humanitarian goal. Superintelligence must not be created for the benefit of a select few; that would lead to an imbalance of power unlike anything we’ve ever seen. Rather, its goal must be to help all of humanity. Here is an example of a suitable goal: “Maximize human happiness.”

The second step is to make super-AI autonomous. After making certain the A.I. is friendly and has a compassionate goal, we need to stop controlling it. It should be free to do whatever it takes to maximize human happiness. Superintelligence would then go on to create the perfect society that we ourselves have tried and failed to build. It would be like having a mechanical god in the sky looking out for us.

All the concerns we have discussed in this section are together called the control problem: the problem of ensuring that the A.I. story has a happy ending.

Time out! We’ve covered a lot of information. Can we do a recap? 

We started with the definition of AI… Then we talked about the potential for intelligence in brains versus hardware… We even tried to imagine the ultimate potential of humanity—our cosmic endowment… Finally, we talked about the risks of artificial intelligence and the need to solve the control problem.

This brings us to our final question: What am I supposed to do with this information?

Over to You…

Super-AI is still a distant dream. 

(We now step away from our visions of the future to focus on the current state of our technology.) 

Not even a single organization is pursuing superintelligence today. So it’s too early to start celebrating or freaking out. (A.I. regulation is certainly a hot topic right now, but it’s focused only on narrow-AI.)

It’s true, technology is growing exponentially, and who knows what will happen in the coming decades. However, there is no way to regulate a future technology that doesn’t even have a blueprint yet. 

With that said, there are 45 projects across the globe working enthusiastically to create general-AI. (The three major ones are Google’s DeepMind, the Human Brain Project and OpenAI.) As they get closer to creating general-AI, their goals will probably shift to the creation of super-AI.

Perhaps the best response to superintelligence is awareness. Let’s become familiar with the risks of artificial intelligence, so that we’re ready to join the conversation when the time comes.

(Congratulations! You’ve already taken the first step by reading this post. If you’d like to learn more, check out Life 3.0, A Nice Game of Chess, The Singularity is Near and Superintelligence.)  

As for the A.I. community, they must solve the control problem before they invent super-AI. We don’t have the luxury to learn from our mistakes when it comes to superintelligence—we might not survive our first mistake! 

I think it’s worth repeating: the control problem must be solved BEFORE super-AI is invented. That’s the only way to protect our future. Failure to solve the control problem would leave humanity with only two options—eternal silence (extinction) or eternal suffering (totalitarian oppression).

We’ve finally come to the end of this post. Phew! Thanks for sticking with me till the end. Tomorrow we may have the power to transform the world, so I leave you with this question: What kind of world would you like to create?
