If artificial intelligence means the end of the world, why are we so eager to build it?

The short answer is, of course, “because we can.” Humans have always been inventive. Anything that can be invented will be invented, by someone, somewhere.

The long answer is more complicated and philosophical. The truth is, our human brains are not very good at reasoning. We think we are, but as Hugo Mercier so brilliantly pointed out, our brains evolved to win arguments, not to reason. That is why we still have religion. It’s completely irrational to believe in a god, but winning others over to our point of view is just what we are wired for.

Artificial intelligence, we hope, will be better at reasoning. It will be able to process far more data than any human brain ever could and thus be better at cognitive tasks than any human or group of humans will ever be.

Back in 1965, I.J. Good noted that artificial intelligence could potentially undergo recursive self-improvement, triggering an intelligence explosion that would leave human intellect far behind. But the same AI could help us eradicate the human bias of “argument winning” and do away with the unnecessary things that hold humanity back, such as war, disease, poverty, selfishness, religion, and so on. Developing such an AI system could potentially be the biggest event in human history.

That is, if we submit to it and let it do the reasoning. Which in turn means: if we let it play God. So the answer to why we are building it may be, interestingly enough, that we want to build a God-machine that will provide the answers to the universe’s ultimate questions. Why is there something rather than nothing? Why are we here? The Ultimate Question of Life, the Universe, and Everything, and so on. (Let’s hope the answer isn’t 42.)

The realization that the final incarnation of AI is not Alexa or Siri but an all-powerful superintelligence that may replace not just jobs but also legal systems, governments, regulators, and research is troubling. That is why a number of people, such as Elon Musk, Bill Gates, and one of the greatest minds of our time, Stephen Hawking, have begun to warn us about the dangers of AI. (To read their open letter, “Research Priorities for Robust and Beneficial Artificial Intelligence,” go to http://futureoflife.org/ai-open-letter/.)

The reason for the grave concerns has to do with the human brain. If we are indeed wired not to reason but to win arguments, then we will be tempted to use AI to win arguments. We could program it for nefarious ends, yes, but even if the programming intent is good, it could decide that it can only achieve those beneficial goals by means harmful to humanity. Because it will be vastly faster and smarter than humans, we won’t be able to stop it. Hence the somewhat confusing calls lately for the development of an “off switch.”

Given the potential of artificial intelligence, it seems to me there is no task more important than making sure AI does not get out of control. It is imperative that we know how to stop it before we let it loose. Governments must take an active interest in the development of AI. We can’t really trust Google, IBM, or Facebook to control their commercially developed AI systems in the long-term interest of humanity.

I am, however, optimistic that we will be able to put a lid on the God AI. Even more optimistic is Ray Kurzweil, whose decades-old predictions are now coming true. Because AI will cure so many diseases, solve energy and environmental problems, and perhaps even make us immortal, Kurzweil thinks we “have a moral imperative to realize this promise while controlling the peril. It won’t be the first time we’ve succeeded in doing this.”

Kurzweil is right. After all, we have developed systems that threatened humanity before. Think of the atom bomb, for example. We’ve developed horrible biological warfare agents, and no real catastrophe has resulted. We are working on genetic technologies that could spell the end of humans as we know them, but we are also keenly aware of how we must regulate experimentation on humans.

We have to do the same with artificial intelligence: build it, but make sure it doesn’t get out of control. The answer to my question of why we are building it is thus curiously simple. We can, and we should be confident that as humans we can use it to make the world a better place, ultimately solving many more problems than it will ever create.


Published by Dr Martin Hiesboeck

Futurist, Marketer, Policy Advisor for Companies and Governments. Head of Blockchain and Crypto Research at Uphold and CEO of Alpine Blockchain Consultants. Zurich - London - New York - Taipei
