
Nick Bostrom on superintelligence and the future of AI

Superintelligent AI has the potential to pose an existential risk to humanity. Are we ready?

Attempting to hypothesize about the future of AI forces us to examine our own intellect. When we think about different levels of intelligence, we typically picture an insect at the low end of the spectrum, then work our way up through mammals like mice and monkeys to the average Joe, and finally to individuals we label "geniuses," such as Albert Einstein and Marie Curie.

But beyond the brains of even the most intelligent human beings lies artificial superintelligence, which could keep growing more intelligent at a staggering rate, with no obvious upper limit.

Humans are weaker than bears and chimpanzees, but we’re smarter, so they live in our zoos. What will happen when an AI possesses an unlimited potential for intelligence? Will we live in its zoo?

The possibilities in the future of AI aren’t just stranger than you might imagine — they’re stranger than you can imagine. But Nick Bostrom, a philosopher and expert on artificial intelligence ethics, is attempting to fathom the unfathomable so the human race can be ready.

The Limitless Future of AI

Today, we have robots that are capable of navigating our homes and cleaning our carpets, similar to a mouse learning to wind its way through a maze. We have assistants like Alexa and Siri that can answer simple questions by drawing on fixed databases of knowledge.

Although they can remember our common driving routes and the items we frequently order online, they can’t really learn the same way humans learn. Our intelligence is still greater than theirs.

But not for long. In fact, AIs are already in development that can recognize faces, carry on conversations, discern human emotions, and even generate some of their own thoughts. One of the best known is Hanson Robotics' humanoid robot, Sophia.

A team of experts from several fields is responsible for Sophia's advanced capabilities. She's the first robot to be granted citizenship, has been interviewed on national television, and is even a public speaker.

Soon, machine learning may allow AIs like Sophia to progress far enough to be considered "artificial general intelligence": an AI that matches human intelligence and can perform any intellectual task a human can.

And the future of AI doesn't stop there. Once AI can learn and solve problems the way humans do, it will quickly surpass our level of intelligence. AI faces no biological limit on the size of its "brain," and its signals travel at nearly the speed of light, while signals in our neurons crawl along at a tiny fraction of that.

It can hold and weigh vast amounts of information at once, multitasking like nothing we've seen before. With the right hardware, AI could achieve superintelligence and far exceed our mental capabilities, problem-solving skills, and capacity for understanding.

In fact, it’s likely that superintelligent AI will be the last thing humans ever need to invent because machines will always have the best ideas. 

Superintelligence and AI Singularity

In physics, the term "singularity" refers to the point at the center of a black hole where density and gravity become infinite. In or near a black hole's singularity, the laws of physics as we understand and experience them cease to apply. It's an environment humans can only hope to comprehend.

A singularity in artificial intelligence carries a similar sense. The term was coined by author and computer science professor Vernor Vinge, who predicted that the technological singularity will occur when AI achieves superhuman intelligence.

Vinge argues that superintelligence will so vastly change the nature of thinking, inventing, and ideas that it will trigger the rapid, runaway development of new technologies. The singularity, in his view, will usher in a post-human world that humans can't even begin to fathom.

Bostrom says we'll eventually reach a point when "the brains doing the AI research will become AIs themselves." The question is whether we'll still be living in a human's world when AI takes over.

“All these things, whether it’s jet planes or art or political systems, have come into the world through the birth canal of the human brain. If you could change that channel, creating artificial brains, then you would change the thing that is changing the world,” Bostrom said. 

Imagining the Potential Dangers of AI

Many experts believe we could see artificial superintelligence developed within our lifetimes, and there's real hope for positive outcomes, such as cures for diseases like cancer and the eradication of poverty.

On the flip side, though, the future of AI could also pose serious threats to humanity on several fronts. Many who fear artificial superintelligence are wary of its unpredictable consequences.

In 2015, Bostrom discussed a scenario in which an AI machine is presented with the task of making people smile. We might imagine the computer telling us jokes and working hard to understand the types of things that make people smile.

A superintelligent machine, however, might find an easier, albeit more sinister, way to complete the task: implanting electrodes that force our facial muscles to contract into permanent smiles.

Regarding the commands humans give to AI, Bostrom emphasizes the need to be incredibly specific, invoking the adage, "Be careful what you wish for." He compares thoughtlessly directing a superintelligence to the story of the greedy King Midas, who ignorantly wished that everything he touched would turn to gold. When the wish was granted, tragedy befell the king: his food, and even his beloved daughter, turned to gold.

The primary threats from AI are the unintended consequences of imprecisely stated human desires and the prospect of eventually losing the ability to control AI at all. According to Bostrom, AI will reach a level of intelligence that is completely incomprehensible to humans.

This type of AI could anticipate and prevent human attempts to destroy or "unplug" it. If such an intelligence were given a role in weapons systems, medicine, power plants, or even agricultural production, it would hold enormous power over humanity.

A System of AI Ethics Could Prevent the Unthinkable

When creating technology that may soon become smarter than we are, we bear an enormous responsibility to make sure it's also safe for humans. That's a challenging task, given that the post-singularity world is, by definition, beyond our current understanding. So Bostrom has turned to his knowledge of philosophy to search for answers and to help develop an ethics of artificial intelligence.

Bostrom suggests that building artificial intelligence to understand human values is essential to keeping us safe. But writing individual lines of code to teach a superintelligent machine everything humans care about would be nearly impossible, given the complexity of human emotions and cultural differences.

One potential solution is to program artificial intelligences with an overriding goal of learning human values. With an AI taught to value people and the things dearest to them, humans could enjoy the benefits of this technology. Until it exists, however, we won't truly know all the ways it will change the world as we know it.
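To make the contrast concrete, here's a deliberately toy sketch in Python. Everything in it (the action names, the smile counts, the feedback numbers) is invented for illustration and isn't drawn from Bostrom's work; it simply shows how an agent that learns approval from human feedback can avoid the trap a literal-minded "maximize smiles" optimizer falls into.

```python
# Toy illustration only: invented actions and numbers, not Bostrom's proposal.
import random

random.seed(0)  # make the demo reproducible

ACTIONS = ["tell_joke", "help_with_chores", "force_smile_electrodes"]

# Hard-coded objective: "maximize smiles." Scored literally, the sinister
# option wins, because electrodes produce the most (forced) smiles.
smiles = {"tell_joke": 3, "help_with_chores": 1, "force_smile_electrodes": 10}
print("Literal optimizer picks:", max(ACTIONS, key=smiles.get))

# Value learning: start uncertain about what humans approve of, then update
# a running estimate from observed human feedback instead of a fixed metric.
approval = {a: 0.5 for a in ACTIONS}  # agnostic prior over every action

def human_feedback(action):
    # Stand-in for real human judgments: people like jokes and help,
    # and reject coercion no matter how many smiles it produces.
    return {"tell_joke": 1.0, "help_with_chores": 0.9,
            "force_smile_electrodes": 0.0}[action]

for _ in range(300):
    a = random.choice(ACTIONS)  # sample an action to try
    approval[a] += 0.1 * (human_feedback(a) - approval[a])  # move toward feedback

print("Value learner picks:", max(ACTIONS, key=approval.get))
```

The point isn't the code itself but the design choice it encodes: the second agent's objective is to learn what people approve of, so the electrode option loses even though it tops the literal smile count.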

For more on the latest in robotics, subscribe to Freethink.