How to protect our future (when AI gets too smart)

The laws we make now will shape the future of AI... and our species.


As technology rapidly advances, the future of AI looks promising, but it is not without risks. How we choose to govern artificial intelligence could play an integral role in protecting the human race.

Imagine a humanoid robot created to serve its master; as it grows more self-aware, it realizes it is more powerful than its creator and turns on the human race. This science fiction storyline is all too familiar, but AI encompasses a much broader field of technology than this narrow Hollywood portrayal.

Mathematician and computer scientist Alan Turing introduced the concept of “thinking machines” in his 1950 paper “Computing Machinery and Intelligence.” The understanding of AI has evolved since then as its abilities and applications have grown more complex. A generally accepted definition of AI today is a machine that can make decisions and perform tasks that would normally require human intelligence.

This sort of artificial intelligence already surrounds us in virtually every industry. Voice-activated personal assistants like Siri and Alexa are forms of AI we use daily, and both Netflix and Pandora use machine learning to make entertainment recommendations.

Along with these everyday applications, AI is used in automated financial investing, manufacturing robots, autonomous weapons, disease mapping, facial recognition software, customer service chatbots, and much more. AI is all around us and getting smarter and smarter by the day. So how can we prevent this advanced tech from becoming a real threat to mankind?

How Do We Brace Ourselves for the Future of AI?

While the fear of robots enslaving humans might be a stretch, AI’s widespread use and society’s ever-growing reliance on it could bring about equally disastrous consequences. To ensure that the future of AI is a positive one for humanity, some of the brightest minds are advocating for better governance.

Allan Dafoe, director of the Centre for the Governance of AI at the University of Oxford’s Future of Humanity Institute, is one of the foremost experts preparing for the future of artificial intelligence. His goal is to “gain insight and provide policy advice to help guide the development of AI for the common good.”

The real risk of AI, Dafoe believes, is its use to undermine the legitimacy of political, financial, and social institutions, and to endanger basic human rights like privacy. (Does Alexa’s eavesdropping ring a bell?) In the wrong hands, AI could be leveraged to gain unchecked access to information, wealth, and power.

“If we govern AI well, there’s likely to be substantial advancements in medicine, transportation, helping to reduce global poverty, and helping us to address climate change,” Dafoe explains. “The problem is, if we don’t govern it well, it will also produce these negative externalities in our society. Social media may make us lonely, self-driving cars may cause congestion, autonomous weapons could cause risks of flash escalations and war or other kinds of military instability.”

Companies collectively spend roughly $20 billion per year on AI products and services, each seeking to maximize its potential for profit. Since every industry operates within its own set of parameters, a one-size-fits-all approach to AI governance won’t suffice. Regardless of the sector, Dafoe and others agree that transparency is absolutely necessary.

Erick Galinkin, an artificial intelligence researcher, states, “Knowing how and why an AI made a decision can help humans reason about whether or not that decision should have been made. Additionally, fairness and ethics are high on many lists: ensuring that the societal biases or unseen prejudices in data are not reproduced or accelerated by integration with artificial intelligence.”

Many experts in the field, like Max Tegmark and Erik Brynjolfsson, believe the best way to ensure the safety of AI is to align its goals with our own, before it’s too late. “We need to work aggressively to make sure technology matches our values. This can and must be done at all levels, from government, to business, to academia, and to individual choices,” Brynjolfsson stated.

Determining how to create and apply regulations that guide the future of AI is paramount to ensuring the technology is leveraged to benefit society. As Dafoe grapples with how to institute best practices, his goal is to craft laws that govern artificial intelligence and protect humanity without stifling the technology’s growth.

As the Future of Life Institute – a nonprofit that supports research initiatives on AI safety – states: “Civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it.”

With the right measures in place, the future of AI holds vast potential to enhance the quality of human life. But it will take significant research and preparation, starting now, to keep the technology in check.

For more interesting news about the people and ideas that are changing our world, subscribe to Freethink.
