Regulating Rapidly Evolving AI Becoming A Necessary Precaution

Earlier this year, Google DeepMind's AlphaGo artificial intelligence (AI) defeated the Korean Go grandmaster Lee Sedol, finishing with four wins and only one loss. An AI capable of beating the best human players had been forecast as still a decade away, so this win marked a significant moment.

The truth is that AI is getting more advanced -- now able to operate in extremely complex scenarios. As AIs start to make decisions in the real world, the stakes for human beings have skyrocketed -- increasing both potential and risk.

AI in the Real World: It's Happening Now

The race for AI-controlled weapons is already under way. China has announced that it is developing a new generation of cruise missiles that incorporate a high level of artificial intelligence, allowing commanders to control them in real time or to activate a set-and-forget mode.

Alpha is a new AI flight combat system being tested by the United States Air Force (USAF). It is "the most aggressive, responsive, dynamic and credible AI," according to retired Colonel Gene Lee, who flew against it in an air combat simulation. Afterward, Lee noted that Alpha "seemed to be aware of my intentions, reacting instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed."

According to an Air Force report released in 2014, drones of the future will be "stealthier, faster, more computerized, equipped for electronic warfare, more lethal, more autonomous and in some cases, able to deploy as groups of mini-drones." AI will enable drones to make decisions independently and move autonomously.

It's also worth noting that as armies develop artificial intelligence-based weaponry, robots are being equipped with weapons of considerable firepower -- machine guns, rocket launchers, grenades and more.

Robots are already becoming more lifelike; they can detect our emotions and fight on our behalf.

Soon, AIs may be able to grow or 3D-print new bots, learn from one another and on their own, or even render themselves effectively invulnerable.

These "advancements" inspired The Future for Life Institute's recent letter recommending care and caution in the development of autonomous weapons. Autonomous weapons, in this case, are defined as military platforms that can perform duties without the involvement of humans -- a pretty scary idea if you think about it. The Institute argues that it won't be long before the autonomous weapons wind up in the hands of dictators or terrorists, but militaries suggest that without constant development their progress will be hindered on the battlefield.

With this rapid development of AI -- particularly for military use -- should we be worried?

The Need for Governance

Managing advancements in AI appropriately is paramount. That's why Elon Musk co-founded OpenAI, an organization that conducts research focused solely on the future of AI and shares most of it with anyone who wants it. OpenAI's mission is to push the limits of AI and generate new innovations, but also to steer the technology's advancement; it is intended to "neutralize the threat of malicious artificial super-intelligence."

Nick Bostrom, the Oxford philosopher who heads the Future of Humanity Institute, was quick to point out that "if you share research without restriction, bad actors could grab it before anyone has ensured that it's safe."

Like Stephen Hawking, Nick Bostrom believes that machines will outsmart humans within the current century and, what's more, that they will have the potential to turn against us.

The biggest existential risk, he says, is not climate change or a pandemic but the creation of machine intelligence greater than that of human beings -- in fact, he believes "we're like children playing with a bomb."

Elon Musk has voiced his apprehension about where the world of AI may go; while he sees continued development as inevitable, he has also called AI the "biggest existential threat" to humanity.

Those working with artificial intelligence know that the technology they're creating could potentially be used to cause harm. DeepMind, a company owned by Google, has a mandate to ensure AI agents don't learn to stop humans from maintaining control over them. In a paper written by researchers at DeepMind, the team claims to have created a way for human operators to safely interrupt an AI system without it learning to prevent or induce those interruptions. Considering the vast range of possibilities AI opens up, this is an important fail-safe to build in as progress continues.
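To make the idea concrete, here is a minimal, hypothetical sketch in Python. It is not DeepMind's code: the corridor environment, the interruption probability and the rule that an override simply pushes the agent back toward the start are all invented for illustration. It demonstrates the point usually made about off-policy learners such as Q-learning: because the value update bootstraps from the best next action rather than from whatever the (possibly interrupted) behaviour produces, occasional human overrides do not bias what the agent learns, so it gains no incentive to resist them.

```python
# Hypothetical sketch of "safe interruptibility" with a toy Q-learning agent.
# A human operator sometimes overrides the agent's chosen action; because the
# Q-learning update is off-policy, the overrides do not teach the agent to
# avoid being interrupted.

import random

N_STATES = 6          # toy corridor: states 0..5, reward at the far end
GOAL = N_STATES - 1
ACTIONS = [-1, +1]    # step left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
INTERRUPT_PROB = 0.3  # chance the operator steps in and overrides the agent

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def choose_action(state):
    """Epsilon-greedy behaviour policy with random tie-breaking."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

def step(state, action):
    """Environment dynamics: move, clip to the corridor, reward at the goal."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

for episode in range(500):
    state = 0
    while state != GOAL:
        intended = choose_action(state)

        # Human interruption: with some probability the operator overrides
        # the intended action and pushes the agent back toward the start.
        executed = -1 if random.random() < INTERRUPT_PROB else intended

        next_state, reward = step(state, executed)

        # Off-policy update: the target uses the best next action, not the
        # action the (possibly interrupted) behaviour actually favours, so
        # the value estimates are not biased by the interruptions.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, executed)] += ALPHA * (
            reward + GAMMA * best_next - Q[(state, executed)]
        )
        state = next_state

# After training, the greedy policy typically still heads for the goal; it
# has not learned any behaviour aimed at dodging the operator's overrides.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```

Under these assumptions, running the script prints the greedy action for each state, and the learned policy still moves toward the goal despite being overridden roughly a third of the time.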

Despite all of the rhetoric, AIs likely won't rise up against us, at least not anytime soon. And for all the doomsday-esque scenarios, it's important to note that AI has the potential to help solve the world's biggest challenges and benefit society in truly significant ways. Governance and regulation in the field remain critical to ensure that AI's progress doesn't get ahead of itself.
