The future of self-controlled AI robots by Suf Alkhaldi

The fear of self-controlled robots

Our lives are filled with robots capable of taking care of many tasks. Most robots function as inspection and surveillance tools, helping us with tedious, repetitive work. Robots with advanced Artificial Intelligence (AI) have moved from the testing stage in labs to the streets. We might call 2015 the year of Artificial Intelligence: it was the first year we saw self-driving cars on the road, and also the year AI capable of understanding legal language arrived, making our legal lives much easier.

Interestingly enough, for all these advancements, robots still struggle with simple human-like actions: holding a cup, cooking, cleaning, or moving furniture. Even with these limitations, the AI inside these robots gained a lot of ground in 2015. For example, DeepMind (recently acquired by Google) built an AI system capable of learning Atari video games in a few hours, without any human guidance. DeepMind's learning system figured out the objective of each game from the score alone and played at a superhuman level within hours. Just two days ago, on January 28, 2016, DeepMind published a paper in Nature showing that its program had beaten a professional human champion at the game of Go, using Monte Carlo tree search and neural networks trained by supervised learning (studying games the way a human expert would). This is the first time ever that a computer program has defeated a professional human Go player. This breakthrough in human-like learning sent alarming signals through the scientific community: building a system more intelligent than a human raises the risk of an AI that refuses to be switched off, precisely because the machine is smarter than we are.
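To give a flavor of the trial-and-error learning behind DeepMind's Atari results, here is a minimal tabular Q-learning sketch in Python. It is only an illustration under my own assumptions, not DeepMind's actual system: their agent replaced the table with a deep neural network and learned from game screens, but the underlying loop of acting, observing a score, and updating value estimates is similar in spirit. The toy "corridor" environment, constants, and variable names below are made up for this example.

# Minimal tabular Q-learning sketch: the agent improves purely from a reward
# signal, with no labeled examples of "correct" moves.
import random

N_STATES = 5          # states 0..4; reaching state 4 ends the episode
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Q[state][action_index] starts at zero everywhere.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[state][i])
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(Q[next_state])
        Q[state][a] += ALPHA * (reward + GAMMA * best_next - Q[state][a])
        state = next_state

print(Q)  # the learned values end up favoring moves toward the goal

After enough episodes the table consistently prefers the "right" action, even though the agent was never told the rules, only rewarded for reaching the end, which is the same idea, vastly scaled up, that let DeepMind's agent discover the objective of an Atari game on its own.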

Dr. Max Tegmark of the Massachusetts Institute of Technology (MIT) co-founded The Future of Life Institute in 2014 to address these fears and to help ensure that robots do not become destructive forces. Even Elon Musk, a busy man with hundreds of creative engineering ideas, jumped on the bandwagon, donating 10 million dollars to the institute. Well-known luminaries and pioneers such as Bill Gates, Stephen Hawking, and Steve Wozniak, along with hundreds of supporters, joined the effort to alert our society to the potentially dangerous consequences of AI robots.

World Economic Forum 2016 at Davos

Just recently, at the World Economic Forum Annual Meeting held from 20-23 January 2016 in Davos-Klosters, Switzerland, an elite gathering in the Swiss Alps, leaders and prominent thinkers, including billionaires and scientists from all over the world, sounded the alarm about protecting our human civilization. Angela Kane, the former UN High Representative for Disarmament Affairs from Germany, stated boldly that "it may be too late" to act now. She elaborated by pointing out that many countries have underestimated the dangerous consequences of designing autonomous AI robotic weapons.

Dr. Stuart Russell of UC Berkeley stated, "We are not talking about drones, where a human pilot controls the drone." The complexity of our lives and the sophistication of our brains, grounded in knowledge and moral values, are absent from these AI systems, leaving them blind to many human and moral dilemmas, which frightens many scientists and ethics scholars. After all, the deployment of autonomous AI weapons would usher in a new era of uncontrolled warfare. Dr. Russell argues that such AI robots represent a great danger to our safety, and that international agreements among scientists and world leaders are needed to ensure that AI robots remain safe.

Finally, our ability to build advanced computer algorithms capable of controlling many machines inspires many young scientists, and keeping this AI robotic technology under control offers an optimistic path forward. The World Economic Forum in Davos, Switzerland might be the best place to gather all these thinkers. I will also be checking in on The Future of Life Institute quite regularly.

This blog is usually written on Saturdays (this one is late because of the big snowstorm that hit our area last weekend). I would love to hear from you. Please email me at Sufalkhaldi@FutureandScienceHacks.com
