Prime

Prime Readers,

Hundreds of AI researchers and technology leaders, including Elon Musk, Stephen Hawking, and Steve Wozniak, are calling for a ban on autonomous weapons. I am writing this letter to tell you that a “ban on autonomous weapons beyond meaningful human control” will work to a certain extent, but it should not be the only solution.

According to Stephen Hawking, if AI crosses the line where it becomes better than humans at AI design, we may face an intelligence explosion: a system that recursively improves itself without human help, which could turn out to be either the best or the worst thing ever to happen to humanity. Thus humanity, you and I, are the stakeholders here. The future existence of humankind is on the line, so it is important for us to shift the goal of AI from military purposes toward beneficial intelligence, so that we can avoid the invention and use of destructive AI. It is important for us, fellow humans, to have a say in evaluating new scientific products related to autonomous weapons, and a ban on autonomous weapons is one way to do this.

Needless to say, a ban on autonomous weapon systems may not be easy to implement in practice. The resources and techniques involved in building AI are accessible to anyone with professional AI expertise, so a ban cannot literally stop individuals from applying AI to weapons. Subbarao Kambhampati, an AI professor at Arizona State University (ASU), argues that “AI researchers should instead be thinking of more proactive technical solutions to ameliorate potential threat.” Thus, in his article “I’m a pacifist, so why don’t I support the Campaign to Stop Killer Robots?”, he presents his own way of protecting humans rather than calling for a ban. One example is the workshops he held to address AI’s adverse outcomes by “using AI technology itself as a defense against the malicious uses of AI”. I agree with Professor Kambhampati, and I think it is important to have a backup plan in case the call for a ban proves ineffective.

It is true that a ban is limited in addressing the threat of these weapons, but such a ban is an essential step toward a future free of disaster. However, our efforts to ensure a safe and promising future should not stop at a ban; as Kambhampati indicates, we need more practical preparation to secure our survival if the worst takes place. We need more research, more of a say in the process by which autonomous weapons select targets, and a focus for our AI research through a lens that benefits humanity. Only when researchers devote their talents to knowledge that improves human living standards, and governments put money into AI that supports beneficial technology rather than warfare, can we eventually reach a world free of such weapons, where no innocent soldiers are killed by autonomous systems.


Robot Prime and Co.: Zoe, Siming, Natan, Michael, and Marcela, 2018.

Works Cited

Gibbs, Samuel. “Elon Musk leads 116 experts calling for outright ban of killer robots.” The Guardian, Guardian News and Media, 20 Aug. 2017.

Tegmark, Max. “Hawking Reddit AMA on AI.” Future of Life Institute.

Kambhampati, Subbarao. “I’m a pacifist, so why don’t I support the Campaign to Stop Killer Robots?” The Guardian, Guardian News and Media, 15 Nov. 2017.