
    How AI Will Kickstart a National Defense Revolution

    Advances in technology present both new military threats and new opportunities for cooperation between China and the world.

    Last week, AlphaGo — an artificial intelligence program developed by Google’s DeepMind team — beat the highest-ranked Chinese Go player, Ke Jie, for the second time during a match held in Wuzhen, in eastern China’s Zhejiang province. With the rapid development of big data, cloud computing, and the internet of things, AI stands at the center of a coming technological and industrial revolution. But what does AI mean for China’s defense? How should we address the security challenges that AI may cause if used inappropriately?

    Broadly speaking, AI refers to computer science techniques for creating intelligent machines that think, work, and react like humans, exhibiting the capacity to learn and solve problems. The field's computational roots reach back to World War II: Colossus, widely regarded as the world's first programmable, electronic, digital computer, was developed by British code breakers between 1942 and 1945 to help break the Lorenz cipher that the German high command used to encrypt messages.

    Since then, AI has been widely used on the battlefield. Technologically advanced satellite sensors and wearable equipment have helped improve information processing. To extract the maximum advantage from this flood of data and information, the development of command and control systems remains at the top of the Chinese military’s agenda.

    In addition, cloud computing and big data analytics are widely used in cyberattack detection and fault diagnosis. With adaptive technology, state defenses can now monitor the entire life cycle of an attack, from delivery, to callbacks and reconnaissance, to data exfiltration.

    The unmanned systems industry is also growing rapidly. To date, more than 70 countries have developed their own platforms to support and accelerate the development of unmanned vehicles, drones, and ships. Meanwhile, robot technology is developing apace: each day, new bionic robots are built and brought to market. Many believe that robot warriors and drone swarms herald a future of autonomous war machines.

    There is no doubt that AI will reshape defense technology and turn science fiction into reality. However, the new challenges it poses for defense and security should not be underestimated. First, AI will make future wars more precise, faster, and crueler. Unmanned systems will be widely deployed by technologically advanced states to reduce their own military casualties, but those systems cannot guarantee the safety of civilians caught up in the fighting. Eventually, AI warfare may undermine the principles of human morality.

    Second, AI will never replace human ingenuity. At its core, AI is an automated way of building computer programs that find patterns in large amounts of data. Making machines genuinely think like humans remains unexplored territory, one in which the human ingenuity that creates AI will matter more than AI itself. Like any tool, AI can do harm when used in the wrong way, perhaps even increasing the possibility of a large-scale war.

    Third, AI is far from perfect. AI systems inevitably contain errors introduced by human mistakes and mechanical defects. In an adversarial environment, they can also be compromised or damaged by malware and viruses. Under such circumstances, an uncontrollable AI may end up killing innocent people.

    Last but not least, because AI blurs the boundaries of accountability, it will be difficult to track down those responsible when missions go wrong. For example, the parties culpable for a humanitarian disaster caused by a drone may include the pilot, the computer programmer, the procurement officer, and the commander, among others. It is plain to see, therefore, that AI mistakes are much harder to police and rectify than human mistakes.

    Meanwhile, as the technology spreads, terrorists could exploit AI and its weaknesses to launch attacks. In an increasingly interconnected and globalized world, we face new risks of cross-domain attacks, as well as heightened risks of miscalculation, misjudgment, and misperception. For instance, a cyberattack using AI technology could damage critical infrastructure and industrial control systems, and the control systems for nuclear, chemical, and biological weapons may be left vulnerable to hacking.

    While many states, including China, are attempting to upgrade their defense capabilities by introducing AI technology, we should remain cautious about its proliferation. AI may lead to new forms of warfare like open-source warfare, algorithmic warfare, and hybrid warfare. It will become more and more difficult for a single country to pursue absolute security for itself. I believe that, in the era of AI, people should work more closely to respond to international security challenges. In light of this, I have three recommendations.

    First, world leaders in military and technology circles should take on more responsibility. A double-track security dialogue should be organized within a framework of coordination and cooperation among think tanks from all countries. The dialogue should involve the following topics: strengthening information exchange and sharing, assessing hacking risks in the internet of things, predicting the effects of AI proliferation, and preventing war and conflict caused by the miscalculations of sovereign states and attacks by non-state actors.

    Second, each country should use AI technology to facilitate and support deeper cooperation with other countries on disaster warning mechanisms, international anti-terrorism efforts, the fight against transnational crime, and humanitarian assistance missions.

    Finally, establishing international law governing the weaponization of AI is an important step toward preventing its abuse. International organizations should turn their attention to establishing retroactive liability for AI misuse and aim for a consensus on international law enforcement.

    (Header image: A SWAT team member aims a gun at a target in Beijing, Aug. 20, 2016. Zhang Yan/VCG)