AI Business is part of the Informa Tech Division of Informa PLC
by Wilkie Briggs
LONDON - One of AI’s more troubling problems is ‘Lethal Autonomous Weapons Systems’ (LAWS), which can administer deadly force without a human being in the decision-making loop. AI weaponry has advanced to the point where it can effectively locate, identify and kill targets – it is no wonder such systems have been given the pithy label “killer robots”.
While fully automated weapons of this type are still in their infancy, there is nevertheless widespread international opposition to the development of deadly technology that requires little to no human intervention or oversight. In March this year, the UN Secretary-General, António Guterres, expressed these fears, tweeting that “autonomous machines with the power and discretion to select targets and take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law”.
Despite these moral concerns, the Chinese government has taken a special interest in the AI arms race, seizing the opportunity to create “intelligentized” military weapons. In 2018, Major General Ding Xiangrong, Deputy Director of the General Office of China’s Central Military Commission, expressed China’s goal to take advantage of the “ongoing military revolution… centred on information technology and intelligent technology”.
Companies like CATIC (China National Aero-Technology Import & Export Corporation) have recently created the Ziyan Blowfish, an unmanned helicopter equipped with bombs that “autonomously performs more complex combat missions, including fixed-point timing detection, fixed-range reconnaissance, and targeted precision strikes.”
Meanwhile, the Chinese government is also developing a series of unmanned AI submarines capable of performing a wide range of missions, including “suicide attacks” against enemy vessels.
These developments indicate China’s desire to revolutionise its military capabilities by adopting the technological advantages offered by artificial intelligence.
It is easy to see why China is turning its focus towards more innovative weapon systems – while the US spends more on defence than the next seven countries combined, the emergence of AI weaponry presents an opportunity for rival countries to build more cost-effective, precise, and ultimately deadly military infrastructures.
Indeed, while America devotes a huge part of its defence budget to the construction and maintenance of expensive, traditional legacy systems such as aircraft carriers, China is attempting to bridge the military gulf by chiefly investing in a more dynamic and algorithmic field.
Although America has shown interest in the development of LAWS, it remains wary of global regulation. China, for its part, is likely to be even more resistant to worldwide pressures that seek to curtail the expansion of AI weaponry.
For example, the country’s recent and ongoing abuse of facial recognition systems (which allows for further control and surveillance over its citizens) flies in the face of international law and suggests that China will not be swayed by public outcry.
China’s increased investment in these systems demonstrates its wider commitment to becoming the world leader in AI by 2030. The government has recently set up two major research organisations devoted to the examination of artificial intelligence and unmanned systems, and estimates suggest that China will overtake the U.S. in AI research within the next five years.
Wilkie Briggs writes on the philosophy and morals of artificial intelligence and data ethics