China has cautioned the United States that the extensive deployment of artificial intelligence in military operations could weaken ethical limits in warfare and increase the risks associated with autonomous decision-making technologies. The warning was issued by China’s defence ministry in remarks addressing the growing use of advanced artificial intelligence systems by the United States military. The comments came as Washington moves forward with policies encouraging artificial intelligence companies to collaborate more closely with defence agencies, a shift that has drawn international attention amid debate over the role of emerging technologies in global security.
According to Chinese defence ministry spokesperson Jiang Bin, allowing artificial intelligence systems to play a greater role in warfare decisions could erode accountability and raise serious concerns about sovereignty and technological control. He said the unrestricted application of artificial intelligence by military institutions, particularly when used to influence war-related decisions or actions across national borders, may weaken the ethical safeguards that traditionally govern armed conflict. Jiang warned that granting algorithms authority over life-and-death decisions on the battlefield could accelerate technological development without adequate oversight and lead to unpredictable outcomes in future conflicts.
The remarks follow recent developments in the United States defence sector involving the integration of advanced artificial intelligence models into military systems. The administration of US President Donald Trump has encouraged collaboration between artificial intelligence startups and defence agencies to strengthen national security capabilities. The United States Department of Defense, commonly referred to as the Pentagon, confirmed that Grok, an artificial intelligence system developed by Elon Musk’s company xAI, has been approved for use in classified operational environments. At the same time, the Pentagon placed restrictions on the artificial intelligence company Anthropic after the firm declined to allow its Claude AI model to be used for mass surveillance or fully autonomous lethal weapons.
The dispute emerged shortly before a United States military strike on Iran and has highlighted tensions between technology developers and defence institutions over how artificial intelligence systems should be used in sensitive security operations. Claude remains one of the most widely deployed advanced artificial intelligence models within classified defence networks, but the disagreement intensified after Anthropic maintained that its technology should not support systems designed for mass monitoring or autonomous combat decisions. Following the disagreement, the United States government directed federal agencies to stop using the company’s technology. Defence officials classified Anthropic as a supply-chain risk to national security and restricted its commercial cooperation with military contractors, allowing a limited transition period for existing systems within the defence department.