China’s People’s Liberation Army weaponizing Meta’s AI
China’s military may have found a new weapon: Meta’s open-source AI model, Llama, retooled for battlefield intelligence.
Last month, Reuters reported that, according to three academic papers, top Chinese research institutions linked to the People’s Liberation Army (PLA) have adapted Meta’s Llama AI model for military applications.
Reuters reports that in June, six researchers from three institutions, including two under the PLA’s Academy of Military Science, detailed their use of an early version of Meta’s Llama to create “ChatBIT,” an AI tool optimized for military intelligence and decision-making.
The report points out that, despite Meta’s restrictions on military use, Llama’s open-source nature allowed for unauthorized adaptation. Meta condemned the use, emphasizing the value of open innovation while acknowledging the difficulty of enforcing its usage policies.
Reuters says the US Department of Defense (DOD) is monitoring these developments amid broader US concerns about AI’s security risks. It notes that the incident highlights China’s ongoing efforts to leverage AI for military and domestic security despite international restrictions and ethical considerations.
The report adds that the research underscores the difficulty of preventing unauthorized use of open-source AI models and reflects the broader geopolitical competition in AI technology.
As to how large language models (LLMs) can revolutionize military intelligence, the US Central Intelligence Agency’s (CIA) first chief technology officer, Nand Mulchandani, said in a May 2024 interview with the Associated Press (AP) that generative AI systems can spark out-of-the-box thinking but are not precise and can be biased.
Mulchandani mentions that the CIA uses