China Adapts Meta’s Open-Source Llama AI for Military Purposes
Posted On November 1, 2024

In an unexpected twist in the global AI race, top Chinese research institutions, including those affiliated with the People’s Liberation Army (PLA), have taken Meta’s open-source Llama AI model and reimagined it as “ChatBIT”—an intelligence-processing tool with potential military applications. To say this development is noteworthy would be an understatement; it’s a striking example of how powerful, publicly available AI can be transformed in ways that push the boundaries of technology and strategic intent.
ChatBIT’s development appears to be part of a concerted push by China to leverage AI in military and security operations. Using Meta’s earlier Llama 13B model as a foundation, Chinese researchers adapted it to be “optimized for dialogue and question-answering tasks in the military field,” according to a June paper. The tool reportedly outperformed some other AI models, reaching roughly 90% of the performance of OpenAI’s GPT-4, according to the researchers. This is no small feat. We’re talking about a model that aims to provide a competitive edge in data analysis, operational decision-making, and potentially even simulation-based strategic planning.
The implications of this adaptation, from a national security standpoint, are profound. Meta’s open-release policy, which champions open innovation, has inadvertently facilitated the use of its models for purposes far beyond commercial applications. Despite Meta’s robust acceptable use policy, which restricts military use and other applications subject to U.S. export controls, the reality of publicly available code is that enforcement is challenging. The PLA’s engagement with Llama seems to bypass these safeguards, and it raises serious questions about the balance between openness in AI development and national security concerns.
The debate is heating up in the United States, too. There’s a growing clash between advocates of open-source AI, who argue that transparency drives innovation, and security analysts, who warn of the risks. President Biden’s recent executive order underscores these concerns, citing both the advantages and substantial security risks of unregulated AI advancements. The Pentagon has voiced similar reservations, while the Department of Defense actively monitors AI developments to gauge competitor capabilities.
But why did Chinese researchers specifically target Llama? Some experts suggest that with the PLA’s significant investment in AI, adopting high-performance models like Meta’s is a shortcut to rapid advancement, without the time and expense of starting from scratch. And this isn’t limited to military ambitions. China has reportedly adapted Llama for broader applications like “intelligence policing,” essentially supercharging the nation’s data analysis capabilities for domestic surveillance.
As China moves toward its goal of global AI leadership by 2030, it’s evident that a full range of applications, from military to civilian, is on the table. Researchers from both the U.S. and China continue collaborating on advanced AI research, ensuring that developments flow across borders. As William Hannas from Georgetown University’s CSET points out, “Can you keep them out of the cookie jar? No, I don’t see how you can.” With such extensive cross-border collaboration, limiting China’s access to cutting-edge AI innovations may prove increasingly difficult.
In many ways, the story of ChatBIT is a harbinger of what’s to come. It serves as a reminder that while the frontier of AI is exciting, it is also fraught with complex geopolitical, ethical, and security implications. As AI grows in power and scope, the global competition for control over these capabilities will likely intensify, raising critical questions about how and where we draw the line on access and accountability.