The Australian Strategic Policy Institute (ASPI) recently released its report, De-Risking Authoritarian AI: A Balanced Approach to Protecting Our Digital Ecosystems, which discusses the threat posed by Chinese AI-enabled technology. Because AI-enabled systems can influence many aspects of life, including health, safety, and wealth, democracies around the globe have taken action to regulate AI’s impact on individual rights and economic security.
AI’s wealth of benefits and risks highlights the crucial need to remain aware of how authoritarian powers, like the People’s Republic of China (PRC), can use it to bolster their political and social stability. The report emphasizes that the concern lies not in China’s AI capability itself, but rather in how the PRC integrates AI into PRC-operated products and services. Two additional areas of concern are AI-enabled technologies that facilitate foreign interference, and large language models and other new generative AI systems.
ASPI presents a three-step framework for identifying and managing the riskiest products and services:
- Audit: “Identify the AI systems whose purpose and functionality concern us most. What’s the potential scale of our exposure to this product or service? How critical is this system to essential services, public health and safety, democratic processes, open markets, freedom of speech and the rule of law? What are the levels of dependency and redundancy should it be compromised or unavailable?”
- Red Team: “Anyone can identify the risk of embedding many PRC-made technologies into sensitive locations, such as government infrastructure, but, in other cases, the level of risk will be unclear. For those instances, you need to set a thief to catch a thief.”
- Regulate: “Decide what to do about a system identified as ‘high risk’. Treatment measures might range from prohibiting Chinese AI-enabled technology in some parts of the network, a ban on government procurement or use, or a general prohibition.”
The ability of AI-enabled technology to improve daily life does not come without its risks in the hands of authoritarian powers that do not share our values and interests. ASPI asserts that “The approach outlined here will be seen by some as dangerously extreme and by others as guilelessly cautious. It is, however, a balanced measure in a world in which China is neither at peace nor at war with us. We should be vigilant about the balloons in the sky, but we should think harder about the ghosts in the machine.”
For more information, check out HSDL’s In Focus topic on Artificial Intelligence.