Can China's military AI actually outpace US decision-making over Taiwan?
The PLA is racing to build AI command systems that operate at machine speed. But the Communist Party's political control mechanisms may prevent China's military from using the speed it is building.
The Commissar’s Veto
In the basement of a PLA command center, an AI system generates a targeting recommendation in 47 milliseconds. The algorithm has fused satellite imagery, signals intelligence, and weather data to identify an optimal strike window. The decision support system displays its confidence score: 94.7%. A colonel reviews the output, nods, and reaches for the authorization terminal.
Then he waits.
Somewhere in the building, a political commissar must concur. The dual-command structure that has defined the People’s Liberation Army since Mao’s era requires it. No significant military decision—and certainly no decision that could trigger war over Taiwan—proceeds without the Party’s institutional representative signing off. The commissar may be in a meeting. He may be consulting Beijing. He may simply be deliberating. The AI’s 47-millisecond advantage begins to evaporate.
This is the paradox at the heart of China’s military modernization: the PLA is racing to build decision systems that operate at machine speed while preserving a political architecture designed to operate at human speed. The question of whether China’s AI-driven command systems can outpace American decision-making in a Taiwan crisis is not primarily a technological question. It is a question about whether the Chinese Communist Party will permit its military to actually use the speed it is building.
The Intelligentization Illusion
Western defence analysts have spent the past decade warning about China’s AI ambitions. The warnings are not unfounded. The PLA’s concept of “intelligentized warfare” envisions AI integration at every level: targeting, logistics, intelligence fusion, and command decision-making through what Chinese military theorists call “hybrid human-AI workflows.” The 2024 Pentagon report on Chinese military power confirms that Beijing “continues to accelerate its development of military technology, including in military artificial intelligence.”
But acceleration is not arrival.
Georgetown’s Center for Security and Emerging Technology found that Chinese defence experts themselves identify “several technological challenges that may hinder its ability to capitalize on the advantages provided by military AI.” The most significant: “data collection, management, and quality problems.” The PLA’s C4ISR systems—the sensors and networks that would feed any AI decision engine—suffer from integration failures that no algorithm can overcome. An AI system is only as fast as its slowest data input.
The American side faces its own constraints, but they are different in kind. The Joint All-Domain Command and Control concept, JADC2, achieved what the Pentagon calls “minimum viable capability” in February 2024. The system aims to connect sensors from all service branches into a unified network. In theory, this creates the foundation for AI-accelerated decision-making. In practice, JADC2 must coordinate across not just American services but allied nations—Japan, Australia, the Philippines—each with sovereign approval requirements that no algorithm can bypass.
Here lies the first asymmetry. The PLA’s speed constraints are internal and political. America’s are external and diplomatic. Both are real. Neither is easily solved.
The Tempo Trap
Speed in warfare is not absolute. It is relative to the decision your adversary must make. A targeting cycle that completes in seconds offers no advantage if the strategic decision to authorize strikes requires days.
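A toy calculation makes the point concrete. In the sketch below every duration is hypothetical, but the structure holds: when one stage of the pipeline runs in days, the algorithmic stage contributes a vanishing fraction of end-to-end latency.

```python
# Toy model: end-to-end decision latency is dominated by its slowest stage.
# All durations are hypothetical illustrations, not real figures.
STAGES_SECONDS = {
    "ai_targeting_recommendation": 0.047,     # milliseconds-scale algorithm
    "staff_review":                4 * 3600,  # hours-scale human review
    "political_authorization":     3 * 86400, # days-scale strategic decision
}

total = sum(STAGES_SECONDS.values())
ai_share = STAGES_SECONDS["ai_targeting_recommendation"] / total
print(f"total: {total / 86400:.2f} days; AI stage share of latency: {ai_share:.2e}")
```

Under these assumed numbers, the AI contributes well under a millionth of the total decision time; compressing it further changes nothing that matters.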
Consider what a Taiwan crisis actually demands. The PLA’s doctrine of degrading Taiwan’s defences before deploying blockade forces means the first strikes are themselves the decisive political act. Xi Jinping must decide—through which channel, with what authorization, accepting what consequences—to initiate hostilities. This is not a decision that can be delegated to an algorithm. The Central Military Commission’s age-graded succession hierarchy, with vice-chairmen in their late sixties, privileges elder wisdom over rapid response. The structure exists precisely to prevent hasty action.
The Council on Foreign Relations notes that historical Taiwan Strait crises have featured “compressed decision-making timelines under intense military pressure, often spanning days to weeks.” Days to weeks. Not milliseconds.
The American side faces a mirror-image problem. DoD Directive 3000.09 requires “appropriate levels of human judgment” in autonomous weapons employment. The directive does not define “appropriate.” This ambiguity is not an oversight; it is a feature. By refusing to specify a threshold, the policy forces commanders to involve multiple levels of review—dispersing accountability rather than concentrating it. The three-signature approval process for autonomous weapons (requiring sign-off from the Undersecretary of Defense for Policy, the Undersecretary for Research and Engineering, and the Vice Chairman of the Joint Chiefs) creates a multi-year bureaucratic pathway structurally incompatible with crisis decision speeds.
Both sides have built systems that promise speed while embedding structures that guarantee delay.
What the PLA Actually Has
Open-source information about PLA AI systems in actual operational use remains, as the Center for a New American Security notes, “sparse.” Chinese military writings describe ambitious goals. They do not describe fielded capabilities.
What we can observe: the PLA is investing heavily in AI for controlling uncrewed systems, for targeting and resource allocation, for intelligence processing and fusion. The Recorded Future assessment confirms integration efforts aimed at “multidomain precision warfare, intelligentized operations, and enhanced decision-making.” These are real programmes with real funding.
But the PLA’s recent history suggests caution about assuming operational readiness. Xi Jinping’s ongoing purges of military leadership—including CMC Vice Chairman He Weidong and officers with direct ties to Xi himself—have created a command climate where officers may be incentivized to report optimistic assessments rather than honest readiness gaps. The Lowy Institute’s analysis of these purges notes their implications for readiness: the very officers who would have the most accurate picture of AI system performance are also the officers most vulnerable to political consequences for delivering bad news.
The purges serve a function beyond anti-corruption. They reset institutional memory. Each cycle clears accumulated friction but sacrifices operational continuity—a pattern that mirrors catastrophic forgetting, in which a retrained model loses what it previously learned. The PLA may be building sophisticated AI systems while simultaneously destroying the institutional knowledge needed to employ them effectively.
The Friction Gap
Peacetime testing and wartime employment operate under different logics. The Pentagon’s AI testing regime includes operational red-teaming, domain-specific benchmarking, and lifecycle evaluation. This rigour is essential for responsible development. It is also slow.
Federal technology assessments of military edge computing note the challenge of “denied, degraded, intermittent, and limited” (DDIL) environments—the conditions that would actually prevail in a Taiwan conflict. AI systems trained on clean data in controlled environments face severe distribution shift when deployed to contested spaces. The Taiwan Strait’s rapid sea-state transitions create temporal gaps where sensor data becomes unreliable. Shallow-water acoustics violate the assumptions embedded in AI sonar systems trained on deep-water signatures.
The comprehensiveness of peacetime evaluation frameworks creates what might be called a friction gap: the more rigorous the testing, the greater the incentive to bypass it in crisis. Vendor demonstrations and ad hoc integration become the path of least resistance when the alternative is waiting years for formal approval.
The PLA faces a different version of this problem. Its AI systems must operate within a dual-leadership structure where the political commissar functions as an institutionalized negative feedback loop—a human fail-safe that prevents runaway autonomous decision-making. This is not a bug. It is a feature designed to ensure Party control. But it creates a structural limit on how fast any AI-enabled process can actually complete.
Western autonomous systems build fail-safes into software. The PLA builds them into personnel.
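The dual-command logic can be sketched as a concurrence gate whose latency is set by the slower of the two humans. A minimal Python sketch, with hypothetical roles and timings:

```python
# Sketch of a dual-concurrence gate, loosely modelling a commander/commissar
# structure. Roles, behaviour, and timings are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Approval:
    role: str
    concurs: bool
    latency_s: float  # time this person takes to respond

def dual_key_gate(commander: Approval, commissar: Approval):
    """Action proceeds only if BOTH concur; gate latency is the slower reply."""
    authorized = commander.concurs and commissar.concurs
    latency = max(commander.latency_s, commissar.latency_s)
    return authorized, latency

ok, delay = dual_key_gate(
    Approval("commander", True, 30.0),        # responds in seconds
    Approval("commissar", True, 2 * 3600.0),  # deliberates for hours
)
print(ok, delay)  # authorized, but only at the commissar's tempo
```

The design choice is the point: no matter how fast the upstream recommendation arrives, the gate cannot clear faster than its slowest required signature.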
The Coalition Constraint
American military power in the Pacific does not operate alone. JADC2’s architecture explicitly requires allied integration. This is a strategic strength—the combined capabilities of the United States, Japan, Australia, and potentially Taiwan vastly exceed what any single nation could deploy. It is also a structural vulnerability.
The USINDOPACOM Information Modernization Network initiative integrates secure communications for allies and partners while modernizing command-and-control systems. This integration reveals that JADC2’s operational viability in a Taiwan scenario depends on dissolving classification barriers with Five Eyes partners and establishing pre-delegated authorities with non-Five Eyes allies. Neither is simple.
The US-China Economic and Security Review Commission notes China’s development of “large amphibious assault ships and mobile piers” enhancing capacity to “blockade or launch an invasion of Taiwan with little advance warning.” Little advance warning means little time for coalition consultation. The ritualized authorization processes of sovereign allies—each with domestic political constraints, each with different thresholds for involvement—cannot be compressed by American AI systems.
China’s approach differs. Its thin partner network (the Ream Naval Base in Cambodia, potential facilities in the Gulf) creates single-point-of-control nodes that eliminate political negotiation latency. Beijing does not need coalition consensus. It needs only its own decision.
This asymmetry cuts both ways. American allies provide capabilities China cannot match. But those capabilities come with coordination costs that no algorithm eliminates.
The Speed That Matters
The question is not whether PLA AI systems can generate recommendations faster than American systems. They probably can, in narrow technical terms. The question is whether that speed translates into decision advantage.
Decision advantage requires more than fast algorithms. It requires that faster recommendations lead to faster authorizations, that faster authorizations lead to faster actions, and that faster actions lead to better outcomes. Each link in this chain involves human judgment, political calculation, and organizational friction that operates on timescales AI cannot compress.
Consider the OODA loop—observe, orient, decide, act—that American military doctrine treats as the fundamental cycle of combat decision-making. AI systems can accelerate observation (sensor fusion) and orientation (pattern recognition). They struggle with decision (which involves values, not just data) and action (which involves physical systems, not just information). The PLA’s commissar system inserts a human checkpoint between orientation and decision. American approval requirements insert multiple checkpoints between decision and action.
Both sides have built speed into the parts of the loop where speed is easiest. Neither has solved the parts where speed is hardest.
What Actually Determines Outcomes
A Taiwan crisis would unfold across multiple domains simultaneously: kinetic strikes, cyber operations, information warfare, economic coercion. Speed advantages in one domain can be negated by vulnerabilities in another.
The PLA’s “Three Warfares” doctrine—psychological, legal, and media warfare—operates on timescales that AI acceleration cannot address. Shaping international opinion, establishing legal justifications, and degrading adversary morale are campaigns measured in months and years, not milliseconds. Chinese defence writings on intelligentized warfare may themselves function as cognitive warfare against American planners, triggering resource misallocation toward matching capabilities that may not be as developed as publicized.
Taiwan’s own preparations matter. The island’s procurement of 700 satellite terminals as cable backup inadvertently maps the geographic distribution of critical communications nodes—intelligence that PLA planners can exploit regardless of AI speed. Taiwan’s smart city infrastructure, subjected to 2.63 million daily intrusion attempts, may already be compromised in ways that would corrupt the data feeding any AI defence system.
The decisive variables in a Taiwan conflict are unlikely to be algorithmic. Industrial capacity to sustain operations. Political will to absorb costs. Alliance cohesion under pressure. The ability to maintain communications in a degraded electromagnetic environment. Speed matters at the tactical level. At the strategic level, endurance matters more.
The Honest Assessment
Can the PLA’s AI-driven command systems outpace US decision-making in a Taiwan crisis?
In narrow technical terms, possibly. Chinese AI systems may generate targeting recommendations and resource allocation decisions faster than American systems burdened by coalition coordination and approval requirements.
In operational terms, probably not. The PLA’s political control mechanisms—the commissar system, the CMC’s authority requirements, the Party’s insistence on controlling all significant military decisions—impose delays that offset technical speed advantages. The very structures that ensure the Communist Party’s grip on its military prevent that military from exploiting the speed its AI systems theoretically provide.
In strategic terms, the question may be wrong. A Taiwan crisis would not be won by the side that decides faster. It would be won by the side that decides better—that correctly anticipates adversary responses, that maintains coalition unity, that sustains operations through attrition, that manages escalation without triggering catastrophe. Speed is an input to these outcomes, not a substitute for them.
The PLA is building impressive AI capabilities. It is building them within a political system that treats speed as a threat to control. The United States is building capable AI systems within an alliance structure that treats unilateral speed as a threat to cohesion. Both have chosen constraints. The question is which constraints prove more costly when the crisis arrives.
FAQ: Key Questions Answered
Q: Does China have more advanced military AI than the United States?
A: China is investing heavily in military AI and may lead in specific applications, but open-source evidence of fielded operational systems remains limited. The US maintains advantages in underlying chip technology and has achieved “minimum viable capability” for its JADC2 integration concept.
Q: How fast could a Taiwan conflict actually unfold?
A: Historical Taiwan Strait crises have featured decision timelines spanning days to weeks, not hours. While initial strikes could occur rapidly, the political decisions to authorize them and the subsequent campaign would unfold over extended periods.
Q: Would AI systems make autonomous targeting decisions in a Taiwan war?
A: Both sides maintain human-in-the-loop requirements for significant targeting decisions. US policy requires “appropriate levels of human judgment,” while the PLA’s commissar system mandates political officer concurrence for major military actions.
Q: What role would US allies play in AI-enabled decision-making?
A: American AI command systems are designed to integrate allied inputs, which provides capability advantages but creates coordination delays. Coalition consensus requirements cannot be bypassed by faster algorithms.
The Constraint That Binds
The race for AI-enabled military decision-making resembles an arms race in which both competitors have tied one hand behind their backs—by choice. The PLA chose Party control. The United States chose alliance integration. Each constraint reflects genuine strategic priorities. Each imposes real costs on decision speed.
The honest answer to whether PLA AI systems can outpace American decision-making is that both sides have built systems capable of generating recommendations faster than their political structures can authorize action. The bottleneck is not the algorithm. It is the human.
This may be the most reassuring finding. In a crisis where miscalculation could trigger catastrophe, the constraints that slow decision-making also create space for deliberation, for signalling, for off-ramps. The commissar’s veto and the coalition’s consultation serve purposes beyond their inconvenience. They force consideration of consequences that algorithms cannot weigh.
The PLA is building speed it may not be permitted to use. The United States is building speed it cannot use alone. The crisis, when it comes, will be decided not by which side’s AI runs faster, but by which side’s humans decide wiser.
Sources & Further Reading
The analysis in this article draws on research and reporting from:
- DoD Annual Report on China 2024-2025 - Primary source for PLA modernization and AI development assessment
- Georgetown CSET: China’s Military AI Roadblocks - Analysis of technological challenges facing PLA AI integration
- Council on Foreign Relations: The Next Taiwan Crisis - Historical analysis of Taiwan Strait crisis decision timelines
- CNAS Congressional Testimony on Military AI - Assessment of PLA AI capabilities and US-China competition
- DoD Directive 3000.09 on Autonomy in Weapon Systems - US policy framework for autonomous military systems
- Lowy Institute: Explaining Xi’s PLA Purges - Analysis of military leadership purges and readiness implications
- US-China Commission Taiwan Chapter - Assessment of China’s Taiwan invasion capabilities