White House moves to halt Nvidia B30A AI chip sales to China

On November 7, 2025, the U.S. administration informed federal agencies that it will not permit Nvidia to sell its newly designed, scaled-back B30A artificial intelligence accelerator to customers in mainland China. The move extends, once again, the export controls that already keep the firm's top Blackwell-class hardware out of the country.

Officials said the B30A would be barred even though Nvidia built it specifically to comply with earlier U.S. performance thresholds for China. The chip, a weaker version of the company’s data center AI parts, can still train large language models when deployed in clusters, and Nvidia had already supplied samples to several Chinese buyers. By ruling it out, Washington is signaling that future down-binned variants aimed at the Chinese market will also be scrutinized if they can be scaled up on site. Nvidia said it currently has no market share in China’s data center compute segment and has excluded China from its guidance because of these rules.

The decision fits into a two-year pattern in which the U.S. first blocked Nvidia’s highest-performance H100/H200 and later Blackwell parts, then watched the company produce China-only or scaled-down models — A800, H800 and H20 — to keep serving large Chinese cloud and AI customers. Each time those parts approached the limits of what could be locally networked to train state-of-the-art models, U.S. agencies tightened the regime. The B30A now becomes the latest of those workarounds to be shut down, and officials told agencies to treat it the same way they treat the fully restricted accelerators.

Nvidia’s challenge is that Chinese AI companies do not buy single accelerators; they buy hundreds or thousands to run foundation models. Even a “China-compliant” part can cross Washington’s red line if, when racked and networked, it delivers the kind of training throughput the U.S. wants to keep out of reach. This is exactly why the White House would not clear B30A: clustered, it was still good enough for large-model training. Nvidia is already working on redesigning the chip again to try to meet the new interpretation of export requirements.

The export restriction comes in parallel with Beijing’s own clampdown. In November 2025, China ordered that state-funded data centers must use only domestically made AI chips, and projects that are less than 30% complete must remove foreign accelerators that have already been installed. That rule effectively locks Nvidia out of the next wave of government-backed AI infrastructure even if a future U.S. redesign were approved, and it channels state demand toward Huawei, Cambricon and other Chinese vendors. The U.S. ban and the Chinese procurement rule therefore work in the same direction: less Nvidia silicon in Chinese public-sector AI builds.

From a market perspective, Nvidia has been warning for months that its China AI revenue is unpredictable and that it has removed China from forward-looking statements, precisely because any chip it engineers for that market can be blocked at short notice. The company had shared samples of the B30A with Chinese customers, but now must tell them it cannot ship in volume. Earlier rounds of controls cut deeply into what was once a 20–25% China contribution to Nvidia’s data center business; after the 2025 Chinese data center directive and today’s U.S. block, that channel is even narrower.

