SK Hynix posts record Q3 profit as AI chip demand soars

South Korean memory‑chip maker SK Hynix, a key Nvidia supplier, reported a record quarterly profit and said its planned memory output for 2026 is already fully contracted.
According to the report, the company's net profit for the third quarter was 12.5975 trillion won ($8.78bn). The figure underscores that demand for high‑bandwidth memory (HBM), essential for both training and inference of AI models, still outstrips supply.
At the start of the AI rally, attention centered on chip buyers. Now the upswing extends across the supply chain: HBM manufacturers, equipment vendors, and testing & measurement firms. This broad, synchronized growth in revenue and profit helps the market assess valuations and tones down “bubble” talk around companies powering AI infrastructure.
SK Hynix, according to the filing, is not only breaking revenue records but also preparing for the next leg: it confirms first shipments of HBM4 (the fourth generation of High Bandwidth Memory - stacked high‑speed memory for AI accelerators) in the fourth quarter and foresees a structural shortage of DRAM (dynamic random‑access memory) as capacity shifts toward HBM.
That implies memory prices are likely to stay elevated for longer than usual, as suppliers are unlikely to offer large discounts or slash list prices in the current environment.
News from China adds to market tension. Beijing has effectively cut major platforms off from the special Nvidia chip versions tailored for the local market, pushing the country's tech sector toward domestically produced alternatives.
At first glance, that looks like a risk for Nvidia’s Asian suppliers. In practice, however, the largest global cloud providers (Amazon Web Services and Microsoft Azure) are expanding pre‑agreed contracts for HBM4 and DDR5 to avoid shortages of critical components. As a result, memory scarcity remains a global driver rather than a China‑specific demand issue.
Markets are also tracking policy signals around Nvidia and its flagship platforms. Any steps that accelerate certification or availability of the Blackwell architecture (Nvidia’s data‑center AI accelerators) in Asia and the US are quickly reflected in share prices of chipmakers and test‑equipment vendors.
Strong earnings and record metrics at memory leaders, together with concrete HBM4 ramp plans, suggest that AI‑market growth rests on operational results rather than hype, and that the rally has room to run.