Industry analysts note that NAND and DRAM production capacity can be converted between the two. To maximize profits, Samsung and SK Hynix are shifting relatively less profitable NAND capacity to high-end DRAM production. Meanwhile, as AI inference applications expand, data centers' demand for high-capacity SSDs and storage resources is growing, making NAND chips an integral part of AI infrastructure and driving prices upward.
On January 29, Commercial Times reported that mainland China MCU manufacturer Sino MCU announced price increases of 15% to 50% across its MCU and NOR Flash portfolio due to tight chip supply and rising manufacturing costs, seen as a bellwether for the mature process market. Taiwanese firms including Holtek and Nuvoton are highly likely to follow suit. The main drivers behind the price hikes are AI-related capacity crowding-out, as well as rising lead frame and packaging/testing costs, transforming the mature process market from a price war into a seller’s market—beneficial for Taiwanese manufacturers to recover R&D costs and improve margins.
On January 26, foreign media reported that Samsung officially clarified it has “not implemented uniform, large-scale price increases” in response to market rumors of 80% hikes across DRAM and NAND product lines, though the company did not disclose specific pricing adjustments. Supply chain sources indicate that driven by AI data center demand, 64GB server RDIMM spot prices have risen over 20% within two weeks, with some NAND contract prices rumored to double in Q1. Korean media subsequently cited channel sources claiming Samsung has already raised certain NAND flash contract prices by over 100% early this year; the company declined to comment.
On January 29, SemiMedia reported that SK Hynix will become the exclusive supplier of HBM3E memory for Microsoft’s latest AI accelerator, Maia 200. The Maia 200 features TSMC 3nm process technology, equipped with six 12-layer HBM3E modules totaling 216GB capacity and bandwidth up to 7TB/s, optimized specifically for AI inference. The chip has already been deployed in Microsoft’s Iowa data center and is expanding to Arizona facilities. This move aims to strengthen Microsoft’s self-developed AI chip ecosystem and reduce reliance on general-purpose GPUs. SK Hynix’s leading position in the HBM market is further solidified, with its stock surging 8.7% on the news.
On January 29, Economic Daily reported that AI-driven memory shortages are impacting the automotive industry. NIO’s William Li and Xiaomi’s Lei Jun both cited memory price hikes as creating significant pressure. Automakers hold less than 10% of global DRAM market share, resulting in weak bargaining power, rising costs, and heightened shortage risks. Li Auto warned that memory fill rates could fall below 50% by 2026. S&P recommends building flexible supply systems and pushing for chip standardization; short-term measures include price locking and feature reduction, while long-term solutions require supply chain independence.
On January 30, media reported that Tesla has added approximately $2 billion in investment to its AI startup xAI, while continuing to purchase GPUs and lease computing power externally. xAI’s servers are supplied by Dell, which in turn contracts Wistron and Inventec for manufacturing. xAI’s new orders to Dell this year are expected to exceed last year’s $5 billion, with Wistron and Inventec handling L6 to L11 products (from board-level assembly through complete system integration); order visibility already extends to 2027, painting a bullish outlook for the supply chain.