Friday, February 6, 2026
IBM is aiming for the 1.4-nm node thanks to new heat-modeling tech it developed with Synopsys and the support of the U.S. military research agency DARPA. The companies told EE Times they will share the technology with chipmaking partners as the 2-nm node ramps up. TSMC and other chipmakers like Samsung started 2-nm manufacturing last year.
IBM’s move comes after the company led the chipmaking industry with the announcement of the world’s first 2-nm chips in 2021. IBM dropped out of commercial chipmaking decades ago, yet it remains a major player with a tech stack that includes fabrication and advanced packaging.
As part of its latest effort, IBM Research developed a new machine learning (ML) tool with Ansys, now part of Synopsys, in a project supported by DARPA (the Defense Advanced Research Projects Agency). IBM announced the results under DARPA’s Thermonat project, short for Thermal Design of Nanoscale Transistors, modeling the thermal behavior of chips down to the atomic level.
As transistor nodes shrink, heat becomes a bigger problem. AI is driving increased chip power density and generating more heat. IBM said it trained the ML software on its stores of semiconductor data, achieving prediction accuracy within 1 degree Celsius, tens of thousands of times faster than the next best simulation tools.
“We expect that chips adopting this technology will appear during this 2-nm node technology cycle,” Russ Robison, EDA lead architect at IBM Research, told EE Times. “It will be a direct requirement for performance at 1.4-nm node and below. Data centers and high-performance applications (AI) are the first space, but cell phones will follow too.”
The ability to accurately model heat sources in chips can provide a powerful tool to engineers who are designing cooling systems for new chips, according to Timothy Chainer, a subsystem cooling and integration expert at IBM Research. “When designing chip layout, they can produce a thermally aware layout,” he said.
DARPA required the model to predict heat properties within a 1% margin of accuracy, and it was looking for solutions 100× faster than the current state of the art: building a physical model of a new device to gather thermal data. Robison and his colleagues were able to model within 1 degree Celsius of experimental data, and they did it 50,000× faster than current methods.
“The Thermonat teams pushed the boundaries of what is possible in chip-scale thermal prediction,” said Yogendra Joshi, Thermonat program manager at DARPA, in a press release. “By connecting fundamental physics with design-ready tools, they created capabilities that can accelerate innovation for both national-security applications and the broader semiconductor industry.”
Chipmakers need tools that can predict thermally driven failures before investing years and hundreds of millions of dollars in fabrication. Yet, existing commercial modeling tools have not been able to fully capture nanoscale heat flow, while emerging atomic-level methods may be accurate but often require weeks or months to run, making them impractical for real-world design cycles, DARPA said in a press release.
For today’s semiconductors, a heat-optimized design performs between 5% and 15% better than one that is not thermally optimized, Robison added. On the development side, the technology needs to account for the thermal performance of the whole “client” application, including circuit design and use case, he noted. On the design and usage side, thermal behavior is a major limiter of performance and margin, so better thermal feedback is a clear improvement, Robison said.
IBM is using the methodology to consider overall 3D-IC chip use cases.
“Because of the speed and capability, we can model whole 3D-IC examples in the same way, just with a little longer run time,” Robison said. “The current thermal knowledge is heading into IBM’s chiplet and advanced-packaging technologies and assembly design kits.”
Tech transfer
IBM declined to name the chipmaking partners it will share the tech with. The company is working with Japan’s startup foundry Rapidus, which in August last year began prototyping 2-nm gate-all-around (GAA) transistors at its new facility, a key step toward starting production at the node in 2027. IBM also uses Samsung as a foundry supplier.
Most of the new developments will remain in-house for use in IBM projects and those of IBM clients. A team working on transistors at IBM is adopting the tech, while another is developing future 3D-IC devices. IBM will also use the tech for chip packaging and heterogeneous integration.
Synopsys and IBM project
Ansys, now part of Synopsys, is contributing two key technologies to the Thermonat project with IBM, Synopsys fellow Norman Chang told EE Times.
“The first is a reduced-order modeling approach that enables fast self-heating calculations for 2-nm GAA transistor designs,” Chang said. “The second is a machine-learning–based thermal solver that uses a per-tile activation methodology combined with Fourier neural operator modeling, delivering up to a 1,000× speed-up without sacrificing accuracy for designs with more than 1 million transistors. Our intent is to continue maturing both technologies with widened capabilities for future release.”
An ML technique called a Fourier neural operator aided the development of the reduced-order modeling (ROM). The Fourier neural operator is a neural-network architecture designed for learning the solutions of partial differential equations, so it was uniquely suited to this scenario, Robison said.
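The article does not detail the solver’s internals, but the core idea of a Fourier neural operator layer can be sketched in a few lines: transform the field to frequency space, apply learned weights to a small number of low-frequency modes, and transform back. The 1-D example below is purely illustrative, with random numbers standing in for trained weights and temperature data.

```python
import numpy as np

def spectral_layer(u, weights, n_modes):
    """One illustrative Fourier-neural-operator layer (1-D):
    FFT to frequency space, scale the lowest n_modes frequencies
    by learned complex weights, then inverse FFT back."""
    u_hat = np.fft.rfft(u)                         # forward FFT of the field
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = u_hat[:n_modes] * weights  # learned spectral filter
    return np.fft.irfft(out_hat, n=u.size)         # back to physical space

# Toy example: a 64-point "temperature profile" with 8 retained modes.
rng = np.random.default_rng(0)
u = rng.standard_normal(64)                        # stand-in input field
w = rng.standard_normal(8) + 1j * rng.standard_normal(8)  # stand-in weights
v = spectral_layer(u, w, n_modes=8)
```

Because the learned filter lives in frequency space, a trained stack of such layers can evaluate a new thermal field in a handful of FFTs, which is where the large speed-ups over mesh-based PDE solvers come from.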
A ROM is a simplified version of a complex, high-fidelity mathematical model, created to cut computational costs while retaining essential behaviors, enabling faster analysis and real-time simulations where full models are too slow or resource intensive. A ROM trades a little accuracy for massive speed, making complex engineering problems more manageable.
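One common way to build such a ROM is proper orthogonal decomposition (POD), i.e., SVD truncation of snapshots from the high-fidelity model; the article does not describe Ansys’s specific construction, so the numpy sketch below uses synthetic rank-5 data in place of real thermal fields.

```python
import numpy as np

# Snapshot matrix: each column is one full-order "temperature field"
# sampled from the high-fidelity model (synthetic low-rank data here).
rng = np.random.default_rng(1)
n_dof, n_snap = 500, 40
snapshots = rng.standard_normal((n_dof, 5)) @ rng.standard_normal((5, n_snap))

# Proper orthogonal decomposition: keep the r dominant SVD modes.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 5
basis = U[:, :r]                      # reduced basis, shape (n_dof, r)

# Project a full-order state into the reduced space and reconstruct it.
x_full = snapshots[:, 0]
x_reduced = basis.T @ x_full          # r coefficients instead of n_dof values
x_approx = basis @ x_reduced          # reconstruction from the ROM

err = np.linalg.norm(x_full - x_approx) / np.linalg.norm(x_full)
```

Any downstream computation now works with `r` coefficients instead of thousands of degrees of freedom, which is the accuracy-for-speed trade the paragraph above describes.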
The tool can help squeeze more performance out of chips, Chainer said. Improved cooling solutions can allow higher chip power at the same operating temperature to enable increased computational performance. Alternatively, more effective cooling solutions can enable lower chip operating temperature to cut chip power consumption and improve efficiency.
Synopsys is gathering evaluations of the new self-heating ROM and the fast ML-based thermal solver.
“The ML-based solver supports both static and transient workloads in digital and analog circuits, delivering 1,000× or greater speed-up,” Chang said. “It’s another example of how Synopsys is applying ML and AI to accelerate all stages of the design flow. We look forward to sharing more about this at DesignCon 2026.”
By: DocMemory Copyright © 2023 CST, Inc. All Rights Reserved