TSMC is chasing a trillion-transistor AI bonanza
TSMC has announced that its semiconductor manufacturing operations are rapidly recovering from the disruption caused by the earthquake that hit Taiwan on April 3 and that its revenue target for 2024 remains unchanged. The company’s factories were built with a high degree of earthquake resistance.
Management is still conducting a comprehensive review of the earthquake's impact. As things stand, though, we should step back from the headlines and make sure that the 10-year technology development scenario recently laid out by Chairman Mark Liu and Chief Scientist Philip Wong does not get lost in the shuffle.
On March 28, IEEE Spectrum, the magazine of the Institute of Electrical and Electronics Engineers, published an essay, “How We’ll Reach a 1 Trillion Transistor GPU,” which explains how “advances in semiconductors are feeding the AI boom.”
First, note that Nvidia’s new Blackwell architecture AI processor combines two reticle-limited 104-billion-transistor graphics processing units (GPUs) with a 10-terabytes-per-second interconnect and other circuitry in a single system-on-chip (SoC).
Reticle-limited means limited by the maximum area of the photomask used in the lithography process, which transfers the chip design onto the silicon wafer. Against that constraint, TSMC is aiming for a roughly tenfold increase in the number of transistors per GPU over the coming decade.
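The tenfold figure can be sanity-checked with back-of-the-envelope arithmetic (my calculation, not from the essay): scaling a single reticle-limited Blackwell die of roughly 104 billion transistors up to a 1-trillion-transistor GPU, and asking what doubling rate that implies over ten years.

```python
import math

# Assumed figures from the article: ~104 billion transistors per
# reticle-limited Blackwell die, and a 1-trillion-transistor goal.
current = 104e9   # transistors per GPU die today
target = 1e12     # TSMC's 10-year target per GPU

growth = target / current
print(f"required growth: {growth:.1f}x")  # ~9.6x, i.e. roughly tenfold

# Doubling period implied by compounding that growth evenly over 10 years
years = 10
doubling = years * math.log(2) / math.log(growth)
print(f"implied doubling time: {doubling:.1f} years")  # ~3.1 years
```

A doubling time of about three years is slower than the classic two-year Moore's Law cadence, which is consistent with the essay's framing that future gains will come from packaging and 3D integration as much as from transistor shrinks.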
The essay starts off with a review of the progress of semiconductor manufacturing and artificial intelligence so far:
- The IBM Deep Blue supercomputer that defeated world chess champion Garry Kasparov in 1997 used 0.6- and 0.35-micron node technology.
- The AlexNet neural network that won the ImageNet Large Scale Visual Recognition Challenge in 2012, launching the era of machine learning, used 40