The rapid, unprecedented growth of artificial intelligence (AI) technologies has driven a sharp increase in energy consumption, particularly in the data centers that power intensive AI workloads. Modern GPUs such as the Nvidia H100/H200 draw up to 700 W, and next-generation accelerators approach 1200 W, far surpassing the power requirements of CPUs and GPUs from just 15 years ago. This escalating demand poses major challenges for power delivery and thermal management in AI hardware.
To address these challenges and improve overall energy efficiency, several new power supply topologies are being developed, alongside broader industry shifts:
- The transition from 12V to 48V power distribution within data centers and AI accelerators.
- The widespread adoption of DDR5 memory, which brings its own specific power delivery requirements.
- The emergence of new, highly demanding processor architectures designed for AI computation.
- The increasing use of supercapacitor-based modules to efficiently handle dynamic load spikes inherent in AI workloads.
Passive components, especially capacitors, play a critical role in enabling these advancements by providing stable power delivery, filtering, and energy storage. As system architectures continue to evolve, these components must adapt accordingly. This presentation will explore:
- The latest technological progress in the design and manufacturing of passive components tailored for high-performance AI hardware.
- The remaining challenges in the selection, characterization, and development of these crucial components to meet future AI demands.
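The efficiency motivation behind the 12V-to-48V transition mentioned above can be illustrated with a back-of-the-envelope conduction-loss calculation. The load power and path resistance below are illustrative assumptions, not measured values:

```python
# Sketch: why 48V distribution cuts conduction (I^2*R) loss in the power path.
# All numbers are illustrative assumptions for a rough comparison.

def conduction_loss(power_w: float, bus_voltage_v: float,
                    bus_resistance_ohm: float) -> float:
    """I^2*R loss in the distribution path for a given delivered power."""
    current_a = power_w / bus_voltage_v
    return current_a ** 2 * bus_resistance_ohm

P = 1000.0   # assumed 1 kW load (roughly one high-power accelerator)
R = 0.005    # assumed 5 mOhm distribution-path resistance

loss_12v = conduction_loss(P, 12.0, R)   # ~34.7 W lost in the path
loss_48v = conduction_loss(P, 48.0, R)   # ~2.2 W lost in the path
print(loss_12v / loss_48v)               # 16x: quadrupling voltage cuts I^2*R loss by 4^2
```

For the same delivered power, raising the bus voltage 4x cuts the current 4x and the resistive loss 16x, which is the core argument for 48V distribution in dense AI racks.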
Presentation: Passive Components Challenges in AI Systems