Hardware
AMD unveils MI400 inference chip to challenge Nvidia H200
A new data-center GPU with 288GB of HBM3e and the ROCm 7 software stack targets LLM inference workloads at hyperscale.