The largest GPU cluster at the moment is xAI's 100K H100s, which is ~$2.5B worth of GPUs. So something 10x bigger (1M GPUs) is ~$25B, plus roughly $10B for a 1 GW nuclear reactor.
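
Rough back-of-envelope sketch of those numbers (all figures are the estimates above, not official ones):

    # Illustrative arithmetic only, using the assumed figures from this comment
    price_per_h100 = 2.5e9 / 100_000      # ~$25K per GPU, implied by $2.5B for 100K H100s
    gpus = 1_000_000                       # 10x today's largest cluster
    gpu_cost = gpus * price_per_h100       # ~$25B
    power_cost = 10e9                      # assumed ~$10B for a 1 GW reactor
    total = gpu_cost + power_cost
    print(f"GPUs: ${gpu_cost/1e9:.0f}B, power: ${power_cost/1e9:.0f}B, total: ${total/1e9:.0f}B")
    # -> GPUs: $25B, power: $10B, total: $35B -- still well short of $100-500B
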

This sort of $100-500B budget doesn't sound like training-cluster money; it sounds more like anticipating massive industry uptake and multiple datacenters running inference (with all of corporate America's data sitting in the cloud).

Shouldn't there be a fear of obsolescence?
Don't they say in the article that it's also for scaling up power and datacenters? That's the big cost here.