Thanks for a really interesting read. I wonder if it’s worth thinking more about the FLOP/$ as a follow-up. If a performance limit is reached, presumably the next frontier would be bringing down the price of compute. What are the current bottlenecks on reducing costs?
We originally wanted to forecast FLOP/s/$ rather than just FLOP/s, but we found it hard to make reliable estimates about price developments. We might look into this in the future.
Thanks. Another naive question: how do power and cooling requirements scale with transistor and GPU sizes? Could these be barriers to how large supercomputers can be built in practice?
They definitely could be, but they don’t have to be. We looked a bit into cooling and heat dissipation and did not find any clear consensus on the issue.