Carlsmith’s report is pretty solid overall. The 100-FLOPs-per-spike high end, though, is poorly justified, resting mostly on one outlier expert, and is ultimately padding for various uncertainties; this doesn’t matter much in the end, since his final posterior mean of ~1e15 FLOP/s is still within A100 peak perf:
I’ll use 100 FLOPs per spike through synapse as a higher-end FLOP/s budget for synaptic transmission. This would at least cover Sarpeshkar’s 40 FLOP estimate, and provide some cushion for other things I might be missing
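As a rough sanity check on how much that high-end assumption moves the estimate, here is the back-of-envelope arithmetic: total FLOP/s = synapse count × average spike rate × FLOPs per spike. The synapse count and firing rate below are order-of-magnitude placeholders in the range Carlsmith considers, not figures quoted from the report.

```python
# Back-of-envelope: total FLOP/s = synapses * average spike rate * FLOPs per spike.
# Synapse count and firing rate are order-of-magnitude placeholders, not quoted figures.
synapses = 1e14        # ~1e14 synapses
spike_rate_hz = 1.0    # average firing rate; published estimates span roughly 0.1-2 Hz

for flops_per_spike in (1, 40, 100):  # low end, Sarpeshkar's 40, the 100 high end
    total = synapses * spike_rate_hz * flops_per_spike
    print(f"{flops_per_spike:>3} FLOPs/spike -> {total:.0e} FLOP/s")

# For scale: A100 peak is ~3e14 FLOP/s dense FP16, up to ~1.2e15 low-precision
# ops/s with structured sparsity, so a ~1e15/s budget fits within one card.
```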
GPUs dominate CPUs in basically everything: memory bandwidth (an OOM greater), general operations on arbitrary numbers pulled from RAM (1 to 2 OOM greater), and matrix multiplication at various bit depths (many OOM greater). CPU-based supercomputers are completely irrelevant for AGI considerations.
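For concreteness, here are the kind of spec-sheet ratios behind those claims, using approximate public numbers for an A100 and a high-end server CPU socket; treat them as illustrative placeholders rather than measurements.

```python
import math

# Approximate public spec numbers; placeholders for scale only.
a100 = {"mem_bw_GBps": 2000, "fp16_tensor_TFLOPs": 312}
cpu  = {"mem_bw_GBps": 200,  "fp32_simd_TFLOPs": 3}

bw_ratio = a100["mem_bw_GBps"] / cpu["mem_bw_GBps"]
mm_ratio = a100["fp16_tensor_TFLOPs"] / cpu["fp32_simd_TFLOPs"]

print(f"memory bandwidth:  ~{bw_ratio:.0f}x ({math.log10(bw_ratio):.1f} OOM)")
print(f"matmul throughput: ~{mm_ratio:.0f}x ({math.log10(mm_ratio):.1f} OOM)")
# Dropping to INT8/INT4 tensor ops widens the matmul gap further.
```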
There are many GPU competitors, but they generally have similar perf characteristics; the main exceptions push much higher on-chip scratch SRAM and interconnect bandwidth.
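For a sense of what "much higher on-chip scratch SRAM" means in practice, approximate public figures for a few representative chips (illustrative only; check current datasheets for exact numbers):

```python
# Approximate on-chip SRAM per chip, in MB; public spec figures, for scale only.
onchip_sram_MB = {
    "NVIDIA A100 (L2 cache)":       40,
    "Graphcore GC200 IPU":          900,
    "Cerebras WSE-2 (whole wafer)": 40_000,
}

for chip, mb in onchip_sram_MB.items():
    print(f"{chip:<32} ~{mb:,} MB")
```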