> Likewise, the post argues that existing fabs are pumping out the equivalent of 5 million brains per year, which to me seems like plenty for AI takeover.
Err, where?
In brain efficiency you wrote “Nvidia—the single company producing most of the relevant flops today—produced roughly 5e21 flops of GPU compute in 2021, or the equivalent of about 5 million brains, perhaps surpassing the compute of the 3.6 million humans born in the US. With 200% growth in net flops output per year from all sources it will take about a decade for net GPU compute to exceed net world brain compute.”
…Whoops. I see. In this paragraph you were talking about FLOP/s, whereas you think the main constraint is memory capacity, which cuts it down by [I think you said] 3 OOM? But I think 5000 brains is enough for takeover too. Again, Hitler & Stalin had one each.
I will strike through my mistake above, sorry about that.
Oh I see. Memory capacity does limit the size of a model you can fit on a reasonable number of GPUs, but flops and bandwidth constrain the speed. In brain efficiency I was just looking at total net compute, counting all GPUs; more recently I was counting only flagship GPUs (as the small consumer GPUs aren't used much for AI due to low RAM).
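The numbers being juggled above can be sanity-checked with a few lines of arithmetic. A minimal sketch, assuming ~1e15 FLOP/s per human brain (implied by the "5e21 flops ≈ 5 million brains" figure in the quote) and ~8e9 humans alive; the 3 OOM memory penalty comes from the thread, while the two readings of "200% growth" (growing *to* 200%, i.e. ×2, vs growing *by* 200%, i.e. ×3) are my own guess at the intended meaning:

```python
import math

# Back-of-the-envelope check of the thread's numbers. Assumptions (not stated
# in the original posts): ~1e15 FLOP/s per human brain, implied by
# "5e21 flops ~ 5 million brains", and ~8e9 humans alive.
gpu_flops_2021 = 5e21                    # net Nvidia GPU FLOP/s produced in 2021
brain_flops = 1e15                       # assumed per-brain compute
world_brain_flops = 8e9 * brain_flops    # ~8e24 FLOP/s across all humans

brains_by_flops = gpu_flops_2021 / brain_flops
print(f"brain-equivalents by FLOP/s: {brains_by_flops:.0e}")   # 5e+06

# If memory capacity is the binding constraint and cuts this by ~3 OOM:
brains_by_memory = brains_by_flops / 10**3
print(f"brain-equivalents by memory: {brains_by_memory:.0e}")  # 5e+03

# "200% growth per year" could mean x2 or x3 per year; either reading puts
# net GPU compute past net world brain compute within roughly a decade.
for factor in (2, 3):
    years = math.log(world_brain_flops / gpu_flops_2021, factor)
    print(f"x{factor}/year: ~{years:.0f} years to exceed world brain compute")
```

Under the ×2 reading the crossover takes ~11 years, which matches the "about a decade" in the quoted passage; the ×3 reading gives ~7.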