I think this is an interesting and unique variable—but it seems too predictable to me. In particular, I’d be surprised if custom hardware gives more than a 100x speedup to whatever the relevant transformative AI turns out to be, and in fact I’d be willing to bet the speedup would be less than 10x, compared to the hardware used by other major AI companies. (Obviously it’ll be 1000x faster than, say, the CPUs on consumer laptops). Do you disagree? I’d be interested to hear your reasons!
I don’t really know. My vague impression is that weird hardware could plausibly make a many-orders-of-magnitude difference in energy consumption, but probably a less overwhelming difference in other respects. Unless there’s an overwhelming quantum-computing speedup, but I consider that quite unlikely, like <5%. Again, this is based on very little thought or research.
Maybe I’d be less surprised by a 100x speedup going from GPU/TPU to a custom ASIC than by a 100x speedup going from a custom ASIC to photonic / neuromorphic / quantum / whatever. Just on the theory that GPUs are highly parallel, but orders of magnitude less parallel than the brain is, and a custom ASIC could maybe capture a lot of that difference. Maybe, I dunno, I could be wrong. A custom ASIC would not be much of a technological barrier the way weirder processors would be, although it could still be good for a year or two, I guess, especially if you have cooperation from all the state-of-the-art fabs in the world...
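To put a rough number on "orders of magnitude less parallel than the brain," here's a back-of-envelope sketch. The figures are ballpark assumptions on my part (not from the discussion above): on the order of 10^4 parallel compute lanes on a big GPU, versus roughly 86 billion neurons firing concurrently in the brain.

```python
import math

# Ballpark assumptions, not precise figures:
gpu_lanes = 1e4         # order-of-magnitude CUDA-core count on a large GPU
brain_neurons = 8.6e10  # ~86 billion neurons, all operating concurrently

# How many orders of magnitude of raw parallelism separate the two?
gap_orders = math.log10(brain_neurons / gpu_lanes)
print(f"parallelism gap: ~10^{gap_orders:.0f}")  # → parallelism gap: ~10^7
```

So on these crude numbers there are roughly seven orders of magnitude of parallelism between a GPU and the brain, which is the headroom the "custom ASIC captures a lot of that difference" theory is pointing at. (Of course, neurons fire far slower than transistors switch, so raw parallelism alone doesn't determine throughput.)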