How dependent is the AGI on idiosyncratic hardware? While any algorithm can run on any hardware, in practice every algorithm will run faster and more energy-efficiently on hardware designed specifically for that algorithm. But there’s a continuum from “runs perfectly fine on widely-available hardware, with maybe 10% speedup on a custom ASIC” to “runs a trillion times faster on a very specific type of room-sized quantum computer that only one company on earth has figured out how to make”.
If your AGI algorithm requires a weird new chip / processor technology to run at reasonable cost, it becomes less far-fetched (although still pretty far-fetched, I think) to hope that governments or other groups could control who runs the algorithm, at least for the couple of years until that technology is reinvented, stolen, or reverse-engineered, and even when everyone knows that the algorithm exists and how it works.
I think this is an interesting and unique variable, but its value seems pretty predictable to me. In particular, I’d be surprised if custom hardware gives more than a 100x speedup on whatever the relevant transformative AI turns out to be, relative to the hardware used by other major AI companies, and in fact I’d be willing to bet the speedup would be less than 10x. (Obviously it’ll be 1000x faster than, say, the CPUs on consumer laptops.) Do you disagree? I’d be interested to hear your reasons!
I don’t really know. My vague impression is that weird hardware could plausibly make a many-orders-of-magnitude difference in energy consumption, but probably a less overwhelming difference in other respects. The exception would be an overwhelming quantum-computing speedup, but I consider that quite unlikely, say <5%. Again, this is based on very little thought or research.
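For what it’s worth, here’s a toy back-of-envelope on that energy point. Every number is an illustrative assumption on my part (brain at ~20 W with ~1e14 synapses firing at ~100 Hz; a ~400 W GPU sustaining ~1e14 useful FLOP/s), and equating a FLOP with a synaptic event is itself dubious, but it at least shows why a few orders of magnitude of energy headroom is plausible:

```python
# Toy back-of-envelope for the "orders of magnitude in energy" claim.
# Every number is an illustrative assumption, not a measurement, and
# treating one FLOP as comparable to one synaptic event is itself a
# contested assumption.

brain_power_w = 20                 # human brain: roughly 20 W
brain_ops_per_s = 1e14 * 1e2       # ~1e14 synapses at ~100 Hz -> ~1e16 events/s
brain_j_per_op = brain_power_w / brain_ops_per_s   # ~2e-15 J per event

gpu_power_w = 400                  # datacenter GPU: a few hundred watts
gpu_ops_per_s = 1e14               # ~100 TFLOP/s of useful throughput (rough)
gpu_j_per_op = gpu_power_w / gpu_ops_per_s         # ~4e-12 J per FLOP

print(f"brain: {brain_j_per_op:.0e} J/op")
print(f"GPU:   {gpu_j_per_op:.0e} J/op")
print(f"gap:   {gpu_j_per_op / brain_j_per_op:.0f}x")  # ~2000x, i.e. ~3 orders of magnitude
```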
Maybe I’d be less surprised by a 100x speedup from GPU/TPU to custom ASIC than by a 100x speedup from custom ASIC to photonic / neuromorphic / quantum / whatever, just on the theory that GPUs are highly parallel, but orders of magnitude less parallel than the brain, and a custom ASIC could maybe capture a lot of that difference. Maybe, I dunno, I could be wrong. A custom ASIC would not be much of a technological barrier the way weirder processors would be, although it could still buy a year or two, I guess, especially if you have cooperation from all the state-of-the-art fabs in the world...
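To put toy numbers on that parallelism intuition (again, every figure below is a rough assumption I’m supplying, not something from the discussion): the brain has far more parallel units, GPUs have far faster clocks, and the residual gap is a few orders of magnitude, which is roughly the headroom a custom ASIC or weirder processor would be chasing:

```python
# Toy back-of-envelope on the parallelism gap. All figures are rough
# order-of-magnitude assumptions, for illustration only.

brain_parallel_units = 1e14   # ~1e14 synapses, all "running" at once
brain_rate_hz = 1e2           # neurons/synapses update at ~100 Hz

gpu_parallel_units = 1e4      # a modern GPU: on the order of 1e4 ALUs
gpu_rate_hz = 1e9             # clocked around 1 GHz

parallelism_gap = brain_parallel_units / gpu_parallel_units  # ~1e10
clock_advantage = gpu_rate_hz / brain_rate_hz                # ~1e7

# Net raw-throughput edge for the brain, if (big if) one synaptic
# event were comparable to one ALU op:
print(f"parallelism gap: {parallelism_gap:.0e}x")
print(f"clock advantage: {clock_advantage:.0e}x")
print(f"net gap:         {parallelism_gap / clock_advantage:.0e}x")  # ~1e3
```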