The straightforward argument goes like this:
1. a human-level AGI would be running on hardware that makes human constraints on memory and speed mostly go away, by ~10 orders of magnitude
2. if you could store 10 orders of magnitude more information and read 10 orders of magnitude faster, and if you were able to copy your own code somewhere else, and the kind of AI research and code generation tools available online were good enough to have created you, wouldn’t you be able to FOOM?
No, because of the generalized version of Amdahl's law, which I explored in "Fast Minds and Slow Computers".
The more you accelerate something, the slower and more limiting all its other hidden dependencies become.
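The standard form of Amdahl's law makes the point concrete (a minimal sketch; the function name and the 99%/10-orders-of-magnitude figures are illustrative, not from the original comment):

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of the total work
    is accelerated by factor s; the remaining (1 - p) is not.
    This is the classic Amdahl's law formula: 1 / ((1 - p) + p / s)."""
    return 1.0 / ((1.0 - p) + p / s)

# Even a 10-orders-of-magnitude speedup on 99% of the work
# leaves the overall gain capped near 100x by the untouched 1%.
print(amdahl_speedup(0.99, 1e10))  # ~100
```

However large `s` grows, the speedup is bounded by `1 / (1 - p)`, which is the sense in which the unaccelerated dependencies become the limiting factor.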
So by the time we get to AGI, regular ML research will have rapidly diminishing returns (as will CUDA-level software and hardware optimization), general hardware improvement will be facing the end of Moore's law, and so on.
I don’t see why that last sentence follows from the previous sentences. In fact I don’t think it does. What if we get to AGI next year? Then returns won’t have diminished as much & there’ll be lots of overhang to exploit.
Sure, if we got to AGI next year. But for that to actually occur, you'd have to exploit most of the remaining optimization slack in both high-level ML and low-level algorithms. Beyond that, Moore's law has already mostly ended (or nearly so, depending on who you ask), and most of the easy, obvious hardware architecture optimizations are now behind us.
Well I would assume a “human-level AI” is an AI which performs as well as a human when it has the extra memory and running speed? I think I could FOOM eventually under those conditions but it would take a lot of thought. Being able to read the AI research that generated me would be nice but I’d ultimately need to somehow make sense of the inscrutable matrices that contain my utility function.