Strong upvoted this post. I think the intuition is good, and that architecture shifts invalidating anti-foom arguments derived from the nature of the DL paradigm are counter-evidence against those arguments, though they do not render them moot (i.e. I can still see soft takeoff as described by Jacob Cannell as probable, and I assume he would be unlikely to update given the contents of this post).
I might try to present a more formal version of this argument later, but I still question the probability of a glass-box transition of the type “AGI RSIs toward non-DL architecture that results in it maximizing some utility function in a pre-DL manner” being more dangerous than simply “AGI RSIs”. If behaving like an expected utility maximizer were optimal, wouldn’t the AGI have done so without the architecture transition? If not, then you need to make the case for why glass-box architectures are better ways of building cognitive systems. I think that this argument is at odds with the universal learning hypothesis and seems more in line with evolved modularity, which maps notoriously poorly onto post-DL thinking. The ULH seems to suggest that modular approaches might actually be inferior, efficiency-wise, to universal learning approaches, which undercuts the primary motive a general intelligence might have to RSI in the direction of a glass-box architecture.
You are basically discussing the first two of these assumptions I made (under “Algorithmic foom (k>1) is possible”), right?
1. The intelligence ceiling is much higher than what we can achieve with just DL.
2. The ceiling of hard-coded intelligence that runs on near-future hardware isn’t particularly limited by the hardware itself: algorithms interpreted from matrix multiplications are efficient enough on available hardware. (This is maybe my shakiest hypothesis, since matrix multiplication on GPUs is actually pretty damn well optimized.)
3. Algorithms are easier to reason about than NN weights are to stare at.
But maybe the third assumption is the non-obvious one?
For the sake of discourse:
> I still question [...] “AGI RSIs toward non-DL architecture that results in it maximizing some utility function in a pre-DL manner” being more dangerous than simply “AGI RSIs”
My initial motive for writing “Foom by change of paradigm” was to show another, previously unstated way RSI could happen. Just to show that it could happen, because if your frame of mind is “only compute can create intelligence”, foom is indeed unfeasible… but if the paradigm jump is possible, then you might just be blind to this path and fuck up royally, as the French say.
One key thing I also find interesting is that this paradigm shift circumvents the “AIs not creating other AIs because of alignment difficulties” argument.
> I think that this argument is at odds with the universal learning hypothesis...
I am afraid I am not familiar with this hypothesis, and Google (or ChatGPT) isn’t helpful. What do you mean by this, and by modularity?
P.S. I have now realized that the opposite of a black-box is indeed a glass-box and not a white-box lol. You can’t see inside a box of any colour unless it is clear, like glass!