Okay, I’m understanding your proposed future better. I still think that anything recursively self-improving (RSI) will be the end of us, if it’s not aligned to be long-term stable. And that even non-RSI self-replicating agents are a big problem for this scenario (since they can cooperate nearly perfectly). But their need for GPU space is an important limitation.
I think this is a possible way to get to the real intelligence explosion of RSI, and it's the likely scenario we're facing if language model cognitive architectures take off like I think they will. But I don't think it helps with the need to get alignment right for the first real superintelligence. That will be capable of stealing, buying, or building its own compute resources.