Got it. I think I personally expect a period of at least 2-3 years when we have human-level AI (~'as good as or better than most humans at most tasks') but it's not capable of full RSI (recursive self-improvement).
It also seems plausible to me that strong RSI in the sense I use it above ('able to, e.g., directly edit their own weights in ways that significantly improve their intelligence or other capabilities') may take a long time to develop, or may even require already-superhuman levels of intelligence. As a loose demonstration of that possibility: the best team of neurosurgeons in the world couldn't currently operate on someone's brain to give them greater intelligence, even if they had tools that let them precisely edit individual neurons and connections. I'm certainly not confident that's too hard for human-level AI, but it seems plausible.
The problem is that if that scenario isn't an immediate risk, people may become complacent about allowing lots of parahuman AGI before it becomes superhuman and fully RSI-capable.
That seems highly plausible to me too; my mainline guess is that by default, given human-level AI, it rapidly proliferates as replacement employees and for other purposes until either there’s a sufficiently large catastrophe, or it improves to superhuman.