BCI enhancement and WBE are still mostly outside the Overton window, yet we saw how fast that changed with AI safety in the last few months. Is there some way we can anticipate a similar shift for these technologies, or speed it up?
I think the graphs are helpful and mostly correct with BCI/WBE. It's clear to me that we have to get WBE right soonish even if AI alignment goes as well as we could possibly hope. The bandwidth required for BCI to be effective is very much unknown at the moment, especially as regards linking people together.
Sorry, but aren't we in a fast-takeoff world by the point of WBE? What does the disjunctive world with WBE but no recursive self-improvement look like?
I guess a world with a high chance of happening is one where we develop AGI on HW not much different from what we currently have, i.e. AGI in <5 years. The Von Neumann bottleneck is a fundamental limit, so we may get many fast IQ-160 AGIs, or a slower-than-human IQ-200 one that thinks for 6 months and concludes with high confidence that we need to build better hardware for it to improve further; there is large room for improvement in a new chip design it has come up with.
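To make the Von Neumann bottleneck claim concrete, here is a roofline-style back-of-envelope sketch. All the numbers are illustrative assumptions, not specs of any real chip: the point is that when data movement between memory and compute dominates, effective throughput is capped by memory bandwidth no matter how much raw compute the chip has.

```python
# Roofline back-of-envelope for the Von Neumann bottleneck.
# Hypothetical accelerator figures (assumptions, not real hardware):
peak_flops = 300e12   # 300 TFLOP/s peak compute
mem_bw = 2e12         # 2 TB/s memory bandwidth

# Arithmetic intensity (FLOPs per byte moved) needed before compute,
# rather than memory traffic, becomes the limit:
break_even = peak_flops / mem_bw  # FLOPs/byte

# A batch-1 matrix-vector product (e.g. single-stream inference) does
# roughly 2 FLOPs per parameter byte it reads, far below break-even,
# so the achievable rate is set by bandwidth, not by peak compute:
intensity = 2.0                       # FLOPs per byte (assumed workload)
achievable = min(peak_flops, mem_bw * intensity)

print(f"break-even intensity: {break_even:.0f} FLOPs/byte")
print(f"achievable: {achievable / 1e12:.1f} TFLOP/s "
      f"of {peak_flops / 1e12:.0f} TFLOP/s peak")
```

Under these assumed numbers the workload reaches only a small fraction of peak compute, which is the sense in which "more of the same HW" stops helping and a new memory-centric chip design becomes attractive.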
Then we have a choice: instead of building such HW to run an AGI, we do WBE, running it inefficiently on the VNB-limited HW, with the understanding that once more advanced HW exists we will run WBE on it rather than AGI.
But that still requires us to have developed human brain-scanning technology within 5 years, right? That does not seem remotely plausible.
No, it requires us to get AGI limited by the VNB, then stop making more advanced HW for a while. During that HW pause we do the brain scanning.