I guess a world with a high chance of happening is one where we develop AGI with HW not that much different from what we currently have, i.e. AGI in <5 years. The Von Neumann Bottleneck is a fundamental limit, so we may have many fast IQ-160 AGIs, or a slower-than-human IQ-200 one that thinks for 6 months and concludes with high confidence that we need to build better hardware for it to improve further. There is large room for improvement with a new chip design it has come up with.
Then we have a choice: instead of building that HW to run an AGI, we do WBE, running it inefficiently on the VNB-limited HW, with the understanding that once more advanced HW exists we will run WBE on it rather than AGI.
Sorry, but aren't we in a fast takeoff world at the point of WBE? What's the disjunctive world where there's no recursive self-improvement but we do get WBE?
But that still requires us to have developed human brain-scanning technology within 5 years, right? That does not seem remotely plausible.
No, it requires us to get AGI limited by the VNB and then stop making more advanced HW for a while. During that HW pause we do the brain scanning.