The other gestured-at example I’ve heard is “upload aligned people who think hard for 1000 subjective years and hopefully figure something out.” I’ve heard someone from MIRI argue that this one is also unworkable, but I wasn’t sure of the exact reasons.
Standard counterargument to that one is “by the time we can do that we’ll already have beyond-human AI capabilities (since running humans is a lower bound on what AI can do), and therefore foom”.
You could have another limited AI design a nanofactory to make ultra-fast computers to run the emulations. I think a more difficult problem is getting a limited AI to do neuroscience well. Actually I think this whole scenario is kind of silly, but given the implausible premise of a single AI lab having a massive tech lead over all others, neuroscience may be the bigger barrier.