At present it looks to me like a positive singularity is substantially more likely to occur starting with whole-brain emulation than with Friendly AI.
That’s an interesting claim, and you should post your analysis of it (e.g. the evidence and reasoning that you use to form the estimate that a positive singularity is “substantially more likely” given WBE).
You may want to read this paper I presented at FHI. Note that there’s a big difference between the probability of risk conditional on WBE coming first (or AI coming first) and the marginal impact of effort. In particular, some of our uncertainty is about logical facts about the space of algorithms and the technology landscape, and some of it is about the extent and effectiveness of activism/intervention.
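To make the distinction concrete, here is a minimal two-branch sketch (all numbers are invented for illustration, not taken from the paper): even if risk conditional on WBE coming first is lower, a unit of effort spent accelerating WBE can still buy less risk reduction than the same effort spent reducing risk given AI comes first.

```python
# Toy model of the conditional-probability vs. marginal-impact distinction.
# All probabilities below are hypothetical placeholders, not estimates.

p_wbe_first = 0.4   # assumed P(WBE arrives before AI)
risk_if_wbe = 0.3   # assumed P(bad outcome | WBE first)
risk_if_ai  = 0.6   # assumed P(bad outcome | AI first)

def total_risk(p_wbe: float, r_wbe: float, r_ai: float) -> float:
    """Unconditional risk in a simple two-branch model."""
    return p_wbe * r_wbe + (1 - p_wbe) * r_ai

baseline = total_risk(p_wbe_first, risk_if_wbe, risk_if_ai)

# Intervention A: effort shifts the technology race slightly toward WBE.
accel_wbe = total_risk(p_wbe_first + 0.02, risk_if_wbe, risk_if_ai)

# Intervention B: the same effort instead reduces risk given AI comes first.
safer_ai = total_risk(p_wbe_first, risk_if_wbe, risk_if_ai - 0.02)

print(f"baseline risk:           {baseline:.3f}")
print(f"after accelerating WBE:  {accel_wbe:.3f} (gain {baseline - accel_wbe:.3f})")
print(f"after AI-safety effort:  {safer_ai:.3f} (gain {baseline - safer_ai:.3f})")
```

With these made-up numbers the AI-safety intervention cuts total risk twice as much (0.012 vs. 0.006), so the conditional probabilities alone don’t settle where marginal effort should go; that depends on how responsive each branch is to intervention.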
> That’s an interesting claim, and you should post your analysis of it (e.g. the evidence and reasoning that you use to form the estimate that a positive singularity is “substantially more likely” given WBE).
There’s a thread with some relevant points (both for and against) titled “Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity’s Future”. I hadn’t looked at the comments until just now and still have to read them all, but see in particular a comment by Carl Shulman.
After reading all of the comments, I’ll think about whether I have something to add beyond them and get back to you.
> You may want to read this paper I presented at FHI. Note that there’s a big difference between the probability of risk conditional on WBE coming first (or AI coming first) and the marginal impact of effort. In particular, some of our uncertainty is about logical facts about the space of algorithms and the technology landscape, and some of it is about the extent and effectiveness of activism/intervention.
Thanks for the very interesting reference! Is it linked on the SIAI research papers page? I didn’t see it there.
I appreciate this point, which you’ve made to me previously (and which appears in your comment that I linked above!).