Although your conclusions are very depressing, it seems I must accept them. The other commenters’ reluctance to agree puzzles me.
I find the analysis largely convincing as well, and further feel that a 3-in-a-million chance per century of existential disaster is extremely conservative. But I also don’t find the idea of a singleton depressing. Bostrom suggests that a singleton could take the form of a world democratic government or a benevolent superintelligent machine, and Eliezer’s CEV seems able to realize the latter, at least on my initial understanding. It even seems possible that singletons of that sort might dissolve themselves if that’s what was desired (serious handwaving here, I admit), but a singleton has such potential for staying power that it’s probably best to assume it’s “forever”.
Given my views on the varied risks we face, the unique potential of a singleton to solve many of them, and my personal estimate of at best a 0.5 probability of surviving this century, a singleton seems worth looking into. It’s a huge danger in itself, but I think we ought to investigate the best ways to build a “safe” singleton at the same time as looking for ways to avoid risk without one, rather than waiting until we are sure we absolutely need one.
I realize this was not the focus of the post, so I apologize if it’s too off-topic. I wanted to draw more attention to the singleton as a potential solution, though I don’t mean to draw attention away from the post’s central issues.