Answers: (1) Not one I would trust with my future, no. (2) I would be very surprised if humans managed to build a FAI without being able in principle to reliably judge the relative value of different scenarios. (3) I would be extremely surprised if the things we currently call human values were not contradictory, even if only because (a) they’re all underspecified, hence the first two questions, and (b) different humans have different values that really do conflict.
In your three scenarios, I’d say all of them are likely far better than we have any right to expect and would count as a positive singularity in my estimation, although there are enough unspecified details that could sway me otherwise.
For me the most glaring fault in them, especially (2) and (3), is that they prescribe a single kind of future that all humans somehow agree on or are persuaded/coerced to agree to. In this sense they make me feel a bit like I felt when I read Friendship is Optimal—not anything I’d deliberately set out to build as I am now, but something I’m capable of valuing and appreciating.
Also, for (1) the phrase “immortality turns out to be impossible” means very different things in sub-scenario (a), where life extension of humans beyond 120 years is impossible, vs. (b), where lifespans can be extended many times over, possibly to millions of subjective years or more, but we can’t escape the heat death of the universe. VHEM seems much more understandable to me in the latter world than the former, if making new people would shorten the lifespan of every existing person by spreading resources thinner.