Even if the chance of ASI apocalypse is only 5%, that is 5% multiplied by all possible human goodness, which is a big deal for our species in expectation.
The problem is that if you really believe (because EY and others are shouting it from the rooftops) that there is a ~100% chance we're all gonna die shortly, you are not going to be motivated to plan for the 50/50 or 10/90 scenarios. Once you acknowledge that you can't really make a confident prediction on this matter, it is illogical to plan only for the minimal and maximal cases (we all die / everything is great). Those outcomes need no planning, so spending energy focusing on them is not optimal.
Sans hard data, as a Bayesian, shouldn't you start with a balanced set of priors over all the possible outcomes, then focus on the ones you may be able to influence?
I’m not sure what you think I believe, but yeah I think we should be looking at scenarios in between the extremes.
I was giving reasons why I maintain some optimism, and maintaining optimism while reading Yudkowsky leaves me in the middle, where actions can be taken.
Violent agreement! I was using the pronoun ‘you’ rhetorically.