I have a preference for minds as close to mine as possible continuing to exist, assuming their lives are worth living. If it’s misaligned enough that the remaining humans don’t have good lives, then yes, it doesn’t matter, but I’d lead with that rather than just the deaths.
And if they do have lives worth living and don’t end up being the last humans, then that leaves us with a lot more positive-human-lived-seconds in the 2B-deaths case.
This view as stated seems very likely to be satisfied by e.g. Everett branches. (See (3) on my above list.)
Sure, but (1) I only put ~80% on MWI/MUH etc., and (2) I’m talking about optimizing for more positive-human-lived-seconds, not just a binary ‘I want some humans to keep living’.
Then why aren’t you mostly dominated by the possibility of >10^50 positive-human-lived-seconds via human control of the light cone?
Maybe some sort of diminishing returns?
I am dominated by it, and okay, I see what you are saying. Whichever scenario results in a higher chance of human control of the light cone is the one I prefer, and these considerations are relevant only where we don’t control it.
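To make the dominance point concrete, here is a minimal sketch; all probabilities and magnitudes are made-up illustrations, not anyone’s stated numbers. With utility linear in positive-human-lived-seconds, even a tiny probability of a ~10^50-second light-cone outcome swamps every other term, whereas a logarithmic (diminishing-returns) utility is not dominated that way.

```python
import math

# Illustrative outcomes: name -> (probability, positive-human-lived-seconds).
# All numbers are hypothetical, chosen only to show the shape of the argument.
outcomes = {
    "human-controlled light cone": (1e-6, 1e50),
    "misaligned AI, 2B deaths, survivors flourish": (0.10, 1e19),
    "misaligned AI, extinction": (0.899999, 0.0),
}

def expected_utility(outcomes, utility):
    return sum(p * utility(seconds) for p, seconds in outcomes.values())

linear = lambda s: s
log_util = lambda s: math.log1p(s)  # diminishing returns in lived-seconds

for name, (p, s) in outcomes.items():
    print(f"{name}: linear term {p * s:.3g}, log term {p * math.log1p(s):.3g}")

# Under linear utility the 1e-6 * 1e50 = 1e44 term dominates everything else;
# under log utility the higher-probability outcomes carry almost all the weight.
print("EV (linear):", expected_utility(outcomes, linear))
print("EV (log):   ", expected_utility(outcomes, log_util))
```

This is just the standard point that whether the light-cone scenario dominates depends on whether returns to additional positive-human-lived-seconds are treated as linear or diminishing.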