My current view is that conditional on ending up with full misaligned AI control:
20% extinction
50% chance that >1 billion humans die or suffer an outcome at least as bad as death.
For me, preventing a (say) 10% chance of extinction is much more important than preventing even a 99% chance of 2B people dying.
I don’t see why this would be true:
1. From a longtermist perspective, we lose control over the light cone either way (we’re conditioning on full misaligned AI control).
2. From a perspective where you just care about currently alive beings on planet Earth, I don’t see why extinction is that much worse.
3. From a perspective in which you just want some being to be alive somewhere, I think that expansive notions of the universe/multiverse virtually guarantee this (but perhaps you dismiss this for some reason).
Also, to be clear, perspectives 2 and 3 don’t seem very reasonable to me as terminal philosophical views (rather than e.g. heuristics), as they privilege time and locations in space in a pretty specific way.
I have a preference for minds close to mine continuing to exist, assuming their lives are worth living. If the AI is misaligned enough that the remaining humans don’t have good lives, then yes, it doesn’t matter, but I’d lead with that rather than with just the deaths.
And if they do have lives worth living and don’t end up being the last humans, then that leaves us with a lot more positive-human-lived-seconds in the 2B-death case.
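A rough back-of-the-envelope version of that comparison, with made-up illustrative numbers (the population and lifespan figures are assumptions for the sketch, not anything claimed in the discussion):

```python
# Rough comparison of positive-human-lived-seconds in the two outcomes above.
# All numbers are made-up illustrative assumptions.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

population = 8e9           # roughly the number of humans currently alive
avg_years_remaining = 40   # assumed average remaining lifespan per person

# Outcome A: extinction -- no further positive-human-lived-seconds.
seconds_extinction = 0.0

# Outcome B: 2 billion people die, the survivors live out lives worth living
# (ignoring future generations for simplicity, which only strengthens the point).
survivors = population - 2e9
seconds_2b_deaths = survivors * avg_years_remaining * SECONDS_PER_YEAR

print(f"extinction: {seconds_extinction:.2e} positive-human-lived-seconds")
print(f"2B deaths:  {seconds_2b_deaths:.2e} positive-human-lived-seconds")  # ~7.6e18
```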
This view as stated seems very likely to be satisfied by e.g. Everett branches. (See (3) on my list above.)
Sure, but (1) I only put 80% or so on MWI/MUH etc., and (2) I’m talking about optimizing for more positive-human-lived-seconds, not just a binary ‘I want some humans to keep living’.
Then why aren’t you mostly dominated by the possibility of >10^50 positive-human-lived-seconds via human control of the light cone?
Maybe some sort of diminishing returns?
I am dominated by it, and okay, I see what you are saying. Whichever scenario results in a higher chance of human control of the light cone is the one I prefer, and these considerations are relevant only where we don’t control it.
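A minimal sketch of the dominance argument being conceded here, again with purely illustrative assumptions (the 10^50 figure from above for a light-cone-scale future, and the Earth-bound tally from the earlier sketch):

```python
# Toy expected-value model of why P(human control of the light cone) dominates.
# All numbers are illustrative assumptions, not estimates from the discussion.

LIGHTCONE_SECONDS = 1e50     # assumed positive-human-lived-seconds if humans control the light cone
EARTHBOUND_SECONDS = 7.6e18  # rough tally if control is lost but humanity survives on Earth

def expected_seconds(p_human_control, p_extinction_given_loss):
    """Expected positive-human-lived-seconds under toy assumptions:
    full light-cone value if humans retain control, the Earth-bound value
    if control is lost but humanity survives, and zero if extinct."""
    p_loss = 1 - p_human_control
    return (p_human_control * LIGHTCONE_SECONDS
            + p_loss * (1 - p_extinction_given_loss) * EARTHBOUND_SECONDS)

# Halving extinction-given-loss barely moves the expected total...
base = expected_seconds(p_human_control=0.50, p_extinction_given_loss=0.20)
less_extinction = expected_seconds(p_human_control=0.50, p_extinction_given_loss=0.10)
# ...while a one-point shift in P(human control) moves it by ~1e48.
more_control = expected_seconds(p_human_control=0.51, p_extinction_given_loss=0.20)

print(f"base:            {base:.3e}")
print(f"less extinction: {less_extinction:.3e}")
print(f"more control:    {more_control:.3e}")
```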