Everything feels so low-stakes right now compared to future possibilities, and I am envious of people who don’t realize that. I need to spend less time thinking about it, but I still can’t wrap my head around people rolling a die that might have S-risks on it. It just seems like a -inf EV decision. I do not understand the thought process of people who see -inf and just go “yeah, I’ll gamble that.” It’s so fucking stupid.
They are not necessarily “seeing” -inf the way you or I are. They’re just kinda not thinking about it, or they think that 0 (death) is the lowest that utility can realistically go.
What looks like an S-risk to you or me may not count as -inf for some people.
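To make the two framings concrete, here is a toy expected-value calculation (Python, with completely made-up probabilities and utilities; a minimal sketch of the shape of the disagreement, not anyone’s actual estimates):

# Toy comparison of the two framings above. All numbers are invented.
P_GOOD, P_EXTINCTION, P_SRISK = 0.50, 0.49, 0.01

def expected_value(u_good, u_extinction, u_srisk):
    return P_GOOD * u_good + P_EXTINCTION * u_extinction + P_SRISK * u_srisk

# Framing 1: death (utility 0) is the floor and nothing can be worse.
# The gamble comes out comfortably positive.
print(expected_value(u_good=100, u_extinction=0, u_srisk=0))              # 50.0

# Framing 2: the S-risk outcome is unboundedly bad.
# Any nonzero probability of it drags the whole EV to -inf.
print(expected_value(u_good=100, u_extinction=0, u_srisk=float("-inf")))  # -inf

Under the second framing no finite upside can compensate, which is why the same gamble reads as obviously insane to one person and as a decent bet to another.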
I think humanity’s actions right now are most comparable to those of a drug addict. We as a species don’t have the necessary equivalent of executive function and self-control to abstain from racing towards AGI. And if we’re gonna do it anyway, those who shout about how we’re all gonna die just ruin everyone’s mood.
Or, for that matter, to abstain from burning infinite fossil fuels. We happen to not live on a planet with enough carbon to trigger a Venus-like cascade, but if that weren’t the case I don’t know if we could stop ourselves from doing that either.
The thing is, any kind of large-scale coordination to that effect seems more and more like it would require a degree of removal of agency from individuals that I’d call dystopian. You can’t be human and free without the freedom to make mistakes. But the higher the stakes, and the greater the technological power we wield, the less tolerant our situation becomes of mistakes. So the alternative would be that we willingly choose to slow down, or abort entirely, certain branches of technological progress: choosing shorter and more miserable lives over the risk of having to curtail our freedom. But of course, for the most part (not unreasonably!), we don’t really want to take that trade-off, and instead ask “why not both?”.
What looks like an S-risk to you or me may not count as -inf for some people
True, but that’s only the case for relatively “mild” S-risks, like “a dystopia in which an AI rules the world, sees all, and electrocutes anyone who commits a crime by the standards of the year it was created in, forever.” It’s a bad outcome, and you could classify it as an S-risk, but it’s still among the most aligned AIs imaginable and, relatively speaking, better than extinction.
I simply don’t think many people consider what an S-risk literally worse than extinction would look like. To be fair, I also think these aren’t very likely outcomes, as they would require an AI very closely aligned to human values, just aligned for evil.
No, I mean, I think some people actually hold that any existence is better than non-existence, so death is -inf for them and existence, even in any kind of hellscape, is above-zero utility.
I just think any such people lack imagination. I am 100% confident there exists an amount of suffering that would have them wish for death instead; they simply can’t conceive of it.
One way to make this work is to just not consider your driven-to-madness future self an authority on the matter of what’s good or not. You can expect to start wishing for death, and still take actions that would lead you to this state, because present!you thinks that existing in a state of wishing for death is better than not existing at all.
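If you want that decision rule spelled out, here is a toy version (Python, with invented utilities; the only point is whose utility function gets consulted, not the numbers):

# present!you and driven-to-madness future!you rank the same two outcomes differently.
# All utilities are invented for illustration.
present_utility = {"hellscape": 1, "nonexistence": 0}     # any existence beats none
future_utility  = {"hellscape": -100, "nonexistence": 0}  # future self wishes for death

def choose(utility):
    # Pick whichever outcome the given utility function ranks higher.
    return max(utility, key=utility.get)

print(choose(present_utility))  # hellscape
print(choose(future_utility))   # nonexistence

The position described above is that present!you’s function is the one that counts when acting now, even while fully predicting that future!you’s function will flip.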
I think that’s perfectly coherent.
I mean, I guess it’s technically coherent, but it also sounds kind of insane. That way Dormammu lies.
Why would one even care about their future self if they’re so unconcerned about that self’s preferences?