I disagree strongly with the implied framing that all that matters is risk minimization. Functional humans are not pure risk-avoiders, nor is our civilization. Small chances of heaven can counterbalance small chances of hell. (I also disagree with the implied model from your first link, where cumulative risk is the product of small independent risks per year, but that’s a more minor point by comparison.)
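To make the parenthetical concrete, here’s a minimal sketch of the two models of cumulative risk; all the probabilities are hypothetical, chosen only for illustration:

```python
# Two toy models of cumulative existential risk over long horizons.
# All probabilities here are hypothetical, for illustration only.

def p_independent(p_yearly: float, years: int) -> float:
    """Per-year risks are independent, so survival compounds geometrically."""
    return 1 - (1 - p_yearly) ** years

def p_correlated(p_transition: float, p_background: float, years: int) -> float:
    """One correlated event (e.g., a single risky transition) dominates;
    only a small residual background risk compounds year by year."""
    return 1 - (1 - p_transition) * (1 - p_background) ** years

for years in (100, 1000):
    print(f"{years:>4} years: "
          f"independent = {p_independent(0.002, years):.3f}, "
          f"correlated = {p_correlated(0.15, 0.0001, years):.3f}")
```

Under the independence assumption, cumulative risk compounds toward certainty as the horizon grows; if the risk is instead dominated by one correlated event, it plateaus once the transition is past. That’s the model disagreement in the parenthetical.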
Do you think there’s a way to reframe my position that you’d agree with, or at least wouldn’t strongly disagree with? (In other words, I’m not sure how much of the disagreement is with the substance of what I’m saying versus how I’m saying it.) Or, to approach this from another angle: how would you state or frame your own position on this topic?
You linked to an article on existential security—“a place of safety—a place where existential risk is low and stays low”—which implies all that matters is risk minimization, rather than utility maximization with some risk discounting. To be fair, my disagreement there isn’t specific to your points.
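For concreteness, here’s a minimal sketch of the difference between the two decision rules; the action names, utilities, and probabilities are all hypothetical:

```python
# Pure risk minimization vs. expected-utility maximization with a
# discount on downside risk. All numbers are hypothetical.

U_HEAVEN, U_BASELINE, U_HELL = 100.0, 1.0, -100.0
RISK_DISCOUNT = 2.0  # weight the downside twice as heavily

# Each action: (P(hell), P(heaven)); the remaining mass is baseline.
actions = {"pause": (0.01, 0.02), "build": (0.05, 0.40)}

def discounted_eu(p_hell: float, p_heaven: float) -> float:
    p_baseline = 1 - p_hell - p_heaven
    return (p_heaven * U_HEAVEN
            + p_baseline * U_BASELINE
            + RISK_DISCOUNT * p_hell * U_HELL)

for name, (p_hell, p_heaven) in actions.items():
    print(f"{name}: P(hell) = {p_hell:.2f}, "
          f"discounted EU = {discounted_eu(p_hell, p_heaven):.2f}")
```

A pure risk minimizer always picks “pause” (lowest P(hell)); the discounted-expected-utility rule can still pick “build” when the chance of heaven is large enough, which is the “small chances of heaven” point above.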
Separately, I’m also skeptical of estimating risk from a long list of obstacles, since the relevance of those obstacles is correlated with, or mostly determined by, a small number of more fundamental issues (takeoff speed, brain tractability, alignment vs. capability tractability, etc.).
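Here’s a minimal sketch of why that correlation matters; all numbers are hypothetical:

```python
# Ten "obstacles" each fail to stop a catastrophe with marginal
# probability 0.5; the catastrophe happens only if all ten fail.
# All numbers are hypothetical.

n = 10
p_fail = 0.5

# Independent model: just multiply the per-obstacle failure probabilities.
p_independent = p_fail ** n

# Latent-factor model with the SAME marginals: one variable (say, takeoff
# speed) drives every obstacle. In a fast takeoff (prob 0.5) each obstacle
# fails with prob 0.9; in a slow one, with prob 0.1. The marginal per-obstacle
# failure probability is still 0.5 * 0.9 + 0.5 * 0.1 = 0.5.
p_latent = 0.5 * 0.9 ** n + 0.5 * 0.1 ** n

print(f"Independent obstacles: P(catastrophe) = {p_independent:.5f}")  # ~0.001
print(f"Latent-factor model:   P(catastrophe) = {p_latent:.5f}")       # ~0.174
```

Same per-obstacle numbers, roughly a 180x difference in the estimate: a long list of correlated obstacles mostly re-measures the same few underlying variables.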
You linked to an article on existential security—“a place of safety—a place where existential risk is low and stays low”—which implies all that matters is risk minimization, rather than utility maximization with some risk discounting.
Existential risk is just the probability that a large portion of the future’s value is lost. “Small chances of heaven can counterbalance small chances of hell” implies that it’s about reducing the risk of hell, when in fact it’s equally concerned with the absence of heaven.
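To put the definition in concrete terms, here’s a minimal sketch; the scenario names, probabilities, and values are purely illustrative:

```python
# Existential risk as "probability that a large portion of the future's
# value is lost": this counts both an actively bad outcome (hell) and a
# permanently curtailed one (heaven never happening). All numbers are
# hypothetical.

scenarios = {
    # name: (probability, fraction of potential future value realized)
    "heaven":     (0.10, 1.00),   # near-best achievable future
    "muddle":     (0.70, 0.40),   # decent future, much potential unrealized
    "stagnation": (0.15, 0.05),   # most value permanently lost
    "hell":       (0.05, -0.50),  # actively bad future
}

# Both "stagnation" and "hell" lose a large portion of the future's value,
# so both count toward existential risk under this definition.
x_risk = sum(p for p, value in scenarios.values() if value <= 0.05)
expected_value = sum(p * value for p, value in scenarios.values())

print(f"Existential risk: {x_risk:.2f}")
print(f"Expected fraction of potential value realized: {expected_value:.3f}")
```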
Ok, that’s an unexpected interpretation, as it’s not how I typically think of ‘risk’; but yes, if that’s the intended interpretation, it resolves my objection.