I find that in the mean & median cases, value(MAGIC) > value(US first, no race) > value(US first, race) > value(PRC first, no race) > value(PRC first, race) ≫ value(extinction)
While I think the core claim “across a wide family of possible futures, racing can be net beneficial” is true, the sheer number of parameters you have chosen arbitrarily or said “eh, let’s assume this is normally distributed” demonstrates the futility of approaching this question numerically.
I’m not sure there’s added value in an overly complex model (vs. simply stating your preference ordering). Feels like false precision.
Presumably hardcore Doomers have:
value(SHUT IT ALL DOWN) > 0.2 > value(MAGIC) > 0 = value(US first, no race) = value(US first, race) = value(PRC first, no race) = value(PRC first, race) = value(extinction)
Whereas e/acc has an ordering more like:
value(US first, race) > value(PRC first, race) > value(US first, no race) > value(PRC first, no race) > value(extinction) > value(MAGIC)
You’ve made two arguments here; one is very gnarly, the other is wrong :-):
“the sheer number of parameters you have chosen arbitrarily or said “eh, let’s assume this is normally distributed” demonstrates the futility of approaching this question numerically.”
“simply stating your preference ordering”
I didn’t just state a preference ordering over futures; I also ass-numbered their probabilities and ass-guessed ways of getting there. To estimate the expected value of an action, one requires two things: a list of probabilities and a list of utilities. You merely propose giving one of those.
(As for the “false precision”: I feel like that debate has run its course; I consider Scott Alexander, 2017 to be the best rejoinder here. The world is likely not structured in a way that makes trying harder to estimate less accurate in expectation (which I’d dub the Taoist assumption); rather, thinking & estimating more should narrow the credences over time. This is the same reason why I’ve defended the bioanchors report against accusations of uselessness for having distributions spanning 14 orders of magnitude.)
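To make the two-ingredient point concrete, here is a minimal sketch in Python. This is not the model from the post: the scenario names mirror the ones above, but every utility and probability below is an illustrative placeholder, not one of the post’s ass-numbers.

```python
# Minimal sketch: an expected value needs both probabilities and utilities.
# All numbers are illustrative placeholders, not estimates from the post.

SCENARIOS = ["MAGIC", "US first, no race", "US first, race",
             "PRC first, no race", "PRC first, race", "extinction"]

# Hypothetical utilities, loosely following the mean/median ordering above.
UTILITY = {
    "MAGIC": 1.0,
    "US first, no race": 0.9,
    "US first, race": 0.8,
    "PRC first, no race": 0.6,
    "PRC first, race": 0.5,
    "extinction": 0.0,
}

# Hypothetical outcome probabilities conditional on each action;
# each row sums to 1.
PROB = {
    "race": {"MAGIC": 0.02, "US first, no race": 0.03, "US first, race": 0.45,
             "PRC first, no race": 0.05, "PRC first, race": 0.30,
             "extinction": 0.15},
    "no race": {"MAGIC": 0.10, "US first, no race": 0.35, "US first, race": 0.05,
                "PRC first, no race": 0.30, "PRC first, race": 0.05,
                "extinction": 0.15},
}

def expected_value(action: str) -> float:
    """E[U | action] = sum over outcomes of P(outcome | action) * U(outcome)."""
    return sum(PROB[action][s] * UTILITY[s] for s in SCENARIOS)

for action in PROB:
    print(f"{action}: {expected_value(action):.3f}")
```

Swap in different probabilities or utilities (say, the Doomer or e/acc assignments above) and the ranking of actions can flip, which is the sense in which stating an ordering alone underdetermines the answer.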
value(SHUT IT ALL DOWN) > 0.2 > value(MAGIC) > 0 = value(US first, no race) = value(US first, race) = value(PRC first, no race) = value(PRC first, race) = value(extinction)
Yes, that is essentially my preference ordering / assignments, which remains the case even if the 0.2 is replaced with 0.05 -- in case anyone is wondering whether there are real human beings outside MIRI who are that pessimistic about the AI project.