I definitely like the second operationalization better. That said, I think it's meaningfully different, and I'm not willing to bet on it. I was expecting timelines to be a major objection to your initial claim, but it's totally plausible that accumulating additional evidence gets people to believe in doom before doom actually occurs.
Also we’d need someone to actually run the survey (I’m not likely to).
I guess when you say “>= 10% x-risk in the next decade” you mean >= 10% chance that our actions don’t matter after that. I think it’s plausible a majority of the survey population would say that. If you also include the conjunct “and our actions matter between now and then” then I’m back to thinking that it’s less plausible.
How about we do a lazy bet: Neither of us runs the survey, but we agree that if such a survey is run and brought to our attention, the loser pays the winner?
Difficulty with this is that we don’t get to pick the operationalization. Maybe our meta-operationalization can be “<50% of respondents claim >10% probability of X, where X is some claim that strongly implies AI takeover or other irreversible loss of human control / influence of human values, by 2032.” How’s that sound?
...but actually, I guess my credences aren't that different from yours here, so it's maybe not worth our time to bet on. I actually have very little idea what the community thinks; I was just pushing back against the OP, who seemed to be asserting a consensus without evidence.
Sure, I’m happy to do a lazy bet of this form. (I’ll note that if we want to maintain the original point we should also require that the survey happen soon, e.g. in the next year or two, so that we avoid the case where someone does a survey in 2030 at which point it’s obvious how things go, but I’m also happy not putting a time bound on when the survey happens since given my beliefs on p(doom by 2032) I think this benefits me.)
$100 at even odds?
Deal! :)