[crossposting my comment from the EA forum as I expect it’s also worth discussing here]
whether you have a 5-10 year timeline or a 15-20 year timeline
Something I’d like this post to address, but which it doesn’t, is that having “a timeline” rather than a distribution seems ~indefensible given the amount of uncertainty involved. People quote medians (or modes, and it’s not clear to me that they reliably differentiate between these), ostensibly as a shorthand for their entire distribution, but then discussion proceeds based only on the point estimates.
I think a shift of 2 years in the median of your distribution looks like a shift of only a few % in your P(AGI by 20XX) numbers for all 20XX, and that means discussion of what people who “have different timelines” should do is usually better framed as “what strategies will turn out to have been helpful if AGI arrives in 2030”.
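A quick numerical sketch of that claim (purely illustrative: it assumes a lognormal distribution over years-until-AGI with made-up parameters, not anyone’s actual forecast):

```python
# Toy illustration: how much does a 2-year shift in the median of an
# AGI-arrival distribution move P(AGI by a given year)?
# The lognormal shape and its parameters are assumptions for illustration only.
import math

def p_by(horizon_years: float, median_years: float, sigma: float = 1.0) -> float:
    """P(arrival <= horizon) under a lognormal with the given median, in years from now."""
    z = (math.log(horizon_years) - math.log(median_years)) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

horizon = 5  # e.g. "AGI by 2030" if written around 2025
for median in (12, 10):  # shift the median forward by 2 years
    print(f"median {median}y -> P(AGI within {horizon}y) = {p_by(horizon, median):.2f}")
```

In this toy setup, pulling the median forward by two years moves P(AGI within five years) from about 0.19 to 0.24, i.e. roughly five percentage points.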
While this doesn’t make discussion like this post useless, I don’t think this is a minor nitpick. I’m extremely worried by “plays for variance”, some of which are briefly mentioned above (though far from the worst I’ve heard). I think these tend to look good only on worldviews which are extremely overconfident and treat timelines as point estimates (or extremely sharp peaks). More balanced views, even those with a median much sooner than mine, should typically realise that the EV gained in the worlds where things move quickly is not worth the expected cost in worlds where they don’t. This is in addition to the usual points about co-operative behaviour when uncertain about the state of the world, adverse selection, the unilateralist’s curse, etc.
[Cross-posting my answer] Thanks for your comment! That’s an important point that you’re bringing up.
My sense is that at the movement level, the consideration you bring up is super important. Indeed, even though I have fairly short timelines, I would like funders to hedge for long timelines (e.g. fund work on China AI safety). Thus I think that big actors should keep their full distribution in mind when optimizing their resource allocation.
That said, I have two disagreements:
I feel like at the individual level (e.g. people working in governance, or even organizations), it’s too expensive to optimize over a full distribution, and thus you should probably optimize with a strategy of “I want to have solved my part of the problem by 20XX”. And for that purpose, identifying the main characteristics of the strategic landscape at that point (which this post is trying to do) is useful.
“The EV gained in the worlds where things move quickly is not worth the expected cost in worlds where they don’t.” I disagree with this statement, even at the movement level. For instance, funders often face the trade-off of “should we fund this project, which is not the ideal one but still quite good?”, and I would expect funders to be more risk-averse than necessary, because when you’re not highly time-constrained that’s probably the best strategy (i.e. in every field except AI safety, it’s probably a much better strategy to trade off a couple of years against better founders).
Finally, I agree that “the best strategies will have more variance” is not good advice for everyone. The reason I decided to write it anyway is that I think the AI governance community tends to have too high a degree of risk aversion (which is a good feature in their daily jobs), which mechanically penalizes a decent number of actions that are much more useful under shorter timelines.