You should say “timelines” instead of “your timelines”.
One thing I notice in AI safety career and strategy discussions is that there is a lot of epistemic helplessness in regard to AGI timelines. People often talk about “your timelines” instead of “timelines” when giving advice, even if they disagree strongly with the timelines. I think this habit causes people to ignore disagreements in unhelpful ways.
Here’s one such conversation:
Bob: Should I do X if my timelines are 10 years?
Alice (who has 4 year timelines): I think X makes sense if your timelines are longer than 6 years, so yes!
Alice will encourage Bob to do X despite the fact that Alice thinks timelines are shorter than 6 years! Alice is actively giving Bob bad advice by her own lights (by assuming timelines she doesn’t agree with). Alice should instead say “I think timelines are shorter than 6 years, so X doesn’t make sense. But if they were longer than 6 years it would make sense”.
In most discussions, there should be no such thing as “your timelines” or “my timelines”. That framing makes it harder to converge, and it encourages people to give each other advice that they don’t even think makes sense.
Note that I do think some plans make sense as bets for long timeline worlds, and that using medians somewhat oversimplifies timelines. My point still holds if you replace the medians with probability distributions.
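As a minimal sketch of what the probability-distribution version of this point looks like (all numbers below are hypothetical, not anyone’s actual timelines):

```python
import numpy as np

# Hypothetical numbers, purely for illustration: Alice's timelines expressed as a
# probability distribution over "years until AGI" rather than a single median.
years = np.arange(1, 21)
alice_pmf = np.exp(-0.5 * ((years - 4) / 2.0) ** 2)  # mass concentrated around ~4 years
alice_pmf /= alice_pmf.sum()

# Assume (again hypothetically) that plan X only pays off in worlds where timelines
# exceed 6 years. The median rule says "4 < 6, so X doesn't make sense"; the
# distribution version instead asks how much mass Alice puts on >6-year worlds.
p_long = alice_pmf[years > 6].sum()
print(f"P(timelines > 6 years | Alice's distribution): {p_long:.2f}")
# Either way, the advice Alice gives should be driven by her own numbers,
# not by whichever timelines Bob happens to bring to the conversation.
```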
Hmm. I think there are two dimensions to the advice (what is a reasonable distribution of timelines to have, vs what should I actually do). It’s perfectly fine to have some humility about one while still giving opinions on the other. “If you believe Y, then it’s reasonable to do X” can be a useful piece of advice. I’d normally mention that I don’t believe Y, but in a lot of cases we’ve already had that conversation, and it’s not helpful to repeat it.
Timelines are the result of a person’s intuitions about when a technical milestone will be reached in the future; it is super obviously impossible for us to have a consensus about that kind of thing.
Talking only synchronises beliefs if you have enough time to share all of the relevant information; with technical matters, you usually don’t.
I agree with this in a world where people are being epistemically rigorous/honest with themselves about their timelines and where there’s a real consensus view on them. In practice, I’ve observed that people rarely make decisions truly grounded in their timelines, or do so only nominally, and I think there’s a lot of social signaling going on when (especially younger) people state their timelines.
I appreciate that more experienced people are willing to give advice within a particular frame (“if timelines were x”, “if China did y”, “if Anthropic did z”, “if I went back to school”, etc etc), even if they don’t agree with the frame itself. I rely on more experienced people in my life to offer advice of this form (“I’m not sure I agree with your destination, but admit there’s uncertainty, and love and respect you enough to advise you on your path”).
Of course they should voice their disagreement with the frame (and I agree this should happen more for timelines in particular), but to gate direct counsel on urgent, object-level decisions behind the resolution of background disagreements is broadly unhelpful.
When someone says “My timelines are x, what should I do?”, I actually hear something like three claims:
Timelines are x
I believe timelines are x
I am interested in behaving as though timelines are x
Evaluation of the first claim is complicated, and other people do a better job of it than I do, so let’s focus on the others.
“I believe timelines are x” is a pretty easy roll to disbelieve. Under relatively rigorous questioning, nearly everyone (particularly everyone of ‘career-advice-seeking age’) will either say they are deferring (meaning they could just as easily defer to someone else tomorrow), or admit that it’s a gut feel, especially for their ~90th-percentile year, and especially the more capable the system in question (this is more true of ASI than weak AGI, for instance, although those terms are underspecified). Still others will furnish zero reasoning transparency and thus reveal their motivations to be principally social (possibly a problem unique to the Bay, although online e/acc culture has a similar Thing).
“I am interested in behaving as though timelines are x” is an even easier roll to disbelieve. Very few people act on their convictions in sweeping, life-changing ways without concomitant benefits (money, status, power, community), including people within AIS (sorry friends).
With these uncertainties, piled on top of the usual uncertainties surrounding timelines, I’m not sure I’d want anyone to act so nobly as to refuse advice to someone with different timelines.
If Alice is a senior AIS professional who gives advice to undergrads at parties in Berkeley (bless her!), how would her behavior change under your recommendation? It sounds like maybe she would stop fostering a diverse garden of AIS saplings and instead become the awful meme of someone who just wants to fight about a highly speculative topic. Seems like a significant value loss.
Their timelines will change some other day; everyone’s will. In the meantime, being equipped to talk to people with a wide range of safety-concerned views (especially for more senior, or just Older, people) seems useful.
“harder to converge”
Converge for what purpose? It feels like the marketplace of ideas is doing an ok job of fostering a broad portfolio of perspectives. If anything, we are too convergent and, as a consequence, somewhat myopic internally. Leopold mind-wormed a bunch of people until Tegmark spoke up (and that only somewhat helped). Few thought governance was a good idea until pretty recently (~3 years ago), and it would be going better if those interested in the angle hadn’t been shouted down so emphatically to begin with.
If individual actors need to cross some confidence threshold in order to act, but the reasonable confidence interval is in fact very wide, I’d rather have a bunch of actors with different timelines, which roughly sum to the shape of the reasonable thing*, than have everyone working on the same overconfident assumption that later comes back to bite us (when we’ve made mistakes in the past, this is often why).
*Which is, by the way, closer to flat than most people’s individual timelines
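A toy numerical sketch of that footnote, under made-up assumptions about how confident each actor is and where their medians sit:

```python
import numpy as np

# Toy illustration with made-up centers and widths: several actors each hold a
# fairly confident (narrow) timelines distribution, centered in different places;
# the community-level mixture is much flatter than any individual's view.
years = np.arange(1, 31)

def normal_pmf(center, width):
    pmf = np.exp(-0.5 * ((years - center) / width) ** 2)
    return pmf / pmf.sum()

individuals = [normal_pmf(c, 2.0) for c in (4, 8, 12, 20)]  # four confident actors
aggregate = np.mean(individuals, axis=0)                     # equal-weight mixture

def spread(pmf):
    mean = (years * pmf).sum()
    return np.sqrt(((years - mean) ** 2 * pmf).sum())

print("individual std devs:", [round(spread(p), 1) for p in individuals])
print("aggregate std dev:  ", round(spread(aggregate), 1))
# The mixture's spread is several times wider, i.e. closer to flat, which is the
# sense in which a portfolio of differently-confident actors tracks the real
# uncertainty better than one shared overconfident assumption.
```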