But if you think TAI is coming within 10 years (for example, if you think the current half-life on worlds surviving is 10 years, i.e. that 10 years is the amount of time in which half of worlds are doomed)
Note that these are very different claims, both because the half-life of a given timelines distribution sits below its mean, and because TAI doesn’t imply doom. Even if you do have very high P(doom), it seems odd to just assume everyone else does too.
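To spell out the half-life-versus-mean point with a toy model (a minimal sketch assuming an exponential survival curve, which neither comment commits to): if the half-life is $t_{1/2}$, then

$$
S(t) = 2^{-t/t_{1/2}}, \qquad \operatorname{median}[T] = t_{1/2}, \qquad \mathbb{E}[T] = \frac{t_{1/2}}{\ln 2} \approx 1.44\, t_{1/2},
$$

so a 10-year half-life corresponds to a mean of roughly 14.4 years, and the half-life always sits below the mean under this model.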
then, depending on your distribution-over-time, you should absolutely not wait 5 years before doing research, because TAI could happen in 9 years but it could also happen in 1 year.
So? Your research doesn’t have to be useful in every possible world. If a PhD increases the quality of your research by, say, 3x (which is plausible, since research is heavy-tailed), then it may well be better to do that higher-quality research for half the time (rough arithmetic below).
(In general I don’t think x-risk-motivated people should do PhDs that don’t directly contribute to alignment, to be clear; I just think this isn’t a good argument for that conclusion.)
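To put illustrative numbers on that (a rough sketch only; the 3x quality multiplier, the 10-year horizon, and the five-year PhD are the figures already in play above, not estimates of mine): compare ten years of baseline-quality research against a five-year PhD followed by five years at triple quality:

$$
\underbrace{1 \times 10}_{\text{no PhD}} = 10 \qquad \text{vs.} \qquad \underbrace{3 \times 5}_{\text{PhD, then research}} = 15,
$$

so on this crude accounting the PhD route comes out ahead in worlds where TAI arrives around the 10-year mark, and by more in worlds where it arrives later; it only loses in worlds where TAI arrives during the PhD itself, which is the “doesn’t have to be useful in every possible world” point.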