Setting aside how important timelines are for strategy, the fact that P(doom) combines several questions together is a good point. Another way to decompose P(doom) is:
How likely are we to survive if we do nothing about the risk? Or perhaps: How likely are we to survive if we do alignment research at the current pace?
How much can we really reduce the risk with sustained effort? How immutable is the overall risk?
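One way to see how these two sub-questions combine is a back-of-the-envelope sketch. This is purely illustrative and not from the thread: the function name, parameters, and all numbers below are made-up assumptions, chosen only to show the arithmetic of combining a baseline-risk estimate with a tractability estimate.

```python
# Illustrative sketch (all numbers are made up): combining the two
# sub-questions into a single P(doom) estimate.

def p_doom(p_baseline: float, reducible_fraction: float, effort: float) -> float:
    """Combine a no-intervention risk estimate with a tractability estimate.

    p_baseline: probability of doom if alignment research continues at
        the current pace (first sub-question).
    reducible_fraction: the share of that risk that sustained effort
        could remove at most (second sub-question).
    effort: how much of that maximum effort actually happens, in [0, 1].
    """
    return p_baseline * (1 - reducible_fraction * effort)

# Example: 40% baseline risk, half of it tractable, full effort.
print(p_doom(0.4, 0.5, 1.0))  # 0.2
```

Under this toy model, two people can report the same P(doom) while disagreeing sharply about both inputs, which is exactly why disentangling them matters.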
Though people probably mean different things by P(doom), so it seems worthwhile to disentangle them.
Talking about our reasoning for our personal estimates of p(doom) is useful if and only if it helps sway some potential safety researchers into working on safety …
Good point: P(doom) also serves a promotional role, in that it illustrates the size of the problem to others and potentially gets more people to work on alignment.