For example, I find it hard to predict when and how AGI will be developed, and I expect that many of my ideas and predictions about that will turn out to be mistaken. This makes me more pessimistic, rather than less, since it seems pretty hard to get AI alignment right if we can't even predict basic things like "when will this system have situational awareness?".
Yes, and this can be framed as a consequence of a more general principle: model uncertainty doesn't save you from pessimistic outcomes unless your prior (which, after all, is what you fall back on in the subset of possible worlds where your primary inside-view models are significantly flawed) offers its own reasons for reassurance. And if your prior doesn't say that (for the record, mine doesn't), then having model uncertainty doesn't actually reduce P(doom) by very much!
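To make that concrete, here's a minimal back-of-the-envelope sketch of the mixture argument. The specific numbers are placeholders I'm picking purely for illustration, not anyone's actual estimates: the point is just that P(doom) is an average over "my inside-view model is roughly right" and "it's badly wrong, so I fall back on my prior", and the second term only pulls the total down if the prior itself is reassuring.

```python
# Sketch of the point above: total P(doom) via the law of total probability
# over whether the inside-view model holds. All numbers are made-up placeholders.

def p_doom_total(p_model_right: float,
                 p_doom_given_model: float,
                 p_doom_given_prior: float) -> float:
    """Mix the inside-view estimate with the prior, weighted by model uncertainty."""
    p_model_wrong = 1.0 - p_model_right
    return (p_model_right * p_doom_given_model
            + p_model_wrong * p_doom_given_prior)

# Pessimistic inside view (0.9), lots of model uncertainty (50/50),
# but a prior that isn't reassuring (0.7): the total barely moves.
print(p_doom_total(0.5, p_doom_given_model=0.9, p_doom_given_prior=0.7))  # 0.8

# Same model uncertainty, but a genuinely reassuring prior (0.1):
# now the uncertainty actually cuts P(doom) substantially.
print(p_doom_total(0.5, p_doom_given_model=0.9, p_doom_given_prior=0.1))  # 0.5
```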