Hmm, I don’t think it needs to be reference class tennis. I think people do think about the fact that humanity could go extinct at some point. But if you went just off those reference classes we’d still have at least what, a thousand years? A million years?
If that’s the case, we wouldn’t be doing AI safety research; we’d be saving up money to do AI safety research later when it’s easier (and therefore more cost effective).
In general, predicting that a variable will follow a straight line is a much “stronger” claim than predicting that an event will occur at some unknown time: the prior likelihood of trend-following is extremely low, and a trend makes far more information-dense predictions about the future.
That said, I think an interesting case of tennis might be extrapolating the number of species to predict when it will hit 0! If this follows a line, that would mean a disagreement between the gods of straight lines. I had trouble actually finding a graph though.