I don’t know where to get a good prior. I suppose you might look at past instances where someone claimed to be close to doing something that seemed about as difficult and confusing as AGI seems to be (before taking into account a history of promises that didn’t pan out, but after taking into account what we know about the confusingness of the problem, insofar as that knowledge doesn’t itself come from the fact of failed promises). I don’t know what that prior would look like, but it seems like it would assign (if you randomly selected a kind of feat) a substantially greater than 1⁄10 probability of seeing at least 10 failed predictions of achieving that feat for every successful such prediction, a substantially greater than 1⁄100 probability of seeing at least 100 failed predictions for every successful prediction, and so on.
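One family of priors with the tail behavior described above is a log-uniform distribution over the ratio of failed to successful predictions. The sketch below is purely illustrative, not a claim about the true prior: the choice of a log-uniform form and the range [1, 10^6] are assumptions made for the example. It checks numerically that such a prior assigns well over 1⁄10 probability to a ratio of at least 10, and well over 1⁄100 probability to a ratio of at least 100.

```python
import random

def sample_ratios(n_samples=100_000, lo=1.0, hi=1e6, seed=0):
    """Draw samples of R, the failed:successful prediction ratio,
    from a log-uniform prior on [lo, hi].
    (Assumed form and range -- one heavy-tailed choice among many.)"""
    rng = random.Random(seed)
    return [lo * (hi / lo) ** rng.random() for _ in range(n_samples)]

def tail_prob(samples, threshold):
    """Empirical prior probability that R >= threshold."""
    return sum(r >= threshold for r in samples) / len(samples)

samples = sample_ratios()
p10 = tail_prob(samples, 10)    # analytically 5/6 under this prior
p100 = tail_prob(samples, 100)  # analytically 4/6
```

Under this particular prior, P(R ≥ 10) ≈ 0.83 and P(R ≥ 100) ≈ 0.67, both far above the 1⁄10 and 1⁄100 floors the text mentions; any prior whose tail decays slower than 1/r would show the same qualitative pattern.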