You’re correct, but where would I find a better prior? I’d rather be too conservative than resort to wild guessing (which is what it would be, since I’m not an expert on AGI).
(A variant of this is rhollerith_dot_com’s objection below, that I failed to take into account the probability of working AGI leading to death. Presumably that changes the prior as well.)
Q. How can I find the priors for a problem?
A. Many commonly used priors are listed in the Handbook of Chemistry and Physics.
Q. Where do priors originally come from?
A. Never ask that question.
Q. Uh huh. Then where do scientists get their priors?
A. Priors for scientific problems are established by annual vote of the AAAS. In recent years the vote has become fractious and controversial, with widespread acrimony, factional polarization, and several outright assassinations. This may be a front for infighting within the Bayes Council, or it may be that the disputants have too much spare time. No one is really sure.
Q. I see. And where does everyone else get their priors?
A. They download their priors from Kazaa.
Q. What if the priors I want aren’t available on Kazaa?
A. There’s a small, cluttered antique shop in a back alley of San Francisco’s Chinatown. Don’t ask about the bronze rat.
(Source: http://yudkowsky.net/rational/bayes)
Isn’t the lesson of the Quantum Physics sequence that ordinary humans today should get their priors from the least complex (and falsifiable?) statements that aren’t inconsistent with empirical knowledge?
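That reading, prior mass proportional to simplicity, is basically a Solomonoff-style complexity prior: weight each hypothesis by 2^(−description length in bits), then throw out whatever contradicts the evidence. A toy sketch in Python (the hypotheses and bit counts are made-up illustrations, not from the sequence itself):

```python
# Toy Solomonoff-style prior: each hypothesis gets weight 2**(-bits),
# where "bits" stands in for its minimum description length.
# The hypotheses and bit counts below are invented for illustration.
hypotheses = {
    "the coin is fair": 20,
    "the coin is biased 60/40 toward heads": 27,
    "the coin follows a 17-step pseudorandom pattern": 45,
}

weights = {h: 2.0 ** -bits for h, bits in hypotheses.items()}
total = sum(weights.values())
prior = {h: w / total for h, w in weights.items()}

for h, p in sorted(prior.items(), key=lambda kv: -kv[1]):
    print(f"{p:.6f}  {h}")
```

Simpler statements dominate the prior by construction; conditioning on evidence then just renormalizes over the survivors.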
I don’t know where to get a good prior. I suppose you might look at past instances where someone claimed to be close to achieving something that seemed about as difficult and confusing as AGI does (before taking into account the history of promises that didn’t pan out, but after taking into account what we know about how confusing the problem is, insofar as that knowledge doesn’t itself come from the failed promises). I don’t know exactly what that prior would look like, but it seems like, for a randomly selected kind of feat, it would assign a substantially greater than 1⁄10 probability of seeing at least 10 failed predictions of achieving that feat for every successful prediction, a substantially greater than 1⁄100 probability of seeing at least 100 failed predictions for every successful one, and so on.
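For what it’s worth, tail probabilities that stay substantially greater than 1⁄10, 1⁄100, and so on are exactly what a heavy-tailed prior on the failure-to-success ratio gives you. A minimal sketch, assuming a Pareto distribution with shape α < 1 (both the distribution family and the value α = 0.5 are my own illustrative choices, not anything implied above):

```python
# Sketch of a heavy-tailed prior on R = (failed predictions per success).
# Under a Pareto(x_min = 1, shape = alpha) prior, P(R >= k) = k**(-alpha);
# any alpha < 1 makes that tail decay slower than 1/k, matching the
# property described above. alpha = 0.5 is an arbitrary illustrative choice.
ALPHA = 0.5

def tail_prob(k: float, alpha: float = ALPHA) -> float:
    """P(R >= k) under a Pareto(1, alpha) prior on the ratio R."""
    return k ** (-alpha)

for k in (10, 100, 1000):
    print(f"P(R >= {k:>4}) = {tail_prob(k):.3f}   (compare 1/{k} = {1 / k:.3f})")
```

With α = 0.5 this gives P(R ≥ 10) ≈ 0.32 and P(R ≥ 100) = 0.10, both well above the 1⁄10 and 1⁄100 benchmarks.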