The idea of a “p(doom)” isn’t quite as facially insane as “AGI timelines” as marker of personal identity, but (1) you want action-conditional doom, (2) people with the same numbers may have wildly different models, (3) these are pretty rough log-odds and it may do violence to your own mind to force itself to express its internal intuitions in those terms which is why I don’t go around forcing my mind to think in those terms myself, (4) most people haven’t had the elementary training in calibration and prediction markets that would be required for them to express this number meaningfully and you’re demanding them to do it anyways, (5) the actual social role being played by this number is as some sort of weird astrological sign and that’s not going to help people think in an unpressured way about the various underlying factual questions that ought finally and at the very end to sum to a guess about how reality goes.
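To unpack two terms in that passage (my gloss and notation, not Eliezer’s): an “action-conditional” doom estimate is conditioned on what is actually done, rather than being a single unconditional number, and log-odds are just a monotone rescaling of probability. A minimal sketch:

```latex
% Illustrative notation only, not from the quoted passage.
% Action-conditional doom: a probability per candidate action or policy a,
P(\mathrm{doom} \mid a) \quad\text{for each candidate action } a,
  \quad\text{rather than a single unconditional } P(\mathrm{doom}).
% Log-odds: a monotone rescaling of probability,
\text{log-odds}(p) = \log\frac{p}{1-p},
  \qquad \text{e.g. } p = 0.2 \;\Rightarrow\; \log_{2}\frac{0.2}{0.8} = -2 \text{ bits}.
```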
I notice my confusion when Eliezer speaks out against the idea of expressing p(doom) as a number: https://x.com/ESYudkowsky/status/1823529034174882234
I mean, I don’t like it either, but I thought his whole point about the Bayesian approach was to express odds and calculate expected values.
He explains why two tweets down the thread (the passage quoted at the top).
This seems very reasonable to me, and I think it’s a very common opinion among AI safety people that discussing p(doom) numbers without lots of underlying models is not very useful.
The important part of Eliezer’s writing on probability, IMO, is to notice that the underlying laws of probability are Bayesian and to do sanity checks, not to always explicitly calculate probabilities. Given that explicitly calculating probabilities is only somewhat useful in life in general, it is reasonable that (4) and (5) can make trying it net negative.
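For a concrete sense of what “sanity checks” could mean here (my own illustration, not something stated in the thread), the laws of probability impose consistency constraints that rough intuitions can be tested against without ever producing a precise number:

```latex
% Consistency constraints that any set of credences must satisfy (illustrative):
P(A \wedge B) \le P(A)                    % a conjunction is never likelier than a conjunct
\sum_{i} P(H_i) = 1                       % exclusive, exhaustive scenarios sum to one
P(H \mid E) \propto P(E \mid H)\, P(H)    % updating on evidence follows Bayes' rule
```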
So, is he saying that he is calibrated well enough to have a meaningful “action-conditional” p(doom), but most people are not? And that they should not engage in “fake Bayesianism”? But then, according to the prevailing wisdom, how would one decide how to act without putting a number on each potential action?
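For reference, the “number on each potential action” framing behind this question is ordinary expected-utility reasoning, sketched below in textbook form; this is an illustration of the framing, not a claim about how anyone in the thread actually computes their decisions.

```latex
% Expected utility of an action a over possible outcomes o:
EU(a) = \sum_{o} P(o \mid a)\, U(o), \qquad a^{*} = \arg\max_{a} EU(a)
% With only two outcomes (doom / not-doom), this reduces to comparing the
% action-conditional numbers P(doom | a) across the available actions.
```

Nothing in this sketch requires the numbers to be communicated; the reply below turns on exactly that distinction.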
Speaking only for myself: perhaps you should put a number on each potential action and choose accordingly, but you do not need to communicate the exact number. Yes, the fact that you worry about safety and chose to work on it already implies something about the number, but you don’t have to make the public information even more specific.
This creates some problems with coordination; if you believe that p(doom) is exactly X, it would have certain advantages if all people who want to contribute to AI safety believed that p(doom) is exactly X. But maybe the disadvantages outweigh that.
That is a good point: deciding is different from communicating the rationale for your decisions. Maybe that is what Eliezer is saying.
The idea that you can only decide how to act if you have numbers is a strawman. Rationalists are not Straw-Vulcans.
I think you are missing the point, and taking cheap shots.
The prevailing wisdom does not say that you need to put a number on each potential action to act.
see https://www.lesswrong.com/posts/AJ9dX59QXokZb35fk/when-not-to-use-probabilities
Thank you, I forgot about that one. I guess the summary would be “if your calibration for this class of possibilities sucks, don’t make up numbers, lest you start trusting them”. If so, that makes sense.