Speaking only for myself: perhaps you should put a number on each potential action and choose accordingly, but you do not need to communicate the exact number. Yes, the fact that you worry about safety and chose to work on it already implies something about the number, but you don’t have to make the public information even more specific.
This creates some problems for coordination: if you believe that p(doom) is exactly X, it would have certain advantages if everyone who wants to contribute to AI safety also believed that p(doom) is exactly X. But maybe the disadvantages outweigh that.
That is a good point: deciding is different from communicating the rationale for your decisions. Maybe that is what Eliezer is saying.