I can think of many situations where a zero prior gives rise to tangibly different behavior, and even severe consequences. To take your example, suppose that we (or Omega, since we're going to assume nigh-omniscience) ask the person whether JFK was murdered by Lee Harvey Oswald or not, and if they get it wrong, they are killed/tortured/dust-specked into oblivion/whatever. (Let's also assume that the question is clearly defined enough that the person can't play with definitions and just say that God is in everyone and God killed JFK.)
However, let me steelman this a bit by somewhat moving the goalposts: if we allow a single random belief to have P=0, then it seems very unlikely that it will have a serious effect. I suspect the above scenario would require that we know the person has P=0 about something (or that Omega exists), and if we agree that such a belief has little empirical effect, that knowledge is almost impossible to come by. So that's also unlikely.
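(To make the underlying math concrete: under Bayes' rule, a prior of exactly zero can never be raised by any amount of evidence, which is why an honest agent with P=0 is stuck no matter what it observes. Here's a minimal Python sketch of that point; the likelihood numbers are made up purely for illustration.)

```python
def posterior(prior, likelihood_given_h, likelihood_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = likelihood_given_h * prior
    denominator = numerator + likelihood_given_not_h * (1 - prior)
    return numerator / denominator

# Hypothetical belief "Oswald killed JFK", held with prior exactly 0.
# Even evidence a million times more likely under the hypothesis can't move it:
print(posterior(0.0, likelihood_given_h=1.0, likelihood_given_not_h=1e-6))   # -> 0.0

# Contrast with a tiny but nonzero prior of one in a million: the same
# million-to-one evidence lifts it to roughly 50%.
print(posterior(1e-6, likelihood_given_h=1.0, likelihood_given_not_h=1e-6))  # -> ~0.5
```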
suppose that we (or Omega, since we're going to assume nigh-omniscience) ask the person whether JFK was murdered by Lee Harvey Oswald or not, and if they get it wrong, they are killed/tortured/dust-specked into oblivion/whatever.
Okay, but what is the utility function Omega is trying to optimize?
Let’s say you walk up to Omega and tell it: “Was JFK murdered by Lee Harvey Oswald or not? And by the way, if you get this wrong, I am going to kill you/torture you/dust-speck you.”
Unless we’ve figured out how to build safe oracles, Omega is, with very high probability, not a safe oracle. By instrumental convergence (https://arbital.com/p/instrumental_convergence/), even though Omega may or may not care about being tortured/dust-specked, we can assume it doesn’t want to get killed. So what is it going to do?
Do you think it’s going to tell you what it thinks is the true answer? Or do you think it’s going to tell you the answer that will minimize the risk of it getting killed?
That wasn’t really my point, but I see what you mean. The point was that it is possible to have a situation where the 0 prior does have specific consequences, not that such a situation is likely. But you’re right that my example was a bit off, since obviously the person being interrogated should just lie about it.