Vaniver and Math_viking, if you claim that it is impossible for me to grant you infinite negative utility, that infinite negative utility is incoherent, or that "infinite negative utility" returns a category error, then you are assigning a probability of 0 to the existence of infinite negative utility, and implicitly assigning a probability of 0 to me granting you infinite negative utility (since P(A) ≥ P(A and B), where A is "infinite negative utility exists" and B is "I can grant infinite negative utility").
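The inequality P(A) ≥ P(A and B) is just monotonicity of probability. A minimal Python sketch, with made-up weights purely for illustration:

```python
# Monotonicity of probability on a toy sample space.
# Weights are invented for illustration; B ("I can grant infinite
# negative utility") is taken to require A ("infinite negative
# utility exists"), so no (A=False, B=True) world appears.
worlds = {
    (True, True): 0.01,
    (True, False): 0.04,
    (False, False): 0.95,
}

p_a = sum(w for (a, _), w in worlds.items() if a)
p_a_and_b = sum(w for (a, b), w in worlds.items() if a and b)

# "A and B" is a sub-event of "A", so P(A) >= P(A and B);
# in particular, P(A) = 0 forces P(A and B) = 0.
assert p_a >= p_a_and_b
```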
Consider the logical proposition “A xor”.
Does it make sense to call it true or false? Not really; when I try to call it a proposition the response should be "type error; that's a string that doesn't parse into a proposition."
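As an aside, the "doesn't parse" response can be made literal. A hedged sketch of mine, treating candidate propositions as strings fed to Python's expression parser (with `^` standing in for xor, since Python has no `xor` keyword):

```python
import ast

def parses_as_expression(s: str) -> bool:
    """Return True iff s parses as a well-formed expression."""
    try:
        ast.parse(s, mode="eval")
        return True
    except SyntaxError:
        return False

# "A ^ B" (A xor B) is well-formed; the bare fragment "A xor" is not.
# The parser's rejection is the analogue of the "type error" response:
# the string never becomes a proposition to which truth applies.
assert parses_as_expression("A ^ B")
assert not parses_as_expression("A xor")
```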
Ah, but what probability do we assign to the statement "'A xor' results in a type error because it's a string that doesn't parse into a proposition"? 1-epsilon, and we're done. Remember, the probabilistic model of utility came from somewhere, and has an associated level of evidence and support. It's not impossible to convince me that it's wrong.
But does this make me vulnerable to Pascal’s mugging? However low I make epsilon, surely infinity is larger. It does not, because of the difference between inside-model probabilities and outside-model probabilities.
Suppose I am presented with a dilemma. Various different strategies all propose different actions; the alphabetical strategy claims I should pick the first option, the utility-maximizing strategy claims I should pick the option with highest EV, the satisficing strategy claims I should pick any option that’s ‘good enough’, and so on. But the epsilon chance that the utility is in fact infinite is not within the utility-maximizing strategy; it refers to a case where the utility-maximizing strategy’s assumptions are broken, and thus needs to be handled by a different strategy—presumably one that doesn’t immediately choke on infinities!
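The dispatch described above can be sketched in code. This is my toy framing, not a quoted algorithm; the strategies, option names, and probabilities are all hypothetical:

```python
import math

def utility_maximizing(options):
    # options: list of (name, [(probability, utility), ...]) lotteries
    def ev(lottery):
        return sum(p * u for p, u in lottery)
    return max(options, key=lambda o: ev(o[1]))[0]

def alphabetical(options):
    # Pick the first option alphabetically.
    return min(options, key=lambda o: o[0])[0]

def choose(options):
    # Outside-model check: EV maximization assumes finite utilities.
    # If that assumption is broken, hand off to a strategy that
    # doesn't choke on infinities.
    finite = all(math.isfinite(u)
                 for _, lottery in options
                 for _, u in lottery)
    if finite:
        return utility_maximizing(options)
    return alphabetical(options)

options = [
    ("ignore mugger", [(1.0, 0.0)]),
    ("pay mugger", [(1e-9, float("-inf")), (1.0 - 1e-9, -5.0)]),
]
print(choose(options))  # assumptions broken -> "ignore mugger"
```

The point of the sketch: the infinite-utility option never enters the EV computation at all; it trips the assumption check first, which is handled outside the utility-maximizing model.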
I understand your argument about breaking the assumptions of the strategy. What do inside-model probabilities and outside-model probabilities mean? I don't want to guess blindly.
See here.
The basic problem with this kind of argument is that you are taking the math too seriously. Taken strictly, probability theory forces a probability of 0 or 1 onto "the trillionth digit of pi is greater than 5," and leads to absurdities if you deny that. In reality, normal people are neither certain it is true nor certain it is false. In other words, probability is a formalism of degrees of belief, and it is an imperfect formalism, not a perfect one.
If we consider the actual matter at hand, rather than the imperfect formalism, we actually have bounded utility. So we do not care about very low probability threats, including the one in your example. But although we have bounded utility, we are not infinitely certain that our utility is bounded. Thus we do not assign a probability of zero to "we have unbounded utility." Nonetheless, it would be a misuse of a flawed formalism to conclude that we have to act on the possibility of the infinite negative utility. In reality, we act based on our limited knowledge of our bounded utility, and treat the threat as worthless.
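To see how boundedness defuses the threat numerically, here is a toy construction of my own (the bounds and credence are arbitrary, chosen only for illustration):

```python
# With bounded utility, even a claimed "infinite negative utility"
# outcome contributes only a finite, tightly limited amount to
# expected value, so tiny-probability threats stay negligible.
U_MIN, U_MAX = -100.0, 100.0

def bounded_utility(raw: float) -> float:
    # Clamp raw (possibly infinite) utility into the bounded range.
    return max(U_MIN, min(U_MAX, raw))

threat = bounded_utility(float("-inf"))  # clamps to U_MIN = -100.0
epsilon = 1e-12                          # credence in the threat
ev_of_threat = epsilon * threat          # finite and negligible
```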