This model doesn’t seem to work well for extreme values. Most illustratively, it gives zero for infinite outcomes. Zero is not a probability.
Not in my comfort zone here, but surely you have to allow for probabilities of 0 when building any formal mathematical system. P(A|~A) has to be 0 or you can’t do algebra. As an agent viewing the system on a meta level, I can’t assign a personal probability of 0 to any proof, but within the system it needs to be allowable.
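To spell out what I mean by “can’t do algebra” (just a minimal sketch via the product rule, in the same notation as above):

P(A & ~A) = P(~A) * P(A|~A)

The left-hand side is the probability of a contradiction, which the axioms fix at 0, so whenever P(~A) > 0 the equation forces P(A|~A) = 0 inside the system, whatever I’d say about it from the meta level.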
There’s been much discussion of this generally, and of this point in particular.
I don’t know that the results there are necessarily correct, but they are certainly relevant.
Thanks for linking those, they are exactly what I was referring to.
How to deal with deductive uncertainty is an open problem.
Nor is infinity a possible outcome for a charity.
It’s not a probable outcome, but there literally is no such thing as an impossible outcome.
You donate to the Corrupt Society For Curing Non-existent Diseases in Cute Kittens; the money is used for hallucinogens; the hallucinogens are found by the owner’s kid, who, while high, comes up with a kitchen physics experiment that creates a Zeno Machine and messes around with it randomly. This turns out to simulate an infinite number of infinitely large cheesecakes, and through a symbolism you haven’t learnt about yet, simulated cheesecakes have, according to your utility function, a utility equal to the logarithm of their weight in solar masses.
Who said my utility function was unbounded? (Which, BTW, is the same as my reply to the Pascal’s Mugger in the wording “create 3^^^3 units of disutility”.)
No one—he just said you don’t have infinite confidence that your utility function is bounded.
Yup. Thanks for handling that one for me.
If you’re going to have a probability distribution that covers continuous intervals, 0 has to be allowed as a probability.
That just looks like a proof you can’t have probability distributions over continuous intervals.
0 shouldn’t be assigned as a probability if you’re going to do Bayesian updates. That doesn’t interfere with the necessity of using 0 when assigning probabilities to continuous distributions, as any evidence you have in practice will be at a particular precision.
For example, say the time it takes to complete a task is x. You might assign a probability of 20% that the task is finished between 2.3 and 2.4 seconds, distributed uniformly over that interval. Then the probability that it takes exactly 2.35 seconds is 0; however, the measured time might be 2.3500 seconds to the precision of your timing device, and that measurement would have a prior probability of 0.02%.
Edit: I need a linter for these comments. Where’s the warning “x was declared but never used”?
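For what it’s worth, a minimal sketch of that arithmetic (the ±0.00005 s band is my reading of “to the precision of your timing device”):

```python
# Sketch of the timing example: 20% of the mass spread uniformly over [2.3, 2.4] s.
interval_mass = 0.20        # P(2.3 s <= x <= 2.4 s)
interval_width = 0.1        # seconds
density = interval_mass / interval_width   # probability per second inside the interval

# An exact point has zero width, so it gets probability 0 under the continuous part:
p_exact = density * 0.0

# But "measured as 2.3500 s" means x landed in a band set by the device's precision,
# here taken as +/- 0.00005 s around 2.35 (the last displayed digit):
measurement_width = 0.0001
p_measured = density * measurement_width

print(p_exact)      # 0.0
print(p_measured)   # 0.0002, i.e. the 0.02% figure above
```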
I know that. But any possible interval must be non-zero.
Also, some exact numbers are exceptions, depending on how you measure things: there is a possibility that the “task” “takes” EXACTLY 0 seconds, because it was already done; for example, sorting something that was already in the right order. (In some contexts, anyway. In others it might count as a negative time, or as however long it took to check that it really was already done, or something like that.)
Infinite utility seems like it might be a similar case.
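Here’s a toy sketch of that as a mixed distribution (the 10% atom and the 5-second range are made-up numbers, just to show the shape of it): an atom at exactly 0 plus a continuous part, so one exact value gets positive probability while every other exact value still gets 0.

```python
# Mixed discrete-continuous distribution for "time the task takes":
# a point mass at 0 (it was already done) plus a uniform continuous part on (0, 5].
p_already_done = 0.10                     # atom: P(x == 0) > 0
continuous_mass = 1.0 - p_already_done    # remaining mass spread over (0, 5] seconds
density = continuous_mass / 5.0           # uniform density on (0, 5]

def prob_exact(t):
    """Probability that the task takes exactly t seconds."""
    if t == 0.0:
        return p_already_done   # the atom carries real probability
    return 0.0                  # every other exact value gets 0

def prob_interval(a, b):
    """Probability that the time falls in [a, b], with 0 <= a <= b <= 5."""
    atom = p_already_done if a <= 0.0 <= b else 0.0
    return atom + density * (b - a)

print(prob_exact(0.0))          # 0.1  -- an exact value with nonzero probability
print(prob_exact(2.35))         # 0.0
print(prob_interval(2.3, 2.4))  # ~0.018 -- any interval of positive width is nonzero
```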