This was inspired by the recent Pascal’s mugging thread, but it seems like a slightly more general and much harder question. It’s sufficiently hard that I’m not even sure where to start looking for the answer, but I guess my first step is to try to formalize the question.
From a computer programming perspective, it seems like a decision AI might need a few notations for probabilities and utilities which do not map to actual numbers. For instance, assume a decision AI capable of assessing probability and utility uses RAM to do so, and has a finite amount of it. It seems that a properly programmed decision AI would have to have states for things that might be described in English much like the following (a code sketch follows the list):
“Event U is so improbable that I ran out of RAM midway through attempting to calculate how close to 0 the probability was.”
“Event V is so probable that I ran out of RAM midway through attempting to calculate how close to 1 the probability was.”
“Event W’s probability is sufficiently hard to calculate that I ran out of RAM midway through attempting to determine what number it appeared to be approaching.”
“Event X has such a large positive utility that I ran out of RAM midway through attempting to calculate how high that utility was.”
“Event Y has such a large negative utility that I ran out of RAM midway through attempting to calculate how low that utility was.”
“Event Z’s utility is sufficiently hard to calculate that I ran out of RAM midway through attempting to determine what number it appeared to be approaching.”
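As a rough sketch of what those states might look like in code (Python here; all names are my own hypothetical choices, not an established API), the estimator would return a tagged result rather than a bare float:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class OverflowKind(Enum):
    """How a RAM-limited calculation gave out (labels match the events above)."""
    NEAR_ZERO = auto()       # U: probability was still approaching 0
    NEAR_ONE = auto()        # V: probability was still approaching 1
    UNKNOWN_LIMIT = auto()   # W / Z: no discernible limit before RAM ran out
    LARGE_POSITIVE = auto()  # X: utility exceeded what RAM could represent
    LARGE_NEGATIVE = auto()  # Y: utility fell below what RAM could represent

@dataclass
class Estimate:
    """A probability or utility: either an actual number, or a marker
    saying the calculation ran out of RAM in a particular way."""
    value: Optional[float] = None           # set when the calculation finished
    overflow: Optional[OverflowKind] = None  # set when it did not
```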
How would we want a decision AI to react to events involving those three kinds of probabilities and those three kinds of utilities?
Events U and V can be handled in the obvious fashion: treat their probabilities as 0 and 1, respectively.
Event W is cause for mild concern, with potential for alarm. Start by assuming the event has high probability (~ 1), and compute an output. Then try with low probability (~ 0). If the outputs are the same, ignore the problem and await more evidence. If the outputs are similar, attempt to decide whether the difference between them might plausibly have a large impact. If not, pick something within that range and proceed. If the problem remains unsolved, go into alarm mode and request programmer assistance.
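A sketch of that procedure in Python (decide() and impact_threshold are hypothetical stand-ins, and I’m assuming outputs are numeric so “similar” can be a simple difference):

```python
def handle_unknown_probability(decide, impact_threshold, eps=1e-9):
    """Probe both extremes of an uncomputable probability (event W).

    decide(p) maps an assumed probability to a proposed output;
    impact_threshold is the largest output difference judged harmless.
    """
    high = decide(1.0 - eps)  # pretend the event is all but certain
    low = decide(eps)         # pretend the event is all but impossible

    if high == low:
        return high  # same output either way: ignore, await more evidence
    if abs(high - low) <= impact_threshold:
        return (high + low) / 2  # similar outputs: pick something in range
    # The difference plausibly matters: alarm mode, ask the programmers.
    raise RuntimeError("event-W alarm: probability materially affects output")
```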
Events X and Y can be mitigated with an appropriate prior for the expected utility of a typical action, as informed by past experience. That should allow for reasonable decisions in many cases of (unreasonable utility) * (unreasonable probability), since those terms will produce a very low expected utility one way or the other. If the problem is still unresolved, seek programmer guidance.
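One way that mitigation might look in code (a heavily hedged sketch: the exponential down-weighting rule and all parameter values are arbitrary illustrative choices, not a derived solution):

```python
import math

def damped_utility(claimed, prior_mean=0.0, prior_scale=1.0, k=10.0):
    """Shrink an extreme claimed utility toward the prior for a typical action.

    Claims many prior_scales away from prior_mean get almost no credibility,
    so (unreasonable utility) * (vanishing credibility) stays small instead
    of dominating the expected-utility sum.
    """
    distance = abs(claimed - prior_mean) / prior_scale
    credibility = math.exp(-distance / k)  # hypothetical down-weighting rule
    return prior_mean + credibility * (claimed - prior_mean)

print(damped_utility(1e30))  # ~0.0: an astronomical claim barely registers
print(damped_utility(5.0))   # ~3.0: moderate claims are shrunk, not erased
```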
Event Z can be handled analogously to event W.
When thinking about these things I occasionally find it useful to use intervals instead of numbers to represent probabilities and utilities (a toy implementation follows the list):
P(U) is in (0, epsilon), where epsilon is the lowest upper bound for the probability I found before I ran out of RAM.
P(V) is in (1 - epsilon, 1).
P(W) is in (0, 1); or in (a, b) if I managed to find nontrivial bounds a and b before I ran out of RAM.
U(X) is in (N, infinity), where N is the highest lower bound for the utility I found before running out of RAM.
U(Y) is in (-infinity, N), where N is now the lowest upper bound I found.
U(Z) is in (-infinity, infinity); or in (M, N) if I managed to find finite lower or upper bounds before running out of RAM.
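A minimal sketch of such intervals in Python (my own toy implementation, not a library; note it adopts the measure-theoretic convention 0 * infinity = 0, which Python floats do not):

```python
import math

def _mul(a, b):
    # Convention: 0 * inf == 0 (Python floats would give nan here).
    return 0.0 if a == 0 or b == 0 else a * b

class Interval:
    """An interval-valued probability or utility; bounds may be infinite
    when no finite bound was found before running out of RAM."""
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __mul__(self, other):
        # Extremes over the corner products, as needed for P * U terms.
        corners = [_mul(self.lo, other.lo), _mul(self.lo, other.hi),
                   _mul(self.hi, other.lo), _mul(self.hi, other.hi)]
        return Interval(min(corners), max(corners))

    def __repr__(self):
        return f"({self.lo}, {self.hi})"

eps = 1e-12
P_U = Interval(0.0, eps)       # event U: probability pinned near 0
U_X = Interval(1e6, math.inf)  # event X: utility unbounded above
print(P_U * U_X)  # (0.0, inf): the product interval is uninformative,
                  # which is exactly the Pascal's-mugging problem.
```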
EDIT: This might be what is known as “interval-valued probabilities” in the literature.