By “a specific gamble” do you mean “a specific pair of gambles”? Remember, preferences are between two things! And you hardly need a utility function to express a preference between a single pair of gambles.
I don’t understand how to make sense of what you’re saying. An agent’s preferences are the starting point—preferences as in, given a choice between the two, which do you pick? It’s not clear to me how you have a notion of preference that allows for this to be undefined (the agent can be indifferent, but that’s distinct).
I mean, you could try to come up with such a thing, but I’d be pretty skeptical of its meaningfulness. (What happens if you program these preferences into an FAI and then it hits a choice for which its preference is undefined? Does it act arbitrarily? How does this differ from indifference, then? By lack of transitivity, maybe? But then that’s effectively just nontransitive indifference, which seems like it would be a problem...)
I think your comment is the sort of thing that sounds reasonable if you reason backward, starting from the idea of expected utility, but will fall apart if you reason forward, starting from the idea of preferences. But if you have some way of making it work, I’d be interested to hear...
You’re right, I mean a specific pair of gambles; in that case the preference would only need to be defined between gambles from a specific subset.
I think you should be able to set things up so that you never encounter a pair of gambles where this is undefined. I’ll illustrate with an example. Suppose you start with a prior over the integers, such that:
p(n) = C/F(n), where F(n) is a function that grows very fast and C is a normalization constant. Then the set of gambles that we’re considering would be posteriors on the integers given that they obey certain properties. For instance, we could ask the agent to choose between the posterior over integers given that n is odd vs. the posterior given that n is even.
I’m pretty sure that you can construct an agent that behaves as if it had an unbounded utility function in this case. So long as the utility associated with an integer n grows sufficiently slowly relative to F(n), all expectations over posteriors on the integers should be well defined: conditioning on an event only rescales the already-convergent sum of u(n)p(n) by a constant.
If you were to build an FAI this way, it would never end up in a belief state where the expected utility of an available gamble diverges. The expected utility would be well defined over any posterior on its prior, so its choice between any pair of gambles would also be well defined in any belief state it could find itself in.
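Here’s a minimal numerical sketch of what I mean. The particular choices (F(n) = (n!)^2, u(n) = 2^n, positive integers only, sums truncated at n = 60) are just illustrative stand-ins, not anything canonical:

```python
from math import factorial

N_MAX = 60  # truncate the sums; terms past this point are negligible

def F(n):
    return factorial(n) ** 2   # a fast-growing function

def u(n):
    return 2 ** n              # unbounded utility, but grows much more slowly than F

# Prior p(n) proportional to 1/F(n) on the positive integers, with C the
# normalization constant.
weights = {n: 1.0 / F(n) for n in range(1, N_MAX + 1)}
C = 1.0 / sum(weights.values())
prior = {n: C * w for n, w in weights.items()}

def expected_utility_given(event):
    """Expected utility under the posterior p(n | event(n))."""
    mass = sum(p for n, p in prior.items() if event(n))
    return sum(u(n) * p for n, p in prior.items() if event(n)) / mass

eu_odd = expected_utility_given(lambda n: n % 2 == 1)
eu_even = expected_utility_given(lambda n: n % 2 == 0)
print(f"E[u | n odd]  = {eu_odd:.4f}")
print(f"E[u | n even] = {eu_even:.4f}")
print("choice:", "even-posterior" if eu_even > eu_odd else "odd-posterior")
```

Both conditional expectations come out finite, so the agent’s choice between the odd-posterior and the even-posterior is well defined even though u is unbounded.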
Huh. This would need some elaboration, but this is definitely the most plausible way around the problem I’ve seen.
Now (in Savage’s formalism) actions are just functions from world-states to outcomes (maybe with a measurability condition), so regardless of your prior it’s easy to construct the relevant St. Petersburg gambles if the utility function is unbounded. But it seems like what you’re saying is, if we don’t allow arbitrary actions, then the prior could be such that, not only are none of the permitted actions St. Petersburg gambles, but also this remains the case even after future updates. Interesting! Yeah, that just might be workable...
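To make the contrast concrete, here’s a rough sketch using the same illustrative F and u as the example above (again, my own stand-ins). An unrestricted Savage-style action can be a St. Petersburg gamble whose expected utility diverges, while any gamble reachable by conditioning the 1/F(n) prior stays finite, because conditioning only rescales an absolutely convergent sum by 1/P(event):

```python
from math import factorial

def u(n):
    return 2 ** n   # unbounded utility

# Unrestricted Savage-style action: outcome n with probability 2**-n.
# Every term of the expected-utility sum is u(n) * 2**-n = 1, so the partial
# sums grow without bound (a St. Petersburg gamble).
for k in (10, 100, 1000):
    print(f"partial sum up to n={k}:", sum(u(n) * 2.0 ** -n for n in range(1, k + 1)))

# Restricted setting: every available gamble is a posterior of the fixed prior
# p(n) proportional to 1/F(n) with F(n) = (n!)**2.  The full sum of u(n) * p(n)
# converges absolutely, and conditioning on an event only rescales it by
# 1 / P(event), so no reachable posterior has divergent expected utility.
unnormalized = sum(u(n) / factorial(n) ** 2 for n in range(1, 61))
print("sum of u(n) / F(n):", unnormalized)   # finite (about 3.25)
```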