Guilty as charged—I did read your post as arguing in favor of geometric averaging, when it really wasn’t. Sorry.
The main point still seems strange to me, though. Suppose you were programming a robot to act on my behalf, and you asked me to write out some goodness values for outcomes, to program them into the robot. Then before writing out the goodnesses I’d be sure to ask you: which method would the robot use for evaluating lotteries over outcomes? Depending on that, the goodness values I’d write for you (to achieve the desired behavior from the robot) would be very different.
To me this suggests that the goodness values and the averaging method are not truly independent degrees of freedom. So it’s simpler to nail down the averaging method, use ordinary arithmetic averaging, and then assign the goodness values. We don’t lose any ability to describe behavior (as long as it’s consistent), and we’re left with only the degree of freedom that actually matters.
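Here’s a minimal sketch of what I mean, in Python with made-up outcomes and goodness values: for strictly positive values, geometric averaging of u is exactly arithmetic averaging of log u, so anything one convention describes, the other describes after relabelling the values.

```python
import math

# Hypothetical outcomes, goodness values, and a lottery over them.
goodness = {"win": 8.0, "draw": 2.0, "lose": 0.5}
lottery = {"win": 0.5, "draw": 0.3, "lose": 0.2}

def arithmetic_value(utilities, lottery):
    # Ordinary expected utility: probability-weighted sum.
    return sum(p * utilities[o] for o, p in lottery.items())

def geometric_value(utilities, lottery):
    # Geometric average: product of utilities raised to their probabilities.
    return math.prod(utilities[o] ** p for o, p in lottery.items())

# Geometric averaging of the values is arithmetic averaging of their logs,
# so the two conventions rank lotteries identically after a relabelling.
log_goodness = {o: math.log(u) for o, u in goodness.items()}
assert math.isclose(
    math.log(geometric_value(goodness, lottery)),
    arithmetic_value(log_goodness, lottery),
)
```

The assert holds for any positive values and any lottery, which is why I say the averaging method isn’t a real extra degree of freedom once the goodness values are up for grabs.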
(apologies for taking a couple of days to respond, work has been busy)
I think your robot example nicely demonstrates the difference between our intuitions. As cubefox pointed out in another comment, what representation you want to use depends on what you take as basic.
There are certain types of preferences/behaviours which cannot be expressed using arithmetic averaging. These are the ones which violate VNM, and I think violating the VNM axioms isn’t totally crazy. I think it’s worth exploring these VNM-violating preferences and seeing what they look like when fleshed out more. That’s what I tried to do in this post.
If I wanted a robot that violated one of the VNM axioms, then I wouldn’t be able to describe it by ‘nailing down the averaging method to use ordinary arithmetic averaging and assigning goodness values’. For example, if there were certain states of the world which I wanted to avoid at all costs (and thus violate the continuity axiom), I could assign them zero utility and use geometric averaging. I couldn’t do this with arithmetic averaging and any finite utilities [1].
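To make this concrete, here’s a toy sketch in Python (the outcome names and numbers are my own invention): once the state to avoid gets zero utility, geometric averaging vetoes any lottery that touches it, while arithmetic averaging with finite utilities will always accept a small enough probability of it.

```python
import math

def geometric_value(utilities, lottery):
    # Product of utilities raised to their probabilities.
    return math.prod(utilities[o] ** p for o, p in lottery.items())

def arithmetic_value(utilities, lottery):
    # Ordinary expected utility.
    return sum(p * utilities[o] for o, p in lottery.items())

# Toy values: "catastrophe" is the state to be avoided at all costs.
utilities = {"good": 10.0, "fine": 1.0, "catastrophe": 0.0}

risky = {"good": 1 - 1e-6, "catastrophe": 1e-6}
safe = {"fine": 1.0}

print(geometric_value(utilities, risky))   # 0.0: any exposure to the zero state zeroes the lottery
print(geometric_value(utilities, safe))    # 1.0: the geometric agent always takes the safe option
print(arithmetic_value(utilities, risky))  # ~9.99999: the arithmetic agent takes the gamble
```

No finite choice of utilities makes the arithmetic agent refuse the gamble at every positive probability of catastrophe; that’s exactly the continuity violation.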
A better example is Scott Garrabrant’s argument regarding abandoning the VNM axiom of independence. If I wanted to program a robot which sometimes preferred lotteries to any definite outcome, I wouldn’t be able to program the robot using arithmetic averaging over goodness values.
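To be clear, the following is a toy rule of my own, not Garrabrant’s actual construction; it’s just meant to show the shape of such a preference. The rule adds an entropy bonus to expected utility, so a fair coin flip between giving Alice or Bob a treat is strictly preferred to either certain outcome (the `fairness_value` function and its `bonus` weight are invented for illustration):

```python
import math

def entropy(lottery):
    # Shannon entropy (in nats) of the lottery's probability distribution.
    return -sum(p * math.log(p) for p in lottery.values() if p > 0)

def fairness_value(utilities, lottery, bonus=0.5):
    # Toy non-VNM rule: expected utility plus an entropy bonus, so mixing
    # is rewarded for its own sake. Illustrative only.
    eu = sum(p * utilities[o] for o, p in lottery.items())
    return eu + bonus * entropy(lottery)

utilities = {"alice_gets_treat": 1.0, "bob_gets_treat": 1.0}
coin_flip = {"alice_gets_treat": 0.5, "bob_gets_treat": 0.5}

print(fairness_value(utilities, {"alice_gets_treat": 1.0}))  # 1.0
print(fairness_value(utilities, {"bob_gets_treat": 1.0}))    # 1.0
print(fairness_value(utilities, coin_flip))                  # ~1.35: the mixture beats both
```

Since an arithmetic average of a mixture always lies between the values of its outcomes, no assignment of goodness values reproduces this strict preference for the coin flip; you have to change the rule for evaluating lotteries, not just the values.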
I think that these examples show that there is at least some independence between averaging methods and utility/goodness.
[1] (OK, I guess you could assign ‘negative infinity’ utility to those states if you wanted. But once you’re doing stuff like that, it seems to me that geometric averaging is a much more intuitive way to describe these preferences.)
> For example, if there were certain states of the world which I wanted to avoid at all costs (and thus violate the continuity axiom), I could assign them zero utility and use geometric averaging. I couldn’t do this with arithmetic averaging and any finite utilities.
Well, you can’t have some states as “avoid at all costs” and others as “achieve at all costs”, because having them in the same lottery leads to nonsense, no matter what averaging you use (with geometric averaging the lottery’s value is the indeterminate 0 · ∞; with arithmetic averaging and ±∞ utilities it’s ∞ − ∞). And allowing only one of the two seems arbitrary. So it seems cleanest to disallow both.
> If I wanted to program a robot which sometimes preferred lotteries to any definite outcome, I wouldn’t be able to program the robot using arithmetic averaging over goodness values.
But geometric averaging wouldn’t let you do that either, or am I missing something?
> Well, you can’t have some states as “avoid at all costs” and others as “achieve at all costs”, because having them in the same lottery leads to nonsense, no matter what averaging you use (with geometric averaging the lottery’s value is the indeterminate 0 · ∞; with arithmetic averaging and ±∞ utilities it’s ∞ − ∞). And allowing only one of the two seems arbitrary. So it seems cleanest to disallow both.
Fine. But the point of exploring different averaging methods is to see whether they expand the range of behaviours we can describe. Using arithmetic averaging is a choice that limits the kinds of behaviour we can get, and maybe we want to describe behaviours which can’t be captured by expected utility. Having an ‘avoid at all costs’ state is one such behaviour: it has a natural description under non-arithmetic averaging but can’t be expressed in more typical VNM terms.
If your position is ‘I would never want to describe normative ethics using anything other than expected utility’ then that’s fine, but some people (like me) are interested in looking at what alternatives to expected utility might be. That’s why I wrote this post. As it stands, I didn’t find geometric averaging very satisfactory (as I wrote in the post), but I think things like this are worth exploring.
> But geometric averaging wouldn’t let you do that either, or am I missing something?
You are right: geometric averaging on its own doesn’t allow violations of independence, but some other protocol for deciding over lotteries does. It’s described in more detail in the Garrabrant post linked above.