You can’t simply average the km’s. Suppose you estimate .5 probability that k2 should be twice k1 and .5 probability that k1 should be twice k2. Then if you normalize k1 to 1, k2 will average to 1.25, while similarly if you normalize k2 to 1, k1 will average to 1.25.
In general, to each choice of km’s there corresponds a utility function, and the utility function we should use will be a linear combination of those utility functions, and we will have renormalization parameters k’m. If we accept the argument given in your post, those k’m ought to be just as dependent on your preferences, so you’re probably also uncertain about the values those parameters should take, and so you obtain k″m’s, and so on ad infinitum. You therefore obtain an infinite tower of uncertain parameters, and it isn’t obvious how to extract a utility function from this mess.
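To make the arithmetic in the first paragraph concrete, here is a minimal sketch (illustrative only; the variable names and the two-hypothesis setup are just the example above):

```python
# Two equally likely hypotheses about the relative scale of k1 and k2:
#   hypothesis A: k2 = 2 * k1
#   hypothesis B: k1 = 2 * k2
hypotheses = [(1.0, 2.0),   # (k1, k2) under hypothesis A
              (2.0, 1.0)]   # (k1, k2) under hypothesis B

# Normalize k1 to 1 in each hypothesis, then average k2:
avg_k2 = sum(k2 / k1 for k1, k2 in hypotheses) / len(hypotheses)  # 0.5*2 + 0.5*0.5 = 1.25

# Normalize k2 to 1 in each hypothesis, then average k1:
avg_k1 = sum(k1 / k2 for k1, k2 in hypotheses) / len(hypotheses)  # 0.5*0.5 + 0.5*2 = 1.25

print(avg_k2, avg_k1)  # 1.25 1.25 -- the averaged result depends on
                       # which k you arbitrarily set to 1.
```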
You could use a geometric mean, although this might seem intuitively unsatisfactory in some cases.
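Here is a small sketch of how a geometric mean at least removes the asymmetry in Karl’s example (purely illustrative, and it says nothing about whether a geometric mean is the right aggregation rule):

```python
from math import prod

# Same two equally likely hypotheses as in Karl's example: (k1, k2) pairs.
hypotheses = [(1.0, 2.0), (2.0, 1.0)]

# Geometric mean of k2 with k1 normalized to 1, and vice versa.
gm_k2 = prod(k2 / k1 for k1, k2 in hypotheses) ** (1 / len(hypotheses))  # sqrt(2 * 0.5) = 1.0
gm_k1 = prod(k1 / k2 for k1, k2 in hypotheses) ** (1 / len(hypotheses))  # sqrt(0.5 * 2) = 1.0

print(gm_k2, gm_k1)  # 1.0 1.0 -- the same either way, because the geometric mean
                     # of a set of ratios is the reciprocal of the geometric mean
                     # of their inverses.
```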
Why would you use a geometric mean? It might make the problem go away, but why does it do that? What was the problem, and why is a geometric mean the only solution?
I think it’s a really bad strategy to respond to an apparent problem by pulling ideas out of thin air until you find one whose flaws are non-obvious. It seems much more prudent to try to understand what the problem is and why it occurred, so that we can derive something that actually does the right thing. A problem is an indication that we don’t understand what’s going on, which should be a halt-and-rethink-everything moment.
I was just pointing out that it is a possible solution to the problem that Karl mentioned. I agree that it probably isn’t a good solution overall. Maybe I shouldn’t have brought it up.
The tone of my response was a bit hostile. Sorry about that. It was a general comment against doing things that way, an approach I’ve had nothing but trouble with. It was prompted by your comment, but not really a reply to your idea in particular.
Hmm. I’ll have to take a closer look at that. You mean that the uncertainties are correlated, right?
and we will have renormalization parameters k’m and

Can you show where you got that? My impression was that once we got to the set of (equivalent, differing only by scale) utility functions, averaging them just works, without room for more fine-tuning.
But as I said, that part is shaky because I haven’t actually supported those intuitions with any particular assumptions. We’ll see what happens when we build it up from more solid ideas.
No. To quote your own post:

A similar process allows us to arbitrarily set exactly one of the km.
I meant that the utility function resulting from averaging over your uncertainty over the km’s will depend on which km you chose to arbitrarily set in this way. I gave an example of this phenomenon in my original comment.
Oh sorry. I get what you mean now. Thanks.
I’ll have to think about that and see where the mistake is. That’s pretty serious, though.