It’s not so much about the marginal expected utilities of the charities as about the expected utility of exploiting/manipulating whatever proxies you, and those like you, have used to produce the number you insist on calling ‘expected utility’.
Let’s get the gun turret example sorted out first, shall we? The gun is trying to hit a manoeuvrable spacecraft at considerable distance, so it is shooting predictively. If you compute an expected-damage function over the turret angles and always shoot at its maximum, that function will promptly acquire a dip at exactly that point, because the target will learn to evade being hit there. Do you fully understand the logic behind randomizing the shots, behind not shooting at the maximum of whatever function you use to approximate the expected utility? The optimal targeting strategy is to spread shots over the region of possible target positions in some pattern. The best pattern may be a random distribution, a criss-cross pattern, or the like.
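A toy simulation of this point (the cell count, the dodging rule, and both gunner policies are invented for illustration): against a target that learns the gunner’s habits, always firing at the single “best” angle gets exploited to nothing, while uniform randomization holds on to roughly 1/N of the hits no matter how the target dodges.

```python
import random

random.seed(0)

N_CELLS = 5       # hypothetical discretisation of the possible target positions
ROUNDS = 10_000

def run(gunner):
    """Play ROUNDS of a simultaneous shoot/dodge game against an adaptive target.

    The target watches the running tally of past shots and never stands in
    the cell the gunner has fired at most often.
    """
    tally = [0] * N_CELLS
    hits = 0
    for _ in range(ROUNDS):
        shot = gunner()
        hottest = max(range(N_CELLS), key=tally.__getitem__)
        target = random.choice([c for c in range(N_CELLS) if c != hottest])
        hits += (shot == target)
        tally[shot] += 1
    return hits / ROUNDS

# Always shooting at the maximum of your expected-damage function (here:
# one fixed best-guess cell) is fully exploitable -- the target learns
# the pattern and is simply never there.
pure = run(lambda: 0)

# Uniform randomisation is unpredictable, so it secures about 1/N_CELLS
# hits against any evasion policy whatsoever.
mixed = run(lambda: random.randrange(N_CELLS))

print(pure, mixed)
```

The randomized gunner does not win because randomness is magic; it wins because a mixed strategy is the equilibrium of this pursuit game, and any deterministic “shoot at the max” rule hands the opponent a pattern to exploit.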
Note also that this has nothing to do with saturation; it works the same if there is no ‘ship destroyed’ cutoff and you are simply trying to get the target maximally wet with a water hose.
The same situation arises in general whenever you cannot calculate expected utility properly. I have no objection to giving to the charity with the highest expected utility. But you do not know which charity that is; you are practically unable to estimate it. The charity that looks best to you is not the one with the highest expected utility. What you think is expected utility relates to actual expected utility about as well as how strong a beam you think a bridge requires relates to the actual requirement set by the building code. Go read up on equilibrium strategies and the like.
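Even before anyone deliberately games your proxies, “the charity that looks best” diverges from the actual best for a purely statistical reason, the optimizer’s curse. A toy sketch (all numbers invented): give every charity the same true value and only noisy estimates of it; the one whose estimate you pick as highest systematically overstates reality, because selecting the maximum selects for the noise.

```python
import random

random.seed(1)

N_CHARITIES = 20
TRIALS = 5_000
TRUE_UTILITY = 1.0   # hypothetical: every charity is actually identical
NOISE_SD = 0.5       # hypothetical spread of your estimation error

overshoot = 0.0
for _ in range(TRIALS):
    # Your 'expected utility' numbers: true value plus estimation noise.
    estimates = [TRUE_UTILITY + random.gauss(0.0, NOISE_SD)
                 for _ in range(N_CHARITIES)]
    # You give to whichever looks best, i.e. the maximum estimate.
    winner = max(estimates)
    overshoot += winner - TRUE_UTILITY

avg_overshoot = overshoot / TRIALS
print(avg_overshoot)
```

Here the winning estimate exceeds the true value by nearly two noise standard deviations on average, despite every estimate being unbiased individually. Add an adversary who can shape what your proxies report, and the gap between your number and real expected utility only widens.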