I said nothing about an arbitrary utility function (nor proof for that matter). I was saying that applying utility theory to a specific set of terminal values seems to basically get you an idealized version of utilitarianism, which is what I thought the standard moral theory was around here.
If you know the utility function that is objectively correct, then you have the correct metaethics, and VNM-style utility maximisation only tells you how to implement it efficiently.
The first thing is “utilitarianism is true”; the second thing is “rationality is useful”.
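To make that division of labour concrete, here is a minimal sketch (the actions, outcomes, probabilities, and utilities are all hypothetical illustrations): the maximisation machinery takes the utility function as an input, so it settles nothing about which UF is the correct one.

```python
# Minimal sketch: expected-utility maximisation treats the utility
# function as an *input*; it is pure machinery, not metaethics.
# All names and numbers here are hypothetical illustrations.

def expected_utility(action, outcomes, utility):
    """Probability-weighted sum of utilities for one action."""
    return sum(p * utility(o) for o, p in outcomes[action].items())

def best_action(actions, outcomes, utility):
    """VNM-style choice: pick the action that maximises expected
    utility under whatever UF the caller supplies."""
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

# Hypothetical decision problem: two actions, two possible outcomes each.
outcomes = {
    "act_1": {"good": 0.9, "bad": 0.1},
    "act_2": {"good": 0.5, "bad": 0.5},
}
utility = {"good": 10.0, "bad": -5.0}.get  # one possible UF among many

print(best_action(["act_1", "act_2"], outcomes, utility))  # -> act_1
```

Swap in a different `utility` and the same machinery can return the opposite answer; nothing in the maximisation step privileges one UF over another.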
But that goes back to the issue everyone criticises: EY recommends an object-level decision… prefer torture to dust specks… unconditionally, without knowing the reader’s UF.
If he had succeeded in arguing, or even tried to argue, that there is one true objective UF, then he would be in a position to hand out unconditional advice.
Or if he could show that preferring torture to dust specks was rational given an arbitrary UF, then he could also hand out unconditional advice (in the sense that the conditioning on a subjective UF doesn’t make a difference). But he doesn’t do that, because if someone has a UF that places negative infinity utility on torture, that’s not up for grabs… their personal UF is what it is.
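To spell out that last case as arithmetic (a hypothetical UF, not a claim about anyone’s actual values): with torture at negative infinity, no finite number of dust specks can outweigh it, so “prefer torture” cannot fall out of expected-utility maximisation alone.

```latex
% Hypothetical agent: U(torture) = -infinity, and each speck costs
% a finite epsilon > 0. Then for every finite N:
\[
  U(\text{torture}) = -\infty, \qquad
  U(N\ \text{specks}) = -N\varepsilon \quad (\varepsilon > 0 \text{ finite})
\]
\[
  \forall N \in \mathbb{N}: \; -N\varepsilon > -\infty
  \;\Longrightarrow\; N\ \text{specks} \succ \text{torture}
\]
% So this agent rationally prefers any number of specks to torture,
% and the recommendation cannot be unconditional across UFs.
```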