Taken to the extreme: imagine that someone made all your decisions for you. You would seem to have higher utility, but you would have no free will. You would be more like a character in a book than a living person.
I think there may be some way in which the amount of aliveness you have is a function of the amount of free will you have, and that your “super-utility” is utility × aliveness. So a life with less freedom could have higher utility, yet be less valuable.
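To make the trade-off concrete, here is a toy calculation; the numbers are purely illustrative assumptions of mine, not part of the original claim. Writing $U$ for utility and $A \in [0,1]$ for aliveness:

\[
U_{\text{super}} = U \times A, \qquad 10 \times 0.5 = 5 \;<\; 7 \times 1.0 = 7.
\]

So a heavily managed life with utility 10 but aliveness 0.5 comes out less valuable than a freer life with utility 7 and full aliveness, even though its plain utility is higher.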
Is there a reason you can’t just redefine utility to capture the value of freedom?
Many (even most?) people do have freedom as part of their utility function, though with different weights, so redefinition is unnecessary if you grant a moral imperative to increase the utility of others.
The problem is that most people interpret such an imperative as “increase the utility others would have if they shared my own utility function”, which is not at all correct. Simply redefining utility in the general case to include freedom is in this class of mistakes.
Amartya Sen has written extensively about how to do just this, though he wouldn’t call it utility either (it’s one of the cornerstones of the capability approach). He formalizes it in terms of the real option sets available to an individual rather than “free will”. The main difficulty is how to quantify and value different option sets. (You can’t just look at the size of the sets, because different options are likely to be differentially valuable qua options, and you need to incorporate that somehow.)
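As a sketch of why set size fails as a measure, under a toy formalization that is mine rather than Sen’s: give each option $x$ a weight $w(x)$ reflecting its value qua option, and compare the naive count with a weighted sum:

\[
V_{\text{naive}}(S) = |S| \qquad\text{vs.}\qquad V(S) = \sum_{x \in S} w(x).
\]

Two option sets of equal size then get the same score under the first measure but can differ arbitrarily under the second, and the hard problem is exactly where $w$ comes from.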
You seem to be handwaving the definition of “free will” a bit here. On some level, the laws of physics “make all my decisions”, but this clearly doesn’t bother me. Is it really free will that matters, or the perception of it?
Suppose Omega felt sorry for someone and, judging by that person’s own utility function, started making subtle interventions in their life: blocking off the possibility of bad choices and opening doors for good choices, all without them noticing. They’d still be reacting to their environment, and would have higher utility. Is that bad?
My own view is that there’s no particular reason it couldn’t be bad, if the individual concerned happened to value Omega not doing that sort of thing.