Just because the question is very, very hard doesn’t mean there’s no answer.
Definitely true. That’s why I said “yet?” It may be possible in the future to develop something like a general individual utility function, but we certainly do not have that now.
Perhaps I’m confused. The meta-utility function—isn’t that literally identical to the social utility function? Beyond the social function, utilitarianism/consequentialism isn’t making tradeoffs—the goal of the whole philosophy is to maximize the utility of some group, and once we’ve defined that group (a task for which we cannot use a utility function without infinite regress), the rest is a matter of the specific form.
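(To make "the specific form" concrete, here is a minimal sketch, assuming we have already fixed the group G and each member's individual utility function u_i; the notation is my own, not something spelled out above. The two standard candidates are total and average utilitarianism:

\[
U_{\text{social}}(x) \;=\; \sum_{i \in G} u_i(x)
\qquad\text{or}\qquad
U_{\text{social}}(x) \;=\; \frac{1}{|G|} \sum_{i \in G} u_i(x).
\]

Note that the choice of G itself cannot be made by maximizing U_social, which is exactly the regress mentioned above.)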
The meta-utility function—isn’t that literally identical to the social utility function?
Yes. The problem is that we can’t actually calculate with it, because the only information we have about it consists of vague intuitions, some of which may be wrong.
If only we were self-modifying intelligences… ;)