The most common way to get a Boolean out of that is to label the maximum ‘true’ and everything else ‘false’, but that doesn’t yield a result a human could realistically follow.
You have to get decisions out of the moral theory. A decision is a choice of a single thing to do out of all the possibilities for action. For any theory that rates possible actions by a real-valued measure, maximising that measure is the result the theory prescribes.
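As a sketch of that prescription (the names `actions` and `utility` here are illustrative stand-ins, not anything from the discussion): a decision is simply the argmax of the real-valued measure over the available actions.

```python
# Illustrative sketch: a decision as the argmax of a real-valued measure.
# `decide`, `actions`, and `utility` are hypothetical names for this example.
def decide(actions, utility):
    """Return the single action the theory prescribes: the one maximising utility."""
    return max(actions, key=utility)

# Toy example with a made-up utility assignment.
actions = ["donate", "save", "spend"]
utility = {"donate": 3.0, "save": 1.5, "spend": 0.5}.get
print(decide(actions, utility))  # -> donate
```

The point is only that any real-valued rating of actions collapses to a single prescribed choice this way, which is what makes the maximum-labelling move above the natural one.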
If that does not give a realistically human-followable result, then either you give up the idea of measuring decisions by utility, or you take account of people’s limitations in defining the utility function. However, if you believe your utility function should be a collective measure of the well-being of all sentient individuals (that is, if you do not merely have a utility function, but are a utilitarian), of which there are at least 7 billion, you would have to weight your personal quality of life vastly higher than anyone else’s to make a dent in the rigours to which it calls you.
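The scale argument can be made concrete with a back-of-the-envelope sketch (the numbers and names are illustrative, not from the discussion): in a sum over roughly 7 billion individuals, your own well-being term is negligible unless its weight is itself on the order of billions.

```python
# Sketch with illustrative numbers: a utilitarian aggregate as a weighted sum
# of everyone's well-being. With ~7 billion equal terms, doubling your own
# well-being at weight 1 moves the total by about one part in 7 billion.
N = 7_000_000_000        # at least 7 billion sentient individuals
others = 1.0             # average well-being per other person (arbitrary units)
mine = 1.0               # your own well-being, same units

def aggregate(self_weight):
    """Collective utility: your term, weighted, plus everyone else's."""
    return self_weight * mine + (N - 1) * others

baseline = aggregate(1.0)
# Relative change in the total from adding one unit to your own well-being:
change = (baseline + mine) / baseline - 1
print(f"{change:.2e}")   # on the order of 1e-10
```

So at equal weight your own quality of life is a rounding error in the aggregate, which is why only a vastly inflated self-weight would blunt the theory’s demands.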