Thanks nyan, this was really helpful in understanding what you told me last time. So if I understand you correctly, utilities are both subjective and descriptive: they only identify what a particular agent actually prefers under uncertain conditions. Is this right? If so, how do we take into account situations where one is not sure what one wants? Being turned into a whale might be as awesome as being turned into a gryphon, but since you don’t (presumably) know what either would be like, how do you calculate your expected payoff?
Can you link me to or in some way dereference “what I told you last time”?
one is not sure what one wants?
how do you calculate your expected payoff?
If you have a probability distribution over possible utility values or something, I don’t know what to do with it. It’s a type error to aggregate utilities from different utility functions, so don’t do that. That’s the moral uncertainty problem, and I don’t think there’s a satisfactory solution yet. Though Bostrom or someone might have done some good work on it that I haven’t seen.
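To make the type error concrete, here is a toy sketch of my own (not anything from the thread, and all the numbers are invented). VNM utilities are only defined up to a positive affine transformation, so a value from one utility function has no fixed exchange rate against a value from another; the made-up candidate functions below just show how a naive aggregate ranking can flip when one of them is rescaled.

```python
# Toy illustration (invented numbers): utilities drawn from two *different*
# candidate utility functions share no common scale, so mixing them is not
# a well-defined operation.

outcomes = ["whale", "gryphon"]

# Two candidate utility functions you might turn out to have.
u_a = {"whale": 5.0, "gryphon": 3.0}   # "I'd love the ocean"
u_b = {"whale": 1.0, "gryphon": 4.0}   # "I'd love flying"

# Naive aggregate, weighting each candidate 50/50: gryphon comes out ahead.
naive = {o: 0.5 * u_a[o] + 0.5 * u_b[o] for o in outcomes}

# Rescale u_b by a positive factor.  As a VNM utility function it still
# represents exactly the same preferences, yet the aggregate ranking flips.
u_b_scaled = {o: 0.5 * u_b[o] for o in outcomes}
scaled = {o: 0.5 * u_a[o] + 0.5 * u_b_scaled[o] for o in outcomes}

print(naive)   # {'whale': 3.0, 'gryphon': 3.5}  -> gryphon ranked higher
print(scaled)  # {'whale': 2.75, 'gryphon': 2.5} -> whale ranked higher
```

The flip is the point: the aggregate number was never measuring anything, because the scale of each candidate function was arbitrary to begin with.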
For now, it probably works to guess at how good it seems relative to other things. Sometimes breaking it down into a more detailed scenario helps, looking at it a few different ways, etc. Fundamentally though, I don’t know. Maximizing EU without a real utility function is hard. Moral philosophy is hard.
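If it helps, here is the “guess and break it down” heuristic written out as plain expected-utility arithmetic. Every scenario, probability, and score below is invented for illustration; the only real content is E[U] = sum of p_i * u_i, with all the guessed scores kept on one rough common scale so it stays within a single notional utility function.

```python
# Minimal sketch of "guess how good it seems and break it into scenarios".
# All probabilities and scores are made up; only the arithmetic is real.

def expected_utility(scenarios):
    """scenarios: list of (probability, guessed utility) pairs."""
    assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9
    return sum(p * u for p, u in scenarios)

# Break "turned into a whale" into sub-scenarios and guess how good each
# seems relative to the others (hypothetical numbers).
whale = [
    (0.6, 7.0),   # ocean life turns out serene and interesting
    (0.4, 2.0),   # it's mostly cold, dark, and boring
]

# Same for "turned into a gryphon".
gryphon = [
    (0.5, 9.0),   # flying is as great as it sounds
    (0.5, 3.0),   # wings are exhausting and you miss having hands
]

print("whale:  ", expected_utility(whale))    # 0.6*7 + 0.4*2 = 5.0
print("gryphon:", expected_utility(gryphon))  # 0.5*9 + 0.5*3 = 6.0
```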
My bad, nyan.
You were explaining to me the difference between utility in decision theory and utility in utilitarianism. I will try to find the thread later.
Thanks.
Are all those ostensibly unintentional typos an inside joke of some kind?
No, they are due solely to autocorrect, sloppy writing and haste. I will try to be more careful, apologies.
You know you can go back and fix them, right?
Done.
...Am I the only one who is wondering how being turned into a hale would even work and whether or not that would be awesome?
Probably not possible since it isn’t even a noun.
Hale is a noun, alright.