Obviously there is a lot of complexity. In _this model_:
the complexity lies mainly in some unknown function Happiness(), which maps mind-states (or a fair chunk of the state of a human brain) to real numbers. Apparently, humans have some ability to evaluate an estimate of this quantity. The proposal here is that when asked to map it onto a 1..10 scale, they apply some strongly non-linear mapping, e.g. log().
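As a toy sketch of what that mapping might look like (the log base, the range of Happiness(), and the rescaling below are illustrative assumptions, not part of the claim):

```python
import math

def reported_score(happiness: float) -> float:
    """Illustrative (assumed) mapping from the underlying Happiness() value to the
    1..10 self-report scale: log compression, rescaled so 1 maps to 1 and 1e9 to 10."""
    return 1 + 9 * math.log10(happiness) / math.log10(1e9)

# Under this toy mapping, each extra point on the 1..10 scale corresponds
# to a 10x jump in underlying happiness.
for h in [1, 10, 1e4, 1e8, 1e9]:
    print(f"Happiness() = {h:>12g}  ->  reported ~ {reported_score(h):.1f}")
```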
Prescriptive morality starts elsewhere: when you take such a number, aggregate it somehow over some set of people, and claim the aggregate is worth optimizing.
What I’m saying is that anyone making such prescriptive claims should consider the possibility that they are aggregating in a bad way. (Anyone optimizing e.g. QALYs, gross domestic happiness, or some conceptions of utilitarian value is making such claims.)
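To make the "aggregating in a bad way" worry concrete, here is a toy comparison under the same assumed log10 mapping (the populations and numbers are made up for illustration):

```python
def underlying(score: float) -> float:
    # Invert the illustrative log10 mapping above: reported 1..10 -> Happiness().
    return 10 ** (score - 1)

pop_a = [6, 6]    # everyone reports a middling 6
pop_b = [1, 10]   # one very low report, one very high report

for name, pop in [("A", pop_a), ("B", pop_b)]:
    mean_reported = sum(pop) / len(pop)
    mean_underlying = sum(underlying(s) for s in pop) / len(pop)
    print(f"pop {name}: mean reported = {mean_reported:.1f}, "
          f"mean underlying = {mean_underlying:,.0f}")

# Averaging reported scores ranks A above B; averaging the (assumed) underlying
# Happiness() values ranks B far above A. Which aggregate you optimize matters.
```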
By revealed preference, people often put exponentially more resources into going from 9 to 10 than from 1 to 2, so I don’t think the suggestion that going from 1 to 2 is as valuable as going from 9 to 10 is intuitive at all.
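For intuition about why that might be, here is the same toy mapping again, with the extra (purely illustrative) assumption that the resources needed scale roughly with the underlying Happiness() gain:

```python
def underlying(score: float) -> float:
    # Same illustrative inverse mapping as above (an assumption, not part of the claim).
    return 10 ** (score - 1)

gain_1_to_2 = underlying(2) - underlying(1)     # 10 - 1 = 9
gain_9_to_10 = underlying(10) - underlying(9)   # 1e9 - 1e8 = 9e8

print(f"underlying gain for 1 -> 2:  {gain_1_to_2:,.0f}")
print(f"underlying gain for 9 -> 10: {gain_9_to_10:,.0f}")
print(f"ratio: {gain_9_to_10 / gain_1_to_2:,.0f}x")

# If resources buy underlying happiness roughly linearly, spending exponentially
# more on 9 -> 10 than on 1 -> 2 is exactly what this model predicts.
```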
The only trouble with that revealed-preference argument is that if happiness is correlated with the amount of resources people have, then this will confound any argument based on different people spending different amounts of money.
To answer the question, we could look at cases where someone gives money to someone else (to measure altruistic preferences), and try to infer what sort of impact people want their resources to have as a function of the recipient's quality of life. For example, if people prefer to give a lot of money to people who are already happy, that would indicate they are intuitively aggregating in a way that weights gains to higher subjective happiness more.
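A very rough sketch of what that inference could look like, on entirely made-up giving data:

```python
from statistics import mean

# Hypothetical giving data (made up for illustration): each pair is
# (recipient's reported happiness on the 1..10 scale, amount donated to them).
observations = [(2, 120.0), (4, 90.0), (6, 60.0), (8, 30.0)]

xs = [x for x, _ in observations]
ys = [y for _, y in observations]
x_bar, y_bar = mean(xs), mean(ys)

# Ordinary least-squares slope of donation on recipient's reported happiness.
slope = sum((x - x_bar) * (y - y_bar) for x, y in observations) / \
        sum((x - x_bar) ** 2 for x in xs)

print(f"slope = {slope:+.1f} per reported point")
# A negative slope (as in this made-up data) would suggest donors implicitly weight
# gains to the less happy more; a positive slope would suggest the opposite.
```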
We could also look at what kind of actions people take when planning for their own future (measuring selfish preferences): if they face a 50% probability of a good outcome and a 50% probability of a bad outcome, and they can buy insurance that pays out double in one of the outcomes, do they want the payout in the bad outcome or in the good outcome?
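A toy expected-utility version of that question (the wealth levels, payout size, and the log utility function are all assumptions I am adding for illustration):

```python
import math

# Wealth is 100 in the bad outcome and 1000 in the good one, each with p = 0.5,
# and the insurance pays 50 in whichever outcome it is attached to.
bad_wealth, good_wealth, payout, p = 100.0, 1000.0, 50.0, 0.5

def expected_log_utility(extra_in_bad: float, extra_in_good: float) -> float:
    """Expected utility with log utility of resources (an assumption standing in for
    'diminishing returns'; the original does not specify a utility function)."""
    return p * math.log(bad_wealth + extra_in_bad) + p * math.log(good_wealth + extra_in_good)

print("payout in bad outcome: ", expected_log_utility(payout, 0.0))
print("payout in good outcome:", expected_log_utility(0.0, payout))
# With any concave (diminishing-returns) utility, the payout is worth more in the
# bad outcome; preferring the good outcome instead would hint at the opposite weighting.
```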