I think these weights are descriptive, not prescriptive.
What do you mean by that? Are you saying humans already maximize expected utility using some linear aggregation of individual values, so these weights already exist? But the whole point of the OP is to convince people who are not already EU maximizers to become EU maximizers.
Are you saying humans already maximize expected utility using some linear aggregation of individual values, so these weights already exist?
I think my answer would be along the lines of “humans have preferences that could be consistently aggregated, but they are bad at aggregating them consistently due to the computational difficulties involved.” For example, much of the early statistical prediction rule work fit a linear regression to a particular expert’s output on training cases, and found that the regression beat the expert on new cases; that is, it captured enough of the expert’s expertise without capturing as much of their mistakes, fatigue, and off days. If you’re willing to buy that a simple algorithm based on a doctor can diagnose a disease better than that doctor, then it doesn’t seem like a big stretch to claim that a simple algorithm based on a person can satisfy that person’s values better than that person’s decisions made in real time. (To move from ‘diagnose this one disease’ to ‘make choices that impact my life trajectory’ you need much, much more data, and probably more sophisticated aggregation tools than linear regression, but the basic intuition should hold.)
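That phenomenon is easy to demonstrate in simulation. The sketch below (a toy model, not any particular study’s setup) assumes a “true” case severity that is linear in three observed features, an expert whose judgments track that rule plus noise, and a least-squares fit to the expert’s own training judgments. The fitted model of the expert then beats the expert on fresh cases, because the regression keeps the signal and averages out the noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: true severity is linear in three features,
# and the expert's judgment is that rule plus noise (fatigue, off days).
n_train, n_test = 200, 200
true_w = np.array([2.0, -1.0, 0.5])

X_train = rng.normal(size=(n_train, 3))
X_test = rng.normal(size=(n_test, 3))
truth_test = X_test @ true_w

expert_train = X_train @ true_w + rng.normal(scale=1.0, size=n_train)
expert_test = X_test @ true_w + rng.normal(scale=1.0, size=n_test)

# Fit a linear regression to the expert's training judgments (not to the
# ground truth -- the model only ever sees the expert's noisy outputs).
w_hat, *_ = np.linalg.lstsq(X_train, expert_train, rcond=None)
model_test = X_test @ w_hat

# Compare both against the truth on new cases.
expert_mse = np.mean((expert_test - truth_test) ** 2)
model_mse = np.mean((model_test - truth_test) ** 2)
print(f"expert MSE: {expert_mse:.3f}, model-of-expert MSE: {model_mse:.3f}")
```

The model of the expert wins despite being trained only on the expert’s noisy outputs, because averaging over many training cases cancels the case-by-case noise while preserving the consistent part of the expert’s judgment.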
And so I think the methodology is (sort of) prescriptive: whatever you do, if it isn’t equivalent to a linear combination of your subvalues, then your aggregation procedure is introducing new subvalues, which is probably a bug.* (The ‘equivalent to’ is what makes it only ‘sort of’ prescriptive.) If the weights aren’t all positive, that’s probably also a bug: a zero weight means that subvalue has no impact on your preferences, and thus isn’t actually a subvalue, while a negative weight means you’re treating it as a cost rather than a value. But what should the relative weights for v_3 and v_4 be? Well, that depends on the tradeoffs the person is willing to make; it’s not something we can pin down theoretically.
*Or you erroneously identified two subvalues as distinct, when they are related and should be mapped jointly.
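To make the last point concrete, here is a minimal sketch (the indifference data are hypothetical) of how a reported tradeoff pins down the weight ratio for v_3 and v_4 under linear aggregation, while leaving the overall scale a free choice:

```python
# Suppose the person reports being indifferent between an outcome that
# gains 2 units of v_3 and one that gains 3 units of v_4. Under linear
# aggregation, indifference means w3 * 2 == w4 * 3, so w3 / w4 == 1.5.
# Only the ratio is determined by the person's tradeoffs; the scale is
# an arbitrary normalization.
w4 = 1.0          # fix a scale arbitrarily
w3 = 1.5 * w4     # ratio pinned down by the reported tradeoff

def utility(v3, v4):
    """Linear aggregation of the two subvalues."""
    return w3 * v3 + w4 * v4

# The two outcomes the person judged equally good get equal utility:
print(utility(2, 0) == utility(0, 3))  # → True
```

Nothing in the formalism tells you the ratio should be 1.5 rather than anything else; that number comes from the person, which is why the weights are descriptive even when the linearity requirement is prescriptive.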
And so I think the methodology is (sort of) prescriptive: whatever you do, if it isn’t equivalent to a linear combination of your subvalues, then your aggregation procedure is introducing new subvalues, which is probably a bug.
I tried to argue against this in the top level comment of this thread, but may not have been very clear. I just came up with a new argument, and would be interested to know whether it makes more sense to you.