It’s interesting, but it assumes that human desires can be meaningfully mapped into something like a utility function, and that assumption makes me skeptical about its usefulness. (Though I have a hard time articulating my objection more clearly than that.)
I recognise that argument, but surely we can make use of utility functions in our models in order to make progress in thinking about these things.
Even if we crudely imagine a typical human who happens to be ticking all of Maslow’s boxes, with access to happiness, meaning and resources, tending towards our (current...) normalised ‘1’, and someone in solitary confinement, under psychological torture, tending towards our normalised ‘0’ as a utility point – even then the concept is sufficiently coherent and grokable to allow the use of these kinds of models? (A rough sketch of this kind of normalisation is given below.)
Do you disagree? I am curious – I have encountered this point several times and I’d like to see where we differ.
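As a very rough illustration of the normalisation described in the comment above – purely a toy sketch, where the three welfare dimensions, the equal weights and the linear aggregation are my own assumptions rather than anything claimed in the discussion – a crude [0, 1] utility mapping might look like this:

    # Toy sketch only: the welfare dimensions, equal weights and linear
    # aggregation are assumptions made for illustration, not claims from
    # the discussion above.

    def crude_utility(happiness: float, meaning: float, resources: float) -> float:
        """Map three rough welfare scores (each in [0, 1]) to a single utility in [0, 1].

        0 is anchored at the solitary-confinement / psychological-torture case,
        1 at the 'all of Maslow's boxes ticked' case.
        """
        score = (happiness + meaning + resources) / 3.0  # equal weights, purely illustrative
        return max(0.0, min(1.0, score))  # clamp to the normalised interval

    print(crude_utility(0.9, 0.8, 0.7))   # a life near the '1' anchor -> 0.8
    print(crude_utility(0.05, 0.0, 0.1))  # a life near the '0' anchor -> 0.05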
human desires can be meaningfully mapped into something like a utility function
I don’t believe this is possible in a useful way. However, having a utility solution may mean we can generalise to other situations...
Do you mean not possible for humans with current tools or theoretically impossible? (It seems to me that in principle human preferences can be mapped to something like a utility function in a way that is at least useful, even if not ideal.)
That’s a whole conversation! I probably shouldn’t start talking about this, since I don’t have the time to do it justice.
In the main, I feel that humans are not easily modelled by a utility function, and that we have meta-preferences that cause us to hate facing the kind of trade-offs that utility functions imply. I’d bet most people would pay to not have their preferences replaced with a utility function, no matter how well defined it was. But that’s a conversation for after the baby!
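To make the trade-off point concrete – a toy example under assumptions of my own, where the goods, the weights and the numbers are invented for illustration – any real-valued utility function fixes an exchange rate between goods, so an agent maximising it must accept certain trades that a person might simply refuse to price:

    # Toy example: the goods, weights and numbers are invented for illustration.
    # A fixed-weight utility function implies a price at which one good trades
    # against the other, so the maximiser must accept any sufficiently large offer.

    def utility(family_time: float, money: float) -> float:
        return 1.0 * family_time + 0.01 * money  # fixed weights imply an exchange rate

    status_quo = (10.0, 1000.0)
    offer = (9.0, 1200.0)  # give up one unit of family time for 200 extra money

    # 9 + 12 = 21 beats 10 + 10 = 20, so the utility maximiser takes the deal.
    print(utility(*offer) > utility(*status_quo))  # True

    # A person with a meta-preference against pricing family time at all may
    # refuse to face this comparison -- a refusal the single number cannot express.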
This is probably (and quite possibly by an order of magnitude so) the most important contribution from lesswrong in its entirety in several months.
I like your skilled use of understatement! ;-)