No, I’m trying to understand the process others use to arrive at their claims about what they value (besides direct experiences). I can’t reproduce it, so it feels like they are confabulating, but I don’t assume that’s the most likely answer here.
For our purposes, we can get some pretty decent Bayesian evidence about what our values are by simply asking, “Which future scenario do I want to steer the world towards?” Is that going to give us perfect information on exactly what we value? No. But is it a pretty good start? Yes.
That seems horribly broken. There are tons of biases that make asking such questions essentially meaningless. Looking at anticipated and actual rewards and punishments can easily be done and fits into simple models that actually predict people’s behavior. Asking complex questions leads to stuff like the Trolley problem, which is notoriously unreliable and useless for figuring out why we prefer some options to others.
It seems to me that assuming complex values requires cognitive algorithms that are much more expensive than anything evolution might build, and that don’t easily fit actual revealed preferences. Their only strength seems to be that they would match some of the thoughts that come up while contemplating decisions (and not even non-contradictory ones). Isn’t that privileging a very complex hypothesis?