When you’re talking about the utility of squirrels, what exactly are you calculating? How much you personally value squirrels? How do you measure that? If it is just a thought experiment (“I would pay $1 per squirrel to prevent their deaths”), how do you know that you aren’t just lying to yourself, & that if it really came down to it, you wouldn’t pay? Maybe we can only really calculate utility after the fact, by looking at what people do rather than what they say.
I may not actually want to pay $1 per squirrel, but if I still want to want to, then that’s as significant a part of my ethics as my desire to avoid being a wirehead, even though once I tried it I would almost certainly never want to stop.
I would rather observe you & see what you do to avoid becoming a wirehead. I’d put saying you want to avoid becoming a wirehead & saying you want to want to pay to save the squirrels in the same camp: both totally unprovable at this point in time. In the future maybe we could scan your brain & see which of your stated preferences you are likely to act on; that’d be extremely cool, especially if we could scan politicians during their campaigns.
How do you know those people aren’t still “lying to themselves”? Humans are not known for being perfect, bias-free reasoners.
Maybe we can only really calculate utility after the fact by looking at what perfect Bayesian agents do, rather than at what mere mortals do.