IME it’s a lot easier to make these estimates if I calibrate my utils. Otherwise I’m just tossing labels around without ever dereferencing them.
If I assume, somewhat arbitrarily, that “1 unit of utility” is a just-noticeable utility difference at my current average utility… and I try to imagine what “1 billion utility” might actually be like, I have real trouble coming up with anything about which I don’t have strong emotions.
This isn’t terribly surprising, since emotions are tied pretty closely to value judgments in my brain.
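To put a rough number on why a billion utils is so hard to picture, here’s a minimal sketch under one loud assumption of my own (nothing from the original framing beyond the JND calibration): suppose each just-noticeable difference corresponds to a fixed ~1% proportional change in whatever underlying quantity you’re varying, Weber–Fechner style.

```python
import math

# Made-up assumption for illustration only: one "util" (one
# just-noticeable difference) = a fixed ~1% proportional change in
# some underlying quantity, Weber-Fechner style.
WEBER_FRACTION = 0.01  # assumed; real Weber fractions vary by domain

def log10_growth(n_utils: float) -> float:
    """log10 of the factor the underlying quantity grows by after
    n_utils stacked just-noticeable differences."""
    return n_utils * math.log10(1 + WEBER_FRACTION)

for n in (1, 1_000, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} utils -> underlying quantity x 10^{log10_growth(n):,.2f}")
```

On that admittedly shaky model, a billion stacked JNDs multiplies whatever you started with by something like 10^4,300,000, which may be part of why I can’t picture it dispassionately, or at all.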
Is it different for you?