It seems plausible to me that J really, truly cares about himself significantly more than he cares about other people, certainly with P > 0.05.
The effect could be partly due to this and partly due to scope insensitivity, but still: how do you distinguish one from the other?
It seems: caring about yourself → caring what society thinks of you → following society’s norms → tendency towards scope insensitivity (since several of society’s norms are scope-insensitive).
In other words: how do you tell whether J has utility function F, or a different utility function G which he is doing a poor job of optimising due to biases? I assume it would have something to do with pointing out the error and seeing how he reacts, but it can’t be that simple. Is the question even meaningful?
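One way to make the F-versus-G question concrete is to treat it as Bayesian model comparison over J's observed choices. The sketch below is purely illustrative: every likelihood in it is invented, and the point it makes is that the two hypotheses can predict nearly identical behaviour.

```python
# Toy Bayesian model comparison: does J's giving look more like
# "selfish utility function F", or "altruistic G distorted by bias"?
# All probabilities below are invented for illustration only.

def posterior_odds(prior_odds, p_evidence_given_f, p_evidence_given_g):
    """Posterior odds of F over (G + bias) after seeing the evidence."""
    return prior_odds * (p_evidence_given_f / p_evidence_given_g)

# Hypothetical evidence: J donates the same amount whether the charity
# saves 10 lives or 10,000 -- a scope-insensitive pattern.
odds = posterior_odds(
    prior_odds=1.0,            # start indifferent between the hypotheses
    p_evidence_given_f=0.30,   # F: donation tracks social reward, not lives saved
    p_evidence_given_g=0.25,   # G + bias: predicts much the same pattern
)
print(f"Posterior odds, F : (G + bias) = {odds:.2f}")  # ~1.20, barely informative
```

If the two hypotheses assign similar likelihoods to everything J actually does, no amount of observation will separate them, which is one way the question could fail to be meaningful.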
Re: “charities that work”, your assumption is correct.
Considering that J is contributing a lot of money to truly effective charity, I think his utility function is such that, if his biases did not render him incapable of appreciating just how much fun his charity was generating, the utility of continued donations, minus the disutility of social shame and of ten people dying, would still exceed the disutility of J himself dying. If he's very selfish, my probability estimate is raised (not above .95, but above whatever it would have been before) by the fact that most people don't want to die.
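For concreteness, here is that comparison as toy arithmetic; every figure below is invented, and only the structure of the inequality comes from the paragraph above.

```python
# Toy version of the utils comparison above; all numbers are made up.
utils_fun_from_donations = 100.0   # fun generated by continued donations
utils_social_shame       = -20.0   # disutility of social shame
utils_ten_deaths         = -50.0   # disutility J assigns to ten strangers dying
utils_own_death          = -200.0  # disutility J assigns to his own death

keep_donating  = utils_fun_from_donations + utils_social_shame + utils_ten_deaths
self_sacrifice = utils_own_death

print(f"keep donating: {keep_donating} utils; self-sacrifice: {self_sacrifice} utils")
# keep donating: 30.0 utils; self-sacrifice: -200.0 utils -> donating wins
```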
One way to find out the source of such a decision is to ask him to read the Sequences and see what he thinks afterwards. The question is very meaningful, because the whole point of instrumental rationality is learning how to prevent your biases from sabotaging your utility function.