How many boxes do you take in Newcomb’s problem?
One. Either he can predict my actions and is honest, in which case I get a million dollars, or he can’t, in which case I lose nothing, miss out on a thousand dollars, and get to laugh at him. (Note that I think one of the reasons the problem is bogus is that the conditions of the problem restrict you from considering any gain except the cash (e.g., getting to laugh at him), which is too unrealistic to matter.) (Also note that this is from memory; I think Newcomb’s and similar problems are bogus enough that I haven’t read any posts, or anything else, about them in well over a year.)
So your objection (correct me if I am wrong) is that it makes no sense to value what the other aliens do, because what they do doesn’t affect you in any way. You don’t have a problem with acting as if your own behavior determines what the aliens do, given that they decide their actions based on a reliable prediction of what you will do. You just don’t care. You reject the premise that your own utility function depends on them, on the grounds that such a utility function is silly.
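
(For concreteness, here is a minimal sketch of the expected-value arithmetic behind that answer. The predictor’s accuracy p and the code are illustrative assumptions; only the $1,000,000 and $1,000 figures come from the standard problem.)

```python
# Expected dollar payoffs in Newcomb's problem, treating the predictor's
# accuracy p as a free parameter (illustrative assumption). The opaque box
# is filled with $1,000,000 iff the predictor expected one-boxing; the
# visible box always holds $1,000.

def expected_value(one_box: bool, p: float) -> float:
    if one_box:
        # Predictor right -> $1,000,000; predictor wrong -> empty box.
        return p * 1_000_000
    # Two-boxing always collects the visible $1,000, plus $1,000,000
    # only when the predictor wrongly expected one-boxing.
    return p * 1_000 + (1 - p) * (1_000_000 + 1_000)

for p in (0.5, 0.9, 0.99):
    print(f"p={p}: one-box {expected_value(True, p):>9,.0f}, "
          f"two-box {expected_value(False, p):>9,.0f}")
```

With these numbers, one-boxing has the higher expectation whenever p exceeds roughly 0.5005, i.e. whenever the predictor is even slightly better than chance.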
Your first sentence is a fair description.
As your second sentence admits, what they do is their decision. Letting their decision influence me in the way you seem to support is no different from giving in to “emotional blackmail”.
Your final sentence makes no sense; I cannot figure out what you mean by it.
You reject premise 1), where premise 1) is that your utility function depends on them, i.e. “They are altruistic”. You reject it because you think any utility function with premise 1) is silly.
On “letting their decision influence me in the way you seem to support”: I didn’t advocate one way or the other. Utility functions simply happened to be provided for us in the post, so for me the question becomes one of abstract reasoning. But I certainly don’t object to you mentioning an aversion to utility functions of the type given. It is a relevant contribution and a legitimate perspective.
I read it, and it makes sense. See above for more explanation.
Or simple trade, rather than “emotional blackmail”. They have preferences about the future state of their local environment and preferences about the future state of your local environment (for whatever unspecified reason). You are in a symmetrical situation. You discover that you can get an overall higher utility by doing some slightly worse things in your local environment in exchange for them doing some more preferred things where they live. This doesn’t exclude them from outright synthesizing humans!
Or trade.
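
(A toy numerical version of the trade reasoning above. All of the numbers, including the 0.5 “care” weight, are made-up assumptions for illustration; nothing here is specified in the exchange.)

```python
# Each agent values its own local environment and, with some weight, the
# other's local environment. All numbers are illustrative assumptions.

CARE_WEIGHT = 0.5  # how much each side values the other's environment

def utility(own_env_by_own_lights: float, other_env_by_own_lights: float) -> float:
    return own_env_by_own_lights + CARE_WEIGHT * other_env_by_own_lights

# No trade: each side arranges its own environment purely for itself, so it
# scores 10 by its own lights but only 4 by the other side's lights.
u_no_trade = utility(10, 4)   # 12.0

# Trade: each side does slightly worse things locally (10 -> 9 by its own
# lights) in exchange for the other doing more preferred things where it
# lives (4 -> 8 by our lights). The situation is symmetrical.
u_trade = utility(9, 8)       # 13.0

assert u_trade > u_no_trade   # both sides come out ahead
```

The only point is that, once each side places any value on the other’s environment, a small local sacrifice can buy a larger gain, so both totals go up relative to not trading.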