Well, I agree that current humans can’t choose to do this, but it seems like it might be possible through technology. It seems it would be something highly analogous to wireheading. Thanks for rephrasing it this way; it helped the idea click for me.
I guess it depends on whether Bob knows your decision algorithm. If all he can see is whether he got the dollar, then in the worlds you care about he gets dollars more often, and thus reciprocates more often, as far as you are concerned. But if he realizes you are using option 3, then he would be more grateful in that case.
From what I understand, you’re asking about people self-modifying to believe that something they desire is true.
There is no reason to do this. The choice of self-modifying doesn’t increase the probability of it being true. All it does is result in future!you believing it’s true, but you don’t care about what future!you believes. You care about what actually happens.
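A toy expected-value sketch in Python makes this concrete (the probabilities and utilities below are made up purely for illustration): modifying your beliefs changes what future!you expects, but not the probability of the outcome, so the expectation over what actually happens is unchanged.

```python
# Sketch: self-modifying to believe X is true does not change the
# probability that X is actually true, so it cannot raise the expected
# utility computed over actual outcomes. All numbers are invented.

P_X_TRUE = 0.3      # actual probability that the desired thing is true
U_IF_TRUE = 10.0    # utility if it really is true
U_IF_FALSE = 0.0    # utility if it is not

def expected_utility(p_true: float) -> float:
    """Expected utility over what actually happens."""
    return p_true * U_IF_TRUE + (1 - p_true) * U_IF_FALSE

# Without self-modification: you believe the true probability.
baseline = expected_utility(P_X_TRUE)

# With self-modification: future!you believes X with certainty, but the
# world is unchanged, so the expectation is still over the same
# actual probability.
after_modification = expected_utility(P_X_TRUE)

print(baseline, after_modification)  # identical: the belief change buys nothing
```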
Well, I agree that current humans can’t choose to do this, but it seems like it might be possible through technology.
Even with technology, it won’t be possible to decide to have already been apathetic at the time when you are deciding whether to become apathetic. Hence, when you are deciding whether to become apathetic, you will make that decision based on what you cared about pre-apathy. So if, in that pre-apathetic state, you care about things that apathy would harm, then you won’t decide to become apathetic.
Yeah, but people are stupid, so they might do it anyway. This is why I said it was like wireheading: you are increasing your expected pleasure (in some sense) at the cost of losing lots of things you really care about.
Also, an agent that could change his preferences could just give himself good outcomes by changing his preferences to favor whatever outcomes he is actually going to get. The point is that if you could do this, you would not increase your expected utility; you would just create a different agent with a very convenient utility function. This is not something I would want to do.
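A small sketch (Python, with invented outcomes and utilities) of why this is bookkeeping rather than a gain: the modified agent rates whatever it gets as maximal, but evaluated by the original utility function nothing has improved.

```python
# Sketch: "choosing to prefer whatever outcome you will get" creates a
# different agent with a convenient utility function; it does not make
# the outcome any better by the original agent's lights.
# Outcomes and utilities are invented for illustration.

original_utility = {"cake": 10.0, "mud": 1.0}

# The outcome the agent is actually going to get, like it or not.
actual_outcome = "mud"

# Self-modification: adopt a utility function that rates the actual
# outcome as the best thing possible.
convenient_utility = {outcome: (10.0 if outcome == actual_outcome else 0.0)
                      for outcome in original_utility}

print("New agent's score:     ", convenient_utility[actual_outcome])  # 10.0
print("Original agent's score:", original_utility[actual_outcome])    # 1.0
# The successor is delighted; the agent who made the choice gained nothing.
```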
Oh, yes, you are right, I forgot that he doesn’t get to see the result.