But less is different from never. A rational choice would be to never tip at a restaurant you will never visit again.
People still tip because of evolved mechanisms that make them feel guilty for betraying the waitress even when, rationally, they will face no negative consequences for doing so.
And this mechanism, in turn, is hardwired in to encourage you to play fair even when higher areas of your brain determine there is no reason to in this situation.
A rational choice would be to never tip at a restaurant you will never visit again.
This is debatable. You prefer tips to be made in the (counterfactual) hypothetical where you work at that restaurant, so to the extent there is a priori uncertainty about whether you would be working at a restaurant or be a customer who never visits again, there is potentially an opportunity for increasing expected utility by transferring value between these hypotheticals.
This is debatable. You prefer tips to be made in the (counterfactual) hypothetical where you work at that restaurant, so to the extent there is a priori uncertainty about whether you would be working at a restaurant or be a customer who never visits again, there is potentially an opportunity for a positive-sum transfer of expected utility between these hypotheticals.
No, Gerald is correct. Given a known culture with known typical behavior of tipping by the other (human) customers and known consequences (or lack thereof) of not tipping after a single visit, it is an error to use updateless considerations as an excuse to give away money. UDT does not cooperate with CooperateBot (or anonymous restaurant staff). If all the human customers and waiters with their cultural indoctrination were discarded and replaced with agents like itself, then the question becomes somewhat more open.
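To make the CooperateBot point concrete, here is a minimal sketch with made-up payoff numbers (the usual Prisoner's Dilemma ordering, chosen only for illustration). It is not an implementation of UDT; it just shows that when the other party's move is fixed independently of your policy, the payoff-maximizing move is to defect, which is the analogy being drawn to anonymous restaurant staff.

```python
# Minimal sketch with made-up payoffs (standard Prisoner's Dilemma ordering).
# CooperateBot's move is fixed regardless of our policy, so the move that
# maximizes our own payoff against it is defection.

PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def cooperate_bot(my_move):
    """Cooperates no matter what we do -- its output ignores our move."""
    return "C"

def best_response(opponent):
    """Pick the move with the highest payoff given how the opponent actually responds."""
    return max(("C", "D"), key=lambda m: PAYOFF[(m, opponent(m))])

print(best_response(cooperate_bot))  # -> "D"
```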
(I edited the last sentence for clarity since you’ve quoted it.)
My point was not that the situation is analogous to PD (the waiter doesn’t play; it’s a one-player decision, not a two-player game). It’s the uncertainty about the utility of the waiter’s profit that UDT considerations apply to. If you are the waiter, then you value the waiter’s profit; otherwise you don’t (for the purposes of the thought experiment). In PD, you don’t care about CooperateBot’s winnings.
The analogy is with Counterfactual Mugging. The coin toss (a priori uncertainty) is whether you would become a customer or a waiter, the observation is that you are in fact a customer, and a relevant UDT consideration is that you should optimize expected utility across both of the hypotheticals, where in one of them you are a customer and in the other a waiter. By giving the tip, you subtract utility from your hypothetical where you are a customer, and transfer it to the other hypothetical where you are a waiter (that is, in the hypothetical where you are a customer, the utility becomes less if customers give tips; and in the hypothetical where you are a waiter, the utility becomes greater if customers give tips).
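As a rough illustration of that comparison (with made-up numbers and a hypothetical utility function, since none are given here): the policy is fixed before you learn which role you occupy, and each hypothetical is weighted by its a priori probability.

```python
# A rough sketch of the comparison above, with made-up numbers; nothing here is
# a claim about actual magnitudes, only about the structure of the calculation.

p_waiter = 0.5   # hypothetical a priori probability of ending up as the waiter
tip = 5.0        # hypothetical dollar value of the tip

def utility(money):
    # Hypothetical utility of money; linear here, so the tip is a pure transfer
    # between the two hypotheticals. A different curve could change the comparison.
    return money

def expected_utility(customers_tip):
    """Expected utility of the policy 'customers tip' vs. 'customers do not tip',
    averaged over the a priori uncertainty about which role you end up in."""
    as_customer = utility(-tip if customers_tip else 0.0)  # you pay the tip
    as_waiter = utility(tip if customers_tip else 0.0)     # you receive the tip
    return (1 - p_waiter) * as_customer + p_waiter * as_waiter

print(expected_utility(True), expected_utility(False))
# Which policy comes out ahead depends entirely on these stand-in numbers,
# which is the uncertainty the next paragraph points at.
```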
I don’t know which direction is more valuable: for a waiter to tip the customer or conversely. It might be that the predictable effect is too small to matter. I certainly don’t understand this situation well enough to settle on a recommendation in either direction. My point is that the situation is more complex than it may seem, so a policy chosen in the absence of these considerations shouldn’t be seen as definitive.
it is an error to use updateless considerations as an excuse to give away money
It is an error to use excuses in general. It is enlightening to work on a better understanding of what given considerations actually imply.
My point was not that the situation is analogous to PD (the waiter doesn’t play; it’s a one-player decision, not a two-player game).
Not true (that it is a single-player game), but this is tangential.
It’s the uncertainty about the utility of the waiter’s profit that UDT considerations apply to. If you are the waiter, then you value the waiter’s profit; otherwise you don’t (for the purposes of the thought experiment). In PD, you don’t care about CooperateBot’s winnings.
My previous response applies. In particular, this consideration only applies after you discard key features of the problem—that is, you make all the other relevant participants in the game rational agents rather than humans with known cultural programming. In the actual problem you have no more reason to (act as if you) believe you are (or could be a priori) the waiter than to believe you are the cow that you are served or the fork you use to eat the slaughtered, barbecued cow.
It is enlightening to work on a better understanding of what given considerations actually imply.
These considerations don’t apply. This is just another example of the all too common use of “Oooh, Deep Timeless Updateless Reflective. Cooperate, morality, hugs!” when the actual situation would prompt a much more straightforward but less ‘nice’ solution.
My point was not that the situation is analogous to PD (the waiter doesn’t play; it’s a one-player decision, not a two-player game).
Not true (that it is a single-player game), but this is tangential.
Well, it seems obvious to me that this is a one-player game, so for me it’s not tangential; it’s very important for me to correct the error on this. As I see it, the only decision here is whether to tip, and this decision is made by the customer. Where is the other player, and what is its action?
make all the other relevant participants in the game rational agents rather than humans with known cultural programming. In the actual problem you have no more reason to (act as if you) believe you are (or could be a priori) the waiter than to believe you are the cow that you are served or the fork you use to eat the slaughtered, barbecued cow.
Rationality of the other participants is only relevant to the choice of their actions, and no actions of the waiter are involved in this thought experiment (as far as I can see or stipulate in my interpretation). So indeed the waiter is analogous to a cow in this respect, as a cow’s inability to make good decisions is equally irrelevant. It’s the value of personal prosperity that the hypotheticals compare. The distinction I’m drawing attention to is how you care about yourself vs. how you could counterfactually care about the waiter if you were the waiter (or a cow if you were the cow), not how you make decisions yourself vs. how the waiter (or a cow) makes decisions.
It is enlightening to work on a better understanding of what given considerations actually imply.
These considerations don’t apply.
That’s exactly the question I’m considering. I’m not sure whether they apply or not, or what they suggest if they do; I don’t know how to think about this problem so as to see this clearly. You insist that they don’t, but that doesn’t help me if you don’t help me understand how they don’t.
One sense in which an idea “applies” is that you can draw novel conclusions about a problem by making an analogy with it. Since I’m not drawing novel conclusions (any conclusions!), in this sense the idea indeed doesn’t apply. What I am insisting on is that my state of knowledge doesn’t justify certainty in the decision in question, and I’m skeptical that the certainty of others is justified.
This is just another example of the all too common use of “Oooh, Deep Timeless Updateless Reflective. Cooperate, morality, hugs!” when the actual situation would prompt a much more straightforward but less ‘nice’ solution.
(It may sound unlikely, but I’m almost certain I’m indifferent to conclusions on things like this in the sense that I’m mostly interested in what decision theory itself says, and much less in what I’d do in practice with that information. The trouble is that I don’t understand decision theory well enough, and so going against emotional response is no more comforting than going with it.)