>If an agent is really in a pure one-shot case, that agent can do anything at all
You can learn about a situation in ways other than by facing that exact situation yourself. For example, you may observe other agents facing that situation or receive testimony from an agent that has proven itself trustworthy. You don’t even seem to disagree with me here, as you wrote: “you can learn enough about the universe to be confident you’re now in a counterfactual mugging without ever having faced one before”
“This goes along with the idea that it’s unreasonable to consider agents as if they emerge spontaneously from a vacuum, face a single decision problem, and then disappear”—I agree with this. I asked this question because I didn’t have a good model of how to conceptualise decision theory problems, although I think I have a clearer idea now that we’ve got the Counterfactual Prisoner’s Dilemma.
>One way of appealing to human moral intuition
Doesn’t work on counterfactually selfish agents
>Decision theory should be reflectively endorsed decision theory. That’s what decision theory basically is: thinking we do ahead of time which is supposed to help us make decisions
Thinking about decisions before you make them != thinking about decisions timelessly
>You can learn about a situation in ways other than by facing that exact situation yourself. For example, you may observe other agents facing that situation or receive testimony from an agent that has proven itself trustworthy. You don’t even seem to disagree with me here, as you wrote: “you can learn enough about the universe to be confident you’re now in a counterfactual mugging without ever having faced one before”
Right, I agree with you here. The argument is that we have to understand learning in the first place to be able to make these arguments, and iterated situations are the easiest setting to do that in. So if you’re imagining that an agent learns what situation it’s in more indirectly, but thinks about that situation differently than an agent who learned in an iterated setting, there’s a question of why that is. It’s more a priori plausible to me that a learning agent thinks about a problem by generalizing from similar situations it has been in, which I expect to act kind of like iteration.
Or, as I mentioned re: all games are iterated games in logical time, the agent figures out how to handle a situation by generalizing from similar scenarios across logic. So any game we talk about is iterated in this sense.
>One way of appealing to human moral intuition
>Doesn’t work on counterfactually selfish agents
I disagree. Reciprocal altruism and true altruism are kind of hard to distinguish in human psychology, but I said “it’s a good deal” to point at the reciprocal-altruism intuition. The point being that acts of reciprocal altruism can be a good deal w/o having considered them ahead of time. It’s perfectly possible to reason “it’s a good deal to lose my hand in this situation, because I’m trading it for getting my life saved in a different situation; one which hasn’t come about, but could have.”
I kind of feel like you’re just repeatedly denying this line of reasoning. Yes, the situation in front of you is that you’re in the risk-hand world rather than the risk-life world. But this is just question-begging with respect to updateful reasoning. Why give priority to that way of thinking over the “but it could just as well have been my life at stake” world? Especially when we can see that the latter way of reasoning does better on average?
>Decision theory should be reflectively endorsed decision theory. That’s what decision theory basically is: thinking we do ahead of time which is supposed to help us make decisions
>Thinking about decisions before you make them != thinking about decisions timelessly
Ah, that’s kind of the first reply from you that’s surprised me in a bit. Can you say more about that? My feeling is that in this particular case the equality seems to hold.
>The argument is that we have to understand learning in the first place to be able to make these arguments, and iterated situations are the easiest setting to do that in
Iterated situations are indeed useful for understanding learning. But I’m trying to abstract out over the learning insofar as I can. I care that you get the information required for the problem, but not so much how you get it.
>Especially when we can see that the latter way of reasoning does better on average?
The average includes worlds that you know you are not in. So this doesn’t help us justify taking these counterfactuals into account, indeed for us to care about the average we need to already have an independent reason to care about these counterfactuals.
>I kind of feel like you’re just repeatedly denying this line of reasoning. Yes, the situation in front of you is that you’re in the risk-hand world rather than the risk-life world. But this is just question-begging with respect to updateful reasoning.
I’m not saying you should reason in this way. You should reason updatelessly. But in order to get to the point of finding the Counterfactual Prisoner’s Dilemma, which I consider a satisfactory justification, I had to rigorously question every other solution until I found one which could withstand the questioning. This seems like a better solution, as it is less dependent on tricky-to-evaluate philosophical claims.
>Ah, that’s kind of the first reply from you that’s surprised me in a bit
Well, thinking about a decision after you make it won’t do you much good. So you’re pretty much always thinking about decisions before you make them. But timelessness involves thinking about decisions before you end up facing them.
>Iterated situations are indeed useful for understanding learning. But I’m trying to abstract out over the learning insofar as I can. I care that you get the information required for the problem, but not so much how you get it.
OK, but I don’t see how that addresses my argument.
>The average includes worlds that you know you are not in. So this doesn’t help us justify taking these counterfactuals into account,
This is the exact same response again (i.e. the very kind of response I was talking about in my remark you’re responding to), where you beg the question of whether we should evaluate from an updateful perspective. Why is it problematic that we already know we are not in those worlds? Because you’re reasoning updatefully? My original top-level answer explained why I think this is a circular justification in a way that the updateless position isn’t.
>I’m not saying you should reason in this way. You should reason updatelessly.
Ok. So what’s at stake in this discussion is the justification for updatelessness, not the whether of updatelessness.
I still don’t get why you seem to dismiss my justification for updatelessness, though. All I’m understanding of your objection is a question-begging appeal to updateful reasoning.
You feel that I’m begging the question. I guess I take thinking only about this counterfactual as the default position, as it’s where an average person is likely to be starting from. And I was trying to see if I could find an argument strong enough to displace this. So I’ll freely admit I haven’t provided a first-principles argument for focusing just on this counterfactual.
>OK, but I don’t see how that addresses my argument.
Your argument is that we need to look at iterated situations to understand learning. Sure, but that doesn’t mean that we have to interpret every problem in iterated form. If we need to understand learning better, we can look at a few iterated problems beforehand, rather than turning this one into an iterated problem.
>The average includes worlds that you know you are not in. So this doesn’t help us justify taking these counterfactuals into account,
Let me explain more clearly why this is a circular argument:
a) You want to show that we should take counterfactuals into account when making decisions
b) You argue that this way of making decisions does better on average
c) The average includes the very counterfactuals whose value is in question. So b depends on a already being proven ⇒ circular argument
>Let me explain more clearly why this is a circular argument:
>a) You want to show that we should take counterfactuals into account when making decisions
>b) You argue that this way of making decisions does better on average
>c) The average includes the very counterfactuals whose value is in question. So b depends on a already being proven ⇒ circular argument
That isn’t my argument though. My argument is that we ARE thinking ahead about counterfactual mugging right now, in considering the question. We are not misunderstanding something about the situation, or missing critical information. And from our perspective right now, we can see that agreeing to be mugged is the best strategy on average.
We can see that if we update on the value of the coin flip being tails, we would change our mind about this. But the statement of the problem requires that there is also the possibility of heads. So it does not make sense to consider the tails scenario in isolation; that would be a different decision problem (one in which Omega asks us for $100 out of the blue with no other significant backstory).
So we (right now, considering how to reason about counterfactual muggings in the abstract) know that there are the two possibilities, with equal probability, and so the best strategy on average is to pay. So we see behaving updatefully as bad.
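To make the “best on average” arithmetic concrete, here is a minimal sketch. The dollar amounts ($100 asked on tails, $10,000 paid on heads to agents who would pay on tails) are the conventional illustrative payoffs for counterfactual mugging, assumed here rather than taken from the discussion above:

```python
# Expected value of the two fixed policies in a counterfactual mugging,
# using assumed conventional payoffs: on tails you are asked for $100;
# on heads Omega pays $10,000 iff you are the kind of agent who would
# pay on tails.
P_HEADS = 0.5  # fair coin

def expected_value(pays_when_asked: bool) -> float:
    heads_payoff = 10_000 if pays_when_asked else 0
    tails_payoff = -100 if pays_when_asked else 0
    return P_HEADS * heads_payoff + (1 - P_HEADS) * tails_payoff

print(expected_value(True))   # "pay" policy: 4950.0
print(expected_value(False))  # "refuse" policy: 0.0
```

So, evaluated from the prior (before the coin is seen), the paying policy comes out ahead; updating on tails is exactly what makes refusal look attractive.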
So my argument for considering the multiple possibilities is that the role of thinking about decision theory now is to help guide the actions of my future self.
>You feel that I’m begging the question. I guess I take thinking only about this counterfactual as the default position, as it’s where an average person is likely to be starting from. And I was trying to see if I could find an argument strong enough to displace this. So I’ll freely admit I haven’t provided a first-principles argument for focusing just on this counterfactual.
I think the average person is going to be thinking about things like duty, honor, and consistency which can serve some of the purpose of updatelessness. But sure, updateful reasoning is a natural kind of starting point, particularly coming from a background of modern economics or Bayesian decision theory.
But my argument is compatible with that starting point, if you accept my “the role of thinking about decision theory now is to help guide future actions” line of thinking. In that case, starting from updateful assumptions now, decision-theoretic reasoning makes you think you should behave updatelessly in the future.
Whereas the assumption you seem to be using, in your objection to my line of reasoning, is “we should think of decision-theoretic problems however we think of problems now”. So if we start out an updateful agent, we would think about decision-theoretic problems and think “I should be updateful”. If we start out a CDT agent, then when we think about decision-theoretic problems we would conclude that you should reason causally. EDT agents would think about problems and conclude you should reason evidentially. And so on. That’s the reasoning I’m calling circular.
Of course an agent should reason about a problem using its best current understanding. But my claim is that when doing decision theory, the way that best understanding should be applied is to figure out what decision theory does best, not to figure out what my current decision theory already does. And when we think about problems like counterfactual mugging, the description of the problem requires that there’s both the possibility of heads and tails. So “best” means best overall, not just down the one branch.
If the act of doing decision theory were generally serving the purpose of aiding in making the current decision, then my argument would not make sense, and yours would. Current-me might want to tell the me in that universe to be more updateless about things, but alternate-me would not be interested in hearing it, because alternate-me wouldn’t be interested in thinking ahead in general, and the argument wouldn’t make any sense with respect to alternate-me’s current decision.
So my argument involves a fact about the world which I claim determines which of several ways to reason, and hence, is not circular.
>My argument is that we ARE thinking ahead about counterfactual mugging right now, in considering the question
When we think about counterfactual muggings, we naturally imagine the possibility of facing a counterfactual mugging in the future. I don’t dispute the value of pre-committing either to take a specific action or to acting updatelessly. However, instead of imagining a future mugging, we could also imagine a present mugging where we didn’t have time to make any pre-commitments. I don’t think it is immediately obvious that we should think updatelessly; instead, I believe that it requires further justification.
>The role of thinking about decision theory now is to help guide the actions of my future self
This is effectively an attempt at proof-by-definition
>I think the average person is going to be thinking about things like duty, honor, and consistency which can serve some of the purpose of updatelessness. But sure, updateful reasoning is a natural kind of starting point, particularly coming from a background of modern economics or Bayesian decision theory
If someone’s default is already updateless reasoning, then there’s no need for us to talk them into it. It’s only people with an updateful default that we need to convince (until recently I had an updateful default).
>And when we think about problems like counterfactual mugging, the description of the problem requires that there’s both the possibility of heads and tails
It requires a counterfactual possibility, not an actual possibility. And a counterfactual possibility isn’t actual, it’s counter to the factual. So it’s not clear this has any relevance.
It looks to me like you’re tripping yourself up with verbal arguments that aren’t at all obviously true. The reason why I believe that the Counterfactual Prisoner’s Dilemma is important is that it is a mathematical result that doesn’t require much in the way of assumptions. Sure, it still has to be interpreted, but it seems hard to find an interpretation that avoids the conclusion that the updateful perspective doesn’t quite succeed on its own terms.
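For readers following along, here is a sketch of one common formulation of the Counterfactual Prisoner’s Dilemma (the payoff numbers are assumed for illustration, not fixed by this thread): Omega flips a fair coin and, in whichever world you land in, pays you $10,000 iff you would have paid $100 in the other world, while paying costs you $100 in this world. The point of the result is that the comparison holds world-by-world, not just on average:

```python
# Payoff to a fixed policy in the Counterfactual Prisoner's Dilemma,
# under one common formulation (illustrative payoffs assumed):
# in each world, Omega pays $10,000 iff your policy pays $100 in the
# *other* world; paying in *this* world costs $100.
def payoff(pays_on_heads: bool, pays_on_tails: bool, coin: str) -> int:
    pays_here = pays_on_heads if coin == "heads" else pays_on_tails
    pays_elsewhere = pays_on_tails if coin == "heads" else pays_on_heads
    reward = 10_000 if pays_elsewhere else 0
    cost = 100 if pays_here else 0
    return reward - cost

for coin in ("heads", "tails"):
    print(coin, payoff(True, True, coin), payoff(False, False, coin))
# Always-pay nets 9900 in both worlds; never-pay nets 0 in both worlds.
```

On this formulation the updateful (never-pay) policy loses with certainty in the world it actually occupies, which is why the result is said to turn on so few assumptions: no appeal to averages over worlds you know you are not in is needed.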