They don’t have to be known to be impossible. Just unlikely. If you’re facing someone similar to yourself, it might be that choosing to defect makes it more likely that they defect, and enough so to cancel out any gain you’d have, but you still don’t know they’ll defect.
Came here to say that, see it’s been said. If, as the probability of something approaches (but does not reach) zero, your actions don’t approach the choice you would make given impossibility, then you must either be assigning infinite utility to something or you must not be maximizing expected utility.
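The limit argument above can be sketched numerically, using the 3/5/1/0 payoffs that appear later in this thread. Here `eps` is the probability that the opponent’s move does *not* mirror yours (an illustrative assumption, not anything specified in the thread):

```python
# Expected points for each action when the opponent mirrors my move
# with probability 1 - eps. Payoffs (my score): (C,C) -> 3, (D,D) -> 1,
# (D,C) -> 5, (C,D) -> 0.

def eu(action, eps):
    if action == "C":
        return (1 - eps) * 3 + eps * 0  # mirrored (C,C) -> 3; off-diagonal (C,D) -> 0
    else:
        return (1 - eps) * 1 + eps * 5  # mirrored (D,D) -> 1; off-diagonal (D,C) -> 5

for eps in (0.1, 1e-3, 1e-9, 0.0):
    best = max("CD", key=lambda a: eu(a, eps))
    print(eps, best)

# For every eps below 2/7 the best action is the same as at eps == 0
# (cooperate): with bounded payoffs, the preferred choice converges
# as the probability goes to zero.
```

The only way the eps &gt; 0 answer can fail to converge to the eps == 0 answer is if some payoff is allowed to be unbounded, which is the point of the comment above.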
When you say that choosing to defect might make it more likely that they defect, do you mean that choosing to defect may cause the probability that the other person will defect to go up, or do you mean that the probability of the other player defecting, given that you defected, may be greater than the probability given that you cooperated?
To quote Douglas Adams, “The impossible often has a kind of integrity to it which the merely improbable lacks.” If it is impossible to have off-diagonal results, that is a much stronger argument for cooperating than having it be improbable, even if the probability of an on-diagonal result is 99.99%; as long as the possibility exists, one should take it into consideration.
If the probability is epsilon, then having the probability be zero is only an epsilon stronger argument. If you doubt this, let epsilon equal 1/googolplex.
I mean the second one. Also, if I said the first one, I would mean the second one. They’re the same by the definitions I use. The second one is more clear.
If the probability of an on-diagonal result is sufficiently high, and the benefit of an off-diagonal one is sufficiently low, that is all that’s necessary for it to be worthwhile to cooperate.
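That tradeoff can be made concrete. With the 3/5/1/0 payoffs used in this thread, and writing p for the probability of an on-diagonal (matching) result, cooperation has the higher expected score exactly when p &gt; 5/7 (a sketch; p itself is an illustrative assumption):

```python
# Payoffs from the thread (my score): (C,C) -> 3, (D,D) -> 1,
# (D,C) -> 5, (C,D) -> 0. p = probability the result is on-diagonal.

def eu_cooperate(p):
    return p * 3 + (1 - p) * 0

def eu_defect(p):
    return p * 1 + (1 - p) * 5

# Cooperation wins when 3p > p + 5(1 - p), i.e. p > 5/7 (about 0.714).
threshold = 5 / 7
assert eu_cooperate(threshold + 0.01) > eu_defect(threshold + 0.01)
assert eu_cooperate(threshold - 0.01) < eu_defect(threshold - 0.01)
```

So “sufficiently high” has an exact value for these payoffs: any credence above 5/7 that the result lands on the diagonal makes cooperation the expected-utility-maximizing choice.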
Yes. I model “unlikely” as “I likely live in a universe where these outcomes are impossible”, but that’s just an unimportant difference in perspective.
What do you mean by “impossible”? If you mean highly unlikely, then you’re using recursive probability, which doesn’t make a whole lot of sense. If you mean against the laws of physics, then it’s false. If you mean that it won’t happen, then it’s just a longer way of saying that those outcomes are unlikely.
it’s just a longer way of saying that those outcomes are unlikely.
What if you are playing with someone and their decision on the current round does not affect your decision in the current round?
If you are known to cooperate because it means that your opponent (who is defined as ‘similar to yourself’) will also cooperate, then your opponent knows he is choosing between 3 points and 5 points. Being like you, he chooses 3 points.
If you are playing against someone whose decision you determine (or influence), then you choose the square; if the nature of your control prevents you from choosing 5 or 0 points (or makes those very unlikely) but allows you to choose 3 or 1 (or makes one of those very likely), choose 3. However, there is only one player in that game.
I don’t care which way the causal chain points. All I care about is if the decisions correlate.
Also, I’m not sure I follow most of what you’re saying.
Given the choice between 0 points and 1 point, you would prefer 1 point; given the choice between 3 points and 5 points, you would prefer 3 points. (Consider the case where you are playing a cooperatebot; the choice which correlates is cooperation; against a defectbot, the choice which correlates is defection. There are no other strategies in a single-round PD without the ability to communicate beforehand.)
Why would you prefer three points to five points? Aren’t points just a way of specifying utility? Five points is better than three points by definition.
Right, which means defectbot is the optimal strategy. However, when playing against someone who is defined to be using the same strategy as you, you get more points by using the other strategy.
It should not be the case that two players independently using the optimal option would score more if the optimal option were different.
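The strategies discussed above can be sketched directly; `mirror` is a hypothetical name for an opponent defined to make the same choice you do:

```python
# Single-round PD payoffs from the thread: (my score, their score).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(my_move, opponent):
    """My score when the opponent's move may depend on mine."""
    return PAYOFF[(my_move, opponent(my_move))][0]

cooperatebot = lambda _: "C"      # always cooperates
defectbot    = lambda _: "D"      # always defects
mirror       = lambda move: move  # hypothetical: copies your move exactly

# Against any fixed bot, defecting dominates:
assert play("D", cooperatebot) > play("C", cooperatebot)  # 5 > 3
assert play("D", defectbot) > play("C", defectbot)        # 1 > 0
# But when the opponent's move is guaranteed to match yours:
assert play("C", mirror) > play("D", mirror)              # 3 > 1
```

This is the tension in the last two comments: defection dominates against any opponent whose move is fixed, yet two players whose moves are guaranteed to match score 3 each by cooperating instead of 1 each by defecting.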