I guess we have talked about this a bunch last year, but since the post has come up again...
It then becomes clear what the requirements are besides “I believe we have compatible DTs” for Arif to believe there is decision-entanglement:
“I believe we have entangled epistemic algorithms (or that there is epistemic-entanglement[5], for short)”, and “I believe we have been exposed to compatible pieces of evidence”.
I still don’t understand why it’s necessary to talk about epistemic algorithms and their entanglement as opposed to just talking about the beliefs that you happen to have (as would be normal in decision and game theory).
Say Alice has epistemic algorithm A with inputs x that gives rise to beliefs b and Bob has a completely different [ETA: epistemic] algorithm A’ with completely different inputs x’ that happens to give rise to beliefs b as well. Alice and Bob both use decision algorithm D to make decisions. Part of b is the belief that Alice and Bob have the same beliefs and the same decision algorithm. It seems that Alice and Bob should cooperate. (If D is EDT/FDT/..., they will cooperate.) So it seems that the whole A,x,A’,x’ stuff just doesn’t matter for what they should do. It only matters what their beliefs are. My sense from the post and past discussions is that you disagree with this perspective and that I don’t understand why.
(Of course, you can talk about how in practice, arriving at the right kind of b will typically require having similar A, A’ and similar x, x’.)
(Of course, you need to have some requirement to the effect that Alice can’t modify her beliefs in such a way that she defects but doesn’t (non-causally) make it much more likely that Bob also defects. But I view this as an assumption about decision-theoretic, not epistemic, entanglement: I don’t see why an epistemic algorithm (in the usual sense of the word) would make such self-modifications.)
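To make that setup concrete, here is a minimal Python sketch of the Alice/Bob example above, under the caricature that each agent is just a few functions; the names (epistemic_A, epistemic_A_prime, decision_D) are illustrative stand-ins, not anything defined in the post. All it shows is that D’s output is a function of the beliefs b alone, so the (A, x) vs (A’, x’) details drop out:

```python
# Minimal sketch (illustrative names only) of the claim that D's output depends
# only on the beliefs b, not on how those beliefs were produced.

def epistemic_A(x):
    """Alice's epistemic algorithm: maps her evidence x to beliefs."""
    # However it works internally, suppose it outputs the beliefs b below.
    return {"same_beliefs_and_same_DT": True}

def epistemic_A_prime(x_prime):
    """Bob's completely different epistemic algorithm, run on different evidence x'."""
    # By assumption it happens to output the very same beliefs b.
    return {"same_beliefs_and_same_DT": True}

def decision_D(beliefs):
    """The shared decision algorithm D: it only sees the beliefs, not their provenance."""
    return "cooperate" if beliefs["same_beliefs_and_same_DT"] else "defect"

b_alice = epistemic_A(x="Alice's evidence")
b_bob = epistemic_A_prime(x_prime="Bob's evidence")

# Since decision_D is a function of the beliefs alone, the (A, x) vs (A', x')
# details make no difference here: both agents cooperate.
assert decision_D(b_alice) == decision_D(b_bob) == "cooperate"
```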
Oh nice, thanks for this! I think I now see much more clearly why we’re both confused about what the other thinks.
Say Alice has epistemic algorithm A with inputs x that gives rise to beliefs b and Bob has a completely different algorithm A’ with completely different inputs x’ that happens to give rise to beliefs b as well. Alice and Bob both use decision algorithm D to make decisions. Part of b is the belief that Alice and Bob have the same beliefs and the same decision algorithm. It seems that Alice and Bob should cooperate.
(I’ll respond using my definitions/framing which you don’t share, so you might find this confusing, but hopefully, you’ll understand what I mean and agree although you would frame/explain things very differently.)
Say Bob is CooperateBot. Alice may believe she’s decision-entangled with them, in which case she (subjectively) should cooperate, but that doesn’t mean that their decisions are logically dependent (i.e., that her belief is warranted). If Alice changes her decision and defects, Bob’s decision remains the same. So unless Alice is also a CooperateBot, her belief b (“my decision and Bob’s are logically dependent / entangled such that I must cooperate”) is wrong. There is no decision-entanglement. Just “coincidental” mutual cooperation. You can still argue that Alice should cooperate given that she believes b of course, but b is false. If only she could realize that, she would stop naively cooperating and get a higher payoff.
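Here is a toy sketch of that counterfactual check (illustrative names only, a caricature rather than anyone’s actual proposal): Bob’s move is a constant function, so changing Alice’s move changes nothing on his side, which is exactly the missing logical dependence described above.

```python
# Toy version of the CooperateBot case above (illustrative names only).

def bob_cooperatebot(_alice_action):
    """Bob is CooperateBot: his move ignores everything, including Alice's move."""
    return "cooperate"

# Actual world: Alice holds belief b ("our decisions are entangled") and cooperates.
assert bob_cooperatebot("cooperate") == "cooperate"

# Counterfactual world: Alice defects instead. Bob's move is unchanged, so there
# is no logical dependence between the two decisions, only coincidental mutual
# cooperation in the actual world (and Alice leaves payoff on the table).
assert bob_cooperatebot("defect") == "cooperate"
```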
So it seems that the whole A,x,A’,x’ stuff just doesn’t matter for what they should do. It only matters what their beliefs are.
It matters what their beliefs are to know what they will do, but two agents believing their decisions are logically dependent doesn’t magically create logical dependency.
If I play a one-shot PD against you and we both believe we should cooperate, that doesn’t mean that we would necessarily both defect in a counterfactual scenario where one of us believes they should defect (i.e., it doesn’t mean there is decision-entanglement / logical dependency, that our belief that we should cooperate is warranted, or that we’re not two suckers cooperating for the wrong reasons when we could each be exploiting the other while avoiding being exploited). And whether we would necessarily both defect in a counterfactual scenario where one of us believes they should defect (i.e., whether we are decision-entangled) depends on how we came to our beliefs that our decisions are logically dependent and that we must cooperate (as illustrated, in a certain way, in my figures above).
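One toy way to picture why the history of the beliefs matters, and not just their current values, is the following sketch, under the made-up modelling choice that entangled beliefs share a common source while merely coincidental beliefs have independent causes:

```python
# Hypothetical illustration: "decision-entanglement" as co-variation of the two
# decisions across counterfactuals, which depends on where the beliefs came from.

def play(should_cooperate):
    return "cooperate" if should_cooperate else "defect"

# Case 1: entangled beliefs. Both agents derive "should cooperate" from the same
# underlying source, so in the counterfactual where that source flips, BOTH defect.
def belief_from_shared_source(source):
    return source == "evidence-for-cooperation"

for source in ("evidence-for-cooperation", "evidence-for-defection"):
    my_move = play(belief_from_shared_source(source))
    your_move = play(belief_from_shared_source(source))
    assert my_move == your_move  # the decisions co-vary across counterfactuals

# Case 2: coincidentally identical beliefs. Each belief has its own independent
# cause, so flipping mine leaves yours (and your move) unchanged: we were just
# two cooperators whose beliefs happened to line up, not decision-entangled agents.
my_belief, your_belief = True, True          # same beliefs in the actual world
assert play(my_belief) == play(your_belief) == "cooperate"
counterfactual_me = play(False)              # I come to believe I should defect...
unchanged_you = play(your_belief)            # ...but your decision stays the same
assert counterfactual_me == "defect" and unchanged_you == "cooperate"
```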
(Of course, you need to have some requirement to the effect that Alice can’t modify her beliefs in such a way that she defects but doesn’t (non-causally) make it much more likely that Bob also defects. But I view this as an assumption about decision-theoretic, not epistemic, entanglement: I don’t see why an epistemic algorithm (in the usual sense of the word) would make such self-modifications.)
After reading that, I’m really starting to think that we (at least mostly) agree but that we just use incompatible framings/definitions to explain things.
Fwiw, while I see how my framing can seem unnecessarily confusing, I think yours is usually used/interpreted oversimplistically (by you, but also and especially by others) and is therefore extremely conducive to motte-and-bailey fallacies[1], leading us to greatly underestimate the fragility of decision-entanglement. I might be confused though, of course.
Thanks a lot for your comment! I think I understand you much better now and it helped me reclarify things in my mind. :)
E.g., it’s easy to argue that widely different agents may converge on the exact same DT, but not if you include intricacies like the one in your last paragraph.