If we raised different hands, I do think we would quickly and completely diverge in terms of which body movements match. That doesn’t mean we would be very different people, or that I’m fragile: I’m pretty much the same as I was a week ago, even though my movements now are different. I was just pointing out that “decisions” isn’t much better defined than the thing it was being used to define (divergence).
I would automatically cooperate.
In a True Prisoner’s Dilemma, or even in situations like the OP? The divergence there is that one person knows they are “A” and the other “B”, in ways relevant to their actions.
Ah, I see. We may not disagree, then. My angle was simply that “continuing to agree on all decisions” might be quite robust against environmental noise, as long as the decision feels like it engages my values (i.e. not chocolate versus vanilla, which I might settle with a coin flip anyway!).
In the OP’s scenario, yes, I cooperate without bothering to reflect. It’s clearly, obviously, the thing to do, says my brain.
I don’t understand the relevance of the TPD. How can I possibly be in a True Prisoner’s Dilemma against myself, when I can’t even be in a TPD against a randomly chosen human?
The OP is assuming selfishness, which makes this one True: any PD is a TPD for a selfish person. Is cooperating still the obvious thing to do if you’re selfish?
Yes, for a copy close enough that he will do everything that I will do and nothing that I won’t. In simple resource-gain scenarios like the OP’s, I’m selfish relative to my value system, not relative to my locus of consciousness.
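A minimal sketch of that point, with illustrative payoff numbers (my own assumption, not from the thread): if the copy’s move always mirrors mine, the only reachable outcomes are mutual cooperation and mutual defection, so even a purely selfish comparison favours cooperating.

```python
# Illustrative PD payoffs (T > R > P > S); the specific numbers are assumptions.
PAYOFF = {
    ("C", "C"): 3,  # R: mutual cooperation
    ("C", "D"): 0,  # S: I cooperate, the copy defects
    ("D", "C"): 5,  # T: I defect, the copy cooperates
    ("D", "D"): 1,  # P: mutual defection
}

def my_payoff(my_move: str) -> int:
    # A close-enough copy does everything I do and nothing I don't,
    # so its move is always identical to mine.
    copys_move = my_move
    return PAYOFF[(my_move, copys_move)]

# Only (C, C) and (D, D) are reachable; cooperating nets 3 vs. 1.
assert my_payoff("C") > my_payoff("D")
```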
So we have different models of selfishness, then. My model doesn’t care about anything but “me”, which doesn’t include clones.