No, it’s not synonymous. If you precommit, you become a cooperator, but you can also be one without precommitting. If you are an AI that is written to be a cooperator, you’ll be one. If you decide to act as a cooperator, you may be one. Being a cooperator is relatively easy. Being a cooperator and successfully signaling that you are one, without precommitment, is in practice much harder. And there is a related problem: if you are a cooperator, you have to recognize a signal that the other person is a cooperator too, which may be too hard if he hasn’t precommitted.
What? The implication goes both ways. If you’re a cooperator (in your terms), then you’re precommitted to cooperating (in classical terms). Maybe you misunderstand the word “precommitment”? It doesn’t necessarily imply that some natural power forces the other guy to believe you.
If you define precommitment this way, then every property becomes a precommitment to having that property, and the concept of precommitment becomes tautological. For example, is it a precommitment to always prefer good over evil (defined however you like)?
Not every property. Every immutable property. They’re very rare. Your example isn’t a precommitment because it’s not immutable.
What’s “mutable”? Changing in time? Cooperation may be a one-off encounter, with no multiple occasions to change over. You may be a cooperator for the duration of one encounter, and a rock elsewhere. Every fact is immutable, so I don’t know what you imply here.
Yes, mutable means changing in time.
Precommitment is an interaction between two different times: the time when you’re doing cheap talk with the opponent, and the time when you’re actually deciding in the closed room. The time you burn your ships, and the time your troops go to battle. Signaling time and play time. If a property is immutable (preferably physically immutable) between those two times, that’s precommitment. Sounds synonymous with your “being a cooperator” concept.
In other words, my point is that if the signaling is about a future property of yours, then at the moment when you have to perform the promised behavior there is no need for any kind of persistence, so according to your definition precommitment is unnecessary. Likewise, signaling doesn’t need to consist of you presenting any kind of argument; it may already be known that you are (or will be) a cooperator.
For example, the agent in question may be selected from a register of cooperators, where 99% of the entries are known to be genuine cooperators. And the cooperators themselves might well be humans who decided to follow this counterintuitive algorithm and benefit from doing so when interacting with other known cooperators, without any tangible precommitment system in place and with no punishment for not being cooperators. This example may be implemented through a reputation system.
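A minimal sketch of the arithmetic behind the register example, in Python. The 99% figure is taken from the comment above; the payoff values, the assumption that a genuine cooperator reciprocates exactly when you cooperate, and the expected_payoff helper are all illustrative, not anything specified in the thread.

```python
# Toy expected-value check for the "register of cooperators" example.
# Assumed one-shot PD payoffs (illustrative only):
#   both cooperate -> 3, lone defector -> 5, exploited cooperator -> 0, both defect -> 1.
PAYOFF = {
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def expected_payoff(my_move: str, p_cooperator: float) -> float:
    """Expected payoff of my_move against an opponent drawn from a register
    in which a fraction p_cooperator are genuine cooperators.

    Simplifying assumption: a genuine cooperator reciprocates my move
    (cooperates iff I cooperate), and the remaining fraction always defects.
    """
    if my_move == "C":
        return p_cooperator * PAYOFF[("C", "C")] + (1 - p_cooperator) * PAYOFF[("C", "D")]
    # If I defect, genuine cooperators defect back, and so does everyone else.
    return PAYOFF[("D", "D")]

if __name__ == "__main__":
    for move in ("C", "D"):
        print(move, expected_payoff(move, p_cooperator=0.99))
    # With these numbers: C -> 2.97, D -> 1.0, so cooperating with agents
    # drawn from the register pays off even though 1% of entries are unreliable.
```

Nothing in this sketch punishes defectors directly; the register’s statistics do the work, which is the “no tangible precommitment system” point above.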
No such thing as future property. This isn’t a factual disagreement on my part, just a quibble over terms; disregard it.
Your example isn’t about signaling or precommitment; it’s changing the game into a multiple-shot one, modifying the agent’s utility function in an isolated play to take into account their reputation for future plays. Yes, it works. But it doesn’t help much in true one-shot (or last-play) situations.
On the other hand, the ideal platonic PD is also quite rare in reality—not as rare as Newcomb’s, but still. You may remember us having an isomorphic argument about Newcomb’s some time ago, with roles reversed—you defending the ideal platonic Newcomb’s Problem, and me questioning its assumptions :-)
Me, I don’t feel any moral problem about defecting in the pure one-shot PD. Some situations are just bad to be in, and the best way out is bad too. Especially situations where something terribly important to you is controlled by a cold, uncaring alien entity, and the problem has been carefully constructed to prohibit you from manipulating it (Eliezer’s “true PD”).
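To make the “modifying the agent’s utility function” point above concrete, here is a hedged sketch under assumed numbers; reputation_loss stands in for the expected future cost of being marked a defector, and none of these values come from the thread.

```python
# Toy illustration of folding reputation into an otherwise one-shot decision.
# Assumed PD payoffs for the isolated play (illustrative only):
TEMPTATION = 5  # payoff for defecting against a cooperator
REWARD = 3      # payoff when both cooperate

def defection_worth_it(reputation_loss: float) -> bool:
    """Defection pays only if the immediate gain from exploiting a cooperator
    exceeds the expected future cost of a damaged reputation."""
    return TEMPTATION - REWARD > reputation_loss

if __name__ == "__main__":
    print(defection_worth_it(reputation_loss=0.0))   # True: a true one-shot / last play
    print(defection_worth_it(reputation_loss=10.0))  # False: future plays dominate
```

When there are no future plays, reputation_loss drops to zero and the incentive to defect returns, which is the “true one-shot (or last-play)” caveat.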
In what sense do you mean no such thing? Clearly, there are future properties. My cat has a property of being dead in the future.
Yes, it was just an example of how to set up cooperation without precommitment. It’s clear that signaling that you are a one-off cooperator is a very hard problem, if you are only human and there are no Omegas flying around.
My cat has a property of being dead in the future.
Not with probability one, it doesn’t.
This doesn’t place the future in a privileged position. Even though I’m certain I saw my cat 10 minutes ago, it wasn’t alive a week ago with probability one, either.
Sorry. I deleted my comment to acknowledge my stupidity in making it. By now it’s clear that we don’t disagree substantively.
My answer to this would be that people have dispositions to behavior, and these dispositions color everything we do. If one might profit by showing courage, a coward will not do as well as a courageous man.
Of course, the relative success of such people at faking in appropriate situations is perhaps an empirical question.
ETA: this makes less sense as a direct response since you edited your comment. However, I think the difference is that “being a cooperator” regards a disposition that is part of the sort of person you are (though I think the above comment uses it more narrowly as a disposition that might only affect this one action), while a precommitment… well, I’m not sure actual people really do have those, if they’re immutable.