First of all, we need to start making a distinction between what you predict I’ll do and what I’m signaling I’m going to do. Quick-and-dirty explanation of why this is necessary: if you predict I’ll cooperate but you’re planning to defect, I want to signal that I’ll defy your prediction and defect along with you.
I think Clippy’s statement should be
I signal to cooperate with you if and only if ((you’re planning to cooperate with me if and only if you predict I would cooperate with you) and you would cooperate with me).
Detailed explanation follows.
There are four situations where I have to decide what to signal:
1. You predict I’ll cooperate and you’re planning to cooperate.
2. You predict I’ll cooperate and you’re planning not to cooperate.
3. You predict I’ll defect and you’re planning to cooperate.
4. You predict I’ll defect and you’re planning to defect.
I want to cooperate in situation 1, and in none of the others.
Truth table key:
P is the proposition “You predict I’ll cooperate”
Q is the proposition “You’re going to cooperate”
S is the proposition “I’m signaling I will cooperate”
Truth table:
| # | P | Q | Q ⇔ P | (Q ⇔ P) & Q | S | S ⇔ ((Q ⇔ P) & Q) |
|---|---|---|-------|-------------|---|-------------------|
| 1 | T | T | T | T | T | T |
| 2 | T | F | F | F | F | T |
| 3 | F | T | F | F | F | T |
| 4 | F | F | T | F | F | T |
So basically, the signaling behavior I described (cooperating in situation 1 only) is the only possible behavior that can truthfully satisfy the statement
I signal to cooperate with you if and only if ((you’re planning to cooperate with me if and only if you predict I would cooperate with you) and you would cooperate with me).
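A quick brute-force check confirms this. Here is a minimal Python sketch (the helper `iff` is just my shorthand for the material biconditional, and P, Q, S mirror the key above):

```python
from itertools import product

def iff(a, b):
    """Material biconditional: a <=> b."""
    return a == b

# Situations 1-4, in the order listed above.
# P: "You predict I'll cooperate"; Q: "You're going to cooperate".
for i, (p, q) in enumerate(product([True, False], repeat=2), start=1):
    s = iff(q, p) and q                     # cooperate in situation 1 only
    assert iff(s, iff(q, p) and q)          # the signal holds in every row
    assert not iff(not s, iff(q, p) and q)  # flipping S would falsify it
    print(f"situation {i}: P={p}, Q={q}, S={s}")
```

Every assertion passes, S comes out true only in situation 1, and the second assertion shows that in each row no other choice of S keeps the statement true.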
Note that there is a signal that is almost as good: in situation 3 (you predict I’ll defect but you’re planning to cooperate), signaling that I’ll cooperate is nearly as good as signaling that I’ll defect. Using this signaling profile, broadcasting one’s intentions is as simple as saying
I signal to cooperate with you if and only if you’re planning to cooperate with me.
My guess is that the first, more complicated signal is ever-so-slightly better, in case you actually do cooperate thinking I’ll defect—that way I’ll be able to reap the rewards of defection without being inconsistent with my signal. But of course, it’s very unlikely for you to cooperate thinking I’ll defect.
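For concreteness, here is the same kind of sketch comparing the two signaling profiles; they come apart only in situation 3, the unlikely case just mentioned:

```python
from itertools import product

for i, (p, q) in enumerate(product([True, False], repeat=2), start=1):
    complicated = (q == p) and q  # cooperate iff ((Q <=> P) & Q)
    simple = q                    # cooperate iff Q
    if complicated != simple:
        # Only fires for situation 3: P=False, Q=True.
        print(f"situation {i}: complicated={complicated}, simple={simple}")
```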
I signal to cooperate with you if and only if ((you’re planning to cooperate with me if and only if you predict I would cooperate with you) and you would cooperate with me).
Should the word “signal” be part of the signal itself? That seems unnecessarily recursive. Maybe Clippy’s recommendation should be that I ought to signal
I will cooperate with you if and only if ((you’re planning to cooperate with me if and only if you predict I would cooperate with you) and you would cooperate with me).
This does seem more promising than Clippy’s original version. Written this way, each atomic proposition is distinct. For example, “you’re planning to cooperate with me” doesn’t mean the same thing as “you would cooperate with me”. One refers to what you’re planning to do, and the other refers to what you will in fact do. Read this way, the signal’s form is
S ⇔ ((Q ⇔ P) & R), where R stands for “you would cooperate with me”,
and I don’t see any obvious problem with that.
However, you would seem to render it in the propositional calculus as
S ⇔ ((Q ⇔ P) & Q),
where
P = You predict I’ll cooperate,
Q = You’re going to cooperate,
S = I will cooperate.
(I’ve omitted the initial “I’m signalling” from your rendering of S, for the reason that I gave above.)
Now, S ⇔ ((Q ⇔ P) & Q) is logically equivalent to S ⇔ (Q & P). So, to signal this proposition is to signal
I will cooperate iff you’re going to cooperate and you predict that I’ll cooperate.
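The equivalence is easy to verify exhaustively; a throwaway check, using the same conventions as the sketches above:

```python
from itertools import product

# (Q <=> P) & Q collapses to Q & P on all four assignments.
assert all(((q == p) and q) == (q and p)
           for p, q in product([True, False], repeat=2))
```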
As you say, this seems very similar to signalling
I will cooperate iff you will cooperate.
In fact, I’d call these signals functionally indistinguishable because, if you believe my signals, then either signal will lead you to predict my cooperation under the same circumstances.
For, suppose that I gave the second, apparently weaker signal. If you cooperated with me while anticipating that I would defect, then that would mean that you didn’t believe me when I said that I would cooperate with you if you cooperated with me, which would mean that you didn’t believe my signal.
Thus, insofar as you trust my signals, either signal would lead you to predict the same behavior from me. So, in that sense, they have the same informational content.
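That near-equivalence can be checked the same way: the two cooperation conditions disagree on exactly one assignment, which is the very case that trusting my signal rules out (again, just a sketch):

```python
from itertools import product

disagreements = [(p, q) for p, q in product([True, False], repeat=2)
                 if (q and p) != q]
# The sole disagreement is P=False, Q=True: you cooperate while
# predicting my defection, the case that believing my signal excludes.
assert disagreements == [(False, True)]
```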
I guess. Or maybe I’m a masochist ;)
I accept all your suggested improvements.