I had long ago (but after being heavily influenced by Overcoming Bias) thought that signaling could be seen simply as a corollary to Bayes’ theorem. That is, when one says something, one knows that its effect on a listener will depend on the listener’s rational updating on the fact that one said it. If one wants the listener to behave as if X is true, one should say something that the listener would expect to hear only if X is true.
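(A gloss I’d add, not in the original comment: the odds form of Bayes’ theorem makes this explicit. Hearing statement s moves the listener’s odds on X by exactly the likelihood ratio of s.)

```latex
% Odds form of Bayes' theorem applied to a signal s:
% posterior odds = prior odds * likelihood ratio of the signal.
\frac{P(X \mid s)}{P(\neg X \mid s)}
  = \frac{P(X)}{P(\neg X)}
  \cdot \frac{P(s \mid X)}{P(s \mid \neg X)}
```

“Something the listener would expect to hear only if X is true” is the limit where P(s | ¬X) → 0 and the likelihood ratio blows up.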
Thinking in this way, one quickly arrives at conclusions like “oh, so hard-to-fake signals are stronger” and “if everyone starts sending the same signal in the same way, that makes it a lot weaker”, which test quite well against observations of the real world.
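Here is a toy numerical sketch of both corollaries (the function and the numbers are mine, purely illustrative): a hard-to-fake signal has a small P(s | ¬X) and hence a large likelihood ratio, while a signal that everyone sends has P(s | ¬X) ≈ P(s | X) and carries almost no information.

```python
def posterior(prior, p_signal_given_x, p_signal_given_not_x):
    """Listener's updated belief in X after observing the signal (Bayes' theorem)."""
    numerator = prior * p_signal_given_x
    return numerator / (numerator + (1 - prior) * p_signal_given_not_x)

prior = 0.5

# Hard to fake: almost no one without X can send the signal -> strong update.
print(posterior(prior, 0.9, 0.01))  # ~0.989

# Easy to fake: half of those without X send it anyway -> weak update.
print(posterior(prior, 0.9, 0.5))   # ~0.643

# Everyone sends the same signal the same way -> no update at all.
print(posterior(prior, 0.9, 0.9))   # 0.5
```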
Powerful corollary: we should expect signaling, along with these basic properties, to be prominent in any group of intelligent minds. For example, math departments and alien civilizations. (Non-example: solitary AI foom.)
Your math department example reminds me of a few experiences. From time to time, I’d be present when a small group of 3-4 professors was quietly discussing roadblocks in their research. Problems would be introduced with mentions of a number of unexpectedly connected fields, Symplectic This-Thats, and the Cohomology of Riff-Raffs. Eventually, as the speaker relaxed and their anxiety settled, it would turn out that they were having trouble with an inequality and had lost a constant along the way. So the group would get to work; perhaps they would manage to fix the issue, and then the next speaker in the circle would announce his problem.
What was surprising to me was that they were not strangers. Most had been friends for over a decade. I wonder if the others were even still listening to the name-dropping; the context it provided wasn’t at all helpful for finding a typo, that’s for sure. I suppose it may be nice for “keeping up with the Joneses”, so to speak.
This article made me think the same thing. Signaling is essentially gaming Bayes’ theorem: providing what one believes others will count as evidence of the appropriate strength, so as to get them to update to a desired conclusion.