It’s really easy to look at a behavior that doesn’t seem to make sense otherwise and say “oh, duh, signaling”. The key is that the behavior doesn’t make sense otherwise: it’s costly, and that’s an indication that, if people are doing it, there’s a benefit you’re not seeing.
People do all sorts of insane things for reasons other than signaling, though. Like because their parents did it, or because the behavior was rewarded at some point.
Of course, signaling behavior is often rewarded, due to it being successful signaling… which means it might be more accurate to say that people do things because they’ve been rewarded at some point for doing them, and it just so happens that signaling behavior is often rewarded.
(Which is just the sort of detail we would want to see from a good theory of signaling—or anything else about human behavior.)
Unfortunately, the search for a Big Idea in human behavior is kind of dangerous. Not just because a big-enough idea gets close to being tautological, but also because it’s a bad idea to assume that people are sane or do things for sane reasons!
If you view people as stupid robots that latch onto and imitate the first patterns they see that produce some sort of reward (as well as freezing out anything that produces pain early on) and then stubbornly refuse to change despite all reason, then that’s definitely a Big Idea with enough scope to explain nearly everything important about human behavior.
We just don’t like that idea because it’s not beautiful and elegant, the way Big Ideas like evolution and relativity are.
(It’s also not the sort of idea we’re looking for, because we want Big Ideas about psychology to help us bypass any need to understand individual human beings and their tortured histories, or even look at what their current programming is. Unfortunately, this is like expecting a Theory of Computing to let us equally predict obscure problems in Vista and OS X, without ever looking at the source code or development history of either one.)
So do you think e.g. overcoming akrasia necessitates understanding your self-programming via a set of decent algorithms for doing so (e.g. what Less Wrong is for epistemic rationality) that allow you to figure out for yourself whatever problems you may have? That would be a little worrying insofar as something like akrasia might be similar to a blue screen of death in your Theory of Computing example: a common failure mode resulting from any number of different problems, one that can only be resolved by the application of high-level learned algorithms that most people simply don’t have and never bother to find, and that those who do find them are unable to express succinctly enough to be memetically fit.
On top of that, similar to how most people never notice that they’re horrible epistemic rationalists and that there is a higher standard to which they could aspire, most good epistemic rationalists may at least notice that they’re sub-par along many dimensions of instrumental rationality, and yet completely fail to be motivated to do anything about it: they pride themselves on being correct, not being successful, in the same way most people pride themselves on their success and not their correctness. The latter gerrymander their definition of correctness to be success, just as rationalists may gerrymander their definition of success to be correctness; both lose, either by succeeding at the wrong things or by failing to succeed at the right ones.
So do you think e.g. overcoming akrasia necessitates understanding your self-programming via a set of decent algorithms for doing so (e.g. what Less Wrong is for epistemic rationality) that allow you to figure out for yourself whatever problems you may have?
Yes; see here for why.

Btw, it would be more accurate to speak of “akrasias” as individual occurrences, rather than “akrasia” as a non-countable. One can overcome an akrasia, but not “akrasia” in some general sense.
they pride themselves on being correct, not being successful
Yep, major failure mode. Been there, done that. ;-)
Btw, it would be more accurate to speak of “akrasias” as individual occurrences, rather than “akrasia” as a non-countable. One can overcome an akrasia, but not “akrasia” in some general sense.
I bet you think the war on terror is a badly framed concept.

I’d like to see this expanded into a post.
Signaling behavior is often rewarded, due to it being successful signaling… which means it might be more accurate to say that people do things because they’ve been rewarded at some point for doing them, and it just so happens that signaling behavior is often rewarded.
The evolutionary/signaling explanation is distinct from the rewards/conditioning explanation, because the former says that people are predisposed to engage in behaviors that were good signaling in the ancestral environment whether or not they are rewarded today.
As a practical matter of evolution, signal-detection has to evolve before signal-generation, or there’s no benefit to generating the signal. And evolution likes to reuse existing machinery, e.g. reinforcement.
In practice, human beings also seem to have some sort of “sociometer”, a running sense of “how other people probably see me”, so signaling behavior can be reinforcing even without direct interaction with others.
It’s very unparsimonious to assume that specific human signaling behaviors are inborn, given that there is such an incredible number of such behaviors in use. Much easier to assume that signal-detection and self-reflection add up to standard reinforcement, as signal-detection and self-reflection are independently useful, while standalone signaling behaviors are not.
Er? This seems to preclude cases where pre-existing behaviors are co-opted as signals. Did you mean to preclude such cases?
Bleah. I notice that I am confused. Or at least, confusing. ;-)
What I was trying to say was that there’s no reason to fake (or enhance) a characteristic or behavior until after it’s being evaluated by others. So the evolutionary process is:
1. There’s some difference between individuals that provides useful information.
2. A detector evolves to exploit this information.
3. Selection pressure causes faking of the signal.
This process is repeated in memetic form as well as genetic form: people do a behavior for some reason, people learn to use it to evaluate others, and then other people learn to game the signal.
Ah, gotcha. Yes, that makes sense.

It’s very unparsimonious to assume that specific human signaling behaviors are inborn, given that there is such an incredible number of such behaviors in use.
I agree that the vast majority of specific human behaviors, signaling or otherwise, are learned, not inborn, as an Occam prior would suggest. That does not, however, mean that all signaling behaviors are learned. Many animals have instinctual mating rituals, and it would be quite surprising if the evolutionary pressures that enable these to develop in other species were entirely absent in humans.
Much easier to assume that signal-detection and self-reflection add up to standard reinforcement, as signal-detection and self-reflection are independently useful, while standalone signaling behaviors are not.
I would expect signaling to show up both in reinforced behaviors and in the rewards themselves (the feeling of having signaled a given trait could itself be rewarding). Again, most are probably behaviors that have been rewarded or learned memetically, but given how large and diverse the set of signaling behaviors is, the more complex explanation probably applies to some (but not most) of them.
People do all sorts of insane things for reasons other than signaling, though. Like because their parents did it, or because the behavior was rewarded at some point.
Minor quibble: the conscious reasons for someone’s actions may not be signaling, but that may be little more than a rationalization for an unconsciously motivated attempt to signal some quality. Mating is filled with such signaling. While most people probably have some vague idea about sending the right signals to the opposite (or same) sex, few people realize that they are subconsciously sending and responding to signals. All they notice are their feelings.
Minor quibble: the conscious reasons for someone’s actions may not be signaling, but that may be little more than a rationalization for an unconsciously motivated attempt to signal some quality.
If you read the rest of the comment to which you are replying, you’ll see I pointed out that it’s effectively best to assume that nobody knows why they’re doing anything, and that we’re simply doing what’s been rewarded.
That some of those rewarded things can be classed as “signaling” may actually have less to do (evolutionarily) with the person exhibiting the behavior, and more to do with the person(s) rewarding or demonstrating those behaviors.
IOW, we may not have an instinct to “signal”, but only to imitate what we see others responding to, and to do more of whatever gets a favorable response. That would allow our motivation to be far less conscious, for one thing.
(Somewhat-unrelated point: the most annoying thing about trying to study human motivation is the implicit assumption we have that people should know why they do things. Viewed from an ev. psych perspective, it makes more sense to ask why there should be any reason for us to know anything about our own motivations at all. We don’t expect other animals to have insight into their own motivation, so why would we expect that, at a 5% difference from a chimpanzee, we should automatically know everything about our own motivations? It’s absurd.)
I’m not sure that the class of all actions that are motivated by signaling is the same as (or a subset of) the class of all actions that are rewarded. At least, if by “rewarded” you mean something other than the rewards of pleasure and pain that the brain gives.