I think you oversell the usefulness of this test, both because it is hard to make predictions about unrepeatable “experiments” without smuggling in value judgments, and because it is easy to game the statements. Imagine:
(A) the false statement being selected as false for extraneous reasons, and
(B) the proponent of the Big Idea arguing (A) when it isn't true.
Let’s say my friend and I are doing this test. His Big Idea is signaling; my task is to construct three statements.
1) Men who want to mate spend a lot of money. (Signaling resources!)
2) Women who want to mate volunteer. (Signaling nurturing!)
3) Children often share with each other, unprompted, while young. (Signaling cooperation to parents!)
Well, obviously (3) isn't right, because of other concerns: it turns out that competing for, and hoarding, resources has been evolutionarily more successful than signaling social fitness. Does that mean signaling as an idea isn't useful? No; it wrongly explained (3), but for an excusable reason: (3) is false for reasons unrelated to signaling.
Psychohistorian doesn’t say the idea isn’t useful, just that reliance on it is incorrect. If the theory is “people mostly do stuff because of signalling”, honestly, that’s a pretty crappy theory. Once Signalling Guy fails this test, he should take that as a sign to go back and refine the theory, perhaps to
“People do stuff because of signalling when the benefit of the signal, in the environment of evolutionary adaptation, was worth more than its cost.”
This means that making predictions requires estimating the cost and benefit of the behavior in advance, which requires a lot more data and computation, but that’s what makes the theory a useful predictor instead of just another bogus Big Idea.
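To make that concrete, here's a minimal sketch of the refined theory's prediction rule (my own illustration, not part of the original exchange; the behavior names and cost/benefit numbers are made-up placeholders, since the whole point is that the theory only predicts anything once you commit to such estimates):

```python
# Hypothetical sketch of the refined signaling theory's prediction rule:
# a behavior is predicted (as signaling) only when the estimated benefit of the
# signal in the ancestral environment exceeded its cost. All numbers are invented.

def predicts_signaling(benefit_in_eea: float, cost: float) -> bool:
    """Refined theory: signaling occurs iff ancestral benefit > cost."""
    return benefit_in_eea > cost

# Made-up estimates for the three test statements above: (benefit, cost).
candidates = {
    "men seeking mates spend a lot of money": (0.8, 0.5),
    "women seeking mates volunteer": (0.6, 0.3),
    "young children share unprompted": (0.2, 0.4),
}

for behavior, (benefit, cost) in candidates.items():
    verdict = "predicted" if predicts_signaling(benefit, cost) else "not predicted"
    print(f"{behavior}: {verdict} (benefit={benefit}, cost={cost})")
```

The hard work, of course, is in justifying the estimates, not in the comparison itself.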
Not to point fingers at Freakonomics fans (not least because I’m guilty of this myself in party conversation) but it’s real easy to look at a behavior that doesn’t seem to make sense otherwise and say “oh, duh, signalling”. The key is that the behavior doesn’t make sense otherwise: it’s costly, and that’s an indication that, if people are doing it, there’s a benefit you’re not seeing. That technique may be helpful for explaining, but it’s not helpful for predicting since, as you pointed out, it can explain anything if there’s not enough cost/benefit information to rule it out.
People do all sorts of insane things for reasons other than signaling, though. Like because their parents did it, or because the behavior was rewarded at some point.
Of course, signaling behavior is often rewarded, due to it being successful signaling… which means it might be more accurate to say that people do things because they’ve been rewarded at some point for doing them, and it just so happens that signaling behavior is often rewarded.
(Which is just the sort of detail we would want to see from a good theory of signaling—or anything else about human behavior.)
Unfortunately, the search for a Big Idea in human behavior is kind of dangerous. Not just because a big-enough idea gets close to being tautological, but also because it’s a bad idea to assume that people are sane or do things for sane reasons!
If you view people as stupid robots that latch onto and imitate the first patterns they see that produce some sort of reward (as well as freezing out anything that produces pain early on), and then stubbornly refuse to change despite all reason, that's definitely a Big Idea big enough to explain nearly everything important about human behavior.
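As a rough illustration of what that model claims (a toy sketch of my own, not anything from the thread; the environment and behavior names are invented), here's the "stupid robot" in a few lines:

```python
import random

# Toy "stupid robot": it avoids anything that was punished early on, latches onto
# the first behavior that was rewarded, and then repeats it forever, ignoring all
# later feedback. Everything here is an invented illustration.

def stupid_robot(environment, behaviors, steps=20, seed=0):
    """environment(behavior, t) -> reward (>0), punishment (<0), or 0."""
    rng = random.Random(seed)
    avoided, habit, history = set(), None, []
    for t in range(steps):
        action = habit if habit is not None else rng.choice(
            [b for b in behaviors if b not in avoided] or behaviors)
        outcome = environment(action, t)
        history.append((t, action, outcome))
        if habit is None:
            if outcome < 0:
                avoided.add(action)   # freeze out whatever produced pain early on
            elif outcome > 0:
                habit = action        # latch onto the first rewarded pattern
        # once a habit forms, outcomes are ignored entirely -- no updating
    return history

# Invented environment: boasting is rewarded only for the first five steps,
# yet the robot keeps boasting long after the reward is gone.
def env(action, t):
    if action == "touch_stove":
        return -1
    if action == "boast":
        return 1 if t < 5 else 0
    return 0

for record in stupid_robot(env, ["touch_stove", "boast", "stay_quiet"]):
    print(record)
```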
We just don’t like that idea because it’s not beautiful and elegant, the way Big Ideas like evolution and relativity are.
(It's also not the sort of idea we're looking for, because we want Big Ideas about psychology to help us bypass any need to understand individual human beings and their tortured histories, or even to look at what their current programming is. Unfortunately, this is like expecting a Theory of Computing to let us predict obscure problems in Vista and OS X equally well, without ever looking at the source code or development history of either one.)
So do you think that e.g. overcoming akrasia necessitates understanding your self-programming via a set of decent algorithms for doing so (e.g. what Less Wrong is for epistemic rationality) that allow you to figure out for yourself whatever problems you may have? That would be a little worrying, insofar as something like akrasia might be similar to a blue screen of death in your Theory of Computing example: a common failure mode resulting from any number of different underlying problems, one that can only be resolved by applying high-level learned algorithms that most people simply don't have and never bother to find, and that those who do find are unable to express succinctly enough to be memetically fit.
On top of that, just as most people never notice that they're horrible epistemic rationalists and that there is a higher standard to which they could aspire, most good epistemic rationalists may at least notice that they're sub-par along many dimensions of instrumental rationality and yet completely fail to be motivated to do anything about it. They pride themselves on being correct, not being successful, in the same way most people pride themselves on their success and not their correctness: most people gerrymander their definition of correctness to be success, while rationalists may gerrymander their definition of success to be correctness, and both end up losing by either succeeding at the wrong things or failing to succeed at the right things.
Yes; see here for why.
Btw, it would be more accurate to speak of “akrasias” as individual occurrences, rather than “akrasia” as a non-countable. One can overcome an akrasia, but not “akrasia” in some general sense.
they pride themselves on being correct, not being successful

Yep, major failure mode. Been there, done that. ;-)
I bet you think the war on terror is a badly framed concept.
I’d like to see this expanded into a post.
Signaling behavior is often rewarded, due to it being successful signaling… which means it might be more accurate to say that people do things because they've been rewarded at some point for doing them, and it just so happens that signaling behavior is often rewarded.

The evolutionary/signaling explanation is distinct from the rewards/conditioning explanation, because the former says that people are predisposed to engage in behaviors that were good signaling in the ancestral environment, whether or not they are rewarded today.
As a practical matter of evolution, signal-detection has to evolve before signal-generation, or there’s no benefit to generating the signal. And evolution likes to reuse existing machinery, e.g. reinforcement.
In practice, human beings also seem to have some sort of “sociometer”, a sense of how other people probably see them, so signaling behavior can be reinforcing even without others' direct interaction.
It’s very unparsimonious to assume that specific human signaling behaviors are inborn, given that there are such an incredible number of such behaviors in use. Much easier to assume that signal detection and self-reflection add up to standard reinforcement, as signal-detection and self-reflection are independently useful, while standalone signaling behaviors are not.
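A toy sketch of that claim (entirely my own illustration; the audience reactions and action names are invented): a completely generic reinforcement learner, paired with an audience that reacts to certain displays, ends up producing "signaling" behavior with no inborn signaling instinct at all.

```python
import random

# Invented illustration: generic reinforcement plus an audience that reacts to
# displays yields "signaling" behavior without any built-in signaling instinct.

def audience_reaction(action):
    # The audience, not the learner, is what "knows" which displays impress.
    return {"boast_about_hunt": 1.0, "share_food": 0.6, "sit_quietly": 0.0}[action]

def generic_learner(actions, episodes=500, epsilon=0.1, lr=0.2, seed=0):
    rng = random.Random(seed)
    value = {a: 0.0 for a in actions}        # nothing is special at the start
    for _ in range(episodes):
        if rng.random() < epsilon:           # occasionally try something at random
            a = rng.choice(actions)
        else:                                # otherwise repeat what has paid off
            best = max(value.values())
            a = rng.choice([x for x in actions if value[x] == best])
        value[a] += lr * (audience_reaction(a) - value[a])   # plain reinforcement
    return value

print(generic_learner(["boast_about_hunt", "share_food", "sit_quietly"]))
# After learning, the "boast" display dominates purely because it was rewarded.
```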
Er?
This seems to preclude cases where pre-existing behaviors are co-opted as signals.
Did you mean to preclude such cases?
Bleah. I notice that I am confused. Or at least, confusing. ;-)
What I was trying to say was that there’s no reason to fake (or enhance) a characteristic or behavior until after it’s being evaluated by others. So the evolutionary process is:
1) There's some difference between individuals that provides useful information.
2) A detector evolves to exploit this information.
3) Selection pressure causes faking of the signal.
This process also repeats in memetic form as well as genetic: people do a behavior for some reason, others learn to use it to evaluate them, and then other people learn to game the signal.
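Here's a toy simulation of those three steps (my own illustration, with made-up numbers, not something from the thread): an honest trait that tracks quality makes trait-reading "detectors" profitable, and once detectors exist, cheap fakers of the trait erode, though don't erase, its information value.

```python
import random

rng = random.Random(1)

# Invented illustration of the three steps: (1) a visible trait happens to track
# underlying quality, (2) reading the trait beats choosing at random, so detection
# spreads, (3) once detectors are common, cheaply faking the trait pays off and
# erodes its information value.

def make_individual(faking_allowed):
    quality = rng.random()
    faker = faking_allowed and rng.random() < 0.5
    return {"quality": quality, "trait": 0.9 if faker else quality}

def detector_payoff(population, use_trait):
    # A detector courts whoever displays the strongest trait; a non-detector
    # picks at random. The payoff is the chosen partner's true quality.
    partner = max(population, key=lambda i: i["trait"]) if use_trait else rng.choice(population)
    return partner["quality"]

def average_payoff(faking_allowed, use_trait, trials=2000, group=10):
    return sum(detector_payoff([make_individual(faking_allowed) for _ in range(group)], use_trait)
               for _ in range(trials)) / trials

# Steps 1-2: with no fakers, reading the trait clearly beats ignoring it.
print("no fakers:   detector", round(average_payoff(False, True), 2),
      " random", round(average_payoff(False, False), 2))
# Step 3: once half the population fakes the trait, the detector's edge shrinks.
print("with fakers: detector", round(average_payoff(True, True), 2),
      " random", round(average_payoff(True, False), 2))
```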
Ah, gotcha. Yes, that makes sense.
I agree that the vast majority of specific human behaviors, signaling or otherwise, are learned, not inborn, as an Occam prior would suggest. That does not, however, mean that all signaling behaviors are learned. Many animals have instinctual mating rituals, and it would be quite surprising if the evolutionary pressures that enable these to develop in other species were entirely absent in humans.
I would expect signaling to show up both in reinforced behaviors and in the rewards themselves (the feeling of having signaled a given trait could itself feel rewarding). Again, most are probably behaviors that have been rewarded or learned memetically, but given how large and diverse the set of signaling behaviors is, the more complex explanation probably applies to some (though not most) of them.
Minor quibble: the conscious reasons for someone's actions may not be signaling, but those reasons may be little more than a rationalization of an unconsciously motivated attempt to signal some quality. Mating is filled with such signaling. While most people probably have some vague idea about sending the right signals to the opposite (or same) sex, few realize that they are subconsciously sending and responding to signals. All they notice are their feelings.
If you read the rest of the comment to which you are replying, I pointed out that it’s effectively best to assume that nobody knows why they’re doing anything, and that we’re simply doing what’s been rewarded.
That some of those things that are rewarded can be classed as “signaling” may actually have less to do (evolutionarily) with the person exhibiting the behavior, and more to do with the person(s) rewarding or demonstrating those behaviors.
IOW, we may not have an instinct to “signal”, but only to imitate what we see others responding to, and do more of what gives appropriate responses. That would allow our motivation to be far less conscious, for one thing.
(Somewhat-unrelated point: the most annoying thing about trying to study human motivation is our implicit assumption that people should know why they do things. Viewed from an ev. psych perspective, though, it makes more sense to ask why there would be any reason for us to know anything about our own motivations at all. We don't expect other animals to have insight into their own motivation, so why would we expect that, at 5% difference from a chimpanzee, we should automatically know everything about our own motivations? It's absurd.)
I’m not sure that the class of all actions that are motivated by signaling is the same as (or a subset of) the class of all actions that are rewarded. At least, if by rewarded, you mean something other than the rewards of pleasure and pain that the brain gives.