Do I have a bias or a useful heuristic? If a signal is easy to fake, is it a bias to assume that it is disingenuous, or is it a useful heuristic?
I read Robin Hanson’s post about why there are so many charities specifically focusing on kids, and he basically summed it up as signalling, to seem kind to potential mates, being a major factor. There were some good rebuttals in the comment section, but whether or not signalling is at play is not the point; I’m sure it is to a certain degree, though how much, I don’t know. The point is that I automatically dismiss the authenticity of a signal if the signal is difficult to authenticate. In this example it is possible for people both to signal that they care about children for a potential mate and to actually care about children (e.g. an innate emotional response).
EDIT: Just to be clear, this is a question about signalling and how strongly I associate easy-to-fake signals with dishonest signalling, not about charities.
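One way to make the question precise is as a toy Bayesian model (all the numbers and the function name below are invented for illustration): if genuinely caring people always emit the signal, and fakers can produce the same signal with some probability, the posterior probability that a signaller is genuine follows from Bayes’ theorem.

```python
def p_genuine_given_signal(base_rate, fake_rate):
    """Toy model: P(genuine | signal), assuming genuine actors always
    emit the signal and fakers emit it with probability fake_rate."""
    return base_rate / (base_rate + (1 - base_rate) * fake_rate)

# Easy-to-fake signal: the posterior barely moves above the base rate.
print(p_genuine_given_signal(0.3, 0.9))   # ~0.32
# Hard-to-fake (costly) signal: the posterior rises sharply.
print(p_genuine_given_signal(0.3, 0.05))  # ~0.90
```

On this toy model, discounting an easy-to-fake signal as weak evidence is a sound heuristic, but assuming the signaller is therefore disingenuous overshoots: the posterior never drops below the base rate of genuine caring.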
That’s like asking whether someone is a freedom fighter or a terrorist.
Every heuristic involves a bias when you use it in some contexts.
Yes, but does it more often yield a satisfactory solution across many contexts? If yes, then I’d label it a useful heuristic; if it is often wrong, I’d label it a bias.
You’re not using your words as effectively as you could be. Heuristics are mental shortcuts; a bias is a systematic deviation from rationality. A heuristic can’t be a bias, and a bias can’t be a heuristic. Heuristics can lead to bias. The utility of a given heuristic might be evaluated by weighing how much computation it saves against how much bias it incurs. Using a bad heuristic might cause an individual to become biased, but the heuristic itself is not a bias.
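The trade-off in the last two sentences can be written as a back-of-the-envelope formula (the function name and all quantities are hypothetical, just to make the comparison concrete):

```python
def heuristic_net_value(compute_saved, error_rate, cost_per_error):
    """Crude sketch: a heuristic pays off when the computation it saves
    exceeds the expected cost of the systematic errors (bias) it incurs."""
    return compute_saved - error_rate * cost_per_error

# Cheap shortcut that rarely misfires: worth using.
print(heuristic_net_value(10.0, 0.05, 50.0))  # 7.5
# Same shortcut in a context where it misfires often: not worth it.
print(heuristic_net_value(10.0, 0.40, 50.0))  # -10.0
```

This also fits the earlier remark that every heuristic involves a bias in some contexts: the same shortcut can have positive net value in one context and negative in another.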
I agree with your last sentence. The important thing should be how much good the charity really does for those children. Are they really making their lives better, or is it merely some nonsense to “show that we care”?
Because there are many charities (at least in my country) focusing on providing children things they don’t really need, such as donating boring used books to children in orphanages. Obviously, “giving to children in orphanages” is a touching signal of caring, and most people don’t realize that those children already have more books than they can read (and they usually don’t wish to read the kind of books other people are throwing away, because honestly no one does). In this case, the real help to children in orphanages would be trying to change the legislation to make their adoption easier (again, this is an issue in my country; in your part of the world the situation may be different), helping them avoid abuse, or providing them human contact and meaningful activities. But most people don’t care about the details, not even enough to learn those details.
I suspect there’s also some sentimentality about books in play.
Yes, throwing a book away is nearly like burning it. Giving it to an orphanage is completely guilt free.
This depends on what you mean by “care”, i.e., they care about children in the sense that they derive warm fuzzies from doing things that superficially seem to help them. They don’t care in the sense that they aren’t interested in how much said actions actually help children (or whether they help them at all).
I think that most people just never question the effectiveness of the charities they donate to. It’s a charity for xxx, of course it helps xxx!
And yet they question the effectiveness of the things they do for themselves.
Well, because that’s in near mode.
If I do something for myself and there is no obvious result, I see that there is no obvious result, and it disappoints me. If I do something for other people, there is always an obvious result: I feel better about myself.
This is more or less the distinction I was going for.
Why isn’t this equally true for doing things for oneself?
Because other people reward you socially for doing things for other people. If you do something good for person A, it makes sense for person A to reward you—they want to reinforce the behavior they benefit from. But it also makes sense for an unrelated person B to reward you, despite not benefiting from this specific action—they want to reinforce the general algorithm that makes you help other people, because who knows, tomorrow they may benefit from the same algorithm.
The experimental prediction of this hypothesis is that person B will be more likely to reward you socially for helping person A if person B believes they belong to the same reference class as person A (and thus it is more likely that an algorithm benefiting A would also benefit B).
Now who would have a motivation to reward you for helping yourself? One possibility is a person who really loves you; such a person would be happy to see you doing things that benefit you. Parents or grandparents may be in that position naturally.
Another possibility is a person who sees you as a loyal member of their tribe, but not a threat. For such a person, your success is the tribe’s success, which is their success. They benefit from having stronger allies, unless those allies becoming strong changes their position within the tribe. So one would help members of their tribe who are significantly weaker… or perhaps even significantly stronger… in either case the tribe becomes stronger and their relative position within the tribe is not changed. The first part is teachers helping their students, or tribe leaders helping their tribe except for their rivals; the second part is average tribe members supporting their leader.
Again, the experimental prediction would be that when you join some “tribe”, the people stronger than you will support you at the beginning, but then will be likely to stab you in the back when you reach their level.
Now, how to use this knowledge for success in real life? We are influenced by social rewards whether we want to be or not. One strategy could be to reward myself indirectly—for example, make a commitment that when I do something useful for myself, I will reward myself with a friendly social interaction. A second strategy is to seek the company of people who love me, using “do they reward me for helping myself?” as a filter. (The problem is how to tell the difference between these people and those who reward me for being a weak member of their tribe and will later backstab me when I become stronger.) A third strategy is to seek the company of people much stronger than me with similar values. (And not to forget to switch to even stronger people when I become strong.) Another strategy could be to join a group that feels far from victory… a group that is still in “conquering the world” mode, not in “sharing the spoils” mode. (Be careful when the group reaches some victories.)
Anecdotal verification: one of my friends said that when he was running out of money, it made sense for him to buy meals for other people. Those people didn’t reciprocate, but third parties were more likely to help him.
Then I guess people from CFAR should go to some universities and give lectures about… effective altruism. (With the expected result that the students will be more likely to support CFAR and attend their seminars.) Or I could try this in my country when recruiting for my local LW group.
I guess it also explains why religious groups focus so much on charity. It is difficult to argue against a group that many people associate with “helping others”, even if other actions of the group hurt others. The winning strategy is probably making the charity 10% of what you really do, but 90% of what other people associate with you.
EDIT: Doing charity is the traditional PR activity of governments, the U.N., various cults, and foundations. I feel like I’m reinventing the wheel again. The winning strategies are already known and fully exploited. I just didn’t recognize them as viable strategies for everyone, including me, because I was successfully conditioned to associate them with someone else.
Among other things, charity is a show of strength.
Sure. For example, if you are donating money, you display your ability to make more money than you need. And if you donate someone else’s money (like a church that takes money from the state), you display your ability to take money from people, which is even more impressive.
wow this is an insanely better version of my comment.
Because it’s considered good to even try to help someone else, so you care less about outcomes. E.g. donating to charity is a good act regardless of whether you check to see if your donation saved a life. On the other hand, doing something for yourself that has no real benefits is viewed as a waste of time.
How come practitioners of (say) homoeopathy haven’t all gone bankrupt, then?
Just because you question something, doesn’t mean you reach the correct answer.