According to the principle of enlightened self-interest, you should help other people because this will help you in the long run. I’ve seen it argued that this is the reason why people have an instinct to help others. I don’t think that this would mean helping people the way an Effective Altruist would. It would mean giving the way people instinctually do. You give gifts to friends, give to your community, give to children’s hospitals, that sort of thing.
This makes me wonder about what I’m calling enlightened altruism. If you get power from helping people in that way, then you can use the power to help people effectively.
Well, we can use the outside view here. If we look at people who are particularly successful, did they get that way by helping others? What’s the proportion relative to poor people?
I don’t think this backs up the idea of enlightened self-interest very well. Sure, you have to “play by the rules” to be successful, but going above and beyond doesn’t seem to lead to additional success.
Another question we might ask is “where do people’s instincts for giving come from?” If you believe Dawkins et al., it’s the selfishness of genes, which does not have to causally pay off for the organism (instead, the payoff is acausal). This is not the sort of thing where giving according to our instincts will lead to us getting more money.
Survivorship bias alert!
He qualified that by “What’s the proportion relative to poor people?” thus not just looking at the survivors.
Imagine a planet with one billion people each of whom has $1000, except the 99,999,990 people who played the lottery and lost and now have $990 each and the 10 people who played the lottery and won and now have $1,000,990 each. 100% of the rich people played the lottery whereas only 10% of the poor people did so, but that doesn’t mean playing the lottery was a good idea.
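The arithmetic here is easy to check directly. This is just a sketch using the numbers from the example above, with the implied $10 ticket (losers went from $1000 to $990) and $1,000,000 prize (winners went from $1000 to $1,000,990):

```python
# Numbers taken from the thought experiment above.
POP = 1_000_000_000
LOSERS = 99_999_990
WINNERS = 10
PLAYERS = LOSERS + WINNERS   # 100,000,000 people bought a ticket
TICKET = 10                  # losers end with $990, so a ticket cost $10
PRIZE = 1_000_000            # winners end with $1,000,990

# Expected value of buying a ticket:
p_win = WINNERS / PLAYERS
ev = -TICKET + p_win * PRIZE          # ≈ -10 + (1e-7)(1e6) = -9.90

# "100% of the rich played" is true, yet so is this:
poor = POP - WINNERS
poor_who_played = LOSERS / poor       # ≈ 0.10

print(ev)               # ≈ -9.9: playing loses money in expectation
print(poor_who_played)  # ≈ 0.1: most players ended up poor anyway
```

So conditioning on being rich makes playing look mandatory, even though every ticket has negative expected value.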
My point is more about giving the standard amount to the standard charities, rather than earmarking it all for the most efficient one.
which does not have to causally pay off for the organism (instead, the payoff is acausal)
I’m not sure what you mean here. Can you give an example?
Suppose I have a gene that makes me cooperate in a prisoner’s dilemma with my relatives. This gene benefits me, because now I can cooperate with my cousins and get the better payoff (assuming my cousins also have this gene!). But you know what would be even better? If my cousins cooperated with me but I defected. So from a causal decision theory standpoint, my best route is to ignore my instincts and defect.
But if I had a gene that said “defect with my cousins,” that would mean my cousins defect back, and so we all lose. So our instincts can be beneficial even when the individual best strategy doesn’t line up with them (because our instincts can be correlated with other humans’).
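The gene argument can be made concrete with the standard Prisoner’s Dilemma payoff matrix (the 5/3/1/0 numbers below are the usual textbook convention, assumed here rather than taken from the thread):

```python
# Payoffs (mine, cousin's) under the common 5/3/1/0 convention:
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I'm the sucker
    ("D", "C"): (5, 0),  # I exploit my cousin
    ("D", "D"): (1, 1),  # mutual defection
}

def payoff(my_gene, cousin_gene):
    """My payoff when each player's move is fixed by their gene."""
    return PAYOFF[(my_gene, cousin_gene)][0]

# If my cousins carry whatever gene I carry, only the diagonal
# outcomes are reachable:
print(payoff("C", "C"))  # 3: a family of cooperate-genes
print(payoff("D", "D"))  # 1: a family of defect-genes
# The causal "best response" imagines payoff("D", "C") == 5,
# but with correlated genes that off-diagonal cell isn't on offer.
```

The point is that the gene is effectively choosing between the diagonal cells (3 vs. 1), not between rows of the matrix.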
But you know what would be even better? If my cousins cooperated with me but I defected.
This reasoning assumes that you are special and significantly different from your cousins. If you’re not, your cousins follow the same strategy and you all defect, gene or no gene.
That’s what acausal benefit means.
Google: No results found for “acausal benefit”
Can you elaborate?
It’s mostly limited to this site, and I don’t know how much that exact wording is used, but it refers to things like Newcomb’s problem, where you can get some benefit from what you do, but you’re not actually causing it.
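The Newcomb structure can be sketched numerically. The payoffs ($1,000 visible box, $1,000,000 in the opaque box iff the predictor foresaw one-boxing) are the standard statement of the problem; the 99% predictor accuracy is an assumption for illustration:

```python
def expected(choice, accuracy=0.99):
    """Expected winnings given the predictor's accuracy on `choice`."""
    if choice == "one-box":
        # Opaque box is full iff the predictor foresaw one-boxing.
        return accuracy * 1_000_000
    else:  # two-box
        # $1,000 for sure, plus the full opaque box only if
        # the predictor got you wrong.
        return 1_000 + (1 - accuracy) * 1_000_000

print(expected("one-box"))  # ≈ 990,000
print(expected("two-box"))  # ≈  11,000
```

One-boxing wins by a wide margin even though, at the moment of choice, your action doesn’t cause the opaque box’s contents — the benefit comes through the correlation with the prediction, which is the “acausal” part.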
I should add that when I told Manfred I didn’t understand, it was more that I didn’t understand how it applied to that situation.
I’m familiar with the concept of an acausal trade. But I don’t understand how it applies to the situation of playing Prisoner’s Dilemma with your cousins.
The wiki article on acausal trade may prove helpful.