I see it as equivalent if your cost-benefit calculation values that which is functional and creative.
denisbider
My observations aren’t Randian in origin. At least, I haven’t read her books; I even somewhat disapprove of her, from what I know of her idiosyncrasies as a person.
I do think that this is an important topic for this group to consider, because the community is about rationality. My observation is that many commenters seem not to realize the proper role of empathy in our emotional spectrum, and are trying to extend their empathy to their broader environment in ways that don’t make sense.
Also, if my anti-empathy comment is being downvoted because it isn’t part of a group theme, then the pro-empathy comments should be downvoted as well, but they are not. This indicates that people vote based on what they agree with, not on whether a comment is in context or provides food for thought.
Thanks.
Oh, well now it’s −6. :))
Please clarify. The article you link to is sensible, yet I do not see what part of it is at odds with what I wrote.
I am essentially saying that charity is harmful because the cost-benefit calculation comes out negative when charity is used outside of the context in which it works (a small, closely knit social group).
Actually, thinking that out loud makes you honest. People who think of themselves as compassionate are much the same as I described, except that they would rather have me not exist, because my existence violates their values. Instead, they would prefer the existence of non-contributing people who need their help. (I have actually heard that from folks like that, in quite those words.)
The difference between me and such people is that they don’t understand themselves—nor the dynamics of the world we live in. It’s frustrating to be labeled a heartless bastard, but understanding what I do and acting differently would make me a hypocrite and spread falsity. According to my values, that’s much worse.
It’s also interesting to see how karma on this site falls steadily with honesty, and what that implies about what the balance of readers come here for. Sadly, it seems to be to further their existing preconceptions. :)
I think I’m with Wei in his analysis—resolving the inconsistency from the top down, not from the bottom up.
I accept that our feelings of empathy and compassion are something evolution came up with in order to make us function decently in small groups. I accept that this empathy works only for small groups, and cannot scale to groups that are too large for everyone to keep track of each other. Maintaining cohesion and functionality in larger groups requires formal mechanisms such as hierarchy and money, and empathy is at best of marginal value, or at worst sabotages a constructive order. Universal empathy is, if not outright impossible, at least very difficult to reconcile with the things we do to other creatures for convenience.
Of abstract things related to humanity, my top values are creativity and prosperity, not individual people. My perception is that a relatively small proportion of people contribute the vast majority of that which I value. On the other hand, a relatively large proportion of people are having disruptive or destructive effects.
I therefore do not value human life in general, just like I don’t value bacteria in general, but I value that human life (and that bacteria) which contributes towards the creativity and prosperity I want to see. People who undermine that, I have no compassion for, and I would in fact prefer them to not exist.
See what I mean about the voting system being broken?
http://lesswrong.com/lw/1r9/shut_up_and_divide/1lxw
Currently voted −2 and below threshold.
Completely rational points of view that people find offensive cannot be expressed.
This is a site that is supposed to be about countering bias. Countering bias necessarily involves assaulting our emotional preconceptions which are the cause of falsity of thought. Yet, performing such assaults is actively discouraged.
Does that make this site Less Wrong, or More Wrong?
Yes, but charity is not without external consequence.
The continuous rewarding of the dysfunctional does have long term effects, which I believe are negative on balance.
The reason we evolved empathy is for cohesion with our immediate social group, where our empathy is balanced with everyone keeping track of everyone else, and an effective sense of group fairness.
But this only works within our immediate social group. Charity towards complete strangers is harmful because it is not balanced with fairness.
To balance our economic interactions outside the immediate social group that we can monitor, we already have a functioning system that’s fair and encourages constructive behavior.
That system is money. Use it for what it’s for.
If F(n) < n, then yes, karma disappears from the system when voting on comments, but is pumped back in when voting on articles.
It does appear that the choice of a suitable F(n) isn’t obvious, which is probably why F(n) = infinity is effectively what’s used now.
Still, I think that a more restrictive choice would produce better results, and less frivolous voting.
Voted down for failing to get the point.
You should also help people when they are suffering and you are able.
Quite the opposite. Most suffering is self-inflicted, and as such is a reminder that you need to learn a lesson. External help removes the suffering and makes it seem as though no lesson needs to be learned. This perpetuates the cycle and leads to more suffering.
One shouldn’t do one’s kids’ homework.
Charity is the process of taking purchasing power away from functional, creative individuals and communities, and giving it to dysfunctional, destructive individuals and communities.
Charity doesn’t change the nature of the dysfunctional and destructive. It only restructures the reward system so that the dysfunctional and destructive are rewarded, and the functional and constructive are penalized.
A person who does this willingly is, I am sad to say, stupid. You are only supposed to do this if people force you at gunpoint (taxes), and even then it’s more patriotic to flee.
You should reward people for doing the right thing—providing a quality product or service—not for when they fail miserably.
- Feb 11, 2010, 9:17 PM; 6 points; comment on Shut Up and Divide?
- Feb 11, 2010, 4:57 PM; −8 points; comment on Open Thread: February 2010
Why do you people worry so much about human suffering while eating meat?
Charity is generally harmful. The problem of charity is not how to raise money; everyone likes to purchase good feelings by contributing to charity. The problem is how to spend it so it actually helps, rather than harms.
Chances are, you’re going to be spending your money more wisely if it doesn’t fall from the sky.
If this is money for AI, I don’t see how money is even the bottleneck at this point.
While the LW voting system seems to work, and it is possibly better than the absence of any threshold, my experience is that the posts that contain valuable and challenging content don’t get upvoted, while the most upvotes are received by posts that state the obvious or express an emotion with which readers identify.
I feel there’s some counterproductivity there, as well as an encouragement of groupthink. Most significantly, I have noticed that posts which challenge that which the group takes for granted get downvoted. In order to maintain karma, it may in fact be important not to annoy others with ideas they don’t like—to avoid challenging majority wisdom, or to do so very carefully and selectively. Meanwhile, playing on the emotional strings of the readers works like a charm, even though that’s one of the most bias-encouraging behaviors, and rather counterproductive.
I find those flaws of some concern for a site like this one. I think the voting system should be altered to make upvoting as well as downvoting more costly. If you have to pick and choose which comments and articles to upvote or downvote, I think people will vote with more reason.
There are various ways to make voting costlier, but an easy way would be to restrict the number of votes anyone has. One solution would be for votes to be related to karma. If I’ve gained 500 karma, I should be able to upvote or downvote F(500) comments, where F would probably be a log function of some sort. This would both give more leverage to people who are more active contributors, especially those who write well-accepted articles (since you get 10x karma per upvote for that), and it would also limit the damage from casual participants who might otherwise be inclined to vote more emotionally.
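As a sketch of how such a karma-based vote budget might look in code (the function name, the logarithm base, and the scaling factor here are all my own assumptions for illustration, not an actual LessWrong mechanism):

```python
import math

def vote_budget(karma: int) -> int:
    """Hypothetical F(karma): how many up/downvotes a user may cast.

    A logarithmic curve gives more leverage to active contributors
    while growing slowly enough to discourage frivolous voting.
    Both the base (2) and the scale (10) are arbitrary choices.
    """
    if karma <= 0:
        return 0
    return int(10 * math.log2(1 + karma))

# A user with 500 karma would get a budget of:
# vote_budget(500) == int(10 * log2(501)) == 89 votes
```

Under this curve a casual participant with 10 karma gets a handful of votes, while a prolific article writer with thousands of karma gets a larger but not unbounded budget, which matches the intent described above.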
For posts, this might work.
Comments, on the other hand, are loaded whether or not most readers read them. Furthermore, the likelihood that any single comment will be read decreases with the total number of comments. So this would work much less well for comments.
The way I understand our disagreement is, you see FAI as a limited-functionality add-on that makes a few aspects of our lives easier for us, while I see it as an unstoppable force, with great implications for everything in its causal future, which just can’t not revolutionize everything, including how we feel, how we think, what we do. I believe I’m following the chain of reasoning to the end, whereas you appear to think we can stop after the first couple steps.
Is there some reason to believe our current degree of complexity is optimal?
Why would we want to be reforged as something that suffers boredom, when we can be reforged as something that never experiences a negative feeling at all? Or experiences them just for variety, if that is what one would prefer?
If complexity is such a plus, then why stop at what we are now? Why not make ourselves more complex? Right now we chase after air, water, food, shelter, love, social status, why not make things more fun by making us all desire paperclips, too? That would be more complex. Everything we already do now, but now with paperclips! Sounds fun? :)
But the AI has to plan for a maximized outcome until the end of the universe. In order to maximize the benefit extracted from energy before the heat death of the universe, resource efficiency right now is as important as it will be when resources are scarcest.
This holds unless the AI discovers that heat death can be overcome, in which case, great! But what we know so far indicates that the universe will eventually die, even if many billions of years in the future. So conservative resource management is important from day 1.
Which is more functional and creative: a community that leverages its own potential and builds its own clinic, or a community that relies on outsiders to provide that clinic?