Effective Altruism

“Doing Good in the Most Helping Way” - It is good to try to help people. It is better to help people in the best way possible. You should look at what actually happens when you try to help people in order to find out how well your helping worked. If we look at lots of different ways of helping people, then we can find out which way is best. You should give your money to the people who are best at helping people.
Where we live, and in places like it, everyone has lots more money than most people who live in other places. That means we have lots that we can give away to the people in the other places. It might be a good idea to try to make lots of money so that you can give away even more!
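(A small illustrative sketch of the “find out which way is best” step: compare how much good each way of helping does per unit of money. The charity names and all numbers below are hypothetical, made up only for this example; they are not real cost-effectiveness figures.)

```python
# Illustrative sketch only: the charity names and numbers are invented,
# not real cost-effectiveness estimates.

charities = {
    "Charity A (bed nets)":      {"cost_per_unit": 5.0,   "good_done_per_unit": 0.8},
    "Charity B (deworming)":     {"cost_per_unit": 1.0,   "good_done_per_unit": 0.1},
    "Charity C (local theater)": {"cost_per_unit": 200.0, "good_done_per_unit": 0.5},
}

def good_per_dollar(info):
    """How much good one dollar buys, under these made-up numbers."""
    return info["good_done_per_unit"] / info["cost_per_unit"]

# Rank the options by how much good each dollar does, best first.
ranked = sorted(charities.items(), key=lambda kv: good_per_dollar(kv[1]), reverse=True)
for name, info in ranked:
    print(f"{name}: {good_per_dollar(info):.3f} units of good per dollar")
```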
Hi, I’m new to LessWrong and haven’t read the morality sequence and haven’t read many arguments for effective altruism, so could you elaborate on this sentiment?
I agree with this kind of movement because intuitively it feels really good to help people and it feels really bad to know that people or animals are suffering. I think it’s quite certain that there are other minds similar to mine, and that these minds are capable of the same kinds of feelings as I am. I wouldn’t want other people to feel the same kind of bad feelings that I have sometimes felt, but I know there are minds that experience suffering more than a million times worse than the worst pain I’ve ever felt.
Still, there are some people who think rationality is about always thinking only about one’s own well-being, and they might disagree with this. They might say that the well-being of other minds doesn’t affect your mind directly, so if you don’t know about it, it’s irrelevant to you. Some of these people may also try to minimize the effect of natural empathy by noting that the being who is suffering is different from you. They could be your enemies, or someone who is not “worth” your efforts. It’s easier to cope with the suffering of an animal that belongs to a different species than with the suffering of someone in your family. Or consider people on the other side of the world who have a different skin color, who behave strangely, and who sometimes have violent and “primitive” habits (note: this is not what I think, but what I’ve heard other people say… they basically think some people are a bit like the baby-eating aliens) - is their suffering worth less? Intuitively it feels that way, because they don’t belong to your tribe. Anyway, these minds are still capable of the same kind of suffering.
The question still stands: if someone is “rationally” interested only in their own well-being, and only cares about other minds to the extent that those minds affect their own mind through the natural empathy reflex, then why should they be interested in the well-being of other minds purely for its own sake? Shouldn’t they only be interested in how the suffering of other minds affects their own mind?
Welcome to Less Wrong! Your comment would be more appropriate in the welcome thread.
Also: Uh oh! You have used non-permitted words (lesswrong, morality, sequence, arguments, effective, altruism, elaborate, sentiment)
I have already posted in there. Do you mean I should only post there until I mature enough that I can post here?
Oh, ok. The open threads are a good place to ask questions. If you aren’t satisfied with the response you get there, you can try here.
Hi, welcome to LW! I will reply to your comments here in one place, instead of each of them separately.
Do you mean I should only post there until I mature enough that I can post here?
No. It is okay to ask, and it is also okay to disagree. Just choose a proper place. For example, this is an article about “explaining hard ideas with simple words”, so this discussion is quickly getting off-topic here. You are not speaking about explaining effective altruism using simple words, but about misconceptions people have about rationality and altruism. That’s a different topic; and now the whole comment tree is unrelated to the original article.
Don’t worry, it happens. Just know that the proper place to ask questions like this is usually the latest Open Thread, and sometimes there is a special thread (like the “stupid” questions) for that. (I would say this is actually the website’s fault, for not making the Open Thread more visible.)
if someone is “rationally” interested only in their own well-being
Then of course such a person will act rationally by caring only about their own well-being, and considering others only to the degree they influence this specific goal. For example, a rational sociopath. -- Sometimes we speak about paperclip maximizers, to make it more obvious (and less related to specific details of sociopathy or whatever). For a paperclip maximizer, it is rational to maximize the number of paperclips, and to care about human suffering only as much as it can influence the number of paperclips. So for example, if people reacted to their suffering by destroying paperclips, or if they responded to the paperclip maximizer’s help by building many new paperclips out of gratitude, then the paperclip maximizer could help them. The paperclip maximizer could even pretend it cares about human suffering, if that helps to maximize the number of paperclips in the future. -- But we are not trying here to sell effective altruism to paperclip maximizers, nor to sociopaths; only to people who (a) care about the suffering of others, and (b) want to be reflectively consistent (want to care about what they would care about if they knew more, etc.).
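(A toy sketch of the distinction drawn above, assuming nothing beyond that paragraph: an agent that only values paperclips treats human welfare purely instrumentally and “helps” only when it expects more paperclips, while an agent that also terminally values human welfare helps even at a small paperclip cost. All numbers are invented for illustration.)

```python
# Toy sketch of instrumental versus terminal caring. All numbers are made up.

def paperclip_maximizer_utility(outcome):
    # Cares about paperclips only; human welfare counts for nothing in itself.
    return outcome["paperclips"]

def human_values_utility(outcome):
    # Cares about paperclips a little AND about human welfare in itself.
    return outcome["paperclips"] + 10 * outcome["human_welfare"]

# Two possible actions and their (hypothetical) outcomes.
actions = {
    "help the humans":   {"paperclips": 90,  "human_welfare": 5},
    "ignore the humans": {"paperclips": 100, "human_welfare": 1},
}

for utility in (paperclip_maximizer_utility, human_values_utility):
    best = max(actions, key=lambda a: utility(actions[a]))
    print(f"{utility.__name__} chooses: {best}")

# The paperclip maximizer ignores the humans (100 > 90 paperclips);
# the agent that terminally values welfare helps them (90 + 50 > 100 + 10).
```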
There is this “Hollywood rationality” meme, which suggests that rational people should be sociopaths; or even that they should consider themselves imperfect if they aren’t, and should feel pressure to self-modify to become one. I guess most people here consider this bullshit; and actually exposing the bullshitness of similar ideas is one of the missions of LW. Perhaps the simplest response is: Uhm… why? Perhaps someone had a wrong idea of rationality, and is now optimizing for that wrong idea. (See the nameless virtue.)
Essentially this would be a debate about whether people truly care about others, or whether we are truly self-deceiving sociopaths (and therefore the most rational ones should be able to see through this self-deception). What does that even mean? What does it mean for a human? There are a ton of assumptions and confusions here, so we shouldn’t expect to solve all of this within five minutes. (And until it is all solved, my lazy answer is that the burden of proof is on the people who suggest that self-modification into a sociopath is the best way of maximizing my values, which currently include caring for others. Because optimizing for values I don’t have seems like a lost purpose.) We will not solve this fully here; perhaps in some other place.
Do you usually expect people to read all the sequences before they can ask questions?
It would be nice, because we wouldn’t have to go over the basics again and again. On the other hand, it’s not realistic. Perhaps the nice and realistic solution could be this: A new person asks something that sounds already-answered to the veterans. The veterans give a short explanation and links to relevant articles from the sequences. The new person reads the articles; and if there are further questions or if the original question does not seem answered sufficiently, then the new person asks additional questions in an Open Thread.
Again, if people here agree with this solution, then it probably should be a policy written in a visible place.
Hi, I’m new to LessWrong and haven’t read the morality sequence and haven’t read many arguments for effective altruism, so could you elaborate on this sentiment?
How I read this: “Hi! I know exactly where to find the information I am asking for, but instead of reading the material (that I know exists) that has already been written that answers my question, can you write a response that explains the whole of morality?”
To start off with, you seem to be using the term “rationality” to mean something completely different than what we mean when we say it. I recommend Julia Galef’s Straw Vulcan talk.
You slightly misunderstood what I meant, but maybe that’s understandable. I’m not a native English speaker and I’m quite poor at expressing myself even in my native language. You don’t have to be so condescending; I was just being curious. Do you usually expect people to read all the sequences before they can ask questions? If so, I apologize because I didn’t know this rule. I can come back here after a few months when I’ve read all the sequences.
I know exactly where to find the information I am asking for, but instead of reading the material (that I know exists)
Okay, sorry. I just wanted to be honest. I have read most of the sequences listed on the sequences page. The morality sequence is quite big, and reading it seems a daunting task because I have books related to my degree that I’m supposed to be reading and they are more important to me at the moment. I thought there could be a quick answer to this question. But if you have any specific blog posts related to this issue in mind, please link them!
To start off with, you seem to be using the term “rationality” to mean something completely different than what we mean when we say it.
I’m aware of that. With quotation marks around the word I was signaling that I don’t really think it’s real rationality or the same kind of rationality LessWrong people use. I know that rationalist people don’t think that way. It’s just that some economic texts use the word “rationality” to mean exactly that: a “rational” agent is only interested in his own well-being.
I recommend Julia Galef’s Straw Vulcan talk.
I have read relevant blog posts on LessWrong and I think I know this concept. People think rational people are supposed to be some kind of emotionless robots who don’t have any feelings, and who otherwise think like a modern-day computer: very mechanically, not very flexible in their thinking, etc. In reality people can use instrumental rationality to achieve the emotionally desired goals they have, or use epistemic rationality to find out what their emotionally desired goals really are?
Keep in mind that this “rationality” is just a word. Making up a word shouldn’t, on its own, be enough to show that something is good or bad. If self-interest is more “rational” than helping others, then you should be able to give good reasons for that with other words that are more clear and simple.
People get very confused when they start thinking that what they actually want matters less than some piece of paper saying what they Should or Shouldn’t want. Even if some made-up idea says you Shouldn’t want to help others except to make yourself happy, why should that matter more to me than what I actually want, which is just to help people? This is a lot like Mr. Yudkowsky’s “being sad about having to think and decide well”.
Btw, that link is really good and it made me think a bit differently. I’ve sometimes envied others for their choices and thought I’m supposed to behave in a certain way that is opposite to that… but actually what matters is what I want and how I can achieve my desires, not how I’m supposed to act.
Right! “I should...” is a means for actually making the world a better place. Don’t let it hide away in its own world; make it face up to the concerns and wishes you really have.
If self-interest is more “rational” than helping others, then you should be able to give good reasons for that with other words that are more clear and simple.
I think the gist is that we all live inside our own bubbles of consciousness and can only observe indirectly what is inside other people’s bubbles. Everything that motivates you or makes you do anything is inside that bubble. If you expand this kind of thinking, it’s not really important what is inside those other bubbles, only how they affect you. But this is kinda contrived philosophy.
Which texts are you referring to? I have about a dozen and none of them define rationality in this way.
Okay. I was wrong. It seems I don’t know enough and I should stop posting here.
I think the problem might be confusing connotation and denotation. ‘Rational self-interest’ is a term because most rationality isn’t self-interested, and most self-interest isn’t rational. But when words congeal into a phrase like that, sometimes they can seem to be interchangeable. And it doesn’t help that aynrand romanticism psychodarwinism hollywood.
Yep, the Ayn Rand type of literature is what originally brought this to my mind. I also read a book about economic sociology which discussed the prisoner’s dilemma; it said the most “rational” choice is to always betray your partner (if you only play once), and that Nash was surprised when people didn’t behave this way.
That’s a roughly high-school-level misunderstanding of what the Prisoner’s Dilemma means, though I suppose it makes sense to be surprised that humans care about each other if you’d never met a human, and it did make sense to be confused by why humans care about each other until we recognized that (uncertainly) iterated dilemmas and kin selection were involved. I believe a great many people on LessWrong also reject the economic consensus on this issue, however; they think that two rational agents can cooperate in something like a classical PD, provided only that they have information about one another’s (super)rationality. See True Prisoner’s Dilemma and Decision Theory FAQ.
In the real world, most human interactions are not Prisoner’s Dilemmas, because in most cases people prefer something that sounds like ‘(Cooperate, Cooperate)’ to ‘(Cooperate, Defect)’, whereas in the PD the latter must have a higher payoff.
This is what was said:
“It (game theory) assumes actors are more rational than they often are in reality. Even Nash faced this problem when some economists found that real subjects responded differently from Nash’s prediction: they followed rules of fairness, not cold, personal calculation (Nassar 1998: 199)”
Yeah, I remember reading that some slightly generous version of tit-for-tat is the most useful tactic in the prisoner’s dilemma, at least if you’re playing several rounds.
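(A small simulation sketch of the two claims above: in a one-shot PD, mutual defection is the only Nash equilibrium, while over repeated rounds a slightly generous tit-for-tat does much better against itself. The payoff numbers follow the standard ordering T > R > P > S; the 10% forgiveness rate is an arbitrary choice for illustration.)

```python
import random

# Standard PD payoffs (first player's score listed first): T > R > P > S.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(my_history, their_history):
    return "D"

def generous_tit_for_tat(my_history, their_history, forgiveness=0.1):
    # Copy the opponent's last move, but sometimes forgive a defection.
    if not their_history or their_history[-1] == "C":
        return "C"
    return "C" if random.random() < forgiveness else "D"

def play(strategy_a, strategy_b, rounds=200):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

# One shot, mutual defection is the only Nash equilibrium: each side gets 1.
# Over repeated rounds, two generous tit-for-tat players keep cooperating
# and each earns about 3 per round, far more than mutual defectors.
print(play(always_defect, always_defect))
print(play(generous_tit_for_tat, generous_tit_for_tat))
```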
The reason I ask is because I have heard this claim many times, but have never encountered an actual textbook that taught it, so I’m not sure if it has any basis in reality or is just a straw man (perhaps, designed to discredit economics, or merely an honest misunderstanding of the optimization principle).
I would say that “doing good in the most helping way” only matters if you want things to be good for other people (or animals). A person who thinks well might want things to be good for other people, or want things to be good for themselves, or want there to be lots of things to hold paper together—to think well means to do things the best way to get what they want, but not to want any one thing.
Knowing whether you want things to be good for other people, or just want things to be good for yourself but feel sad when things are bad for other people, is sort of like a different thing people think about here. Sometimes we think about if we should want a thing that makes us think we have what we want, even though we are really just sitting around with the thing on our heads. If I want to think that things are good for other people (because it will make me happy and the biggest thing I want is to be happy), then I can get what I want by changing what I think. But if what I want is for things to be good for other people (even if it does not make me happy), then the only way I can get what I want is to make things better for other people (and so I want to do good in the most helping way).
I should say, I think a little different about good from most people here. Most people here think that you can want something, but also think that it is bad. I think that if you think you want something that is bad, you are probably confused about what you want, and you would stop wanting the bad thing if you thought about it enough and felt how bad it was. I am not totally sure that I am right about this, though.
(See also: good about good)
The question still stands: if someone is “rationally” interested only in their own well-being, and only cares about other minds to the extent that those minds affect their own mind through the natural empathy reflex, then why should they be interested in the well-being of other minds purely for its own sake? Shouldn’t they only be interested in how the suffering of other minds affects their own mind?
I don’t think it’s really possible to argue against this idea. If you’re only interested in your own well-being, then doing things which do not increase your own well-being will not help you achieve your goal.
But how happy or sad other minds are does change how happy or sad I am. Why would it be looking out for myself better if I ignored something that changes my life in a big way? And why should I pretend to only care about myself if I really do care about others? Or pretend to only care about how others cause changes in me, when I do in fact care about the well-being of people who don’t change me?
Suppose I said to you that it’s bad to care about the person you’re going to be. After all, you aren’t that person now. That person’s thoughts and concerns are outside of the present you. And that person can’t change anything for the present you.
That wouldn’t be a very good reason to ignore the person I’ll become. After all, I do want the person I’m going to be to be happy. I don’t need to give reasons showing why I should care about myself over time. I just need to note that I do in fact care about myself over time. How is this different, in any important way that changes the reasoning above, from noting that I do in fact care about other people in their own right?
If people only cared about other people as ways to get warm good feels for themselves, then people would be happy to change themselves to get warm good feels both when others are happy and when others are sad. People also wouldn’t care about people too far away to cause changes for them. But if I send a space car full of people far away from me, I still want them to be happy even after they’re too far away to ever change anything for me again. That’s a fact about how I am. Why should I try to change that?
I guess that makes sense. When people say things like “I want a lot of money”, “I want to live in a fulfilling relationship”, “I want to climb Mt. Everest”, the essential quality of these desires is that they are real and actually happen roughly the same way you picture them in your mind. No one says things like “I want to have the good feeling of living in a fulfilling relationship whether or not I actually live in one”… no. Because it’s important that they’re actually real. You can say the same thing about helping others—if you don’t want other people to suffer, then it’s important that they actually don’t suffer.
That wouldn’t be a very good reason to ignore the person I’ll become. After all, I do want the person I’m going to be to be happy. How is this different, in any important way that changes the reasoning above, from noting that I do in fact care about other people in their own right?
It’s a bit different. You will eventually become the person you are in the future, but it’s impossible to ever get inside the mind of someone else, at least not directly.
people would be happy to change themselves to get warm good feels both when others are happy and when others are sad.
How would you actually change yourself? It’s very difficult in practice.
People also wouldn’t care about people too far away to cause changes for them.
But people don’t care about far away people so much as they care about people who are similar to them. When westerners get in trouble in developing countries, people make a big effort to get them to safety and mostly ignore all the suffering that is going on around them. People send less money to people in developing countries than to, say, war veterans or people at home.
That’s a fact about how I am. Why should I try to change that?
You shouldn’t. I’m the same way, I try to help people for the sake of helping them. But there are some people who are only interested in their own well-being and I’m just thinking how I could argue with them.
Because it’s important that they’re actually real.
Yes! I think that’s a lot like what I was talking about.
You will eventually become the person you are in the future
Present-you won’t. Present-you will go away and never know that will happen. You-over-time may change from present-you to you-to-come, but I wasn’t talking about you-over-time.
Also, mind reading could change this some day, maybe.
How would you actually change yourself? It’s very difficult in practice.
Yes, but even if it weren’t possible at all, as long as we thought it were possible, whether we wished for it could say a lot about what we really want.
People send less money to people in developing countries than to, say, war veterans or people at home.
Yes, but that’s very different from saying that people don’t care about far away people at all except in so far as they get changed by them. If it were completely easy for you to in a flash make the lives of everyone you’ll never know about ten times as good, for free, you would want to do that.