Hi, I’m new to LessWrong and haven’t read the morality sequence or many arguments for effective altruism, so could you elaborate on this sentiment?
How I read this: “Hi! I know exactly where to find the information I am asking for, but instead of reading the material (that I know exists) that has already been written that answers my question, can you write a response that explains the whole of morality?”
To start off with, you seem to be using the term “rationality” to mean something completely different than what we mean when we say it. I recommend Julia Galef’s Straw Vulcan talk.
You slightly misunderstood what I meant, but maybe that’s understandable. I’m not a native English speaker and I’m quite poor at expressing myself even in my native language. You don’t have to be so condescending; I was just curious. Do you usually expect people to read all the sequences before they can ask questions? If so, I apologize, because I didn’t know this rule. I can come back here after a few months when I’ve read all the sequences.
I know exactly where to find the information I am asking for, but instead of reading the material (that I know exists)
Okay, sorry. I just wanted to be honest. I have read most of the sequences listed on the sequences page. The morality sequence is quite big, and reading it seems a daunting task because I have books related to my degree that I’m supposed to be reading and that are more important to me at the moment. I thought there could be a quick answer to this question. But if you have any specific blog posts related to this issue in mind, please link them!
To start off with, you seem to be using the term “rationality” to mean something completely different than what we mean when we say it.
I’m aware of that. With the quotation marks around the word I was signaling that I don’t really think it’s real rationality, or the same kind of rationality LessWrong people use. I know that rationalist people don’t think that way. It’s just that some economic texts use the word “rationality” to mean exactly that: a “rational” agent is only interested in his own well-being.
I recommend Julia Galef’s Straw Vulcan talk.
I have read relevant blog posts on LessWrong and I think I know this concept. People think rational people are supposed to be some kind of emotionless robots who don’t have any feelings and otherwise think like a modern-day computer: very mechanically, not very flexible in their thinking, etc. In reality, people can use instrumental rationality to achieve the emotionally desired goals they have, or use epistemic rationality to find out what their emotionally desired goals really are?
Keep in mind that this “rationality” is just a word. Making up a word shouldn’t, on its own, be enough to show that something is good or bad. If self-interest is more “rational” than helping others, then you should be able to give good reasons for that with other words that are more clear and simple.
People get very confused when they start thinking that what they actually want matters less than some piece of paper saying what they Should or Shouldn’t want. Even if some made-up idea says you Shouldn’t want to help others except to make yourself happy, why should that matter more to me than what I actually want, which is just to help people? This is a lot like Mr. Yudkowsky’s “being sad about having to think and decide well”.
Btw, that link is really good and it made me think a bit differently. I’ve sometimes envied others for their choices and thought I was supposed to behave in a certain way, opposite to theirs… but actually what matters is what I want and how I can achieve my desires, not how I’m supposed to act.
Right! “I should...” is a means for actually making the world a better place. Don’t let it hide away in its own world; make it face up to the concerns and wishes you really have.
If self-interest is more “rational” than helping others, then you should be able to give good reasons for that with other words that are more clear and simple.
I think the gist is that we all live inside our own bubbles of consciousness and can only observe indirectly what is inside other people’s bubbles. Everything that motivates you or makes you do anything is inside that bubble. If you expand this kind of thinking, it’s not really important what is inside those other bubbles, only how they affect you. But this is kinda contrived philosophy.
I think the problem might be confusing connotation and denotation. ‘Rational self-interest’ is a set phrase precisely because most rationality isn’t self-interested, and most self-interest isn’t rational. But when words congeal into a phrase like that, they can start to seem interchangeable. And it doesn’t help that Ayn Rand, Romanticism, pop psycho-Darwinism, and Hollywood all blur them together.
Yep, the Ayn Rand type of literature is what originally brought this to my mind. I also read a book about economic sociology that discussed the prisoner’s dilemma; it said the most “rational” choice is to always betray your partner (if you only play once), and that Nash was surprised when people didn’t behave this way.
That’s a roughly high-school-level misunderstanding of what the Prisoner’s Dilemma means, though I suppose it makes sense to be surprised that humans care about each other if you’d never met a human, and it did make sense to be confused by why humans care about each other until we recognized that (uncertainly) iterated dilemmas and kin selection were involved. I believe a great many people on LessWrong also reject the economic consensus on this issue, however; they think that two rational agents can cooperate in something like a classical PD, provided only that they have information about one another’s (super)rationality. See True Prisoner’s Dilemma and Decision Theory FAQ.
In the real world, most human interactions are not Prisoner’s Dilemmas, because in most cases people prefer the outcome where both players cooperate to the outcome where they defect against a cooperating partner, whereas in a true PD defecting against a cooperator must yield the higher payoff.
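To make that payoff condition concrete, here is a minimal sketch; the payoff values are the usual textbook ones (T > R > P > S), chosen purely for illustration rather than taken from anything above:

```python
# One-shot Prisoner's Dilemma payoffs for the row player, using standard
# textbook values: T (temptation) > R (reward) > P (punishment) > S (sucker).
T, R, P, S = 5, 3, 1, 0

payoff = {
    ("C", "C"): R,  # both cooperate
    ("C", "D"): S,  # I cooperate, they defect
    ("D", "C"): T,  # I defect against a cooperator
    ("D", "D"): P,  # both defect
}

# Defection strictly dominates cooperation in the one-shot game:
assert payoff[("D", "C")] > payoff[("C", "C")]  # T > R
assert payoff[("D", "D")] > payoff[("C", "D")]  # P > S

# If someone genuinely prefers mutual cooperation to exploiting a cooperator
# (i.e. ranks R above T), the first assertion fails and the interaction is,
# by definition, no longer a Prisoner's Dilemma.
```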
Yeah, I remember reading that some slightly generous version of tit-for-tat is the most effective strategy in the prisoner’s dilemma, at least if you’re playing several rounds.
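Roughly, the idea can be sketched like this; the 10% forgiveness rate, the payoff numbers, and the round count are illustrative assumptions, not figures from any of the sources mentioned above:

```python
import random

T, R, P, S = 5, 3, 1, 0  # standard textbook payoffs, illustrative only
PAYOFF = {("C", "C"): (R, R), ("C", "D"): (S, T),
          ("D", "C"): (T, S), ("D", "D"): (P, P)}

def generous_tit_for_tat(my_history, their_history):
    """Mirror the opponent's last move, but forgive a defection 10% of the time."""
    if not their_history:
        return "C"                                    # open by cooperating
    if their_history[-1] == "D":
        return "C" if random.random() < 0.1 else "D"  # occasionally forgive
    return "C"

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(generous_tit_for_tat, always_defect))
print(play(generous_tit_for_tat, generous_tit_for_tat))
```

Two generous players settle into steady mutual cooperation, while against an unconditional defector the generous version gives up only slightly more than plain tit-for-tat would; the occasional forgiveness mostly matters when noise or a single defection would otherwise lock two reciprocators into endless mutual retaliation.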
Which texts are you referring to? I have about a dozen and none of them define rationality in this way.
Okay. I was wrong. It seems I don’t know enough and I should stop posting here.
This is what was said:
“It (game theory) assumes actors are more rational than they often are in reality. Even Nash faced this problem when some economists found that real subjects responded differently from Nash’s prediction: they followed rules of fairness, not cold, personal calculation (Nassar 1998: 199)”
The reason I ask is that I have heard this claim many times, but have never encountered an actual textbook that taught it, so I’m not sure whether it has any basis in reality or is just a straw man (perhaps designed to discredit economics, or merely an honest misunderstanding of the optimization principle).