I am not an altruist. I would like my status to be higher than it is. I would like the few people I truly care about to have higher status. Otherwise, I really don’t care, except for enjoying when certain high-profile d-bags lose a lot of status. But that’s just me. My more general point was that even the altruistic sort who do care deeply about others don’t generally seem to want to raise others above themselves, either in status or materially. Can you think of one counter-example? I can’t, but I’m not trying very hard.
Like I mentioned in another comment, I would feel bad for lower-status people who didn’t have the same opportunity I had to reach the level of status I have. I am thriving in this world because I was born lucky. And AFAIK, almost no one of lower status than me had the same opportunity to reach the status level I’m at now.
It’s almost impossible for one person’s morality to be significantly different from the standard. It’s more likely that one who thinks themself different is simply confused.
Um, what standard of significance are you using here? Yes, humans are extremely similar compared to the vastness of that which is possible, but that doesn’t mean the remaining difference isn’t ridiculously important.
Um, what standard of significance are you using here?
The standard implied by the remark I was commenting on. Literally not caring about other people seems like something you may believe about yourself, but which can’t be true.
The standard implied by the remark I was commenting on.
I read the original post as being about the ordinary human domain, implying an ordinary human-relative standard of significance.
Literally not caring about other people seems like something you may believe about yourself, but which can’t be true.
This is ambiguous in two ways: which other people (few or all), and what sort of valuation (subjunctive revealed preference, some construal of reflective equilibrium)? I suppose it’s plausible that for every person, some appeal to empathy would sincerely motivate that person.
The underlying genetic machinery that produces an individual’s morality is a human universal. But the production of the morality is very likely dependent upon non-genetic factors. The psychological unity of humankind no more implies that people have the same morality than it implies that they have the same favorite foods.
Yes, but it’s very easy for the actual large-scale consequences of a human morality to be very different. We all feel compassion for friends and fear of strangers; but when we scale our morality to the size of humanity, the difference is huge depending on whether the compassion or the fear dominates.
Hitler and Gandhi may not be that different, but the consequences of their actions were.
It’s almost impossible for one person’s morality to be significantly different from the standard.
Really? Yes, of course almost everyone falls in the tiny-in-absolute-terms human space, but significant (in ordinary language which doesn’t seem confused enough to abandon) differences within that space exist with respect to endorsed moralities (to begin with, whether one endorses any abstract moral theory), and to a lesser extent WRT revealed preferences. (WRT reflective equilibria, who the hell knows?)