Chalk me down as a regular-ol’ moral anti-realist then. Future Charlie will have made some choices and learned things that will also suit me, and I’m fine with that, but I don’t expect it to be too predictable ahead of time—that future me will have done thought and life-living that’s not easily shortcutted.
Where future moral changes seem predictable, that’s where I think they’re the most suspect—clichés or social pressure masquerading as predictions about our future thought processes. Or skills—things I already desire but am bad at—masquerading as values.
Yeah, moral anti-realist here too. I hope future-me has moral preferences that fit the circumstances and equilibria they face better than mine do, and I hope that enables more, happier, people to coexist. But I don’t think there’s any outside position that would make those specific ideas “more correct” or “better”.
Your imagined future self has the same morality that you do: enabling “more, happier, people to coexist”. The only change imagined here is being able to pursue your unchanged morality more effectively.
Moral quasi-realist here. What makes them better is better adaptation.
“better adaptation” means “better experiences in the new context”, right? Is the fact that you’re using YOUR intuitions about “better” for future-you what makes it quasi-realist? Feels pretty anti- to me...
No. Better outcomes for societies.
Hmm. I don’t know how the value of an outcome for society is anything but a function of the individual doing the evaluation. Is this our crux? From the standpoint of you and future-you, are you saying you expect that future-you weights their perception of societal good higher than you do today? Or are you claiming that societies have independent desires from the individuals who act within them?
It’s not all about me, and it’s not all about preferences, since many preferences are morally irrelevant. Morality is a mechanism for preventing conflicts between individuals, and enabling coordination between individuals.
Societies don’t have anthropic preferences. Nonetheless, there are things societies objectively need to do in order to survive. If a society fails to defend itself against aggressors, or sets the tax rate to zero, it won’t survive, even though individuals might enjoy not fighting wars or paying taxes.
Even if you are assuming a context where the very existence of society is taken for granted, morality isn’t just an aggregate of individual preferences.
Morality needs to provide the foundations of law, and the rewards and punishments the law hands out are objective: someone is either in jail or not; they cannot be in jail from some perspectives and not others. Gay marriage is legal or not.
It is unjust to jail someone unless they have broken a clear law, crossed a bright line. And you need to punish people, so you need rules. But an aggregate of preferences is not a bright line per se, so you need a separate mechanism to turn preferences into rules. Utilitarianism is therefore inadequate to do everything morality needs to do, even if consequentialism is basically correct.
Lesswrongians tend to assume “luxury morality”: the continued survival of society is a given, it’s also a given that there are lots of spare resources, and the main problem seems to be what to do with all the money.
From the adaptationist perspective, that is partly predictable. A wealthy society should have different object-level morality to a hardscrabble one. Failure to adapt is a problem. But it’s also a problem to forget entirely about “bad policeman” stuff like crime and punishment and laws.
It’s not all about me, and it’s not all about preferences
Not all about, but is ANY part of morality about you or about preferences? What are they mixed with, in what proportions?
To be clear (and to give you things to disagree with so I can understand your position), my morality is about me and my preferences. And since I’m anti-realist, I’m free to judge others (whose motives and preferences I don’t have access to) more on behavior than on intent. So in most non-philosophical contexts, I like to claim and signal a view of morality that’s different than my own nuanced view. Those signals/claims are more compatible with realist deontology—it’s more legible and easier to enforce on others than my true beliefs, even though it’s not fully justifiable or consistent.
It is unjust to jail someone unless they have broken a clear law, crossed a bright line.
Do you mean it’s morally wrong to do so? I’m not sure how that follows from (what I take as) your thesis that “societal survival” is the primary driver of morality. And it still doesn’t clarify what “better adaptation” actually means in terms of improving morality over time.
Not all about, but is ANY part of morality about you or about preferences?
How many million people are in my society? The weight of my preferences isn’t zero, but it isn’t greater than 1/N. Why would it be greater? I’m not the King of morality. Speaking of which...
my morality is about me and my preferences
But we were talking about morality, not about your morality.
And since I’m anti-realist, I’m free to judge others (whose motives and preferences I don’t have access to) more on behavior than on intent.
So what? You don’t, as an individual, have the right to put them in jail... but society has a right to put you in jail. There’s no reason for anybody else to worry about your own personal morality, but plenty of reason for you to worry about everyone else’s.
Do you mean it’s morally wrong to do so?
Would you want to be thrown into jail for some reason that isn’t even clearly defined?