I miss discussion (on LW in general) of an approach to ethics that strives to determine which actions should be unlawful for an agent, as opposed to, say, which probability distribution over actions is optimal for an agent. (And I don’t mean “deontological”, as the “unlawfulness” can be predicated on the consequences.) If you criticize this comment for conflating descriptive ethics, normative ethics, and metaethics, try to be constructive.
What: discussion of the “social contract” aspect of ethics. For example: the right not to have one’s options (sets of actions) constrained beyond some threshold X; what that threshold should be; and properties such as “actions that infringe the right-X of others are forbidden-X”.
Why should there be more of that on LW: (1) it is as practical and important an aspect of ethics as the self-help aspect; (2) it seems to be simpler than determining optimal well-being conditions.
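The “forbidden-X” property can be made concrete with a toy sketch (the predicate `infringes`, the agents, and the action labels below are all made up for illustration): it frames ethics as a classification problem over actions, in contrast with the optimization framing of picking the utility-maximizing action.

```python
# Toy formalization of the "forbidden-X" property, assuming a hypothetical
# predicate infringes(action, other) meaning: does this action constrain
# the other agent's option set beyond the threshold X?

def forbidden_x(action, others, infringes):
    """An action is forbidden-X iff it infringes the right-X of any other agent."""
    return any(infringes(action, other) for other in others)

def optimal_action(actions, utility):
    """The contrasting optimization framing: pick the action maximizing utility."""
    return max(actions, key=utility)

# Stand-in data: action "b" is stipulated to infringe everyone's right-X.
others = ["alice", "bob"]
infringes = lambda action, other: action == "b"

print(forbidden_x("a", others, infringes))  # False: "a" infringes no one
print(forbidden_x("b", others, infringes))  # True: "b" is forbidden-X
```

The point of the sketch is only the shape of the question: `forbidden_x` returns a yes/no verdict per action, while `optimal_action` returns a single best action from a ranking.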
It would call for a series of ‘self-help’-style posts explaining:
The benefits of creating boundaries between your own identity and other people’s declarations of wrongness.
The art of balancing freedom with political expedience when dealing with other agents who are attempting to coerce you socially.
How to maintain internal awareness of the distinction between what you do not do for fear of social consequences vs what you do not do because of your own ethical values.
The difference between satisfying the preferences of others vs acquiescing to their demands. Included here would be how to deal with those who haven’t developed the ability to express their own desires except indirectly via declarations of what it is ‘right’ for others to do.
On the other hand, when I look at self-help I see something I will continue to delay, or progress at only slowly and at a constant rate, because my current situation seems to be quite similar to my ideal situation. I think that once you reach that point, you start a more constant but passive process of improvement.
Your first #1 doesn’t seem to me to be a good justification for having more of it on LW. Lots of things are practical and important but don’t belong on LW.
Your second #1 seems to me wrong; deciding what’s actually right and wrong is very much not “simpler than determining optimal well-being conditions”, for the following reasons. (a) It’s debatable whether it’s even meaningful (since many people here are moral nonrealists or relativists of one sort or another). (b) There is no obvious way to reach agreement on what actually influences what’s right and what’s wrong. Net preference satisfaction? The will of a god? Obeying some set of ethical principles somehow built into the structure of the universe? Or what? (c) Most of the theories held by moral realists about what actually matters make it extraordinarily difficult to determine, in difficult cases, whether a given thing is right or wrong. Utilitarianism requires you to sum (or average, or something) the utilities of perhaps infinitely many beings, over a perhaps infinite extent of time and space. The theory Luke calls “desirism” requires you to work out the consequences of having many agents adopt any possible set of preferences. Intuitionist theories and divine-command theories make the details of what’s right and wrong entirely inaccessible. Etc.
Now, perhaps in fact you have some specific meta-ethical theory in mind such that, if that theory is true, then the ethical calculations become manageable. In that case, you might want to say what that meta-ethical theory is and why you think it makes the calculations manageable :-).
I don’t think Luke, at least, is conflating morality with personal preference-optimization. He’s saying: Different people have different notions of “should”-ness, and if someone says “What should I do?” then giving them a good answer has to begin with working out what notion of “should” they’re working with. That applies whether “should” is being used morally or prudentially or both.
Also: What makes a moral agent a moral agent is having personal preferences that give substantial weight to moral considerations. And what such an agent is actually deciding, on any given occasion, is what serves his/her/its goals best: it’s just that among the important goals are things like “doing what is right” and “not doing what is wrong”. So, actually, for a moral agent “personal preference-optimization” will sometimes involve a great deal of “what morality is actually about”.
There’s an important difference between saying preferences may or may not include moral values, and saying morality is, by definition, preference-maximisation.
I don’t criticize your comment on the basis of any confusion. It appears to be a more or less coherent indication of preference. I criticize it based on considering the state which you desire to be both abhorrent and not (sufficiently) lacking here.
Please explain why you think there should be more of that on LW.
My answer to that question is that it is what morality is actually about, and that personal preference-optimisation is something else.
Yup, there is. Did anyone say that morality is, by definition, preference-maximization?
Yes.
Do please feel free to provide more information.
Do you find the “classification problem” variant of the “optimization problem” already repugnant, or is it something deeper?
Classification vs optimization is not necessarily a feature I was commenting on.
The degree of bullshit that is intrinsic to such conversations when engaged in by human participants may be a contributing factor.