What: discussion of the “social contract” aspect of ethics, for example the right not to have one’s options (sets of actions) constrained beyond some threshold X, what that threshold should be, e.g. a property such that actions that infringe right-X of others are forbidden-X.
Why should there be more of that on LW: (1) it is an aspect of ethics that is just as practical and important as the self-help aspect; (2) it seems to be simpler than determining optimal well-being conditions.
It would call for a series of ‘self-help’-style posts explaining:
The benefits of creating boundaries between your own identity and other people’s declarations of wrongness.
The art of balancing freedom with political expedience when dealing with other agents who are attempting to coerce you socially.
How to maintain internal awareness of the distinction between what you do not do for fear of social consequences vs what you do not do because of your own ethical values.
The difference between satisfying the preferences of others vs acquiescing to their demands. Included here would be how to deal with those who haven’t developed the ability to express their own desires except indirectly, via declarations of what it is ‘right’ for others to do.
On the other hand, when I look at self-help I see something I will continue to delay, or progress at slowly but at a constant rate, because my current situation seems to be quite similar to my ideal situation. I think that once you reach that point, you start a more constant but passive process of improvement.
Your point (1) doesn’t seem to me to be a good justification for having more of it on LW. Lots of things are practical and important but don’t belong on LW.
Your point (2) seems to me wrong; deciding what’s actually right and wrong is very much not “simpler than determining optimal well-being conditions”, for the following reasons.
(a) It’s debatable whether the question is even meaningful (since many people here are moral nonrealists or relativists of one sort or another).
(b) There is no obvious way to reach agreement on what actually determines what’s right and what’s wrong. Net preference satisfaction? The will of a god? Obeying some set of ethical principles somehow built into the structure of the universe? Or what?
(c) Most of the theories held by moral realists about what actually matters make it extraordinarily difficult to determine, in hard cases, whether a given thing is right or wrong. Utilitarianism requires you to sum (or average, or something) the utilities of perhaps infinitely many beings, over a perhaps infinite extent of time and space. The theory Luke calls “desirism” requires you to work out the consequences of having many agents adopt any possible set of preferences. Intuitionist theories and divine-command theories make the details of what’s right and wrong entirely inaccessible. Etc.
Now, perhaps in fact you have some specific meta-ethical theory in mind such that, if that theory is true, then the ethical calculations become manageable. In that case, you might want to say what that meta-ethical theory is and why you think it makes the calculations manageable :-).