Your first #1 doesn’t seem to me to be a good justification for having more of it on LW. Lots of things are practical and important but don’t belong on LW.
Your second #1 seems to me wrong; deciding what’s actually right and wrong is very much not “simpler than determining optimal well-being conditions”, for the following reasons. (a) It’s debatable whether it’s even meaningful (since many people here are moral nonrealists or relativists of one sort or another). (b) There is no obvious way to reach agreement on what actually determines what’s right and what’s wrong. Net preference satisfaction? The will of a god? Obeying some set of ethical principles somehow built into the structure of the universe? Or what? (c) Most of the theories held by moral realists about what actually matters make it extraordinarily difficult to determine, in hard cases, whether a given thing is right or wrong. Utilitarianism requires you to sum (or average, or something) the utilities of perhaps infinitely many beings, over a perhaps infinite extent of time and space. The theory Luke calls “desirism” requires you to work out the consequences of having many agents adopt any possible set of preferences. Intuitionist theories and divine-command theories make the details of what’s right and wrong entirely inaccessible. Etc.
Now, perhaps in fact you have some specific meta-ethical theory in mind such that, if that theory is true, then the ethical calculations become manageable. In that case, you might want to say what that meta-ethical theory is and why you think it makes the calculations manageable :-).