[LINK] The most important unsolved problems in ethics
Will Crouch has written up a list of the most important unsolved problems in ethics:
The Practical List
What’s the optimal career choice? Professional philanthropy, influencing, research, or something more common-sensically virtuous?
What’s the optimal donation area? Development charities? Animal welfare charities? Extinction risk mitigation charities? Meta-charities? Or investing the money and donating later?
What are the highest leverage political policies? Libertarian paternalism? Prediction markets? Cruelty taxes, such as taxes on caged hens; luxury taxes?
What are the highest value areas of research? Tropical medicine? Artificial intelligence? Economic cost-effectiveness analysis? Moral philosophy?
Given our best ethical theories (or best credence distribution in ethical theories), what’s the biggest problem we currently face?
The Theoretical List
What’s the correct population ethics? How should we value future people compared with present people? Do people have diminishing marginal value?
Should we maximise expected value when it comes to small probabilities of huge amounts of value? If not, what should we do instead?
How should we respond to the possibility of creating infinite value (or disvalue)? Should that consideration swamp all others? If not, why not?
How should we respond to the possibility that the universe actually has infinite value? Does it mean that we have no reason to do any action (because we don’t increase the sum total of value in the world)? Or does this possibility refute aggregative consequentialism?
How should we accommodate moral uncertainty? Should we apply expected utility theory? If so, how do we make intertheoretic value comparisons? Does this mean that some high-stakes theories should dominate our moral thinking, even if we assign them low credence?
How should intuitions weigh against theoretical virtues in normative ethics? Is common-sense ethics roughly correct? Or should we prefer simpler moral theories?
Should we prioritise the prevention of human wrongs over the alleviation of naturally caused suffering? If so, by how much?
What sorts of entities have moral value? Humans, presumably. But what about non-human animals? Insects? The natural environment? Artificial intelligence?
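One candidate answer to the moral uncertainty question above is to maximise expected choiceworthiness across theories. Here is a toy sketch of that proposal; every theory name, credence, and score below is invented for illustration, and the shared intertheoretic scale it assumes is precisely what the question contests:

```python
# Toy sketch of "apply expected utility theory to moral uncertainty".
# All numbers are illustrative assumptions, not claims from the post.

# Credence assigned to each moral theory (sums to 1).
credences = {"utilitarianism": 0.6, "deontology": 0.4}

# Choiceworthiness of each action according to each theory, on a shared
# scale -- assuming such a scale exists is the contested step
# (the intertheoretic value comparison problem).
choiceworthiness = {
    "donate": {"utilitarianism": 10, "deontology": 2},
    "keep_promise": {"utilitarianism": 3, "deontology": 8},
}

def expected_choiceworthiness(action):
    """Credence-weighted average of an action's choiceworthiness."""
    return sum(credences[t] * choiceworthiness[action][t] for t in credences)

best = max(choiceworthiness, key=expected_choiceworthiness)
```

Note how a theory held with low credence but extreme stakes (a large choiceworthiness spread) can dominate the weighted sum — this is the "high-stakes theories" worry raised in the question.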
What additional items should be on these lists?