As a full-throated defender of pulling the lever (given traditional assumptions such as a lack of an audience, complete knowledge of each outcome, and the productivity of the people on the tracks), I see numerous issues with your proposals:
1.) Vague alternative: You seem to be pushing toward some form of virtue ethics/basic intuitionism, but there are numerous problems with this approach. Besides determining whose basic intuitions count and whose don’t, or which virtues are important, there are very real problems when these virtues conflict. For instance, imagine you are walking at night and trying to cross a street. The signal is red, but no cars are around. Do you jaywalk? In this circumstance, one is forced to make a decision that pits two virtues/intuitions against each other. The beauty of utilitarianism is that it allows us to choose in these circumstances.
2.) Subjective Morality: Yes, utilitarianism may not be “objective” in the sense that there is no intrinsic reason to value human flourishing, but I believe utilitarianism to be the viewpoint that most closely conforms to what most people value. To illustrate why this matters, I take an example from Alex O’Connor. Imagine you need to decide what color to paint a room. Nobody has very strong opinions, but most people in your household prefer the color blue. Yes, blue might not be “objectively” the best, but if most of the people in your household like blue the most, there is little reason not to paint the room blue. We are all individually going to seek what we value, so we might as well collectively agree to a system which reflects the preferences of most people.
3.) Altruism in Disguise:
Another thing to notice is that virtue ethics can be a form of effective altruism when practiced in specific ways. In general, bettering yourself as a person by becoming more rational, less biased, etc., will in fact make the world a better place, and making time for meaningful relationships, leisure, etc. can actually increase productivity in the long run.
You also seem to advocate for fundamental changes in society, changes I am not sure I would agree with, but if your proposed changes are indeed the best way to increase the general happiness of the population, then pursuing them would be, by definition, the goal of the EA movement. I think a lot of people look at the recent stuff with SBF and AI research and come to think the EA movement is only concerned with lofty existential risk scenarios, but there is a lot more to it than that.
Edit:
Almost forgot this, but citation: Alex O’Connor (in this video) formulated the blue room example. We use it differently (he uses it to argue against objective morality), but he verbalized it.
Completely agree!
Hmm, maybe I spoke too soon. There have been times in different societies when the majority of people would say slavery was good, and times (even before the modern age) when the majority of people would say slavery was bad. But slavery is bad, and according to my argument that’s because it feels bad, even if you are mostly unconscious of it.
So my theory of social change is that we individually learn to feel and understand our emotions better, because emotions are the only reason we care about anything at all. The way emotions feel in my body is certainly very objective to me, and the more I understand them the more I can recognize them in other people. I’m not sure I’m willing to claim that recognizing other people’s emotions is entirely objective, but when we see an angry politician ranting and raving, no one disagrees that they are angry.
So I would say we should collectively agree to a system which reflects the true preferences of most people, but there is a process involved in understanding what those preferences really are.