Reminder to self: when posting on utilitarianism-related topics, include the disclaimer that I am a consequentialist but not a utilitarian. I don’t believe there is an objective, or even outside-the-individual, perspective on valuation or population aggregation.
Value is relative, and the evaluation of a universe-state can and will be different for different agents. There is no non-indexical utility, and each agent models the value of other agents’ preferences idiosyncratically.
Strong anti-realism here. And yet it’s fun to play with math and devise systems that mostly match my personal intuitions, so I can’t stay away from those topics.
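Since I can’t resist the math: here’s a minimal sketch of what I mean by “no non-indexical utility”, with every symbol my own ad-hoc notation rather than any standard formalism. Each agent $i$ in a population $A$ has its own valuation of universe-states $s \in S$:

$$U_i : S \to \mathbb{R}, \quad i \in A,$$

and agent $i$ only ever sees agent $j$’s preferences through its own model $\hat{U}_{i \to j}$ of them. So any “aggregate” is itself indexical: the weights belong to whoever is doing the aggregating,

$$W_i(s) = \sum_{j \in A} w_{ij}\, \hat{U}_{i \to j}(s),$$

and there is no outside-the-individual fact fixing the $w_{ij}$.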
I think I agree that there’s no objectively true universal value aggregation process, but if you can’t find (a method for finding) a very broad value aggregation system, then you can’t have peace. You have peace insofar as factions accept the rulings of the system. Simply giving up on the utilitarian project is not really an option.
That’s an interesting reason to seek Utilitarianism that I hadn’t considered. Not “this is true” or “this makes correct recommendations”, but “if it works, it’ll be a better world”. I see some elements of Pascal’s Wager in there, but it’s a novel enough (to me) approach that I need to think more on it.
I do have to point out that perhaps “you can’t have peace” is the true result of individual experience. You can still have long periods of semi-peace, when a coalition is strong enough to convince others to follow their regime of law (Pax Romana, or many of today’s nations). But there’s still a lot of individual disagreement and competition for resources, and some individuals use power within the system to allocate resources in ways that other individuals disprefer.
I’m not sure if that is “peace” within your definition. If so, Utilitarian aggregation isn’t necessary—we have peace today without a working Utility calculation. If not, there’s no evidence that it’s possible. It may still be a worthwhile goal to find out.
Shameless self-promotion, but I think meta-preference utilitarianism is a way to aggregate value fairly without giving up anti-realism. Also, be sure to check out the comment by Lukas_Gloor, as he goes into more depth about the implications of the theory.
Well, that’s one way to lean into the “this isn’t justified by fundamentals or truth, but if people DID agree to it, things would be nicer than today (and it doesn’t violate preferences that I think should be common)” framing. But I’m not sure I buy that approach. Nor do I know how I’d decide between that and any other religion as a desirable coordination mechanism.
I’m not sure what flavor of moral anti-realism you subscribe to, so I can’t really help you there, but I am currently writing a post which aims to destroy a particular subsection of anti-realism. My hope is that if I can destroy enough of meta-ethics, we can eventually make some sort of justified ethical system. But meta-ethics is hard and, even worse, boring. So I’ll probably fail.
Good luck! I look forward to seeing it, and I applaud any efforts in that direction, even as I agree that it’s likely to fail :)
That’s a pretty good summary of my relationship to Utilitarianism: I don’t believe it, but I do applaud it, and I prefer most of its recommendations to those of a purer, more nihilistic theory.
Wow, what a coincidence: moral nihilism is exactly the subject of the post I was talking about. Here it is, btw.