Hello! I’d welcome you, but I can’t honestly represent anything or anyone besides, well, me (I’m a complete neophyte). Really, my interest was quite piqued by your thoughts on Mr. Bentham’s philosophy, as they happen to be the exact opposite of the conclusion I came to—namely that utilitarianism is essentially for people who think, “Things could be so much better if I ran things.” The main logical process that led to this conclusion was: People aren’t being logical < If they were logical, they would consider the probability of the net good of an act, and only act if that probability was very high, or only just above even but still low-risk < What about contentious issues, grounded in value systems? Who would make the call on those? < ___.
On that last step I’ve never really made any progress, as it seems no matter how objective (I consider this word to include the consideration of emotions) and rational you are, on the contentious issues that have no… *
… Sorry, I just had a thought. I remember reading somewhere that for questions with no clear right or wrong answer, after the collective evidence has been weighed for accuracy, legitimacy, credibility, et cetera, the option(s) with the greatest probability of being true should (as a rationalist) be treated as true for the time being; if some new evidence tips the scale the other way, the belief follows. This means no religion could be rationally considered true—as of now, at least. Thus any governmental system based upon utilitarianism would only tolerate religion insofar as it affects the emotional welfare of its citizens. And it means that if either 3,000 innocents or 3 brilliant, Nobel-prize-winning, humanity-revolutionizing genius scientists absolutely had to die (every other possible and impossible avenue having been tried and failed), then the choice would come down to the probable net good (utility) of each option.
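To make that last comparison concrete, here is a minimal sketch of the expected-utility arithmetic, with probabilities and utilities that are entirely made up for illustration (none of these numbers come from the post):

```python
# Toy expected-utility comparison for the forced choice described above.
# Every number below is invented purely for illustration.

options = {
    # option: (probability the outcome goes well, net good if it does, net good if it does not)
    "spare the 3,000 innocents": (0.9, 3_000.0, -500.0),
    "spare the 3 scientists":    (0.3, 50_000.0, -500.0),
}

for name, (p_good, u_good, u_bad) in options.items():
    expected_net_good = p_good * u_good + (1 - p_good) * u_bad
    print(f"{name}: expected net good = {expected_net_good:,.1f}")

# A utilitarian decision rule would pick whichever option has the higher
# expected net good; the hard part, of course, is where the numbers come from.
```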
I suppose I made a little bit of progress there, so thank you for the kick—but you can see, I hope, how I think utilitarianism is embraced by people who think on the opposite pole of “I think being nice is good”. I no longer think it’s embraced only by those who think they should be running things, though. That changed over the course of writing this, and since the post was partly about the thought process behind your conclusion, I’ve kept mine visible. Cheers!
*The following bracketed fragment completes the thought I was going to write before I cut off the sentence and started again from “… Sorry”; it was written ex post facto: [right or wrong, it comes down to the individual value system of the decider(s).]
I think your thought process brings up a few different aspects of evaluating ethical philosophies, and disentangling them would be very helpful.
First, I certainly agree that there are probably people out there who reach utilitarianism through a process of motivated cognition—they want to be in control, and the reason they use (perhaps even to themselves) to make that sound better is that it would be for the good of everyone. However, I also think there are many other people who grew up believing that good is what we should strive for, and that the way to do that is to aim for the greatest benefit of the greatest number of people. These people might then reach for utilitarianism not to justify actions they already wanted to perform, but as what they perceive to be the complete ethical system closest to their existing objectives.
While the former group merely use utilitarianism as an excuse (even if they believe they believe in it), it is actually the latter group whose reasoning I am more concerned about. Whereas the dictator types will do what they planned to do anyway, the forces-of-good types are vulnerable to taking utilitarianism too seriously and doing things like concluding that maybe it’s okay to sacrifice one human life if it will save one million ants (ecosystem impacts aside), which is a thought I do not believe would ever have arisen from their core belief system. That is not to say all utilitarians would agree with that trade-off, but I have seen some who seem like they would, and that is just one minor example of the many problems I have with the idea.
The other point I wanted to bring up is that utilitarianism, even if one likes it, is really a system for general thinking rather than for immediate real-world implementation. Indeed, it is unimplementable in the mechanism-design sense. The only way I can think of to put it into practice in the real world is to have a strong AI (or equivalent) build detailed models of everyone (possibly involving brain scans) and implement a solution based on those; any other implementation would suffer from participants refusing to tell the truth about their utility functions. So the question of “how would contentious decisions be made?” is fairly unanswerable, except by accepting some deviation from utilitarianism.
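To make the truth-telling problem concrete, here is a toy sketch (the agents, options, and numbers are all invented) of a planner that simply picks whichever option maximizes the sum of reported utilities; a single exaggerating participant is enough to flip the outcome, so honest reporting is not in anyone’s interest:

```python
# Toy illustration: a planner that maximizes the sum of *reported* utilities
# is not strategy-proof. All agents, options, and numbers are invented.

true_utilities = {
    # agent: {option: that agent's true utility for the option}
    "alice": {"A": 10, "B": 0},
    "bob":   {"A": 4,  "B": 10},
    "carol": {"A": 4,  "B": 10},
}

def planner_choice(reports):
    """Pick the option whose total reported utility is greatest."""
    options = ["A", "B"]
    return max(options, key=lambda opt: sum(r[opt] for r in reports.values()))

# With honest reports, B wins (total 20 vs 18 for A).
honest_reports = {agent: dict(utils) for agent, utils in true_utilities.items()}
print("honest reporting  ->", planner_choice(honest_reports))

# Alice prefers A, so she exaggerates; now A wins, and the planner has no way
# to recover anyone's true utilities from what was reported.
strategic_reports = dict(honest_reports)
strategic_reports["alice"] = {"A": 1_000_000, "B": 0}
print("alice exaggerates ->", planner_choice(strategic_reports))
```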
I hope that helps crystallize your thoughts a little bit.
That said, where there exists a measurable difference between an implementable approximation of utilitarianism and an implementable approximation of some other moral principle X, then it makes sense to consider oneself a utilitarian or an Xian even if one is, as you say, accepting deviations from utilitarianism or X in order to achieve implementability.
Thank you! I’d never really thought of that other (the latter) route to utilitarianism; that explains a lot. Nitpick: I think ‘crystallize’, applied to ‘thoughts’, is only really recommendable when describing a particularly desirable thought process. I understood crystallize to mean elucidate in this context, but there is room for confusion.
Thanks! I was sort of using a word experimentally, and it’s good to know that it can be a bit confusing. For the record, yes, I did mean it in an elucidate sort of way.