I’ve taught my philosophy students that “obvious” is a red flag in rational discourse.
It often functions as “I am not giving a logical or empirical argument here, and am trying to convince you that none is needed” (really, why?) and “If you disagree with me, you should perhaps worry that you are stupid or ignorant for not seeing something obvious; a disagreement with my unfounded claim needs careful reasoning and arguments on your part, so it may be better to stay quiet, lest you be laughed at.” It so often functions as a trick to get people to overlook an unjustified statement, or to get others to justify your statements for you, or to make them doubt themselves when they doubt your unfounded claim. (Which is the very effect you have produced, with commenters below going “I really don´t get it and it bothers me alot.”, locating the fault in themselves for not understanding something that was never explained and is likely not true, and other commenters coming up with the arguments you did not supply.)
If a statement is actually obvious—that is, universally and instantly convincing, with everyone capable of giving the argument for it easily and quickly—then this does not need to be spelled out, and generally is not, since stating that it is obvious adds nothing to what everyone already knows. If the statement is nearly obvious, but not quite, that is, it can be proven with ease in a few lines, the proof might as well be given, right?
Furthermore, I am unaware of a compelling rational argument for total utilitarianism. It is deeply controversial, for good reasons, whether morality as a whole is something that can be purely rationally derived (Hume made this point well: it is one thing to rationally deduce how to reach a given moral goal, and quite another to rationally generate a moral goal like “maximise average or total human happiness”, let alone to prove that it is the only worthwhile goal). And the notable attempts to derive a purely rational moral system run contrary to utilitarianism (e.g. Kant’s attempt to construct a morality that consists solely of one’s actions being logically non-contradictory comes to mind, and he explicitly excludes the utility of an action from its moral judgement).
If you offer humans the chance to live in a world governed by traditional utilitarianism, many of them do not wish to live there, and many consider the very project of constructing such a world to be a moral wrong.
Many humans choose to know uncomfortable truths, to be free, to create and discover, to sacrifice themselves for others, to have authentic self-expression, to be connected to reality, to live in a world that is just, etc. etc. over pure happiness. If offered the hypothetical scenario of being inserted into a machine where they will always feel happy, eternally fed virtual chocolate and virtual blowjobs and an endless sequence of diverting content to scroll past, forgetting all the bad that happened to them, blind to the outer world, losing their capacity for boredom and their yearning for more… many would choose instead to live in a world that is often painful, but real, a world where their actions have impact. There is a realisation that there are things more important than happiness.
There is also often a strong feeling that there are evils that cannot be outweighed—and torturing an innocent non-consensually typically makes that list. Say we have a scenario where 20 men take a random woman, gangrape her, and kill her. They then argue that her one hour of suffering (dead now, she is suffering no more) is outweighed by the intense delight each of them feels, and will feel for decades—especially seeing as there are so many of them, and only one of her, and they really, really like raping. Heck, they’ve even taped it, so millions of men will be able to look at it and get off, so it is a virtue, really. If you look at that scenario and think “That is fucked up”, you aren’t being irrational, you are showing empathy, recognising value beyond mere averages of happiness. If you were that woman, or a member of any other oppressed group in such a system, being exploited for the “general good”, it would be your right to fight such a system with everything you’ve got—and I fucking hope that many people would have your back, and not excuse this as obviously rational.