My Confusion about Moral Philosophy
Something about the academic discussion of moral philosophy has always confused me; probably this is a more general point about philosophy as such. Historically, people tried to arrive at truths about objects. One used to ask questions like: what is right or wrong? Then one started to ask what the definition of right and wrong could be. One could call that Platonism: there is the idea of truth, and the game is won by defining the idea of truth, of a chair, or of a human in a satisfying way. I claim the opposite is true. You can define an object or an idea, and it is the definition that turns the idea into a useful entity which one can develop further. At least this would be the right way to philosophize, in my opinion.

Something similar is done in mathematics, to my knowledge. Axioms seem to be the beginning: all theorems and sentences in mathematics seem to be built upon a few axioms. Change or subvert the axioms and one would most likely end up with a totally different system of mathematics, with different theorems and sentences; dropping Euclid's parallel postulate, for example, yields the non-Euclidean geometries. The main difference in this analogy, however, is that we know the axioms of mathematics to be true on an intuitive level. That is the unique difficulty of philosophy: we do not seem to have axioms in philosophy. We could, however, make the somewhat reasonable assumption that if one of the foundational axioms proved wrong, the system of mathematics would entangle itself in contradictions, or at least in some inconsistencies. Historically this did in fact happen: there was a foundational crisis of mathematics in the second half of the 19th century and the early 20th century. One could therefore argue that the same could happen to philosophy once philosophy has evolved enough. Now I will explain my confusion about moral philosophy.
Moral philosophy seems to me to be a judgement about one's own utility function. You can basically choose whether you care more about being just to people, maximizing their utility, or doing what is regarded as honorable by your peers. You can choose whether you want to include animals, plants, or just humans in your considerations. There does not seem to be a right answer in the sense that a right answer would have a special set of attributes. In the usual academic discussion of utilitarianism, deontological ethics, or virtue ethics (of which there are of course several different versions), something will always appear that makes a theory problematic, and therefore one will abstain from fully committing to any of the mentioned systems. What confuses me a bit is that those problems change anyone's mind. A strict utilitarian will necessarily come into conflict with some considerations of justice. That should not surprise anyone, because in deciding to be a utilitarian one has defined a scope of things one will care about and things one will not. The true reason one might be uncomfortable with the implications of the trolley problem is that one violates one's own utility function, which precisely does not care about the academic discussion, but cares about the feelings of guilt and shame. Morality is motivated by our feelings, and our philosophy about it is just an attempt to make those evolved feelings consistent. The rational way to deal with one's morality therefore seems to me to be simply to minimize guilt and shame and to maximize the pleasure that helping others gives most people. If one assumes that we cannot control our moral sentiments or do away with them, we could have an inconsistent moral system without compromising our rationality, because its inconsistency contributes to our moral enjoyment and minimizes our moral suffering.
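As a rough formalization of this idea (the weights and feeling terms below are only illustrative labels of mine, not claims about how such feelings are actually measured):

$$\max_{a}\; U(a) \;=\; \mathbb{E}[\text{pleasure}_{\text{help}}(a)] \;-\; \lambda_{g}\,\mathbb{E}[\text{guilt}(a)] \;-\; \lambda_{s}\,\mathbb{E}[\text{shame}(a)], \qquad \lambda_{g}, \lambda_{s} > 0$$

Nothing forces the three terms to agree as moral principles; an agent maximizing such a $U$ can hold an inconsistent morality while remaining perfectly rational in the decision-theoretic sense.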
At the beginning I described mathematics, and I described that its foundations rely on axioms. It seems to me that one could build a whole school of thought in philosophy on the foundation of rationality. Instead of asking in moral philosophy what is right to do, which is determined by vague notions of rightness, one could ask what is rational to do. Rationality is far easier to define, and inconsistencies can exist as long as consistency with the idea of rationality is present. This will of course not end the discussion about moral philosophy, but it could show that the discussion isn't, to a certain extent, as relevant for humans. This mode of thinking could be extended to other fields too, for example to politics. Instead of concerning oneself in political philosophy to such a large extent with questions of legitimacy, one could concern oneself more with what rational legislators or governments should do. The rationality of a government could even play a part in legitimizing it.
In mathematics, axioms are not just chosen based on what feels correct; instead, the implications of those axioms are explored, and only if those also seem to match intuition do the axioms have some chance of being accepted. If a reasonable-seeming set of axioms allows you to prove something that clearly should not be provable (such as, in the extreme case, a contradiction), then you know your axioms are no good.
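As a minimal illustration, here is a sketch in Lean 4 with two deliberately bad hypothetical axioms; once an axiom set proves a contradiction, it proves everything, which is exactly the sense in which it is "no good":

```lean
-- Two hypothetical axioms that directly contradict each other.
axiom A : Prop
axiom a_holds : A
axiom a_fails : ¬A

-- Ex falso quodlibet: from a contradiction every proposition follows,
-- so this axiom set can no longer distinguish true from false.
theorem anything (P : Prop) : P := absurd a_holds a_fails
```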
Axiomatically stating a particular ethical framework and then exploring the consequences of the axioms in extreme and tricky cases can serve a similar purpose: if seemingly sensible ethical “axioms” lead to completely unreasonable conclusions, then you know you have to revise the stated ethical framework in some way.
I agree with the first statement of yours, but I disagree with the second. As I stated in my text, I think that morality is determined by conflicting emotions. If your morality is built around both the wish to help and culturally instilled guilt feelings, the two motivations will end up in conflict with each other. I would however agree that an axiomatic approach in your sense, where you choose the axioms partly based on where they will lead you down the rabbit hole, makes sense in other fields of philosophy, or if the aim of one's moral philosophy is achieving rationality rather than arriving at the right morality.
You didn’t argue for that, and it seems obviously false: if I like killing people, that doesn’t make it moral for me to kill people. Morality might have something to do with utility, but that in itself doesn’t tell you whether extreme selfishness, extreme altruism, or something in between is correct. Trying to treat morality as rationality doesn’t help either, because rationality is about optimising a utility function which is not otherwise specified, so the resulting morality could range anywhere between extreme selfishness and extreme altruism.
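To spell that out (with $\alpha$ as an illustrative free parameter, not anything standard):

$$U_{\alpha} \;=\; \alpha\, U_{\text{self}} \;+\; (1-\alpha)\, U_{\text{others}}, \qquad \alpha \in [0,1]$$

Rationality as optimisation is indifferent to $\alpha$: $\alpha = 1$ gives extreme selfishness, $\alpha = 0$ extreme altruism, and both are equally “rational”.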
It’s striking that, in practice, rationalism-inspired morality actually does range from extreme egoism to extreme altruism, and that rationalists nonetheless think they are basically on the same page.
It doesn’t make killing people moral for most people, but for a Nazi it is moral to kill Jews, to give an extreme example. Or another example: you hate your boss and would like to kill him, but killing him would make you feel guilty, so you weigh your expected utility and decide. But I would really appreciate it if you would articulate your view more, because I am not sure I totally understood you.
If extreme moral subjectivism is true, that would be the case. Most people find extreme moral subjectivism to be false, and consider killing people to be wrong. I was appealing to common intuitions.
How do you know that extreme moral subjectivism is true? Your title states that you are confused. If you in fact know that moral subjectivism is the one true system of ethics, how can you be confused?
Of course, the Nazi believes that it is moral to kill Jews. Maybe the ”...for X” clause indicates a mere belief. But you can’t disprove a true fact by pointing out that someone believes differently.
To be totally honest, most academic philosophical discussions confuse me in several ways. I am not sure my position can be called extreme moral subjectivism. I know, for example, that you can define justice, and a certain action can then be just or not according to that definition. Hence justice exists, but it exists because humans define the idea of justice. Hence killing someone would not be just. The idea of justice is of interest, however, because our utility perception holds it necessary to create notions of justice, whether toward satisfying our wish to help or toward controlling feelings of guilt and shame. “Killing is wrong”, however, is a statement without truth value as long as one does not specify what “wrong” means. It might be unjust under certain constructed moral systems. This would be my position on the matter. But if you disagree, I would really like to hear in what sense you disagree.
You seem to be assuming that moral philosophy has to work in a maths-like way, where you start from definitions and axioms. But a lot of people prefer to start from examples of what is widely believed to be good and bad, and work back from the examples to general principles.
That’s true, but is that also your opinion?
Mathematics can be taken in that way, but there are important ways in which it doesn’t work like that. Axioms are assumed, not believed, and this can make a difference. There is also the whole business of whether you are pro or anti the axiom of choice. This divide is an example of how the axioms are not evident and genuine conflicts of intuition happen. Then there are questions about which cardinalities exist or not, which suffer from there not being a favorite axiom set or a reason to prefer one over the other.
Philosophy is more used to there being multiple camps, with some sorts of argumentation flying in some circles and not others. Adding axioms gives you fodder to construct proofs, but it also raises the threshold for finding the proof compelling. You could have a dialogue like: “I have a neat proof. Assume that A.” “I don’t believe A.” “Well, you won’t believe my proof then.” You could think of mathematicians starting their work with “To whomever shares these basic beliefs...”; they don’t argue or find common ground but just list the prerequisites for finding the material interesting. True, it is more common to expect a simple explanation to resolve mathematical disagreements, whereas philosophy expects disagreements to linger, but the same kinds of moves happen on both ends.
The proof won’t be a convincing argument for agreeing with its conclusion. But the proof itself can be checked without belief in A, and if it checks out, this state of affairs can be described as belief in the proof.
Yes, although most proofs use their axioms, so one needs the ability to hold the axiom tentatively. If one is incapable of imagining what it would be like to hold A, then following the proof is going to be challenging.
But “Y proves X” has both the meaning “makes X very firmly true” and the meaning “Y has X as a theorem”, and these are not always the same thing.
I agree
Interesting points and an important topic, but there are some things I would like to challenge.
First off is your notion of a moral or ethical system based on rationality. I don’t disagree, and in fact I think it could make an enormous difference in legal reform. It is important to realize that this is not a novel idea, though. Some forms of utilitarian or consequentialist thought work toward a similar goal; see Peter Singer as an example.
Second, the idea that a rationally consistent philosophy will by definition minimize moral suffering can be disproven by experience. This is entirely anecdotal, but I can recount actions even from yesterday that were rational and even compassionate, but still left me feeling morally challenged in a way I could not explain through logic. Personal rationality and personal morality are related but separate cognitive domains which are, at best, often in conflict with each other and, at worst, in conflict with themselves.
Finally, I would disagree that rationality is that much easier to define, at least in the clear-cut way you present it. I imagine you and I have very similar rationalities, but rationality is not an independent concept floating around in metaphysical space. It is shaped by our epistemology and emotional cognition. For example, can you imagine the rational system of a person with eschatological beliefs? The difference in rationality between a venture capitalist whose business rests on non-renewable resources and a complex-systems theorist devoted to climate management? Between someone who believes in a woman’s choice in abortion and someone who believes an egg is ensouled at conception? Etc. Rationality is shaped by beliefs and emotions more than the other way around.
All that said, I think you are discussing something very important. Moving away from “good-vs-evil” politics and behaviour is becoming crucial in the world, and we can’t get there without these types of discussions. I suspect we would both embrace a best-practice system of ethics, separated from moral judgments. However, it is essential to realize that rationality is not objective or unchanging, and that the way to change people’s minds is not by telling them they are “wrong” but by showing them what is “right” and why. We may learn about our own rational inconsistencies in the meantime.
Have I misinterpreted you in any way? If so, my apologies. I hope this comment is helpful and look forward to a reply if you feel one is necessary.
Thank you for your comment. To some extent I was hoping for exactly this kind of constructive criticism.
First, strictly speaking I think rationality in humans will cause them to lack a precise moral system, precisely because our moral feelings (guilt, shame, pleasure in helping someone) are systems that stand in conflict with each other. Hence a consistent moral system cannot regulate our moral feelings efficiently.
Your second point, or observation, is something I agree with, which is why I am advocating a moral system that isn’t strictly utilitarian or deontological: moral systems are one-sided in the sense that they address just one moral feeling instead of the whole range of them.
I would define rationality in the following abstract terms: an agent is rational if his behavior maximizes his utility function over a prioritized time horizon. I think the true reason why some people support abortion, for example, and others oppose it, is simply which position leads to more utility for them. There is no thing that is violated if we kill or torture another human being. What is really violated is our taste, our capacity for guilt, or our sense of shame. For some people these are violated if something is not supported by the Bible, and for some people they are violated if it is simply distasteful.
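Written out (with $\gamma \in (0,1]$ as an illustrative discount factor standing in for the “prioritized time horizon”, and $U$ an otherwise arbitrary utility function):

$$a^{*} \;=\; \arg\max_{a \in \mathcal{A}} \; \mathbb{E}\!\left[\,\sum_{t=0}^{T} \gamma^{t}\, U(s_{t}) \;\middle|\; a\,\right]$$

Nothing in this definition constrains what $U$ rewards: violations of the Bible, of taste, or of one’s sense of guilt and shame can all enter $U$ on equal footing.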
In your second, longer paragraph you misinterpreted my text. My whole point addresses the impossibility of a consistent moral system for a human being who wants to be rational.