On Objective Ethics, and a bit about boats
For a good 2,500 years or so, humanity has been struggling to work out quite a lot of things, but most importantly what we should do and why. Big questions like these often seem to defy rational treatment, and most philosophers throughout history have approached them using what amounts to a form of irrational inquiry. I am a transhumanist, and I hold quite strong views on human ethics, but one problem I keep encountering is that if you ask “why” enough times of any set of ethical assertions, they fall to pieces. By this kind of theory reductionism, we can take a given ethical theory, like virtue ethics, to be blatantly false. The assertion that the ultimate good is a collection of attributes such as courage, kindness, and knowledge is not backed up by anything. All I have to do is ask why, and it immediately becomes clear that there is no logic underlying that claim; the same goes for practically every established ethical theory.
And now we come to metaethics. Why should I even do what’s “correct”, anyway? If anyone in the comments can show me, using any form of logical reasoning, that suffering or premature death is evil, I’ll be astounded, because you can’t. No system of a priori reasoning lets me deduce that suffering or death is inherently bad. Even in the very unlikely event that some form of deity exists, why would that deity’s ethics be good? Even if the Ten Commandments were handed down by some Abrahamic God, and I’ll suffer some very bad consequences for not adhering to them, what is there about those commandments that makes them inherently correct? Any form of ethical system, it seems, is doomed to fail, so why can’t I just go about killing people? This is a very startling conclusion to hit upon.
However, it is not one that I quite agree with. That line of reasoning leads to some pretty dire consequences, and it commits a fundamental error: it fails to distinguish between Goal reasoning and Attainment reasoning. The line of inquiry I pursued in the previous paragraph showed how rational thought cannot lead us to a goal. Neither a priori nor empirical reasoning will help us find an ethical code written in the stars, and even if we found one, why would we follow it? What rationality can do, and what it is very good at doing, is help us find an efficient route of attainment. Rationality coupled with scientific, empirical observation helps us gain more knowledge, and in the field of ethics it can implement human ideas. We, as a species, have to come to some conclusion about how we want our society, and eventually our universe-shard, to be optimised, and then we employ rational inquiry to work out how best to do this. There is no ethics written in the stars, and none that you can find using pure reason, or empirical observation, or both.
Moral theories, then, should be treated as what they are: propositions on how to optimise things, and make things better, in alignment with whatever goals we decide on. It just so happens that the goals a lot of us humans have settled on involve preventing death and cruelty, or stopping certain acts from occurring. There is no reason why it would be any less noble for us to devote all of our resources to cheese manufacture than to EA. The only differentiator is that we, as humans, have decided, for some emotively backed reason, that we want certain goals, like the prevention of human suffering and the gathering of more knowledge.
This brings to mind a quote from the Lebanese-American poet Kahlil Gibran:
“Your reason and your passion are the rudder and the sails of your seafaring soul.”
Human goals are what we decide them to be, for emotive, personal, or economic reasons. How we attain these goals is the domain of reason, but what we set them as is up to us.
It’s tautologous that you morally-should do what’s morally right. Whether there is anything that is morally right is another question. Whether you would be motivated to (“would” instead of “should”) is another again.
Have you ever tried to decide on a completely different goal? For example, could you make yourself try to produce as many paperclips as possible, ignoring everything else?
What I am trying to say is that the choice of goals may turn out to be less arbitrary than your article seems to suggest.
[downvote explanation] This seems to be rambling on a very large topic, without much framing or rigor about what is the point. I can’t tell what you’re seeking comments on, or what updates you hope readers will make. I happen to agree with (what I think is) your core belief that all morals and ethics are created by social context rather than being physical law.
Two examples of things that make this a weak post:
It’s been either FAR longer or a bit shorter than 2500 years, depending on what dimensions of struggle you focus on, and I’m very confused why THAT is what you consider especially important. It’s not, to most people most of the time.
“requires axioms” or “not a priori justified” is EXTREMELY DIFFERENT from “blatantly false”.