Er, maybe “most sacred rights” was the wrong wording. How about “moral values”?
This goes deeper than you think. The position we’re advocating, in essence, is this:
There are no inalienable rights or ontologically basic moral values. Everything we’re talking about when we use normative language is a part of us, not a property of the universe as a whole.
This doesn’t force us to be nihilists. Even if it’s just me that cares about not executing innocent people, I still care about it.
It’s really easy to get confused thinking about ethics; it’s a slippery problem.
The best way to make sure that more of what we value happens, generally speaking, is some form of consequentialist calculus. (I personally hesitate to call this utilitarianism because that’s often thought of as concerned only with whether people are happy, and I care about some other things as well.)
This doesn’t mean we should throw out all general rules; some absolute ethical injunctions should be followed even when it “seems like they shouldn’t”, because of the risk of one’s own thought processes being corrupted in typical human ways.
This may sound strange, but in typical situations it all adds up to normality: you won’t see a rationalist consequentialist running around offing people because they’ve calculated them to be net negatives for human values. Consequentialist reasoning can change the usual answers in extreme hypotheticals, in dealing with uncertainty, and in dealing with large numbers; but that’s because “common-sense” thinking ends up being practically incoherent in recognizable ways when those variables are added (a toy numerical sketch follows below).
I don’t expect you to agree with all of this, but I hope you’ll give it the benefit of the doubt as something new, which might make sense when discussed further...
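To make the “consequentialist calculus” and “large numbers” points above concrete, here is a minimal toy sketch in Python. None of it comes from the discussion itself: the actions, probabilities, and utility numbers are all invented, and the code only shows the bare shape of the idea, i.e. rank actions by the expected value of what you care about, and notice that a small probability attached to large enough stakes can flip the answer that unaided intuition gives.

```python
# Toy illustration of a "consequentialist calculus" as expected-value maximization.
# Every action, probability, and utility number here is invented for illustration.

def expected_value(outcomes):
    """Sum of probability * utility over an action's possible outcomes."""
    return sum(p * u for p, u in outcomes)

# Each action maps to a list of (probability, utility) pairs over what we value.
actions = {
    # A modest, near-certain benefit.
    "safe_option": [(1.0, 10.0)],
    # Usually nothing happens, but a rare outcome carries very large stakes.
    # Small probabilities times large numbers are exactly where unaided
    # "common-sense" judgment tends to go astray.
    "risky_option": [(0.999, 0.0), (0.001, 100_000.0)],
}

scores = {name: expected_value(outs) for name, outs in actions.items()}
best = max(scores, key=scores.get)

for name, score in scores.items():
    print(f"{name}: expected value = {score:.1f}")
print(f"recommended by this toy calculus: {best}")
```

This isn’t offered as anyone’s actual decision procedure; the ethical injunctions mentioned above are precisely about not trusting a calculation like this when it tells you to cross certain lines.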