The same way you make any other moral judgement—whatever way that is.
That is, you are really asking “what, if anything, is morality?” If you had an answer to that question, answering the one you explicitly asked would just be a matter of historical research, and if you don’t, there’s no possibility of answering the one you asked.
Fair enough. I think the combination of historical evidence and the lack of a term for justice in physics equations is strong evidence that morality is not real. And that bothers me, because it seems like society would have noticed, yet society clearly thinks that morality is real.
Society has failed to notice lots of things.
Perhaps it is real, but not the sort of thing you are assuming it must be in order to be real.
I can’t point to the number 2, and some people, perplexed by this, have asserted that numbers are not real.
I can point to a mountain, or to a river, but I can’t point to what makes a mountain a mountain or a river a river. Some people, perplexed by this, conclude there are no such things as mountains and rivers.
I can’t point to my mind... and so on.
Can I even point? What makes this hand a pointer, and how can anyone else be sure they know what I am pointing to?
Stare at anything hard enough, and you can cultivate perplexity at its existence, and conclude that nothing exists at all. This is a failure mode of the mind, not an insight into reality.
Have you seen the meta-ethics sequence? The meta-ethical position you are arguing is moral nihilism, the belief that there is no such thing as morality. There are plenty of others to consider before deciding for or against nihilism.
How hard do you think it would be to summarize the content of the meta-ethics sequence that isn’t implicit in the Human’s Guide to Words?
I never recommend anyone read the ethics sequence first.
It’s funny that I push on the problem of moral nihilism just a little, and suddenly someone thinks I don’t believe in reality. :)
I’ve read the beginning and the end of the meta-ethics sequence, but not the middle. I agree with Eliezer that recursive questions are always possible, but you must stop asking them at some point or you miss more interesting issues. And I agree with his conclusion that the best formulation of modern ethics is consideration for the happiness of beings capable of recursive thought.
I’d like to write a discussion post (or a series of posts) on this issue, but I don’t know where to start. Someone else responded to me [EDIT: with what seemed to me like] questioning the assertion that science is a one-way ratchet, always getting better, never getting worse. [EDIT: But we don’t seem to have actually communicated at all, which isn’t a success on my part.]
In case you want a connection to Artificial Intelligence:
Eliezer talks about the importance of provably Friendly AI, and I agree with his point. If we create a superintelligence and it doesn’t care about our desires, that would be very bad for us. But I think the problem I’m highlighting bears on the possibility of proving that an AI is Friendly: if we can’t pin down what morality is, it’s unclear what such a proof would even establish.
Someone else responded to me by questioning the assertion that science is a one-way ratchet, always getting better, never getting worse.
It seems likely to me that I’m the person you’re referring to. If so, I don’t endorse your summary. More generally, I’m not sure either of us understood the other clearly enough in that exchange to merit confident statements, on either of our parts, about what was actually said, short of literal quotes.