If realism is false, nothing matters, so it’s not bad that everyone dies
That’s a misunderstanding of moral realism. Moral realism is the philosophical position that moral claims state true facts about the world. In other words, when I say that “Murder is bad,” that is a fact about the world, as true as 2+2=4 or the Pythagorean theorem.
It’s entirely possible for me to think that moral realism is false (i.e. morality is a condition of human minds) while also holding, as a member of humanity, a view that the mass extinction of all humanity is an undesirable state. Denying moral realism isn’t the same as saying, “Nothing matters.” It’s closer to claiming, “Rocks don’t have morality.” And an AI, insofar as it is a fancy thinking rock, won’t have morality by default either. We could, of course, give it morality, by ensuring that it is aligned to human values. But that would be the result of humans taking positive steps to impart their moral reasoning onto an otherwise amoral reality.
In other words, when I say that “Murder is bad,” that is a fact about the world, as true as 2+2=4 or the Pythagorean theorem.
I like this way of putting it.
In Principia Mathematica, Whitehead and Russell spent over 300 pages laying groundwork before they even attempted to prove 1+1=2. Among other things, they needed to define numbers (especially the numbers 1 and 2), equality, and addition.
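To give a flavor of the groundwork involved, here is a compressed sketch (my own toy version using Peano-style definitions, not Whitehead and Russell’s actual construction, which builds numbers out of classes):

\[ 1 := S(0), \qquad 2 := S(1), \qquad n + 0 := n, \qquad n + S(m) := S(n + m) \]
\[ 1 + 1 = 1 + S(0) = S(1 + 0) = S(1) = 2 \]

Even this toy derivation only goes through because we first fixed what “1”, “2”, “+”, and “=” mean.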
I do think “1+1=2” is an obvious fact. If someone claimed to be intelligent and also said that 1+1=3, I’d look at them funny and press for clarification. Given all the assumptions about how numbers work I’ve absorbed over the course of my life, I’d find it hard to conceive of anything else.
Likewise, I find it hard to conceive of any alternative to “murder is bad,” because over the course of my life I’ve absorbed a lot of assumptions about the value of sentient life. But the fact that I’ve absorbed these assumptions doesn’t mean every intelligent entity would agree with them.
In this analogy, the assumptions underpinning human morality are like Euclid’s postulates. They seem so obvious that you might just take them for granted, as the only possible self-consistent system. But we could have missed something, and one of them might not be the only option, and there might be other self-consistent geometries/moralities out there. (The difference being that in the former case M.C. Escher uses it to make cool art, and in the latter case an alien or AI does something we consider evil.)
I agree that it can take a long time to prove simple things. But my claim is that one has to be very stupid to think 1+1=3, not so with the falsity of the Orthogonality thesis.
I agree that it can take a long time to prove simple things. But my claim is that one has to be very stupid to think 1+1=3
Or one might be working from different axioms. I don’t know what axioms, and I’d look at you funny until you explained, but I can’t rule it out. It’s possible (though implausible given its length) that Principia Mathematica wasn’t thorough enough, that it snuck in a hidden axiom that—if challenged—would reveal an equally-coherent alternate counting system in which 1+1=3.
I brought up Euclid’s postulates as an example of a time this actually happened. It seems obvious that “two lines that are parallel to the same line are also parallel to each other,” but in fact it only holds in Euclidean geometry. To quote the Wikipedia article on the topic,
Many other statements equivalent to the parallel postulate have been suggested, some of them appearing at first to be unrelated to parallelism, and some seeming so self-evident that they were unconsciously assumed by people who claimed to have proven the parallel postulate from Euclid’s other postulates.
“So self-evident that they were unconsciously assumed.” But it turned out, you can’t prove the parallel postulate (or any equivalent postulate) from first principles, and there were a number of equally-coherent geometries waiting to be discovered once we started questioning it.
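As a concrete illustration (my example, not from the Wikipedia article): one statement equivalent to the parallel postulate is that the angles of every triangle sum to 180°. On the surface of a sphere of radius R, taking great circles as “lines”, a triangle with area A instead satisfies

\[ \alpha + \beta + \gamma = \pi + \frac{A}{R^2} > \pi, \]

so the “self-evident” claim simply fails in that equally coherent geometry.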
My advice is to be equally skeptical of claims of absolute morality. I agree you can derive human morality if you assume that sentience is good, happiness is good, and so on. And maybe you can derive those from each other, or from some other axioms, but at some point your moral system does have axioms. An intelligent being that didn’t start from these axioms could likely derive a coherent moral system that went against most of what humans consider good.
Summary: you’re speculating, based on your experience as an intelligent human, that an intelligent non-human would deduce a human-like moral system. I’m speculating that it might not. The problem is, neither of us can exactly test this at the moment. The only human-level intelligences we could ask are also human, meaning they have human values and biases baked in.
We all accept similar axioms, but does that really mean those axioms are the only option?
I don’t think it’s a misunderstanding of moral realism. I think that versions of moral anti-realism don’t capture things really mattering, for reasons I explain in the linked post. I also don’t think rocks have morality—the idea of something having morality seems confused.
I think you’re failing to understand the depth of both realist and anti-realist positions, since we can reasonably interpret them as two ways of describing the same reality.
They may issue similar first-order verdicts, but anti-realism doesn’t capture things really mattering.
Hmm, it sounds like your objection is that if there aren’t moral facts then meaning is ungrounded. I’m not sure how to convince you that this isn’t the only reasonable way to see the world, but I’ll point to some things that are perhaps helpful.
There’s no solid ground of reality that we can access. We’re epistemically limited in various ways that prevent us from knowing how the world is with certainty, which prevents us from grounding meaning in facts. Yet, despite these limitations, we find meaning anyway. How’s that possible?
We, like all cybernetic beings (systems of negative feedback loops), care about things because we’re trying to hit various target observations, and we act on the world to bring it into line with those targets. This feedback process is the source of meaning, although I don’t have a great link to point you at to explain this point (yet!).
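Here’s a minimal sketch of what I mean by targeting an observation; this is my own toy illustration (a thermostat-style controller), not anything from a particular source:

```python
# Toy negative feedback loop: the controller "cares about" exactly one thing,
# making its observation of the world match its target observation (setpoint).
def run_thermostat(world_temp: float, setpoint: float = 21.0, steps: int = 50) -> float:
    gain = 0.1  # how strongly the controller acts on the observed error
    for _ in range(steps):
        observation = world_temp           # sense the world
        error = setpoint - observation     # compare against the target observation
        world_temp += gain * error         # act on the world to shrink the gap
    return world_temp

print(run_thermostat(world_temp=15.0))  # drifts toward the 21.0 setpoint
```

The point is just that “caring” here is implemented as nothing more than acting to reduce the gap between what is observed and what is targeted.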
This is quite a bit different from how the world seems to be, though! That’s because our ontologies start out with us fused with our perception of the world. Later we distinguish ourselves from it, but we come to think of ourselves as separate from the world rather than embedded in it, and during this stage of our ontological development it seems that meaning must be grounded “out there” because we take ourselves to be apart from the world. But that’s not true, though it’s hard to realize because our brains give us the impression that we are separate from the world.
I’m not sure if any of this will be convincing, but I think you’re simply mistaken that anti-realism doesn’t account for meaning. When I look at the anti-realist story I see meaning; it just doesn’t show up the same way it does in the realist story, because the anti-realist story rejects essentialism and so must build up a mechanistic account of where meaning comes from.