In other words, when I say that “Murder is bad,” that is a fact about the world, as true as 2+2=4 or the Pythagorean theorem.
I like this way of putting it.
In Principia Mathematica, Whitehead and Russell spent over 300 pages laying groundwork before they even attempted to prove 1+1=2. Among other things, they needed to define numbers (especially the numbers 1 and 2), equality, and addition.
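To give a compressed modern flavor of the same point, here’s a minimal sketch in Lean (my own illustration; it has nothing to do with Principia’s actual notation or axioms) of what has to be pinned down before “1+1=2” can even be stated:

```lean
-- Everything "1 + 1 = 2" relies on has to be defined first:
-- what a number is, what 1 and 2 are, and what addition means.
inductive MyNat where
  | zero : MyNat
  | succ : MyNat → MyNat

def add : MyNat → MyNat → MyNat
  | a, MyNat.zero   => a
  | a, MyNat.succ b => MyNat.succ (add a b)

def one : MyNat := MyNat.succ MyNat.zero
def two : MyNat := MyNat.succ one

-- With those definitions fixed, the "obvious" fact follows by computation.
example : add one one = two := rfl
```

The proof itself is one line; nearly all of the work is in the definitions, which is essentially the situation Whitehead and Russell were in, at much greater length.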
I do think “1+1=2” is an obvious fact. If someone claimed to be intelligent and also said that 1+1=3, I’d look at them funny and press for clarification. Given all the assumptions about how numbers work I’ve absorbed over the course of my life, I’d find it hard to conceive of anything else.
Likewise, I find it hard to conceive of any alternative to “murder is bad,” because over the course of my life I’ve absorbed a lot of assumptions about the value of sentient life. But the fact that I’ve absorbed these assumptions doesn’t mean every intelligent entity would agree with them.
In this analogy, the assumptions underpinning human morality are like Euclid’s postulates. They seem so obvious that you might just take them for granted as the only possible self-consistent system. But we could have missed something: one of those assumptions might not be the only option, and there might be other self-consistent geometries/moralities out there. (The difference being that in the former case M.C. Escher uses the alternative to make cool art, and in the latter case an alien or AI does something we consider evil.)
I agree that it can take a long time to prove simple things. But my claim is that one has to be very stupid to think 1+1=3; not so with the falsity of the Orthogonality thesis.
Or one might be working from different axioms. I don’t know what axioms, and I’d look at you funny until you explained, but I can’t rule it out. It’s possible (though implausible given its length) that Principia Mathematica wasn’t thorough enough, that it snuck in a hidden axiom which, if challenged, would reveal an equally coherent alternate counting system in which 1+1=3.
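For a toy picture of what “different axioms” might look like here, consider keeping the ordinary natural numbers but reading “+” as a slightly different operation. This is purely my own illustrative example, not something lurking in Principia, but it is internally consistent and it does make “1+1=3” come out true:

```lean
-- A toy alternative "addition" on the ordinary natural numbers.
-- Only an illustration: the symbols are reinterpreted, not the world changed.
def oplus (a b : Nat) : Nat := a + b + 1

-- Under this reading, "1 + 1 = 3" holds...
example : oplus 1 1 = 3 := rfl

-- ...and the operation is still commutative, so nothing is internally broken.
example (a b : Nat) : oplus a b = oplus b a := by
  unfold oplus
  rw [Nat.add_comm a b]
```

The point isn’t that this operation is a good definition of addition; it’s that “1+1=3” can be a theorem of a perfectly coherent system once the underlying definitions shift.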
I brought up Euclid’s postulates as an example of a time this actually happened. It seems obvious that “two lines that are parallel to the same line are also parallel to each other,” but in fact it only holds in Euclidean geometry. To quote the Wikipedia article on the topic,
Many other statements equivalent to the parallel postulate have been suggested, some of them appearing at first to be unrelated to parallelism, and some seeming so self-evident that they were unconsciously assumed by people who claimed to have proven the parallel postulate from Euclid’s other postulates.
“So self-evident that they were unconsciously assumed.” But as it turned out, you can’t prove the parallel postulate (or any statement equivalent to it) from Euclid’s other postulates, and there were a number of equally coherent geometries waiting to be discovered once we started questioning it.
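To make that concrete (using the sphere, which is the easiest non-Euclidean surface to picture, even though the geometries that historically settled the question were the hyperbolic ones): on the surface of a sphere, where “straight lines” are great circles, the triangle formed by the north pole and two points on the equator a quarter-turn apart has a right angle at every corner, so its angles sum to 90° + 90° + 90° = 270° rather than 180°. And “the angles of a triangle sum to 180°” is exactly one of those statements equivalent to the parallel postulate. Nothing about the sphere is incoherent; it just doesn’t play by Euclid’s rules.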
My advice is to be equally skeptical of claims of absolute morality. I agree you can derive human morality if you assume that sentience is good, happiness is good, and so on. And maybe you can derive those from each other, or from some other axioms, but at some point your moral system does have axioms. An intelligent being that didn’t start from these axioms could likely derive a coherent moral system that went against most of what humans consider good.
Summary: you’re speculating, based on your experience as an intelligent human, that an intelligent non-human would deduce a human-like moral system. I’m speculating that it might not. The problem is, neither of us can exactly test this at the moment. The only human-level intelligences we could ask are also human, meaning they have human values and biases baked in.
We all accept similar axioms, but does that really mean those axioms are the only option?