I have to agree with PK and Ben; there’s a heck of a lot more pressure for minds to converge on 2+3=5 than on any ethical statement. A mind that believes 2+3=6 will make wrong predictions about reality; a mind that has ‘wrong’ ‘beliefs’ about murder won’t. (George has a point about game theory, but that’s different from regarding someone else’s death as terminally undesirable.) “The Platonic computation I implement judges murder as undesirable regardless of what anybody thinks” isn’t the same as “murder is wrong regardless of what anybody thinks”. I could define ‘wrong’ according to the output of my computation, but such an agent-relative definition would be silly.
...unless most humans converge to the same terminal values, in which case we could sensibly define “wrong” as the output of the computation implemented by humanity. There, it adds up to normality.
...well, kind of. That definition won’t do by itself for moral arguments—it’d be like the calculator that computes “what does this calculator compute as the result of 2 + 3?”—any answer is correct. Some actual content is still needed.
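To make the calculator analogy concrete, here's a minimal sketch (in Python, with illustrative names of my own choosing): a function whose only specification is "return whatever this function returns" is satisfied by anything it happens to output, whereas an ordinary arithmetic specification is constrained from the outside.

```python
# Hedged sketch of the self-reference problem; the function names are hypothetical.

def self_referential_calculator():
    # Spec: "return whatever this calculator returns for 2 + 3."
    # The spec refers only to its own output, so any constant satisfies it.
    return 7

def ordinary_calculator():
    # Spec: "return 2 + 3." Arithmetic itself fixes the answer, so only 5 is correct.
    return 2 + 3

print(self_referential_calculator())  # 7, trivially "correct" by its own spec
print(ordinary_calculator())          # 5, correct by an external standard
```

The point of the sketch: defining "wrong" purely as "whatever humanity's computation outputs" is, by itself, as contentless as the first function; the definition needs to be anchored to the actual content of that computation before it can do any work in a moral argument.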