Morality is in some ways a harder problem than friendly AI. On the plus side, humans who don't control nuclear weapons aren't that powerful. On the minus side, morality has to run at the level of 7 billion individual instances of a person, each of whom may have bad information.
So it needs heuristics that are robust against incomplete information. There's definitely an evolutionary just-so story about the penalty of publicly committing to a risky action. But even without the evolutionary social risk, there is a moral risk to permitting an interventionist murder when you aren't all-knowing.
This looks just like the Bayesian 101 example of a medical test that is 99% accurate for a disease with a 1% occurrence rate. If you tell me I'm in a very rare situation that requires me to commit murder, I have to assume that there are going to be many more situations that could be mistaken for this one. The "least convenient universe" story is tantalizing, but I think it leads us astray here.
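To make the base-rate point concrete, here is a minimal sketch of that textbook calculation (the specific numbers, 99% sensitivity and specificity against a 1% base rate, are the usual assumed figures for this example, not anything more precise):

```python
# Bayes' rule for the classic "99% accurate test, 1% base rate" example.
# Assumed numbers: sensitivity = specificity = 0.99, prevalence = 0.01.

def posterior(prevalence, sensitivity, specificity):
    """P(condition | positive result) via Bayes' rule."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

print(posterior(0.01, 0.99, 0.99))  # -> 0.5: a positive result is only a coin flip
```

The point is that when the situation itself is rare, even strong-seeming evidence that "this is the exception" leaves you with a large chance of being in one of the far more common look-alike situations.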