Right, but those questions are responsive to reasons too. Here’s where I embrace the recursion. Either we believe that ultimately the reasons stop (that is, that after a sufficiently ideal process, all of the minds in the relevant mind design space would agree on the values), or we don’t. If we do, then the superintelligence should replicate that process. If we don’t, then what basis do we have for asking a superintelligence to answer the question? We might as well flip a coin.
Of course, the content of the ideal process is tricky. I’m hiding the really hard questions in there, like what counts as rationality, what kinds of minds are in the relevant mind design space, etc. Those questions are extra-hard because we can’t appeal to an ideal process to answer them on pain of circularity. (Again, political philosophy has been struggling with a version of this question for a very long time. And I do mean struggling—it’s one of the hardest questions there is.) And the best answer I can give is that there is no completely justifiable stopping point: at some point, we’re going to have to declare “these are our axioms, and we’re going with them,” even though those axioms are not going to be justifiable within the system.
What this all comes down to is that it’s all necessarily dependent on social context. The axioms of rationality and the decisions about what constitutes the relevant mind design space for any such superintelligence would be determined by the brute facts of what kind of reasoning is socially acceptable in the society that creates such a superintelligence. And that’s the best we can do.