Eliezer,
Things like the ordering of arguments are just additional questions about the rationality criteria, and my point above applies to them just as well: either there's a justifiable answer ("this is how arguments are to be ordered") or the answer is fundamentally socially determined and there's nothing to be done about it. In such cases the political is genuinely prior to the workings of a superintelligence: if there's no determinate correct answer to these process questions, then humans will have to muddle through collectively to produce something to feed the superintelligence. (Aristotle was right when he said politics was the ruling science...)
On the humans-for-humans point, I'll appeal back to the notion of modeling minds. If we take P to be a reason, then all we need to tell the superintelligence is "simulate us and consider what we take to be reasons"; after simulating us, the superintelligence ought to know what those things are, what we mean when we say "take to be reasons," and so on. Philosophy written by humans for humans ought to suffice once we specify the process by which reasons that matter to humans are to be taken into account.