Things like the ordering of arguments are just additional questions about the rationality criteria
...which problem you can’t hand off to the superintelligence until you’ve specified how it decides ‘rationality criteria’. Bootstrapping is allowed, skyhooking isn’t. Suppose that 98% of humans, under 98% of the extrapolated spread, would both choose a certain ordering of arguments, and also claim that this is the uniquely correct ordering. Is this sufficient to just go ahead and label that ordering the rational one? If you refuse to answer that question yourself, what is the procedure that answers it?