Perhaps it is possible in practice to disentangle value-alignment issues from factual disagreements. Double-crux seems well suited to reaching consensus on factual questions (e.g., which widget will have a lower error rate?) and, if everybody participates in good faith, would at least *uncover* Carl’s crux, making it possible to discover the factual truth regardless. The non-objective part of the disagreement could then be punted to a different process, such as the incentive alignment you discuss.