But even this can ignite debate: which is more important, short-term revenue or long-term revenue? I can imagine two people (perhaps one very young, and one very old) disputing a product design, who realize the root of their disagreement is this difference in personal timelines.
Yeah, there are plenty of cases where people actually want different things. I think I agree that some kind of hybrid technique involving negotiation and doublecrux (among other things) might help.
Random exploration, don’t really have a point yet:
Another case might be two people arguing over how to design a widget, where Carl wants to build a widget using the special Widget Design Technique that he invented. Damien wants to build the widget using Some Other Technique. And maybe it turns out Carl’s crux is that if they use Carl’s Special Widget Design Technique, Carl will look better and get more promotions.
I think resolving that sort of situation depends on other background elements that Double Crux won’t directly help with.
If you’re the CEO of a small organization, maybe you can manage to hire people who buy into the company’s mission so thoroughly they won’t try to coopt Widget Design processes for their personal gain. Or, you might somehow construct incentives that keep skin in the game, such that it’s more in Carl’s interest to have the company do well than to get to look good using his Special Widget Design Technique. Ideally, he’s incentivized to actually have good epistemics about his technique, and to see clearly whether it’s better than Damien’s Generic Technique (or Damien’s own special technique).
This is all pretty hard though (especially as the company grows). And there’s a bunch of stuff outside your control as CEO, because the outside world might still reward Carl more if he can tell a compelling story about how his special technique saved the day.
Perhaps it is possible, in practice or via process, to disentangle value-alignment issues from factual disagreements. Double-crux seems well-suited to reaching consensus on factual questions (e.g., which widget will have a lower error rate?), and, if everybody participates in good faith, it would at least *uncover* Carl’s crux, making it possible to discover the factual truth regardless. Then maybe punt the non-objective argument to a different process like incentive alignment, as you discuss.