Intuitively, I agree that the vacation question is under-defined / has too many “right” answers.
On the other hand, I can also imagine a world where you can develop some objective fun theory, or just something that actually makes such questions well-posed. And the AIs could use this fact in the debate:
Bob: “Actually, you can derive a well-defined fun theory and use it to answer this question. And then Bali clearly wins.”
Alice: “There could never be any such thing!”
Bob: “Actually, there indeed is such a theory, and its central idea is [...].”
[They go on like this for a bit, and eventually, Bob wins.]
Indeed, this seems like a thing you could do (by explaining that integration is a thing) if somebody tried to convince you that there is no principled way to measure the area of a circle.
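(To make the circle analogy concrete: integration really does give a principled answer here. This is just the standard calculus derivation, summing thin annuli, and nothing beyond that:

$$A = \int_0^r 2\pi x \, \mathrm{d}x = \pi r^2.$$

The point being that "there is no principled way to do this" can sometimes be refuted simply by exhibiting the principled way.)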
However, if true, this only shows that there are fewer under-defined questions than we think. The “Ministry of Ambiguity versus the Department of Clarity” fight is still very much a thing, as are the incentives to manipulate the human. And perhaps most importantly, routinely holding debates where the AI “explains to you how to think about something” seems extremely dangerous...