True, this is an important limitation which I glossed over.
We can do slightly better by including any bet which all participants believe they can resolve later—for example, we can bet on total vs. average utilitarianism if we think we can eventually agree on the answer (at which point we would resolve the bet). However, this obviously still begs the question of Agreement, and so the bet risks never being resolved.
Which is to say that if two agents disagree about something observable and quantifiable...