Sketch of an impossibility proof. Maybe I’m missing something in the problem statement?
If it’s possible to guarantee better-than-chance performance independent of ROB’s policy for choosing numbers, then ROB has to be unable to adversarially select two numbers for which my algorithm does no better than chance.
The only input my algorithm has is the one real number I get from the arbiter, so my algorithm essentially consists of a function from reals to probabilities.
1. If my function assigns exactly 50% probability to at least two distinct real numbers, then ROB can choose these two numbers and constrain me to exactly 50% accuracy.
2. If my function assigns less than 50% probability to at least two distinct real numbers, then ROB can choose these two numbers and constrain me to less than 50% accuracy.
3. If my function assigns greater than 50% probability to at least two distinct real numbers, then ROB can choose these two numbers and constrain me to less than 50% accuracy.
There are more than three real numbers, so by pigeonhole some two distinct reals must get probabilities in the same one of the three ranges (below, exactly, or above 50%); my function therefore falls under one of (1), (2), (3) above and won't guarantee better-than-chance performance.
Maybe the above doesn’t hold if my function can return distributions of probabilities? Not sure.
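The pigeonhole step can be made concrete. A minimal sketch in Python (the helper name and the probe set are mine, purely for illustration): among any four distinct reals, two must land in the same category relative to 50%.

```python
def same_category_pair(q, probes):
    """Given q: reals -> [0, 1] and at least four distinct reals,
    return two reals whose q-values fall in the same category
    (below 1/2, exactly 1/2, above 1/2)."""
    buckets = {"below": [], "exact": [], "above": []}
    for x in probes:
        p = q(x)
        key = "exact" if p == 0.5 else ("below" if p < 0.5 else "above")
        buckets[key].append(x)
        if len(buckets[key]) == 2:      # 4 probes, 3 buckets: pigeonhole
            return tuple(buckets[key])
    return None                         # unreachable with >= 4 distinct probes

print(same_category_pair(lambda x: 0.5, [0.0, 1.0, 2.0, 3.0]))  # (0.0, 1.0)
```

Whatever pair comes back is the pair the argument says ROB would pick under the corresponding case.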
I don’t see how 2 is true.
If you always answer that your number is lower, you definitely have exactly 50% accuracy, right? So ROB isn’t constraining you to less than 50% accuracy.
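For what it's worth, this checks out in simulation. A minimal sketch, assuming the usual setup (ROB fixes two numbers a < b, the arbiter hands you either one with equal probability, and you answer "mine is larger" with probability q(x) after seeing x):

```python
import random

def win_rate(q, a, b, trials=200_000):
    """Empirical accuracy when ROB's numbers are a < b and you answer
    "mine is larger" with probability q(x) after seeing x."""
    wins = 0
    for _ in range(trials):
        mine = random.choice((a, b))          # arbiter's fair coin
        says_larger = random.random() < q(mine)
        wins += says_larger == (mine == b)    # correct iff the guess matches reality
    return wins / trials

print(win_rate(lambda x: 0.0, 3.0, 7.0))  # "always say lower": ~0.50, not below
print(win_rate(lambda x: 0.5, 3.0, 7.0))  # case (1): ~0.50 exactly
```

In closed form the accuracy is 1/2·(1 − q(a)) + 1/2·q(b) = 1/2 + (q(b) − q(a))/2, which is exactly 1/2 for any constant q. Note it also exceeds 1/2 whenever q(b) > q(a), even if both values sit below 50%, which is where case 2 of the sketch seems to break.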
Even setting that case aside, modeling your evaluation as a mapping from the input number to a binary "you will say that this number is higher than the other" answer is pretty binding. If ROB knows (or can infer) your mapping/algorithm, it can just pick numbers for which you're wrong, every time.
Which turns this into a “whoever knows the other’s algorithm better, wins” situation.
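A sketch of what that exploit could look like (the candidate-pool search is a hypothetical helper, not anything from the problem statement): if your answer is a deterministic yes/no function of the number you see, ROB can hunt for a pair a < b where you say "higher" at a and "lower" at b, which makes you wrong on both possible draws; failing that, any pair with the same answer pins you to exactly 50%.

```python
from itertools import combinations

def rob_exploit(guess_higher, candidates):
    """Given your deterministic map x -> 'you will say yours is higher' (bool)
    and a pool of numbers ROB considers, return the worst pair for you:
    wrong on both draws if possible, otherwise a coin-flip pair."""
    pairs = list(combinations(sorted(candidates), 2))   # each pair has a < b
    for a, b in pairs:
        if guess_higher(a) and not guess_higher(b):
            return (a, b)   # you call the smaller one "higher" and the larger "lower"
    for a, b in pairs:
        if guess_higher(a) == guess_higher(b):
            return (a, b)   # same answer either way: right exactly half the time
    return None             # answers strictly increase on this pool: no edge for ROB

# A non-monotone rule loses on every draw of the pair (0.5, 2.0):
print(rob_exploit(lambda x: 0 < x < 1, [-1.0, 0.5, 2.0, 3.0]))
```

The one escape is a monotone rule: then no wrong-both-times pair exists, and the best ROB can force is exactly 50%, consistent with the "always answer lower" point above.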
Yeah, I was confused. I was thinking you had to state a probability of having the larger number (rather than a binary guess) and try to get better than chance according to some scoring rule.