Don’t know, it might be in the paper. (I would guess the dataset size is more relevant—most people train their classifiers to convergence.)
How well do they perform if you invert their feedback, though?
It would in fact be (100 − current number)%, which would be better than chance. But if you consistently inverted their feedback, then they would be worse than chance at distinguishing humans from AIs (and by a much larger margin). I wouldn’t read too much into the anti-correlation—quite plausibly the “true” answer is that there’s very little correlation in either direction, and by random chance most happened to land on the anti-correlated side in this case.
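The (100 − current)% point is easy to verify numerically: for a binary decision, the original and inverted accuracies always sum to exactly 100%. Here is a minimal sketch with a made-up anti-correlated classifier (the labels, error rate, and classifier are all hypothetical, just for illustration):

```python
import random

random.seed(0)

# Hypothetical ground truth: 0 = human, 1 = AI.
labels = [random.randint(0, 1) for _ in range(1000)]

# A made-up classifier that is anti-correlated with the truth:
# it answers incorrectly ~60% of the time.
preds = [y if random.random() > 0.6 else 1 - y for y in labels]

acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
inverted_acc = sum((1 - p) == y for p, y in zip(preds, labels)) / len(labels)

# Every example is either right as-is or right when inverted,
# so the two accuracies must sum to exactly 1.
assert abs(acc + inverted_acc - 1.0) < 1e-9
print(f"original: {acc:.1%}, inverted: {inverted_acc:.1%}")
```

Note this is only about the binary verdict itself; inverting feedback used for *training* is a different (and messier) question.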