I don’t think it will mess up the algorithms. My guess is that most people rounded most of their calibration answers to the tens place, lacking the confidence to be more precise; but since different people give different values, the average across all respondents is unlikely to fall on an increment of ten, and should be a reasonably accurate measure of the respondents’ collective probability for a question (see the sketch below).
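A quick simulation makes this concrete. To be clear, the 73% figure and the noise model here are illustrative assumptions, not anything from the survey; the point is just that averaging rounded answers recovers something close to the underlying mean:

```python
# Sketch: each respondent's underlying belief hovers around 73%, but they
# report it rounded to the nearest 10%. The mean across respondents still
# lands near 73% rather than snapping to a multiple of 0.1.
import random

random.seed(0)
true_mean = 0.73
answers = []
for _ in range(1000):
    # Individual belief: assumed noisy around the shared mean, clipped to [0, 1]
    belief = min(max(random.gauss(true_mean, 0.10), 0.0), 1.0)
    # Reported answer: rounded to the tens place (e.g. 0.7)
    answers.append(round(belief * 10) / 10)

print(sum(answers) / len(answers))  # ~0.73, not pinned to an increment of ten
```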
It could mess them up, because in theory a single wrong answer given with 100% confidence renders the entire series infinitely poorly calibrated. The survey says this won’t happen: 100% will be treated as something slightly less than that. But how much less could depend on assumptions the survey-makers made about how often people would answer that way, and maybe I did it more often than they expected.
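To make the clipping point concrete, here is a minimal sketch of a log scoring rule with a cap. The 0.99 cap is purely an assumed placeholder; the survey doesn’t say how far below 100% it actually adjusts:

```python
import math

def log_score(prob_assigned_to_truth: float, cap: float = 0.99) -> float:
    """Log score for one question, with extreme probabilities clipped.

    Without clipping, a wrong answer held at 100% confidence assigns
    probability 0 to the truth, contributing log(0) = -infinity and
    wiping out the rest of the series.
    """
    p = min(max(prob_assigned_to_truth, 1.0 - cap), cap)
    return math.log(p)

# A 100%-confident wrong answer (p(truth) = 0) gets treated as 1%:
print(log_score(0.0))  # log(0.01) ~ -4.6: a large but finite penalty
print(log_score(1.0))  # log(0.99) ~ -0.01: near-perfect score
```

The harsher the penalty for a clipped wrong answer, the more a respondent who answered 0 and 100 a lot gets dinged; that's why the chosen cap matters.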
I doubt it, since I’m pretty sure that they know enough about these pitfalls to avoid them. But I felt that I answered 0 and 100 quite a lot, so I thought that some warning was in order.