The irony of this is that if, say, 83.5% of respondents instead thought UFAI was the most worrisome existential risk, that would likely be taken as evidence that the LW community was succumbing to groupthink.
My prior belief was that people on Less Wrong would overestimate the danger of unfriendly AI, since that risk is part of the reason for Less Wrong’s existence. Seeing the results lowered that probability, but as I see no reason to expect a bias in the opposite direction, the effect should still be there.
I don’t quite understand your final clause. Are you saying that you still believe a significant number of people on LW overestimate the danger of UFAI, but that your confidence in that is lower than it was?
More or less. I meant that I now assign a reduced but still non-zero probability to an upward bias, but only a negligible probability to a bias in the other direction. So the average expected upward bias is smaller than before but still positive, and I should therefore adjust my probability of human extinction via unfriendly AI downwards. Of course, the possibility of Less Wrong over- or underestimating existential risk in general is another matter.
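A toy numerical sketch of that adjustment (all numbers are hypothetical, chosen only for illustration and not taken from the thread): if the reported estimate might be inflated with some probability, while a deflation is treated as negligible, the expected bias is positive and the debiased estimate sits below the reported one.

```python
# Hypothetical illustration of "expected upward bias is positive, so adjust downwards".
survey_estimate = 0.30    # hypothetical reported P(extinction via UFAI)
p_upward_bias = 0.4       # hypothetical credence that the estimate is inflated
upward_bias_size = 0.10   # hypothetical size of the inflation, if present
p_downward_bias = 0.0     # deflation treated as negligible, per the comment

expected_bias = p_upward_bias * upward_bias_size - p_downward_bias * upward_bias_size
debiased_estimate = survey_estimate - expected_bias

print(f"expected bias: {expected_bias:+.3f}")         # +0.040
print(f"debiased estimate: {debiased_estimate:.3f}")  # 0.260
```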