I don’t quite understand your final clause. Are you saying that you still believe a significant number of people on LW overestimate the danger of UFAI, but that your confidence in that is lower than it was?
More or less. I meant that I now assign a reduced but still non-zero probability to an upward bias, and only a negligible probability to a bias in the other direction. So the expected upward bias has decreased but remains positive, and I should still adjust my probability of human extinction from unfriendly AI downward, just by less than before. Of course, whether Less Wrong over- or underestimates existential risk in general is a separate question.
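As a rough sketch of that arithmetic (the symbols here are mine for illustration, not figures from the discussion): let p_up be the probability of an upward community bias of magnitude b_up, and p_down the probability of a downward bias of magnitude b_down. The expected bias is then

$$\mathbb{E}[\text{bias}] = p_{\text{up}}\, b_{\text{up}} - p_{\text{down}}\, b_{\text{down}}$$

With p_down ≈ 0, the expected bias stays positive even after p_up is revised downward, which is why the headline probability still gets adjusted down, only less so than before.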