I’ll also add that given the amount of evidence that I see against the proposition that Eliezer will build a Friendly AI, I have difficulty imagining how he could be persisting in holding his beliefs without having failed to give serious consideration to the possibility that he might be totally wrong.
Have you noticed that many (most?) commenters/voters seem to disagree with your estimate? That’s not necessarily strong evidence that your estimate is wrong (in the sense that a Bayesian superintelligence wouldn’t assign a probability as low as yours), but it does show that many reasonable and smart people disagree with your estimate even after seriously considering your arguments. To me that implies that Eliezer could disagree with your estimate even after seriously considering your arguments, so I don’t think his “persisting in holding his beliefs” offers much evidence for your position that Eliezer exhibited “unwillingness to seriously consider the possibility that he’s vastly overestimated his chances of building a Friendly AI”.
Have you noticed that many (most?) commenters/voters seem to disagree with your estimate?
Yes. Of course, there’s a selection effect here—the people on LW are more likely to assign a high probability to the proposition that Eliezer will build a Friendly AI (whether or not there’s epistemic reason to do so).
The people outside of LW whom I talk to on a regular basis have an estimate in line with my own. I trust these people’s judgment more than I trust LW posters’ judgment simply because I have much more information about their positive track records for making accurate real-world judgments than I do for the people on LW.
To me that implies that Eliezer could disagree with your estimate even after seriously considering your arguments, so I don’t think his “persisting in holding his beliefs” offers much evidence for your position that Eliezer exhibited “unwillingness to seriously consider the possibility that he’s vastly overestimated his chances of building a Friendly AI”.
Yes, so I agree that in your epistemological state you should feel this way. I’m explaining why in my epistemological state I feel the way I do.
In your own epistemological state, you may be justified in thinking that Eliezer and other LWers are wrong about his chances of success, but even granting that, I still don’t see why you’re so sure that Eliezer has failed to “seriously consider the possibility that he’s vastly overestimated his chances of building a Friendly AI”. Why couldn’t he have, like the other LWers apparently did, considered the possibility and then (erroneously, according to your epistemological state) rejected it?
Why couldn’t he have, like the other LWers apparently did, considered the possibility and then (erroneously, according to your epistemological state) rejected it?
My experience reading Eliezer’s writings is that he’s very smart and perceptive. I find it implausible that somebody so smart and perceptive could miss something for which there is (in my view) so much evidence if he had engaged in such consideration. So I think that what you suggest could be the case, but I find it quite unlikely.