In your own epistemological state, you may be justified in thinking that Eliezer and other LWers are wrong about his chances of success, but even granting that, I still don’t see why you’re so sure that Eliezer has failed to “seriously consider the possibility that he’s vastly overestimated his chances of building a Friendly AI”. Why couldn’t he have, like the other LWers apparently did, considered the possibility and then (erroneously, according to your epistemological state) rejected it?
Why couldn’t he have, like the other LWers apparently did, considered the possibility and then (erroneously, according to your epistemological state) rejected it?
My experience reading Eliezer’s writings is that he’s very smart and perceptive. I find it implausible that somebody so smart and perceptive could miss something for which there is (in my view) so much evidence if he had engaged in such consideration. So I think that what you suggest could be the case, but I find it quite unlikely.