This is the first time I can recall Eliezer giving an overt indication of how likely an AGI project is to doom us. He suggests that a 90% chance of Doom given intelligent effort is unrealistically high.
90% was Holden’s estimate—contingent upon a SIAI machine being involved. Not “intelligent effort”, SIAI. Those are two different things.
My comment was a response to Eliezer, specifically the paragraph including this excerpt, among other things:
Why would someone claim to know that proving the right thing is beyond human ability, even if “100 of the world’s most intelligent and relevantly experienced people” (Holden’s terms) check it over?