This is the first time I can recall Eliezer giving an overt indication regarding how likely an AGI project is to doom us. He suggests that a 90% chance of Doom given intelligent effort is unrealistically high. Previously I had only seen him declare that FAI is worth attempting once you multiply. While he still hasn't given numbers (not saying he should), he has given a bound. Interesting. And perhaps a little more optimistic than I expected—or at least more optimistic than I would have expected prior to Luke's comment.
Isn’t it more like “how likely a formally proven FAI design is to doom us”, since this is what Holden seems to be arguing (see his quote below)?
Suppose that it is successful in the "AGI" part of its goal, i.e., it has successfully created an intelligence vastly superior to human intelligence and extraordinarily powerful from our perspective. Suppose that it has also done its best on the "Friendly" part of the goal: it has developed a formal argument for why its AGI's utility function will be Friendly, it believes this argument to be airtight, and it has had this argument checked over by 100 of the world's most intelligent and relevantly experienced people. ... What will be the outcome?
This is the first time I can recall Eliezer giving an overt indication regarding how likely an AGI project is to doom us. He suggests that 90% chance of Doom given intelligent effort is unrealistically high.
90% was Holden's estimate—contingent upon a SIAI machine being involved. Not "intelligent effort", SIAI. Those are two different things.
My comment was a response to Eliezer, specifically the paragraph including this excerpt, among other things:
Why would someone claim to know that proving the right thing is beyond human ability, even if “100 of the world’s most intelligent and relevantly experienced people” (Holden’s terms) check it over?
“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”
http://en.wikipedia.org/wiki/Clarke%27s_three_laws