Isn’t it more like “how likely a formally proven FAI design is to doom us”, since this is what Holden seems to be arguing (see his quote below)?
Suppose that it is successful in the “AGI” part of its goal, i.e., it has successfully created an intelligence vastly superior to human intelligence and extraordinarily powerful from our perspective. Suppose that it has also done its best on the “Friendly” part of the goal: it has developed a formal argument for why its AGI’s utility function will be Friendly, it believes this argument to be airtight, and it has had this argument checked over by 100 of the world’s most intelligent and relevantly experienced people. ... What will be the outcome?
“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”
http://en.wikipedia.org/wiki/Clarke%27s_three_laws