One problem with this argument is how conjunctive it is: “(A) Progress crucially depends on breakthroughs in complexity management and (B) strong recursive self-improvement is impossible and (C) near-future human-level AGI is neither dangerous nor possible but (D) someone working on it is crucial for said complexity management breakthroughs and (E) they’re dissuaded by friendliness concerns and (F) our scientific window of opportunity is small.”
My back-of-the-envelope, generous probabilities:
A. 0.5; this is a pretty strong requirement.
B. 0.9; for simplicity, giving your speculation the benefit of the doubt.
C. 0.9; same.
D. 0.1; a genuine problem of this magnitude is going to attract a lot of diverse talent.
E. 0.01; this is the most demanding element of the scenario: that the UFAI meme itself will crucially disrupt progress.
F. 0.05; this would represent a large break from our current form of steady scientific progress, and I haven’t yet seen much evidence that it’s terribly likely.
That product comes out to roughly 1:50,000. I’m guessing you think the actual figure is higher, and expect you’ll contest those specific numbers, but would you agree that I’ve fairly characterized the structure of your objection to FAI?
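For transparency, here is the product worked out explicitly; this is just a minimal sketch that multiplies the estimates above, nothing more:

```python
# Back-of-the-envelope conjunction: multiply the generous estimates listed above.
probs = {
    "A": 0.5,   # progress depends on complexity-management breakthroughs
    "B": 0.9,   # strong recursive self-improvement is impossible
    "C": 0.9,   # near-future human-level AGI is neither dangerous nor possible
    "D": 0.1,   # someone working on it is crucial to those breakthroughs
    "E": 0.01,  # friendliness concerns actually dissuade that person
    "F": 0.05,  # the scientific window of opportunity is small
}

p = 1.0
for value in probs.values():
    p *= value

print(p)      # 2.025e-05
print(1 / p)  # ~49,383, i.e. roughly 1:50,000
```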