Very impressed by the sanity level of Nilsson’s response.
I am not sure “sanity level” is the phrase I would use here, since the difference seems to be “Nilsson agrees with SIAI” rather than “Nilsson isn’t crazy.”
Especially since it seems rather incongruent to assign a 50% probability to AI being developed by 2050 and a 90% probability to self-modification leading to superintelligence within five years of AI being developed… and then only a 0.01% probability to AI-caused human extinction within this century.
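To make the tension concrete, here is a rough back-of-the-envelope calculation (my assumption, not Nilsson's: treating the 90% as conditional on AI arriving, so the two estimates multiply):

$$P(\text{superintelligence by } {\sim}2055) \approx 0.5 \times 0.9 = 0.45$$

$$P(\text{extinction} \mid \text{superintelligence}) \le \frac{0.0001}{0.45} \approx 0.0002$$

So taken at face value, his numbers imply that even conditional on superintelligence arriving this century, the chance of it causing human extinction is only about 0.02%.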
Of course, he may be presuming that superintelligence alone won't be enough to pose an extinction-level threat.