I generally agree with paulfchristiano here. Regarding Q2, Q5 and Q6 I'll note that, aside from Nils Nilsson, the researchers in question do not appear to be familiar with the most serious existential risk from AGI: the one discussed in Omohundro's The Basic AI Drives. Researchers without this background context are unlikely to deliver informative answers on Q2, Q5 and Q6.
What bothers me about The Basic AI Drives is its complete lack of quantitative analysis.
The temporal discount rate isn't even mentioned. There is no analysis of the tradeoff between self-improving and getting things done. The influence of the explicit/implicit utility function dichotomy on self-improvement isn't considered.
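To illustrate the kind of analysis I'd want to see, here is a toy model (entirely my own construction, not from the paper): an agent trades off steps spent self-improving against steps spent acting, and its optimal investment in self-improvement depends sharply on the temporal discount rate. The base reward rate, the diminishing-returns boost schedule, and the horizon below are all arbitrary illustrative choices.

```python
# Toy model (my own sketch, not from Omohundro's paper): an agent earns
# reward at a base rate of 1 per step, discounted by gamma per step.
# Each step spent self-improving raises the eventual acting rate, with
# diminishing returns, but delays all reward.

def discounted_utility(k, gamma, horizon=1000):
    """Total discounted reward if the agent self-improves for k steps,
    then acts for the rest of the horizon at the improved rate."""
    rate = 1 + 10 * (1 - 0.95**k)  # diminishing returns to self-improvement
    return sum(rate * gamma**t for t in range(k, horizon))

for gamma in (0.9, 0.99, 0.999):
    best_k = max(range(200), key=lambda k: discounted_utility(k, gamma))
    print(f"gamma={gamma}: optimal steps of self-improvement = {best_k}")
```

Even this crude model shows the point: with heavy discounting (gamma = 0.9) the agent should barely self-improve at all, while a patient agent (gamma = 0.999) invests an order of magnitude more steps in it. An argument that self-improvement is a convergent drive can't simply omit the discount rate.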
I find some of your issues with the piece legitimate, but I stand by my characterization: the most serious existential threat from AI is of the type described therein.