Well, I’ve already observed what looks like confusion in the area of AGI. One could take this new evidence as showing a susceptibility to biases that would hinder his work.
But until now I’ve tentatively assumed Ben did not plan for interesting AI results because on some level he didn’t expect to produce any. More precisely, I concluded this on the basis of two assumptions: that he didn’t want to die, and that expecting interesting results would make him worry somewhat about death.
I specifically said he did not make his decision based on the arguments that I saw him present -- in part because he distinguished between claims that would be logically equivalent if we rule out the possibility of researchers unconsciously restricting the AI’s actions or predicting them by non-rational means. If he actually assigns significant probability to that last option, then maybe we should worry more!