Thank goodness, because I was starting to wonder whether I should be worried about Ben Goertzel’s AGI project. This puts my mind at ease, at least for a while.
It shouldn’t (not as a general rule; Ben’s case might have other valid reasons for reaching the same conclusion). Being confused in one area doesn’t necessarily make you confused in another (or prevent you from being capable despite confusion). Not getting the problem of FAI doesn’t prevent you from working towards AGI. Believing in God or Santa Claus or yogic flying doesn’t prevent you from working towards AGI. Evolution didn’t even have a mind.
Being confused in one area doesn’t necessarily make you confused in another (or prevent you from being capable despite confusion).
Comfort doesn’t require necessity. Being confused in one area is Bayesian evidence that one is confused in other areas, and being confused in a given area is Bayesian evidence of insufficient capability in that area.
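To make the evidential claim concrete, here is a minimal sketch of the underlying Bayes update. The numbers are purely hypothetical illustrations, not estimates about any particular person; the point is only that observing confusion raises the posterior above the prior whenever confusion is more likely given confusion elsewhere.

```python
# Minimal Bayes-rule sketch of "confusion in one area is evidence of confusion elsewhere".
# All probabilities below are hypothetical, chosen only to illustrate the direction of the update.

prior_confused_elsewhere = 0.30       # P(H): prior that someone is confused in other areas
p_evidence_given_h = 0.80             # P(E | H): chance of visible confusion here if H holds
p_evidence_given_not_h = 0.40         # P(E | not H): chance of visible confusion here otherwise

# Posterior P(H | E) via Bayes' rule
numerator = p_evidence_given_h * prior_confused_elsewhere
denominator = numerator + p_evidence_given_not_h * (1 - prior_confused_elsewhere)
posterior = numerator / denominator

print(f"P(confused elsewhere | confused here) = {posterior:.2f}")  # ~0.46, up from the 0.30 prior
```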
Well, I’ve already observed what looks like confusion in the area of AGI. We can take this new evidence as showing a susceptibility to biases that would hinder his work.
But until now I had tentatively assumed that Ben did not plan for interesting AI results because, on some level, he didn’t expect to produce any. More precisely, I drew that conclusion from two assumptions: that he didn’t want to die, and that expecting interesting results would make him worry somewhat about death.
I specifically said he did not make his decision based on the arguments I saw him present -- in part because he distinguished between claims that become logically equivalent once we reject the possibility of researchers unconsciously restricting the AI’s actions or predicting them by non-rational means. If he actually assigns significant probability to that last option, then maybe we should worry more!