That a de novo AGI will be nothing like a human child in terms of how to make it safe is an antiprediction in that it would take a tremendous amount of evidence to suggest otherwise, and yet Wang just assumes this without having any evidence at all.
I think this single statement summarizes the huge rift between the narrow, specific LW/EY view of AGI and other more mainstream views.
For researchers who are trying to emulate or simulate brain algorithms directly, it's self-evident that the resulting AGI will start out like a human child. If they succeed first, your ‘antiprediction’ is trivially false. And then we have researchers like Wang or Goertzel, who are pursuing AGI approaches that are not brain-like at all and yet still believe the AGI will learn like a human child, and who specifically use that analogy.
You can label anything an “antiprediction” and thus convince yourself that it would take an arbitrarily large amount of positive evidence to overturn it, but in doing so you are really just rationalizing your priors/existing beliefs.