Yes, what you said. The opposite of “a human-legible learning algorithm” is “a nightmarishly-complicated Rube-Goldberg-machine learning algorithm”.
If the latter is what we need, we could still presumably get AGI, but it would involve some automated search through a big space of many possible nightmarishly-complicated Rube-Goldberg-machine learning algorithms to find one that works.
That would be a different AGI development story, and thus a different blog post. Instead of “humans figure out the learning algorithm” being an exogenous input to the path-to-AGI, which is how I treated it, it would instead be an output of that automated search process. And there would be much more weight on the possibility that the resulting learning algorithm would be wildly different from the human brain’s, and hence more uncertainty in its computational requirements.
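To make the “automated search over learning algorithms” idea a bit more concrete, here’s a minimal toy sketch (everything in it is illustrative, not anything from the post): an inner loop trains a model with a candidate learning rule, and an outer loop blindly samples candidate rules and keeps whichever trains best. Here the “space of learning algorithms” is just three knobs on a gradient-descent update; the real version would be a vastly larger and messier space, which is exactly the point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inner task: recover w_true from noisy linear data.
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=100)

def make_update_rule(params):
    """A tiny parameterized family of learning rules.

    params = (lr, momentum, grad_clip): three knobs standing in for
    the (much larger) space of candidate learning algorithms.
    """
    lr, momentum, clip = params
    def update(w, v, grad):
        grad = np.clip(grad, -clip, clip)   # clip the gradient
        v = momentum * v + grad             # momentum accumulator
        return w - lr * v, v                # descend
    return update

def evaluate(params, steps=200):
    """Fitness of one candidate rule: final loss after training with it."""
    update = make_update_rule(params)
    w, v = np.zeros(5), np.zeros(5)
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w, v = update(w, v, grad)
    return float(np.mean((X @ w - y) ** 2))

# Outer loop: blind search through the space of learning rules.
best_params, best_loss = None, np.inf
for _ in range(500):
    candidate = (10 ** rng.uniform(-4, -1),   # learning rate
                 rng.uniform(0.0, 0.99),      # momentum
                 10 ** rng.uniform(-1, 2))    # gradient clip
    loss = evaluate(candidate)
    if loss < best_loss:
        best_params, best_loss = candidate, loss

print("best rule:", best_params, "final loss:", best_loss)
```

The outer loop never “understands” why a rule works; it just keeps whatever scores well. That opacity is what makes the resulting algorithm potentially brain-unlike, and its compute requirements correspondingly harder to predict.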