ASSUMPTION 3: The algorithm is human-legible, but nobody knows how it works yet.
Can you clarify what you mean by this assumption? And how is your argument dependent on it?
Is the point that the “secret sauce” algorithm is something that humans can plausibly come up with by thinking hard about it? As opposed, say, to an evolution-designed nightmare that humans cannot plausibly design except by brute-forcing it?
Yes, what you said. The opposite of “a human-legible learning algorithm” is “a nightmarishly-complicated Rube-Goldberg-machine learning algorithm”.
If the latter is what we need, we could presumably still get AGI, but it would involve some automated search through a big space of possible nightmarishly-complicated Rube-Goldberg-machine learning algorithms to find one that works.
That would be a different AGI development story, and thus a different blog post. Instead of “humans figure out the learning algorithm” being an exogenous input to the path-to-AGI, which is how I treated it, the learning algorithm would instead be an output of that automated search process. And there would be much more weight on the possibility that the resulting learning algorithm would be wildly different from the human brain’s, and hence more uncertainty in its computational requirements.
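To make the contrast concrete, here is a minimal sketch (my own illustration, not anything from the post) of what “automated search through a space of candidate learning algorithms” looks like in miniature: each candidate is reduced to a small configuration (learning rate and hidden-layer width for a tiny neural net), and the “search” is plain random sampling. The real version the post gestures at would search a vastly larger, messier space, but the structure is the same: the learning algorithm is an output of the search, not something a human wrote down.

```python
# Toy sketch (assumed, illustrative only): random search over candidate
# "learning algorithm" configurations, each scored by how well it learns
# a toy regression task (y = sin(x)). Standard library only.
import random
import math

def make_data(n=200):
    """Toy regression task: learn y = sin(x) on [-3, 3]."""
    xs = [random.uniform(-3, 3) for _ in range(n)]
    ys = [math.sin(x) for x in xs]
    return xs, ys

def train_and_score(lr, width, xs, ys, steps=2000):
    """Train a 1-hidden-layer tanh net with SGD; return mean squared error."""
    w1 = [random.gauss(0, 1) for _ in range(width)]
    b1 = [0.0] * width
    w2 = [random.gauss(0, 1 / math.sqrt(width)) for _ in range(width)]
    b2 = 0.0
    for _ in range(steps):
        i = random.randrange(len(xs))
        x, y = xs[i], ys[i]
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(width)]
        pred = sum(w2[j] * h[j] for j in range(width)) + b2
        err = pred - y
        # Backpropagation for this tiny architecture.
        for j in range(width):
            dh = err * w2[j] * (1 - h[j] ** 2)
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * dh * x
            b1[j] -= lr * dh
        b2 -= lr * err
    mse = 0.0
    for x, y in zip(xs, ys):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(width)]
        pred = sum(w2[j] * h[j] for j in range(width)) + b2
        mse += (pred - y) ** 2
    return mse / len(xs)

if __name__ == "__main__":
    random.seed(0)
    xs, ys = make_data()
    best = None
    # Sample 20 candidate configurations; keep whichever learns best.
    for _ in range(20):
        cfg = {"lr": 10 ** random.uniform(-3, -0.5),
               "width": random.choice([4, 8, 16, 32])}
        score = train_and_score(cfg["lr"], cfg["width"], xs, ys)
        if best is None or score < best[1]:
            best = (cfg, score)
    print("best config:", best[0], "mse:", round(best[1], 4))
```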