The key question in the debate was whether building AGI would require only one or two major insights into how to build it, or instead the discovery of a large number of algorithms, each incrementally bringing a system closer to human-level capability.
Robin Hanson didn’t say that AGI would “require the discovery of a large number of algorithms”. He emphasized instead that AGI would require a lot of “content” and would require a large “base”. He said,
My opinion, which I think many AI experts will agree with at least, including say Doug Lenat who did the Eurisko program that you most admire in AI [gesturing toward Eliezer], is that it’s largely about content. There are architectural insights. There are high-level things that you can do right or wrong, but they don’t, in the end, add up to enough to make vast growth. What you need for vast growth is simply to have a big base. [...]
Similarly, I think that for minds, what matters is that it just has lots of good, powerful stuff in it, lots of things it knows, routines, strategies, and there isn’t that much at the large architectural level.
This is all vague, but I think you can interpret his comment here as emphasizing the role of data, and of making sure the model has learned a lot of knowledge, routines, strategies, and so on. That’s different from saying that humans need to discover a bunch of algorithms, one by one, each incrementally bringing a system closer to human-level capability. It’s compatible with the view that humans don’t need to discover many insights to build AGI. He’s saying that insights are not sufficient: you also need to make sure there’s a lot of “content” in the AI.
I personally find his specific view here to have been vindicated more than the alternative, even though there were many details in his general story that ended up aging very poorly (especially ems).