Hmm, I do agree the foom debates talk a bunch about a “box in a basement team”, but the conversation was pretty explicitly not about the competitive landscape and how many people are working on this box in a basement, etc. It was about whether it would be possible for a box in a basement with the right algorithms to become superhuman in a short period of time. In particular, Eliezer says:
In other words, I’m trying to separate out the question of “How dumb is this thing (points to head); how much smarter can you build an agent; if that agent were teleported into today’s world, could it take over?” versus the question of “Who develops it, in what order, and were they all trading insights or was it more like a modern-day financial firm where you don’t show your competitors your key insights, and so on, or, for that matter, modern artificial intelligence programs?”
The key question that the debate was about was whether building AGI would require maybe 1-2 major insights about how to build it, vs. whether it would require the discovery of a large number of algorithms that would incrementally make a system more and more up to par with where humans are at. That’s what the “box in a basement” metaphor was about.
Eliezer also said other things around that time that make it explicit that he wasn’t intending to make any specific predictions about how smooth the on-ramp to pre-foom AGI would be, how competitive it would be, etc.
I do think there is a directional update here, but I think your summary here is approximately misleading.
The key question that the debate was about was whether building AGI would require maybe 1-2 major insights about how to build it, vs. whether it would require the discovery of a large number of algorithms that would incrementally make a system more and more up to par with where humans are at.
Robin Hanson didn’t say that AGI would “require the discovery of a large number of algorithms”. He emphasized instead that AGI would require a lot of “content” and a large “base”. He said:
My opinion, which I think many AI experts will agree with at least, including say Doug Lenat who did the Eurisko program that you most admire in AI [gesturing toward Eliezer], is that it’s largely about content. There are architectural insights. There are high-level things that you can do right or wrong, but they don’t, in the end, add up to enough to make vast growth. What you need for vast growth is simply to have a big base. [...]
Similarly, I think that for minds, what matters is that it just has lots of good, powerful stuff in it, lots of things it knows, routines, strategies, and there isn’t that much at the large architectural level.
This is all vague, but I think you can interpret his comment here as emphasizing the role of data, and making sure the model has learned a lot of knowledge, routines, strategies, and so on. That’s different from saying that humans need to discover a bunch of algorithms, one by one, to incrementally make a system more up to par with where humans are at. It’s compatible with the view that humans don’t need to discover a lot of insights to build AGI. He’s saying that insights are not sufficient: you need to make sure there’s a lot of “content” in the AI too.
I personally find his specific view here to have been vindicated more than the alternative, even though there were many details in his general story that ended up aging very poorly (especially ems).