I disagree that there is a difference of kind between “engineering ingenuity” and “scientific discovery”, at least in the business of AI. The examples you give—self-play, MCTS, ConvNets—were all used in game-playing programs before AlphaGo. AlphaGo’s trick was to combine them, and then discover that the combination worked astonishingly well. Combining them was very clever, tasteful engineering, but it was a breakthrough only in retrospect. And the people who developed each of those techniques earlier, for their own independent purposes? They were part of the ordinary cycle of engineering development: “Look at a problem, think as hard as you can, come up with something, try it, publish the results.” They’re just the ones you remember, because they were good.
Paradigm shifts do happen, but I don’t think we need them between here and AGI.
Yeah I’m definitely describing something as a binary when it’s really a spectrum. (I was oversimplifying since I didn’t think it mattered for that particular context.)
In the context of AI, I don’t know what the difference is (if any) between engineering and science. You’re right that I was off-base there…
…But I do think that there’s a spectrum from ingenuity / insight to grunt-work.
So I’m raising a possible scenario: near-future AI gets progressively less useful as you move toward the ingenuity end of that spectrum, and changing that situation (i.e., automating ingenuity) itself requires a lot of ingenuity. That’s a chicken-and-egg problem, a bottleneck that limits the scope of rapid near-future recursive AI progress.
Perhaps! Time will tell :)