+1 to this disagreement. ML methods seem to indicate that it's not quite that simple: systems don't soar beyond human level just from self-improvement. When pointed at themselves they do self-improve, but very slowly relative to the predictions. I still think it's possible, but it no longer seems like an obvious outcome the way it may have before ML became a real thing.
With respect to being a single system: it's consistently the case that end-to-end learned neural networks are better at their jobs than systems built by plugging together disparately trained networks, which are in turn better at their jobs than networks that can only communicate via one-hot vectors (e.g., agent populations communicating in English, or AI systems like the heavily multimodal Alexa bots). The latter can be much easier to build, but top out much earlier.
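To make the distinction concrete, here's a minimal PyTorch sketch (my own illustration, not from the original discussion; the module names are hypothetical). When two stages are composed end-to-end, one loss trains both; forcing them to communicate through a discrete one-hot symbol cuts the gradient path, so the upstream module gets no learning signal:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 8)

def make_modules():
    # Two hypothetical stages standing in for separately built components.
    return torch.nn.Linear(8, 16), torch.nn.Linear(16, 4)

# End-to-end: the second stage sees the first stage's full continuous
# activations, so a single loss trains both modules jointly.
enc, dec = make_modules()
dec(enc(x)).sum().backward()
print(enc.weight.grad is not None)   # True: upstream module gets a signal

# One-hot bottleneck: the stages communicate only through a discrete
# symbol (argmax -> one-hot), loosely analogous to agents exchanging
# English tokens. argmax is non-differentiable, so the gradient path
# between the modules is severed.
enc, dec = make_modules()
symbol = F.one_hot(enc(x).argmax(dim=-1), num_classes=16).float()
dec(symbol).sum().backward()
print(enc.weight.grad)               # None: no learning signal reaches enc
```

This is the mechanical reason the one-hot-communication setups are easier to assemble but top out earlier: each component can only be trained in isolation or via much weaker signals (e.g., reinforcement), rather than by joint gradient descent on the end task.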