I like this way of thinking about how quickly AI will grow smarter, and about how much of the world will be amenable to its methods. Is understanding natural language sufficient to take over the world? I would argue yes, but my NLP professor disagrees: he thinks physical embodiment, and the social cues that come with it, would be very important for achieving superintelligence.
Your first two points make a related argument: that ML requires lots of high-quality data, and that our data might not be high quality, or might not cover the areas where it's needed. A similar question is whether AI can generalize to the novel long-term planning challenges of a CEO or politician solely by training on short-horizon tasks like next-token prediction. Again, I take seriously the possibility that it could, but it doesn't seem inconsistent with our evidence to believe that deep learning will only succeed in domains where we have lots of training data and rapid feedback loops.