GPT-4 doesn’t learn when you use it. It doesn’t update its parameters to better predict the text of its users or anything like that. So the answer to the basic question is “no.”
You could also ask, “But what if it did keep getting updated? Would it eventually become super-good at predicting the world?” There are these things called “scaling laws” that predict performance based on the amount of training data, and they would say that with arbitrary amounts of data, GPT-4 could get arbitrarily smart (though note that this would require new data amounting to many times all the text produced in human history so far). But the scaling laws almost certainly break if you try to extend them too far for a fixed architecture. I actually expect GPT-4 would become (more?) superhuman at many tasks related to writing text, but remain not all that great at predicting aspects of the physical world that are rarely described in text and hard for humans to predict.
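To make the scaling-laws point concrete, here is a minimal sketch in Python of the Chinchilla-style parametric form from Hoffmann et al. (2022), which models loss as a function of parameter count and training tokens. The constants are roughly their published fits and are illustrative only; the function name and printed values are mine, not from any paper.

```python
# A sketch of a Chinchilla-style scaling law: predicted loss as a function
# of parameter count N and training tokens D.
# Constants approximate the fits in Hoffmann et al. (2022); illustrative only.

def predicted_loss(n_params: float, n_tokens: float) -> float:
    E = 1.69                 # irreducible loss (roughly, the entropy of text)
    A, alpha = 406.4, 0.34   # parameter-count term: shrinks as N grows
    B, beta = 410.7, 0.28    # data term: shrinks as D grows
    return E + A / n_params**alpha + B / n_tokens**beta

# For a fixed model size, adding data only shrinks the B/D^beta term, so
# loss flattens toward E + A/N^alpha rather than improving without bound.
for tokens in (1e9, 1e12, 1e15):
    print(f"{tokens:.0e} tokens -> predicted loss {predicted_loss(1e12, tokens):.3f}")
```

Under this functional form, a fixed architecture trained on ever more data asymptotes at a loss floor rather than getting arbitrarily smart, which is one way of seeing why extrapolating the scaling laws indefinitely is dubious.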
Charlie is correct in saying that GPT-4 does not actively learn based on its input. But a related question is whether we are missing key technical insights for AGI, and Stampy has an answer for that. He also has an answer explaining scaling laws.