Does current AI technology possess the power to work its way up to AGI? For example, if the world all of a sudden put a permanent halt on all AI advancements and we were left with GPT-4, would it, given an infinite amount of time in use, achieve AGI through RLHF and access to the internet? This assumes that GPT-4 is not already an AGI system.
i.e.: Do AI systems need to undergo changes to their actual architecture to become smarter, or is it possible that they get (significantly) smarter simply through enough usage?
GPT-4 doesn’t learn when you use it. It doesn’t update its parameters to better predict the text of its users or anything like that. So the answer to the basic question is “no.”
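To make that concrete, here is a minimal PyTorch sketch with a toy linear model (nothing like GPT-4's actual serving stack, just an illustration of the general point): running a model in inference mode leaves its parameters exactly as they were, no matter how much you use it.

```python
# Toy demonstration (not OpenAI's code): forward passes alone do no learning.
import torch
import torch.nn as nn

model = nn.Linear(8, 2)                    # stand-in for a language model
before = [p.clone() for p in model.parameters()]

with torch.no_grad():                      # how deployed models are typically run
    for _ in range(1000):
        model(torch.randn(4, 8))           # "using" the model: forward passes only

after = list(model.parameters())
# The weights are untouched; only an explicit training step would change them.
print(all(torch.equal(a, b) for a, b in zip(before, after)))  # True
```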
You could also ask “But what if it did keep getting updated? Would it eventually become super-good at predicting the world?” There are these things called “scaling laws” that predict performance based on amount of training data, and they would say that with arbitrary amounts of data, GPT-4 could get arbitrarily smart (though note that this would require new data that’s many times more than all text produced in human history so far). But the scaling laws almost certainly break if you try to extend them too far for a fixed architecture. I actually expect GPT-4 would become (more?) superhuman at many tasks related to writing text, but remain not all that great at predicting aspects of the physical world that are rare in text and hard for humans.
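The “scaling laws” mentioned here are usually written as a simple parametric fit. The sketch below uses the form from Hoffmann et al. (2022), with constants roughly matching their reported fits, purely for illustration. It also shows one way to see the “breaks for a fixed architecture” caveat: with the parameter count N held fixed, the data term keeps shrinking as D grows, but the remaining terms set a floor that more data alone cannot push the predicted loss below.

```python
# A sketch of a Chinchilla-style parametric scaling law (Hoffmann et al. 2022):
# predicted loss as a function of parameter count N and training tokens D.
# Constants are roughly the published fits, used here only for illustration.

def scaling_law_loss(n_params: float, n_tokens: float,
                     e: float = 1.69, a: float = 406.4, b: float = 410.7,
                     alpha: float = 0.34, beta: float = 0.28) -> float:
    """L(N, D) = E + A / N^alpha + B / D^beta."""
    return e + a / n_params**alpha + b / n_tokens**beta

if __name__ == "__main__":
    n = 1e12  # hypothetical fixed parameter count (architecture frozen)
    for d in (1e12, 1e13, 1e14, 1e15):
        print(f"D = {d:.0e} tokens -> predicted loss {scaling_law_loss(n, d):.3f}")
    # As D grows, only the B / D^beta term shrinks; the E and A / N^alpha
    # terms remain, so the curve flattens toward a floor for a fixed model.
```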
Charlie is correct in saying that GPT-4 does not actively learn based on its input. But a related question is whether we are missing key technical insights for AGI, and Stampy has an answer for that. He also has an answer explaining scaling laws.