Up to a certain size, LLMs are going to be a commodity, and academic/amateur/open-source versions will be available. Currently that scale is around 7B–20B parameters, and it will likely increase soon. However, GPT-4 supposedly cost >$100m to create, of which I’ve seen estimates that the raw foundation model training cost was O($40m) [which admittedly will decrease with Moore’s Law for GPUs], and there is also a significant cost for filtering the training data, doing instruction-following and safety RLHF, and so forth. It’s not clear to me why any organization able to get its hands on that much money and the expertise necessary to spend it would open-source the result, at least while the result remains near the cutting edge (leaks are of course possible, as Meta already managed to prove). So I suspect models as capable as GPT-4 will not get open-sourced any time soon (where things are moving fast enough that ‘soon’ means ‘this year, or maybe next’). But there are quite a lot of companies and governments that can devote >$100m to an important problem, so the current situation, in which only three or four companies have LLMs with capabilities comparable to GPT-4, isn’t likely to last very long.
As for alignment, it’s very unlikely that all sources of commodity LLMs will do an excellent job of persuading them not to tell you how to hotwire cars, not to roleplay AI waifus, and not to simulate 4chan, and some will just release a foundation model with no such training. So we can expect ‘unaligned’ open-source LLMs. However, none of those are remotely close to civilization-ending problems. The question then is whether Language Model Cognitive Architectures (LMCAs) along the lines of AutoGPT can be made to run effectively on a suitably fine-tuned LLM of less-than-GPT-4 complexity and still increase their capabilities to an AGI level, or whether that (if possible at all) requires an LLM of GPT-4 scale or larger. AutoGPT isn’t currently that capable when run with GPT-3.5, but generally, if a foundation LLM shows some signs of a capability at a task, suitable fine-tuning can usually greatly increase the reliability with which it performs that task.