Thanks for the prediction. I may write one of my own at some point, but I thought I should put some ideas forward in the meantime.
I think that GPT-X won’t be so much a thing, and a large amount of effort will be made by OpenAI and similar companies to integrate AutoGPT-like capabilities back in house again: vertical integration. They may discourage, and even significantly reduce, usage of the raw API/GPT window, wrapping it in other tools instead. The reason I think this is that I see GPT-4 as more of an intuition/thinking-fast type of system than a complete mind. Adding another layer to manage it makes a lot of sense, even for simple things like deciding which GPT-X to send a request to; getting GPT-4 to count to 1000 is a terrible waste.
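The routing idea above can be sketched in a few lines. This is purely illustrative: the model names and the difficulty heuristic are made up, not anything OpenAI actually uses.

```python
# Hypothetical sketch of a managing layer that routes each request to the
# cheapest model likely to handle it. Model names are invented for the example.

def estimate_difficulty(prompt: str) -> float:
    """Crude proxy: longer, question-dense prompts score as harder."""
    score = len(prompt) / 500
    score += prompt.count("?") * 0.2
    return min(score, 1.0)

def route(prompt: str) -> str:
    """Pick a model tier based on estimated difficulty."""
    difficulty = estimate_difficulty(prompt)
    if difficulty < 0.3:
        return "small-model"   # e.g. counting, formatting, boilerplate
    elif difficulty < 0.7:
        return "medium-model"
    return "large-model"       # genuinely hard reasoning
```

Under a scheme like this, "count to 1000" never reaches the expensive model at all.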
There will be work towards a GPT-overview system that is specifically designed to manage this. I don’t know how it will be trained, but it won’t be next-word prediction. The combination of the two systems is heading towards self-awareness. It is like the brain: the subconscious is more capable at specific tasks such as seeing, but the conscious mind spans all of them and directs them. The most important thing that would make GPT more useful to me seems to be self-awareness. There will be interaction between the two systems, with the overview system trying to train smaller GPT subsystems to be just as accurate while using far less compute.
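The "train smaller subsystems to be just as accurate" idea resembles knowledge distillation, where a small model is trained to match a large model’s output distribution. A toy version of the standard distillation loss, with made-up logits and no claim that this is how any GPT system is actually trained:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    The student minimizes this, pulling its predictions toward the
    teacher's at a fraction of the teacher's compute cost.
    """
    p = softmax(teacher_logits, temperature)  # teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero when the student exactly matches the teacher and positive otherwise, which is the sense in which the small model can become "just as accurate" on the cases it sees.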
There will be pressure to make GPT-X as efficient as possible; I think that means splitting it into sub-systems specialized for certain tasks, perhaps people-oriented and software-oriented ones to start with, if possible.
There will likely be an extreme GPU shortage, and much politics about TSMC as a result.
Newer GPT systems won’t be advertised much, if at all; the product will continually improve, and OpenAI will be opaque about which system or systems are even handling your request. There may be no official GPT-5, and probably no GPT-6. It’s better politics for OpenAI to behave this way given the open letter against large models.
A bit controversially: LW will lose control and will not be seen as a clear leader in alignment research or outreach. There will be so many people, as mentioned, and they will go elsewhere, perhaps joining an existing leader such as Tegmark or Hinton, or starting a new group entirely. The focus will likely be different, for example regarding AI more as mind-children than as alien, while still regarding it as just as dangerous. Higher-level techniques such as psychology, rather than formal proof, would be emphasized. This will probably take >3 years to happen, however.