If I were to think about it a little, I’d suspect the big difference between LLMs and humans is state/memory: humans have persistent state/memory, while LLMs today are more or less stateless, and RNN training has not been solved to the extent that transformer training has.
One thing I will also say is that future AI winters will be shorter than previous ones, because AI products can now be made at least somewhat profitable, and this gives AI research an independent base of money in a way that wasn’t possible pre-2016.
A factor stemming from the same cause but pushing in the opposite direction is that “mundane” AI profitability can “distract” people who would otherwise be AGI hawks.