[Question] The thing I don’t understand about AGI
Recently I’ve been hearing a lot about AGI, specifically claims that it’s 5-10 years out. As someone with an interest in neuroscience, I don’t understand how a system so much less complex than the human brain could achieve such a thing. My sense is that current models are incapable of actual logical reasoning (which I know is a horribly vague notion, sorry about that), and that any apparent logical reasoning they display is just a result of having been trained on every conceivable verbal test of logical capacity.
For example, it makes sense that a future LLM could explain a mathematical concept that has already been documented and discussed, but I just can’t see it solving open frontier problems in mathematical theory, since that seems like a completely different “skill set.”
Is my understanding of how LLMs work flawed? Can they perform logical reasoning?
--
P.S. Apologies for any informality; this is my first post.