In my case, introspection led me to the realisation that human reasoning consists, to a large degree, of two interlocking parts: finding constraints on the solution space, and constraint satisfaction.
This has the interesting corollary that AI systems which reach human or superhuman performance by adding search to NNs are not really implementing reasoning but rather brute-forcing it.
It also makes me sceptical that LLMs+search will be AGI.
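
To make the two parts concrete, here is a minimal sketch of constraint satisfaction as plain backtracking search, on a toy map-colouring problem. The problem and the code are illustrative assumptions of mine, not anything from the thread; the point is that the "reasoning" here is exhaustive enumeration pruned by constraints, which is exactly the brute-force flavour being described.

```python
# Minimal backtracking search over a constrained space (toy map colouring).
# Illustrative sketch only: the solver "reasons" by enumerating candidates
# and pruning with constraints, not by insight.

NEIGHBOURS = {          # which regions touch which (made-up toy map)
    "A": ["B", "C"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["C"],
}
COLOURS = ["red", "green", "blue"]

def consistent(region, colour, assignment):
    """Constraint: no two neighbouring regions share a colour."""
    return all(assignment.get(n) != colour for n in NEIGHBOURS[region])

def solve(assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(NEIGHBOURS):
        return assignment                           # every region coloured
    region = next(r for r in NEIGHBOURS if r not in assignment)
    for colour in COLOURS:                          # brute-force enumeration...
        if consistent(region, colour, assignment):  # ...pruned by constraints
            result = solve({**assignment, region: colour})
            if result:
                return result
    return None                                     # dead end: backtrack

print(solve())  # e.g. {'A': 'red', 'B': 'green', 'C': 'blue', 'D': 'red'}
```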
I agree with all of that, even the scepticism that LLMs plus search will reach AGI. The lack of constraint satisfaction as the human brain does it could be a real stumbling block.
But LLMs have copied a good bit of our reasoning and therefore our semantic search. So they can do something like constraint satisfaction.
Put the constraints into a query, and the answer will satisfy those constraints. The process is different from the one a human brain uses, but for every problem I can think of, the results are the same.
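
As a sketch of what "put the constraints into a query" might look like in practice: `call_llm` below is a hypothetical stand-in for whatever completion API you use, and the constraints are arbitrary examples of mine.

```python
# Sketch: expressing constraints as part of the prompt, so the model's
# answer is steered to satisfy all of them at once.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in; swap in your provider's real client."""
    raise NotImplementedError("plug in your LLM client here")

constraints = [          # arbitrary, satisfiable example constraints
    "sorts in place",
    "is stable",
    "uses no recursion",
]

prompt = (
    "Suggest a sorting approach that satisfies ALL of these constraints, "
    "and say explicitly how each one is met:\n"
    + "\n".join(f"- {c}" for c in constraints)
)

print(prompt)
# answer = call_llm(prompt)   # enable once a real client is plugged in
```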
Now, that’s partly because every problem I can think of is one I’ve already seen solved. But my ability to do truly novel problem solving is rarely used and pretty limited. So I’m not sure an LLM couldn’t do just as good a job if it had a scaffolded script to explore its knowledge base from a few different angles.
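
A minimal sketch of that kind of scaffold, again using the hypothetical `call_llm` stand-in from above: pose the same problem from several angles, then ask for a synthesis that satisfies every constraint the earlier passes surfaced. The angles themselves are arbitrary examples.

```python
# Sketch of a scaffold that probes the model's knowledge from several
# angles, then synthesises the partial analyses into one answer.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real completion API, as in the previous sketch."""
    raise NotImplementedError("plug in your LLM client here")

ANGLES = [  # arbitrary example reframings
    "List the hard constraints this problem imposes on any solution.",
    "Name three solved problems that share its structure, and their solutions.",
    "Argue why the most obvious approach fails, and what that rules out.",
]

def explore(problem: str) -> str:
    views = [call_llm(f"{angle}\n\nProblem: {problem}") for angle in ANGLES]
    synthesis_prompt = (
        "Given these partial analyses, propose a solution that satisfies "
        "every constraint identified:\n\n" + "\n---\n".join(views)
    )
    return call_llm(synthesis_prompt)

# print(explore("Schedule 5 talks into 3 rooms with no speaker clashes."))
```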