I honestly regret that I didn’t make it as clear as I possibly could the first time around, but expressing original, partially developed ideas is not the same thing as reciting facts about well-understood concepts that have been explained and re-explained many times. Flippancy is needlessly hostile.
there are some problems to which search is inapplicable, owing to the lack of a well-defined search space
If not wholly inapplicable, then not performant, yes. Though the problem isn’t that the search space isn’t defined at all, but that the definitions which are easiest to give are also the least helpful (to return to the previous example: somewhere in the Platonic realm there exists a brainf*ck program that implements an optimal map from symptoms to diagnoses; good luck finding it). As the original author points out, there’s a tradeoff between knowledge and the need for brute force. It may be that you can have an agent synthesize knowledge by consolidating the results of a brute-force search into a formal representation, which the agent can then use to tune or reformulate the search space it was originally given to fit a particular purpose; but that is a level of sophistication well above pure brute force.
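To put a rough number on “good luck finding it” (a back-of-the-envelope sketch, not anything from the original article): the space of brainf*ck programs is perfectly well defined, yet it grows as 8^n in program length, so blind enumeration is hopeless long before you reach programs long enough to do anything interesting. The evaluation budget below is an assumed, generous figure.

    # Sketch: size of the naive brainf*ck search space vs. a generous
    # evaluation budget. Assumes 1e9 program evaluations per second.

    EVALS_PER_SECOND = 1e9          # assumed budget, deliberately optimistic
    SECONDS_PER_YEAR = 3.15e7

    def candidates(length: int) -> int:
        """Raw instruction strings of a given length.

        Brainf*ck has 8 instructions, so the naive space is 8**length
        (ignoring even the bracket-balancing constraint, which only
        trims it by a constant factor).
        """
        return 8 ** length

    if __name__ == "__main__":
        for n in (10, 20, 30, 40):
            total = candidates(n)
            years = total / EVALS_PER_SECOND / SECONDS_PER_YEAR
            print(f"length {n:>2}: {total:.2e} programs, "
                  f"~{years:.2e} years at {EVALS_PER_SECOND:.0e} evals/sec")

Already at length 30 the enumeration would take on the order of 10^10 years, which is the sense in which the easiest-to-give definition of the search space is also the least helpful one.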
Edit:
this is not an issue with search-based optimization techniques; it’s simply a consequence of the fact that you’re dealing with an ill-posed problem
If the problems of literature or philosophy were not in some sense “ill-posed,” they would also be dead subjects. The ‘general’ part of AGI would seem to imply some capacity for dealing with vague, partially defined ideas in useful ways.