It’s not really clear why the searching process should be more powerful than the evaluating process, if such a “search” is being used as part of a hypothetical process in the definition of “good.”
Note that in my original proposal (which I believe motivated this post), brute-force search was used only to find formal descriptions of physics and human brains, as a kind of idealized induction, not to search for “good” worlds.
Because the first supposes a powerful AI, while the second supposes an excellent evaluation process (essentially a solved value alignment problem).
Your post motivated this in part, but it’s a more general issue with optimisation processes and searches.
Neither the search nor the evaluation presupposes an AI when a hypothetical process is used as the definition of “good.”