If the super-powerful SAT solver thing finds the plans but doesn’t execute them, would you still lump it with optimizer_2? (I know it’s just terminology and there’s no right answer, but I’m just curious about what categories you find natural.)
(BTW this is more-or-less a description of my current Grand Vision For AGI Safety, where the “dynamics of the world” are discovered by self-supervised learning, and the search process (and much else) is TBD.)
Hmm, I'm not sure; in that situation it feels more like an optimizer_1. Now that you've posed the question, the super-powerful SAT solver that acts in the world feels like both an optimizer_1 and an optimizer_2.