How do you construct a maximizer for 0.3X + 0.6Y + 0.1Z from three maximizers for X, Y, and Z? This construction certainly isn't possible in general for black-box optimizers, so presumably it relies on something specific to a certain class of neural networks.
My model: suppose we have a DeepDreamer-style architecture, where (given a history of sensory inputs) a babbler module produces a distribution over actions, a world model predicts subsequent sensory inputs, and an evaluator predicts expected future X. If we run a tree search whose candidate actions are drawn from a weighted mixture of the X, Y, and Z maximizers' babblers, and score the imagined outcomes with the weighted sum of the X, Y, and Z maximizers' evaluators, we'd get a reasonable approximation of a weighted maximizer.
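To make that concrete, here's a minimal Python sketch of the combination, assuming each maximizer exposes its babbler and evaluator as callables and the world model as a step function (all names and interfaces here are hypothetical stand-ins, not an actual DeepDreamer API):

```python
import math
from dataclasses import dataclass
from typing import Any, Callable, List, Tuple

# Hypothetical interface: each trained maximizer exposes a babbler
# (history -> candidate actions) and an evaluator (history -> predicted
# future value of its own objective).
@dataclass
class Maximizer:
    babble: Callable[[Any], List[Any]]
    evaluate: Callable[[Any], float]

def combined_value(weighted: List[Tuple[float, Maximizer]], history: Any) -> float:
    """Score an imagined history with the weighted sum of the evaluators."""
    return sum(w * m.evaluate(history) for w, m in weighted)

def tree_search(weighted: List[Tuple[float, Maximizer]],
                world_step: Callable[[Any, Any], Any],
                history: Any,
                depth: int = 3) -> Tuple[float, Any]:
    """Depth-limited search over actions proposed by any of the babblers,
    scored at the leaves by the weighted combination of evaluators."""
    if depth == 0:
        return combined_value(weighted, history), None
    best_value, best_action = -math.inf, None
    # Pool the babblers' proposals; note each maximizer only ever
    # suggests actions that look good for its *own* objective.
    for _, m in weighted:
        for action in m.babble(history):
            imagined = world_step(history, action)  # world model's prediction
            value, _ = tree_search(weighted, world_step, imagined, depth - 1)
            if value > best_value:
                best_value, best_action = value, action
    return best_value, best_action

# e.g. tree_search([(0.3, mx), (0.6, my), (0.1, mz)], world_step, history)
```

Note that the search only ever explores actions at least one babbler endorses. That's harmless when all the weights are positive, but it's exactly what breaks with negative weights, as below.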
This wouldn't work if we gave negative weights to the maximizers. Negating an evaluator's output still makes sense, but a negatively weighted babbler does not: the search only explores actions some babbler endorses for its own objective, and anything like sampling the actions a babbler rates worst would probably yield incoherent behaviour, e.g. the model just running into walls or jumping off cliffs.
My conjecture is that, if a large black-box model is doing something like modelling X, Y, and Z maximizers acting in the world, it might be close in model-space to itself being a maximizer of 0.3X + 0.6Y + 0.1Z, but far in model-space from being a maximizer of 0.3X − 0.6Y − 0.1Z, due to the above problem.