Seems like if you’re working with neural networks there’s not a simple map from an efficient (in terms of program size, working memory, and speed) optimizer which maximizes X to an equivalent optimizer which maximizes -X.
If we consider that an efficient optimizer does something like tree search, then it would be easy to flip the sign of the node-evaluating “prune” module. But the “babble” module is likely to select promising actions based on a big bag of heuristics which aren’t easily flipped. Moreover, flipping a heuristic which upweights the small subset of outputs that lead to X merely downweights those outputs; it doesn’t give you a new heuristic which upweights the (different) small subset of outputs that lead to -X.
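A toy sketch of what I mean (purely illustrative: the state, heuristics, and numbers are all made up, and a greedy rollout stands in for a full tree search). Flipping the objective is a one-argument change to the “prune” evaluator, but there’s nothing analogous to flip in the “babble” proposals, so the flipped optimizer still gets dragged toward X:

```python
import random

ACTIONS = [-1, 0, +1]  # toy action space: steps on a number line, where X = position

def babble(state, n=4):
    # "Babble": a learned proposal heuristic that upweights the few actions
    # which historically led to high X (here, moving right). There is no sign
    # to flip that would make it propose the actions leading to low X instead.
    weights = [0.05, 0.15, 0.80]  # strongly prefers +1 moves
    return random.choices(ACTIONS, weights=weights, k=n)

def evaluate(state, sign=+1):
    # "Prune": the node evaluator. Maximizing -X instead of X is just sign=-1.
    return sign * state

def greedy_rollout(steps=20, sign=+1):
    state = 0
    for _ in range(steps):
        candidates = [state + a for a in babble(state)]
        state = max(candidates, key=lambda s: evaluate(s, sign))
    return state

random.seed(0)
print(greedy_rollout(sign=+1))  # reliably climbs to a high-X state (close to +20)
print(greedy_rollout(sign=-1))  # fails to reach low-X states: babble rarely proposes the moves that lead there
```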
Generalizing, this means that if you have access to maximizers for X, Y, Z, you can easily construct a maximizer for e.g. 0.3X+0.6Y+0.1Z but it would be non-trivial to construct a maximizer for 0.2X-0.5Y-0.3Z. This might mean that a certain class of mesa-optimizers (those which arise spontaneously as a result of training an AI to predict the behaviour of other optimizers) is likely to lie within a fairly narrow range of utility functions.
True if you don’t count the training process as part of the optimizer (which is a choice that sometimes makes sense and sometimes doesn’t). If you count the training process as part of the optimizer, then you can of course just flip your loss function or RL signal most of the time.
How do you construct a maximizer for 0.3X+0.6Y+0.1Z from three maximizers for X, Y, and Z? That certainly isn’t possible in general for black-box optimizers, so presumably this is something specific to a certain class of neural networks.
My model: suppose we have a DeepDreamer-style architecture, where (given a history of sensory inputs) the babbler module produces a distribution over actions, a world model predicts subsequent sensory inputs, and an evaluator predicts expected future X. If we run a tree search whose candidate actions come from some weighted combination of the X, Y, and Z maximizers’ babblers, and score nodes with the correspondingly weighted sum of their evaluators, we’d get a reasonable approximation of a weighted maximizer.
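Something like this toy sketch (again, all names and numbers are mine and purely illustrative, and a one-step greedy choice stands in for the tree search): proposals are drawn from a weighted mixture of the three babblers, and candidates are scored with the weighted sum of the three evaluators.

```python
import random
from dataclasses import dataclass
from typing import Callable, List

State = int  # toy state: a position on a number line

@dataclass
class Maximizer:
    babble: Callable[[State, int], List[int]]  # proposes k candidate actions
    evaluate: Callable[[State], float]         # predicted future utility

def combined_step(state: State, maximizers: List[Maximizer],
                  weights: List[float], samples: int = 4) -> State:
    # Babble: draw proposals from each maximizer in proportion to its weight.
    proposals: List[int] = []
    for m, w in zip(maximizers, weights):
        proposals.extend(m.babble(state, max(1, round(samples * w))))
    # Prune: score each candidate next state with the weighted sum of evaluators.
    def score(s: State) -> float:
        return sum(w * m.evaluate(s) for m, w in zip(maximizers, weights))
    return max((state + a for a in proposals), key=score)

def biased_babble(direction):
    # A babbler that mostly proposes steps in its preferred direction.
    w = [1, 1, 8] if direction > 0 else [8, 1, 1]
    return lambda s, k: random.choices([-1, 0, +1], weights=w, k=k)

# Toy maximizers: X wants to go right, Y wants to sit at position 10, Z at 0.
X = Maximizer(biased_babble(+1), lambda s: float(s))
Y = Maximizer(lambda s, k: random.choices([-1, 0, +1], k=k), lambda s: -abs(s - 10))
Z = Maximizer(biased_babble(-1), lambda s: -abs(s))

random.seed(0)
state = 0
for _ in range(30):
    state = combined_step(state, [X, Y, Z], weights=[0.3, 0.6, 0.1])
print(state)  # hovers near 10, the argmax of 0.3*s - 0.6*|s-10| - 0.1*|s|
```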
This wouldn’t be true if we gave negative weights to the maximizers: the evaluator module would still make sense, but the action distributions we’d get would probably be incoherent, e.g. the model just running into walls or jumping off cliffs.
My conjecture is that, if a large black box model is doing something like modelling X, Y, and Z maximizers acting in the world, that large black box model might be close in model-space to itself being a maximizer which maximizes 0.3X + 0.6Y + 0.1Z, but it’s far in model-space from being a maximizer which maximizes 0.3X − 0.6Y − 0.1Z, due to the above problem.