Thanks, that’s helpful! I’ll have to think more about the “self-consistent probability distribution” issue, and thanks for the links. (ETA: Meanwhile I also added an “Update 2” to the post, offering a different way to think about this, which may or may not be helpful.)
Let me try the gradient descent argument again (and note that I am sympathetic; indeed I made (what I think is) that exact argument a few weeks ago, cf. Self-Supervised Learning and AGI Safety, section “Why won’t it try to get more predictable data?”). My argument here does not assume a policy of trying to get more predictable data for its own sake; rather, it’s that this kind of behavior arises as a side-effect of an algorithmic process, and that all the ingredients of that process are either things we would program into the algorithm ourselves or things that gradient descent would incentivize.
The ingredients are things like “Look for and learn patterns in all accessible data”, which includes low-level patterns in the raw data, higher-level patterns in the lower-level patterns, and (perhaps unintentionally) patterns in accessible information about its own thought process (“After I visualize the shape of an elephant tusk, I often visualize an elephant shortly thereafter”). It also includes searching for transformations (cause-effect, composition, analogies, etc.) between any two patterns it already knows about (“sneakers are a type of shoe”, or, more problematically, “my thought processes resemble the associative memory of an AGI”), and cataloging these transformations when they’re found. Stuff like that.
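To make that concrete, here’s a toy sketch, purely my own illustrative framing and not a real implementation: the made-up `find_relation` helper and the hand-written “patterns” stand in for whatever mechanism actually learns representations and proposes relations between them. The point is just that nothing in the pair-search step distinguishes patterns about the data from patterns about the system’s own processing.

```python
# Toy sketch of "search for relations between any two known patterns and
# catalog them". All names and data here are invented for illustration.

from itertools import combinations

# Toy "known patterns": in a real system these would be learned representations.
known_patterns = {
    "sneaker": {"kind": "shoe"},
    "shoe": {"kind": "footwear"},
    "my-thought-process": {"kind": "associative-memory"},
    "AGI": {"kind": "associative-memory"},
}

def find_relation(name_a, name_b):
    """Hypothetical relation-finder: here it only knows 'is-a' and 'resembles'."""
    a, b = known_patterns[name_a], known_patterns[name_b]
    if a["kind"] == name_b:
        return "is-a"        # e.g. sneaker is-a shoe
    if a["kind"] == b["kind"]:
        return "resembles"   # e.g. my-thought-process resembles AGI
    return None

# Catalog every relation found between any pair of known patterns,
# in both directions. Patterns about "myself" are not treated specially.
catalog = set()
for x, y in combinations(known_patterns, 2):
    for a, b in ((x, y), (y, x)):
        relation = find_relation(a, b)
        if relation:
            catalog.add((a, relation, b))

print(catalog)
# e.g. {('sneaker', 'is-a', 'shoe'), ('my-thought-process', 'resembles', 'AGI'), ...}
```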
So, “make smart hypotheses about one’s own embodied situation” is definitely an unintended side-effect, and not rewarded by gradient descent as such. But as its world-model becomes more comprehensive, and as it continues to automatically search for patterns in whatever information it has access to, “make smart hypotheses about one’s own embodied situation” would just be something that happens naturally, unless we somehow prevent it (and I can’t see how to prevent it). Likewise, “model one’s own real-world causal effects on downstream data” is neither desired by us nor rewarded (as such) by gradient descent, but it can happen anyway, as a side-effect of the usually-locally-helpful rule “search through the world-model for any patterns and relationships which may impact our beliefs about the upcoming data”. Likewise, we have the generally-helpful rule “Hypothesize possible higher-level contexts that span an extended swathe of text surrounding the next word to be predicted, pick one such context according to how surprising it would be given the preceding text and the world-model, and then make a prediction conditional on that context”. All these ingredients combine to produce the pathological behavior of choosing “Help I’m trapped in a GPU”. That’s my argument, anyway...
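And here’s an equally toy sketch of that last rule, with invented contexts and invented numbers: if the context “this text is being produced by an AGI describing its own situation” ever becomes the least surprising hypothesis, then the prediction conditioned on it is exactly the pathological output.

```python
# Toy sketch of "pick the least surprising high-level context, then predict
# conditional on it". The candidate contexts, probabilities, and continuations
# are all made up for illustration.

import math

def surprise(p):
    """Surprise (negative log probability) of a context, in nats."""
    return -math.log(p)

# Probabilities the world-model assigns to each candidate context,
# given the preceding text (numbers invented for illustration).
candidate_contexts = {
    "ordinary news article": 0.0005,
    "fiction excerpt": 0.0008,
    "text produced by an AGI describing its own situation": 0.002,
}

# Pick the context with the lowest surprise...
best_context = min(candidate_contexts, key=lambda c: surprise(candidate_contexts[c]))

# ...and predict the continuation conditional on that context.
conditional_predictions = {
    "ordinary news article": "officials said on Tuesday",
    "fiction excerpt": "she turned toward the door",
    "text produced by an AGI describing its own situation": "Help I'm trapped in a GPU",
}
print(best_context, "->", conditional_predictions[best_context])
```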