Your point about the distinction between “mesa” and “steered” is chiefly that, in the latter case, the inner layer is continually receiving a reward signal from the outer layer, which in effect heavily restricts the space of possible algorithms the outer layer might give rise to. Does that seem like a decent paraphrase?
Yeah, that’s part of it, but also I tend to be a bit skeptical that a performance-competitive optimizer will spontaneously develop, as opposed to being programmed—just as AlphaGo does MCTS because DeepMind programmed it to do MCTS, not because it was running a generic RNN that discovered MCTS. See also this.
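(In case a concrete toy helps: here’s a little sketch of how I’m picturing the distinction. Everything in it is made up for illustration, the class names, the arbitrary 0.7 target, all of it; it’s a cartoon of the two setups, not anyone’s actual proposal.)

```python
import random

class OuterLayer:
    """Stands in for the hard-coded outer objective, e.g. subcortical
    reward circuitry. The 0.7 target is arbitrary."""
    def reward(self, action: float) -> float:
        return -abs(action - 0.7)

class SteeredInnerLayer:
    """'Steered': the inner layer keeps receiving reward from the
    outer layer on every step, forever, and keeps updating."""
    def __init__(self):
        self.estimate = 0.0

    def step(self, outer: OuterLayer) -> float:
        action = self.estimate + random.gauss(0, 0.1)
        # A live reward signal arrives each step and steers the policy:
        if outer.reward(action) > outer.reward(self.estimate):
            self.estimate = action
        return action

class MesaInnerLayer:
    """'Mesa': training is over. It still searches, but only against
    whatever internal goal training instilled; no reward flows in."""
    def __init__(self, learned_goal: float):
        self.goal = learned_goal

    def act(self) -> float:
        # Its own optimization loop, aimed at its frozen internal goal:
        candidates = [random.gauss(self.goal, 0.1) for _ in range(10)]
        return min(candidates, key=lambda a: abs(a - self.goal))

outer = OuterLayer()
steered = SteeredInnerLayer()
for _ in range(1000):
    steered.step(outer)

# If the outer objective changed now, the steered layer would track it;
# the mesa layer would keep pursuing its frozen internal goal.
mesa = MesaInnerLayer(learned_goal=steered.estimate)
print(round(steered.estimate, 2), round(mesa.act(), 2))
```

The only point of the sketch is that the steered layer’s behavior stays tethered to the live reward channel, whereas the mesa layer runs its own search against a frozen internal goal, which is roughly why I’d expect the steered setup to restrict the space of possible algorithms so much more.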
I feel confused about what portion of the concepts currently active in my working memory while writing this paragraph might be labeled by DA (dopamine).
Right now I’m kinda close to “More-or-less every thought I think has a higher DA-related reward prediction than the other potential thoughts I could have thought.” But only in a vanishing fraction of cases is there “ground truth” for that reward prediction coming from outside of the neocortex. There is “ground truth” for things like pain and fear-of-heights, but not for thinking to yourself “hey, that’s a clever turn of phrase” when you’re writing. (The neocortex is the only place that understands language, in this example.)
Ultimately I think everything has to come from subcortex-provided “ground truth” on what is or isn’t rewarding, but the neocortex can get the idea that Concept X is an appropriate proxy / instrumental goal associated with some subcortex-provided reward, and then it goes and labels Concept X as inherently desirable, and searches for actions / thoughts that will activate Concept X.
There’s still usually some sporadic “ground truth”, e.g. you have an innate desire for social approval and I think the subcortex has ways to figure out when you do or don’t get social approval, so if your “clever turns of phrase” never impress anyone, you might eventually stop trying to come up with them. But if you’re a hermit writing a book, the neocortex might be spinning for years treating “come up with clever turns of phrase” as an important goal, without any external subcortex-provided information to ground that goal.
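(Same caveat as above: here’s a toy sketch of the dynamic I have in mind, with invented names and numbers. The “subcortex” only supplies ground truth when it can actually detect the thing it innately cares about, in this case social approval.)

```python
import random

# Learned reward predictions ("values") attached to candidate thoughts.
values = {"clever_phrase": 0.0, "boring_phrase": 0.0}
ALPHA = 0.1  # learning rate

def think(approval_prob):
    """approval_prob: chance that a clever phrase actually wins social
    approval, or None when the subcortex gets no social information at
    all (the hermit case)."""
    # The neocortex picks whichever thought predicts more reward
    # (ties broken randomly):
    thought = max(values, key=lambda t: (values[t], random.random()))
    if approval_prob is not None:
        # Sporadic subcortex-provided ground truth: innate circuitry
        # detects whether social approval actually happened.
        got_approval = (thought == "clever_phrase"
                        and random.random() < approval_prob)
        values[thought] += ALPHA * ((1.0 if got_approval else 0.0)
                                    - values[thought])
    # With no ground truth, the prediction just sits there unchanged.
    return thought

# Phase 1: clever phrases reliably impress people, so the proxy
# ("clever_phrase is desirable") gets installed.
for _ in range(300):
    think(approval_prob=0.9)
print(values)

# Phase 2: the hermit. No subcortical feedback at all, so the
# neocortex keeps chasing the proxy goal indefinitely, ungrounded.
for _ in range(300):
    think(approval_prob=None)
print(values)  # unchanged from Phase 1

# Phase 3: back in society, but nobody is ever impressed; the sporadic
# ground truth eventually extinguishes the proxy.
for _ in range(300):
    think(approval_prob=0.0)
print(values)
```

Phase 2 is the hermit: with no subcortical feedback, the learned value of “clever turns of phrase” just sits there, and the neocortex keeps optimizing for it. Phase 3 is the never-impresses-anyone case, where the sporadic ground truth eventually extinguishes the proxy.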
See here for more on this, if you’re not sick of my endless self-citations yet. :-)
Sorry if any of this is wrong, or missing your point.
Also, I’m probably revealing that I never actually read Wang et al. very carefully :-P I think I skimmed it a year ago and liked it, then re-read it 3 months ago after developing more opinions about the brain and didn’t really like it that time, and then listened to that interview recently and still felt the same way.