I didn’t really follow the time-derivative idea before, and since you said it was equivalent I didn’t worry about it :p. But either it’s not really equivalent or I misunderstood the previous formulation, because I think everything works for me now.
So if we (1) decide “I will imagine yummy food”, then (2) imagine yummy food, then (3) stop imagining yummy food, we get a positive reward from the second step and a negative reward from the third step, but both of those rewards were already predicted by the first step, so there’s no RPE in either the second or third step, and therefore they don’t feel positive or negative. Unless we’re hungrier than we thought, I guess...
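To check my own arithmetic, here's a toy TD(0)-style sketch of that baseline case (my framing, which may or may not match your time-derivative formulation; the states, values, and reward numbers are all made up for illustration):

```python
# Toy check of the "no RPE" claim: delta_t = r_{t+1} + V(s_{t+1}) - V(s_t)
# is the reward prediction error generated by each transition.

def td_errors(rewards, values):
    """One RPE per transition, given the value predicted at each step."""
    return [r + v_next - v
            for r, v_next, v in zip(rewards, values[1:], values[:-1])]

# Steps: (1) decided to imagine food, (2) imagining it, (3) stopped.
# V(decided) already nets the upcoming +1 and -1 to zero;
# V(imagining) still anticipates the -1 that comes when we stop.
values  = [0.0, -1.0, 0.0]
rewards = [+1.0, -1.0]      # reward for imagining, penalty for stopping

print(td_errors(rewards, values))   # -> [0.0, 0.0]: no RPE at step (2) or (3)
```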
Well, what exactly happens if we’re hungrier than we thought?
(1) “I will imagine food”: No reward yet, expecting moderate positive reward followed by moderate negative reward.
(2) [Imagining food]: Large positive reward, but now expecting a large negative reward when we stop imagining, so no RPE relative to the previous step's prediction.
(3) [Stops imagining food]: Large negative reward, just as step (2) came to expect, so again no RPE.
The size of the reward can then be informative, but not actually rewarding (since it predictably nets to zero over time). The neocortex obtains hypothetical reward information from the subcortex without actually extracting a reward—which is the thing I’ve been insisting had to be possible. Turns out we don’t need a separate channel! And the subcortex doesn’t have to know or care whether it’s receiving a genuine prediction or an exploratory imagining from the neocortex—the incentives are right either way.
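Same toy sketch for the hungrier-than-expected case (again just illustrative numbers): the reward for imagining comes in at +3 instead of the +1 that step (1) forecast, but the value while imagining already carries the matching -3 for stopping, so every RPE still comes out to zero; the only way to bank a positive RPE would be to never stop, which is where the next paragraph comes in:

```python
# Hungrier than we thought: imagining is worth +3 and stopping costs -3,
# even though step (1) only forecast +/-1. V(imagining) updates to -3 the
# moment we notice how hungry we are, so the errors still cancel.
values  = [0.0, -3.0, 0.0]          # V(decided), V(imagining), V(stopped)
rewards = [+3.0, -3.0]              # reward for imagining, penalty for stopping
rpes = [r + v_next - v
        for r, v_next, v in zip(rewards, values[1:], values[:-1])]
print(rpes)                          # -> [0.0, 0.0]: informative, but no net RPE

# The loophole: if we could keep imagining forever, V(imagining) would stay
# at 0 and the +3 would land as a genuine positive RPE.
print(+3.0 + 0.0 - 0.0)              # -> 3.0
```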
(We do still need some explanation of why the neocortex can imagine (predict?) food momentarily but can’t keep imagining it forever, avoid step (3), and pocket a positive RPE after step (2). Common sense suggests one: keeping such a thing up is effortful, so you’d be paying ongoing costs for a one-time gain, and unless you can keep it up forever the reward still nets to zero in the end.)
Glad to hear this is helpful for you too :)