In the static-context case, let’s first consider what happens when the switch is sitting in “defer-to-predictor mode”: Since the output is looping right back to the supervisor, there is no error in the supervised learning module. The predictions are correct. The synapses aren’t changing. Even if this situation is very common, it has no bearing on how the short-term predictor eventually winds up behaving.
I’m liking[1] this theory more and more.
One solution to a −300ms delay connected to its own input is a constant output. However, this is part of an infinite class of solutions. Any function f(t % 300ms) is a solution to this.
(Admittedly, with any error metric L_x with x > 1, the optimum solution is a constant output.)
Output stability here depends on the error gain through the loop. (Control theory is not my forte, but I believe that’s what you’d want to look into to analyze this rigorously.)
If the error gain is sub-unity, the system is stable and will converge to a constant output.
The error gain being unity is the critical value where the system is on the edge of stability.
If the error gain is super-unity, the system is unstable and will go into oscillations.
Or, to bring this back to what this means for a predictor:
Sub-unity error gain means ‘if the current input is X and the predictor predicts the input will be Y in 300ms, the predictor outputs X + (Y − X)·C, with C < 1.’
Unity error gain means ‘if the current input is X and the predictor predicts the input will be Y in 300ms, the predictor outputs X + (Y − X)·C, with C = 1.’
Super-unity error gain means ‘if the current input is X and the predictor predicts the input will be Y in 300ms, the predictor outputs X + (Y − X)·C, with C > 1.’
Super-unity error gain is ‘obviously’ suboptimal behavior for a human brain, so we’d probably end up with the error amplification tuned to under the critical value. Ditto, a predictor that systematically underestimated system changes is also “obviously” suboptimal. A ‘perfect’ predictor corresponds to unity error gain.
So all told you’d expect the predictors to be tuned to a gain that’s as close as possible to unity without going over.
...hm. Actually, predictors going haywire with a ~300ms (~3Hz) period sounds a lot like a seizure. Which would nicely explain why humans do occasionally get seizures. (Or rather, why they aren’t evolved out.) For ideal prediction you want an error gain as close as possible to unity… but too close to unity and variations in effective error gain mean that you’re suddenly overunity and get rampant 300ms oscillations.
In the sense of “it seems plausible and explains things that I haven’t heard other good explanations for”
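To make the gain-threshold picture above concrete, here is a minimal simulation sketch (a toy model of my own, not a claim about the actual circuitry): a signal fed back on itself through a pure 300ms delay with error gain C. Deviations from baseline die out for C < 1, persist with a ~300ms period for C = 1, and grow for C > 1.

```python
# Toy delay-plus-gain loop, 1 ms time steps. All numbers here are made up.
import numpy as np

def run_loop(gain, delay_ms=300, total_ms=3000):
    x = np.zeros(total_ms)
    x[:delay_ms] = np.sin(np.linspace(0, 2 * np.pi, delay_ms))  # seed transient
    for t in range(delay_ms, total_ms):
        x[t] = gain * x[t - delay_ms]   # output = gain * (signal 300 ms ago)
    return x

for gain in (0.8, 1.0, 1.2):
    tail = run_loop(gain)[-300:]        # last 300 ms of the run
    print(f"gain = {gain}: peak amplitude in last 300 ms = {abs(tail).max():.3f}")
```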
The predictor is a parametrized function output = f(context, parameters) (where “parameters” are also called “weights”). If (by assumption) context is static, then you’re running the function on the same inputs over and over, so you have to keep getting the same answer. Unless there’s an error changing the parameters / weights. But the learning rate on those parameters can be (and presumably would be) relatively low. For example, the time constant (for the exponential decay of a discrepancy between output and supervisor when in “override mode”) could be many seconds. In that case I don’t think you can get self-sustaining oscillations in “defer to predictor” mode.
Then maybe you’ll say “What if it’s static context except that there’s a time input to the context as well?” But I still don’t see how you would learn oscillations that aren’t in the exogenous data.
There could also be a low-pass filter on the supervisor side. Hmm, actually, maybe that amounts to the same thing as the slow parameter updates I mentioned above.
I think I disagree that “perfect predictors” are what’s wanted here. The input data is a mix of regular patterns and noise / one-off idiosyncratic things. You want to learn the patterns but not learn the noise. So it’s good to not immediately and completely adapt to errors in the model. (Also, there’s always learning-during-memory-replay for genuinely important things that happen only once and quickly.)
I disagree; let me try to work through where we diverge.
A 300ms predictor outputting a sine wave with period 300ms into its own supervisor input has zero error, and hence will continue to do so regardless of the learning rate.
Do you at least agree that in this scheme a predictor outputting a sine wave with period 300ms has zero error while in defer-to-predictor mode?
This is true for a standard function; this is not true once you include time. A neuron absolutely can spike every X milliseconds with a static input. And it is absolutely possible to construct a sine-wave oscillator via a function with a nonzero time delay connected to its own input.
Unfortunately, as long as the % of time spent in override mode is low you need a high learning rate or else the predictor will learn incredibly slowly.
If the supervisor spends a second a week in override mode[1], then the predictor is actively learning ~0.002% of the time.
Unfortunately, as long as each override event is relatively short a low-pass filter selectively removes all of your learning signal!
*****
You keep bouncing between a sufficiently-powerful-predictor and a simple exponential-weighted-average. Please pick one, to keep your arguments coherent. This statement of yours is only true for the latter, not the former. A powerful predictor can suddenly mode-switch in the presence of an error signal.
(For a simple example, consider a 300ms predictor trying to predict a system where the signal normally stays at 0, but if it ever goes non-zero, even by a very small amount, it will go to 1 100ms later and stay at 1 for 10s before returning to 0. As long as the signal stays at 0, the predictor will predict it stays at zero[2]. The moment the error is nonzero, the predictor will immediately switch to predicting 1.)
And for something like a ‘freeze in terror’ predictor I absolutely could see a rate that low.
Or maybe predict spikes. Meh.
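For concreteness, a minimal sketch (an invented toy example of mine) of that mode-switch point: the trained predictor is a fixed function whose weights never change, yet its output jumps discontinuously the moment the input leaves zero, because the learned rule itself is discontinuous in its input.

```python
def trained_predictor(current_signal):
    # Learned rule for the toy system described above: any nonzero excursion
    # means the signal will be at 1 shortly, so predict 1; otherwise predict 0.
    return 1.0 if current_signal != 0.0 else 0.0

for s in (0.0, 0.0, 0.001, 0.5):
    print(f"signal now = {s:<6} -> prediction 300ms ahead = {trained_predictor(s)}")
```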
Yes
Hmm, I think we’re mixing up two levels of abstraction here. At the implementation level, there are no real-valued signals, just spikes. But at the algorithm level, it’s possible that the neuron operations are equivalent to some algorithm that is most simply described in a way that does not involve any spikes, and does involve lots of real-valued signals. For example, one can vaguely imagine setups where a single spike of an upstream neuron isn’t sufficient to generate a spike on the downstream neuron, and you only get effects from a neuron sending a train of spikes whose effects are cumulative. In that case, the circuit would be basically incapable of “fast” dynamics (i.e. it would have implicit low-pass filters everywhere), and the algorithm is really best thought of as “doing operations” on average spike frequencies rather than on individual spikes.
Oh sorry if I was unclear. I was never talking about exponential weighted average. Let’s say our trained model is f(context,θ) (where θ is the parameters a.k.a. weights). Then with static context, I was figuring we’d have a differential equation vaguely like:
$$\frac{\partial \vec{\theta}}{\partial t} \propto -\nabla_{\vec{\theta}}\,\big(f(\text{context}, \vec{\theta}) - \text{supervisor}\big)^2$$
I was figuring that (in the absence of oscillations) the solution to this differential equation might look like θ(t) asymptotically approaching a limit wherein the error is zero, and I was figuring that this asymptotic approach might look like an exponential with a timescale of a few seconds.
I’m not sure if it would be literally an exponential. But probably some kind of asymptotic approach to a steady-state. And I was saying (in a confusing way) that I was imagining that this asymptotic approach would take a few seconds to get most of the way to its limit.
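As a sanity check on that intuition, here is a minimal numerical sketch of my own, with made-up numbers, for the simplest possible case f(context, θ) = θ and a fixed supervisor value; the error then decays roughly exponentially with time constant 1/(2η).

```python
eta = 0.25          # learning-rate constant per second (made-up value, so tau = 2 s)
supervisor = 1.0
theta = 0.0
dt = 0.01           # integration step, seconds

for step in range(int(6.0 / dt)):                   # integrate for 6 seconds
    theta -= dt * eta * 2 * (theta - supervisor)    # d(theta)/dt = -eta * grad of (theta - supervisor)^2
    if (step + 1) % 200 == 0:
        print(f"t = {(step + 1) * dt:.0f} s   error = {theta - supervisor:+.4f}")
```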
If we go to the Section 5.2.1.1 example of David on the ladder, the learning is happening while he has calmed down, but is still standing at the top of the ladder. I figure he probably stayed up for at least 5 or 10 seconds after calming down but before climbing down.
For example, we can imagine an alternate scenario where David was magically teleported off the ladder within a fraction of a second after the moment that he finally started feeling calm. In that scenario, I would be a lot less confident that the exposure therapy would actually stick.
By the same token, when you’re feeling scared in some situation, you’re probably going to continue feeling scared in that same situation for at least 5 or 10 seconds.
(And if not, there’s always memory replay! The hippocampus can recall both the scared feeling and the associated context 10 more times over the next day and/or while you sleep. And that amounts to the same thing, I think.)
Sorry in advance if I’m misunderstanding your comment. I really appreciate you taking the time to think it through for yourself :)
Alright, so we at least agree with each other on this. Let me try to dig into this a little further...
Consider the following (very contrived) example, for a 300ms predictor trying to minimize L2[1] norm:
Context is static throughout the below.
t=0, overrider circuit forces output=1.
t=150ms, overrider circuit switches back to loopback mode.
t=450ms, overrider circuit forces output=0.
t=600ms, overrider circuit switches back to loopback mode.
t=900ms, overrider circuit forces output=1.
etc.
Do you agree that the best a slow-learning predictor that’s a pure function f(context, θ) can do is to output a static value 0.5, for an overall error rate of, what, 0.0833…? (The exact value doesn’t matter.)
Do you agree that a “temporal-aware”[2] predictor that outputted a 300ms square wave as follows:
t=0, predictor switches output=1.
t=150ms, predictor switches output=0.
t=300ms, predictor switches output=1.
t=450ms, predictor switches output=0.
t=600ms, predictor switches output=1.
etc
...would have zero error rate[3]?
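A quick numerical check of both questions, as a sketch of my own that just encodes the schedule above literally and treats loopback mode as “the supervisor echoes the predictor, so the error there is zero by construction”:

```python
def override_value(t_ms):
    """Forced value during override mode, or None in loopback mode."""
    phase = t_ms % 900                  # the schedule above repeats every 900 ms
    if phase < 150:
        return 1.0                      # overrider forces output = 1
    if 450 <= phase < 600:
        return 0.0                      # overrider forces output = 0
    return None                         # loopback ("defer to predictor") mode

def mean_squared_error(predictor, total_ms=2700):
    total = 0.0
    for t in range(total_ms):
        forced = override_value(t)
        target = predictor(t) if forced is None else forced
        total += (predictor(t) - target) ** 2
    return total / total_ms

constant_half = lambda t: 0.5
square_wave = lambda t: 1.0 if (t % 300) < 150 else 0.0   # the "temporal-aware" predictor

print("constant 0.5 predictor:     ", round(mean_squared_error(constant_half), 4))  # ~0.0833
print("300ms square-wave predictor:", round(mean_squared_error(square_wave), 4))    # 0.0
```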
I can see why you’d say this. It’s even true if you’re just looking at e.g. a well-tuned PID controller. But even for a PID controller there are regimes where this behavior breaks down and you get oscillation[4]… and worse, the regimes where this breaks down are regimes that you’re otherwise actively tuning said controller for!
I think here is the major place we disagree. As you say, this model of these circuits is basically incapable of fast dynamics, and you keep leaning towards setups that forbid fast dynamics in general. But for something like a startle signal, you absolutely want it to be able to handle a step change in the context as a step change in the output[5].
I don’t know of a general-purpose method of predicting fast dynamics[6] that doesn’t have mode-switching regions where seemingly-small learning rates can suddenly change the output.
Almost anything would work here, really. L1 is just annoying due to the lack of a unique solution.
I am making up this term on the spot. I haven’t formalized it; I suspect one way to formalize it would be to include time % 300ms as an input like the rest of the context.
Please ignore clock skew for now.
“Normally” the feedback path is through the input->output path, not the PID parameters… but you can get oscillations in the PID-parameter path too
...and a ‘step change’ inherently has high frequency components.
Perhaps a better term might be ‘high-bandwidth’ dynamics. Predicting a 10MHz sine wave is easy. Predicting <=10kHz noise, less so.
Just to make sure we’re on the same page, I made up the “300ms” number, it could be something else.
Also to make sure we’re on the same page, I claim that from a design perspective, fast oscillation instabilities are bad, and from an introspective perspective, fast oscillation instabilities don’t happen. (I don’t have goosebumps, then 150ms later I don’t have goosebumps, then 150ms later I do have goosebumps, etc.)
Sure. But to make sure we’re on the same page, the predictor is trying to minimize L2 norm (or whatever), but that’s just one component of a system, and successfully minimizing the L2 norm might or might not correspond to the larger system performing well at its task. So “zero error rate” doesn’t necessarily mean “good design”.
Sorry, I’m confused. There’s an I and a D? I only see a P.
It seems to me that you can start a startle reaction quickly (small fraction of a second), but you can’t stop a startle quickly. Hmm, maybe the fastest thing the amygdala does is to blink (mostly <300ms), but if you’re getting 3 blink-inducing stimuli a second, your brainstem is not going to keep blinking 3 times a second; instead it will just pinch the eyes shut and turn away, or something. (Source: life experience.) (Also, I can always pull out the “Did I say 300ms prediction? I meant 100ms” card…)
If the supervisor is really tracking the physiological response (sympathetic nervous system response, blink reaction, whatever), and the physiological response can’t oscillate quickly (even if its rise-time by itself is fast), then likewise the supervisor can’t oscillate quickly, right? Think of it like: once I start a startle-reaction, then it flips into override mode for a second, because I’m still startle-reacting until the reaction finishes playing out.
Hmm, I think I want to forbid fast updates of the adjustable parameters / weights (synapse strength or whatever), and I also want to stay very very far away from any situation where there might be fast oscillations that originate in instability rather than already being present in exogenous data. I’m open to a fast dynamic where “context suddenly changes, and then immediately afterwards the output suddenly changes”. If I said something to the contrary earlier, then I have changed my mind! :-)
And I continue to believe that these things are all compatible: you can get the “context suddenly changes → output suddenly changes” behavior, without going right to the edge of unstable oscillations, and also without fast (sub-second) parameter / weight / synapse-strength changes.
Sure; the further you get away from ~300ms the less the number makes sense for e.g. predicting neuron latency, as described earlier.
I absolutely agree that most of the time oscillations don’t happen. That being said, oscillations absolutely do happen in at least one case—epilepsy. I remain puzzled that evolution “allows” epilepsy to happen, and epilepsy being a breakdown that does allow ~300ms oscillations to happen, akin to feedback in audio amplifiers, is a better explanation for this than I’ve heard elsewhere.
A generic overdamped PID controller will react to a step-change in its input via (vaguely)-exponential decay towards the new value[1].
Even for a non-overdamped PID controller the magnitude of the tail decreases exponentially with time. (So long as said PID controller is stable at least.)
You are correct that all that is necessary for a PID controller to react in this fashion is a nonzero P term.
Absolutely; a step change followed by a decay still has high-frequency components. (This is the same thing people forget when they route ‘slow’ clocks with fast drivers and then wonder why they are getting crosstalk on other signals and high-frequency interference in general.)
Your slow-responding predictor is going to have a terrible effective reaction time, is what I’m trying to say here: you’re filtering out the high-frequency components of the prediction error, and so the rising edge of your prediction error gets filtered from a step change to something closer to a sigmoid that takes quite a while to get to full amplitude… which in turn means that what the predictor learns is not a step-change followed by a decay. It learns the output of a low-pass filter on said step-change followed by a decay, a.k.a. a slow rise and decay.
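Here is a small sketch of that claim, with toy numbers of my own: a “startle” error signal that steps up and then decays with a 300ms time constant, pushed through a first-order low-pass filter with an assumed 200ms time constant. The raw signal is essentially at full amplitude 10ms after onset; the filtered version is nowhere close, i.e. the fast edge is gone.

```python
import numpy as np

dt = 0.001                                        # 1 ms steps
t = np.arange(0.0, 2.0, dt)                       # 2 seconds of signal
startle = (t >= 0.5) * np.exp(-(t - 0.5) / 0.3)   # step at 500 ms, 300 ms decay

tau = 0.2                                         # assumed low-pass time constant (200 ms)
filtered = np.zeros_like(startle)
for i in range(1, len(t)):
    filtered[i] = filtered[i - 1] + (dt / tau) * (startle[i] - filtered[i - 1])

i_10ms = int(0.510 / dt)                          # 10 ms after the step
print("raw signal 10 ms after onset:     ", round(float(startle[i_10ms]), 3))
print("filtered signal 10 ms after onset:", round(float(filtered[i_10ms]), 3))
print("filtered peak (vs. raw peak of ~1):", round(float(filtered.max()), 3))
```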
Right. Which brings me back to my puzzle: why does epilepsy continue to exist?
(Do you at least agree that, were there some mechanism where there was enough feedback/crosstalk such that you did get oscillations, it might look something like epilepsy?)
Can you please give an example of a general-purpose function estimator that, when plugged into this pseudo-TD system, both:
1. Can learn “most[2]” functions
2. Has a low-and-bounded learning rate regardless of current parameters, such that |dOut/dFeedback| < 1 (after a single update, that is).
I know of schemes that achieve 1, and schemes that achieve 2. I don’t know of any schemes that achieve both offhand[3].
*****
Thank you again for going back and forth with me on this by the way. I appreciate it.
...or some offset from the new value, in some cases.
I’m not going to worry too much if e.g. there’s a single unstable pathological case.
LReLU violates 2. LReLU with regularization violates 1. Etc.
I must have missed that part; can you point more specifically to what you’re referring to?
I think practically anywhere in the brain, if A connects to B, then it’s a safe bet that B connects to A. (Certainly for regions, and maybe even for individual neurons.) Therefore we have the setup for epileptic seizures, if excitation and inhibition are not properly balanced.
Or more generically, if X% of neurons in the brain are active at time t, then we want around X% of neurons in the brain to be active at time t+1. That means that we want each upstream neuron firing event to (on average) cause exactly one net downstream neuron to fire. But individual neurons have their own inputs and outputs; by default, there seems to be a natural failure mode where the upstream neurons excite not-exactly-one downstream neuron, and we get exponential growth (or decay).
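That “exactly one net downstream firing” condition is a branching process sitting on a knife-edge; a tiny sketch of my own to show how sharp it is:

```python
def active_after(r, steps=20, start=1000):
    """Average number of active neurons after `steps` synaptic delays,
    if each firing triggers r net downstream firings on average."""
    active = float(start)
    for _ in range(steps):
        active *= r
    return active

for r in (0.95, 1.00, 1.05):
    print(f"R = {r:.2f}: ~{active_after(r):,.0f} active after 20 steps (started at 1,000)")
```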
My impression is that there are lots of mechanisms to balance excitation and inhibition—probably different mechanisms in different parts of the brain—and any of those mechanisms can fail. I’m not an epilepsy expert by any means (!!), but at a glance it does seem like epilepsy has a lot of root causes and can originate in lots of different brain areas, including areas that I don’t think are doing this kind of prediction, e.g. temporal lobe and dorsolateral prefrontal cortex and hippocampus.
I still think you’re incorrectly mixing up the time-course of learning (changes to parameters / weights / synapse strengths) with the time-course of an output following a sudden change in input. I think they’re unrelated.
To clarify our intuitions here, I propose to go to the slow-learning limit.
However fast you’ve been imagining the parameters / weights / synapse strength changing in any given circumstance, multiply that learning rate by 0.001. And simultaneously imagine that the person experiences everything in their life with 1000× more repetitions. For example, instead of getting whacked by a golf ball once, they get whacked by a golf ball 1000× (on 1000 different days).
(Assume that the algorithm is exactly the same in every other respect.)
I claim that, after this transformation (much lower learning rate, but proportionally more repetitions), the learning algorithm will build the exact same trained model, and the person will flinch the same way under the same circumstances.
(OK, I can imagine it being not literally exactly the same, thanks to the details of the loss landscape and gradient descent etc., but similar.)
Your perspective, if I understand it, would be that this transformation would make the person flinch more slowly—so slowly that they would get hit by the ball before even starting to flinch.
If so, I don’t think that’s right.
Every time the person gets whacked, there’s a little interval of time, let’s say 50ms, wherein the context shows a golf ball flying towards the person’s face, and where the supervisor will shortly declare that the person should have been flinching. That little 50ms interval of time will contribute to updating the synapse strengths. In the slow-learning limit, the update will be proportionally smaller, but OTOH we’ll get that many more repetitions in which the same update will happen. It should cancel out, and it will eventually converge to a good prediction, F(ball-flying-towards-my-face) = I-should-flinch.
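Here is a minimal numerical version of that cancellation argument, as a sketch of my own on a toy quadratic loss rather than anything brain-like: (learning rate η, N identical updates) lands essentially where (η/1000, 1000·N updates) does.

```python
def train(lr, n_updates, theta=0.0, target=1.0):
    for _ in range(n_updates):
        theta -= lr * 2 * (theta - target)   # gradient step on (theta - target)^2
    return theta

fast = train(lr=0.01, n_updates=100)
slow = train(lr=0.01 / 1000, n_updates=100 * 1000)
print(f"lr = 0.01,    100 updates:     theta = {fast:.4f}")
print(f"lr = 0.00001, 100,000 updates: theta = {slow:.4f}")   # nearly the same
```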
And after training, even if we lower the learning rate all the way down to zero, we can still get fast flinching at appropriate times. It would only be a problem if the person changes hobbies from golf to swimming—they wouldn’t learn the new set of flinch cues.
Sorry if I’m misunderstanding where you’re coming from.
If you take any solution to 1, and multiply the learning rate by 0.000001, then it would satisfy 2 as well, right?
It feels wrong to refer you back to your own writing, but much of part 4 was dedicated to talking about these short-term predictors being used to combat neural latency and to do… well, short-term predictions. A flinch detector that goes off 100ms in advance is far less useful than a flinch detector that goes off 300ms in advance, but at the same time a short-term predictor that predicts too far in advance leads to feedback when used as a latency counter (as I asked about/noted in the previous post).
(It’s entirely possible that different predictors have different prediction timescales… but then you’ve just replaced the problem with a meta-problem. Namely: how do predictors choose the timescale?)
1x the training data with 1x the training rate is not equivalent to 1000x the training data with 1/1000th of the training rate. Nowhere near. The former is a much harder problem, generally speaking.
(And in a system as complex and chaotic as a human there is no such thing as repeating the same datapoint multiple times… related data points yes. Not the same data point.)
(That being said, 1x the training data with 1x the training rate is still harder than 1x the training data with 1/1000th the training rate, repeated 1000x.)
You appear to be conflating two things here. It’s worth calling them out as separate.
Putting a low-pass filter on the learning feedback signal absolutely does cause something to learn a low-passed version of the output. Your statement “In that case, the circuit would be basically incapable of “fast” dynamics (i.e. it would have implicit low-pass filters everywhere),” doesn’t really work, precisely because it leads to absurd conclusions. This is what I was calling out.
A low learning rate is something different. (That has other problems...)
My apologies, and you are correct as stated; I should have added something on few-shot learning. Something like a flinch detector likely does not fire 1,000,000x in a human lifetime[1], which means that your slow-learning solution hasn’t learnt anything significant by the time the human dies, and isn’t really a solution.
I am aware that 1m is likely just you hitting ‘0’ a bunch of times; humans are great few-shot (and even one-shot) learners. You can’t just drop the training rate or else your examples like ‘just stand on the ladder for a few minutes and your predictor will make a major update’ don’t work.
My flinch reflex works fine and I’d put a trivial upper-bound of 10k total flinches (probably even 1k is too high). (I lead a relatively quiet life.)
Oh, hmm. In my head, the short-term predictors in the cerebellum are for latency-reduction and discussed in the last post, and meanwhile the short-term predictors in the telencephalon (amygdala & mPFC) are for flinching and discussed here. I think the cerebellum short-term predictors and the telencephalon short-term predictors are built differently for different purposes, and once we zoom in beyond the idea of “short-term prediction” and start talking about parameter settings etc., I really don’t lump them together in my mind, they’re apples and oranges. In the conversation thus far, I thought you were talking about the telencephalon (amygdala & mPFC) ones. If we’re talking about instability from the cerebellum instead, we can continue the Post #4 thread.
~
I think I said some things about low-pass filters up-thread and then retracted it later on, and maybe you missed that. At least for some of the amygdala things like flinching, I agree with you that low-pass filters seem unlikely to be part of the circuit (well, depending on where the frequency cutoff is, I suppose). Sorry, my bad.
~
A common trope is that the hippocampus does one-shot learning in a way that vaguely resembles a lookup table with auto-associative recall, whereas other parts of the cortex learn more generalizable patterns more slowly, including via memory recall (i.e., gradual transfer of information from hippocampus to cortex). I’m not immediately sure whether the amygdala does one-shot learning. I do recall a claim that part of PFC can do one-shot learning, but I forget which part; it might have been a different part than we’re talking about. (And I’m not sure if the claim is true anyway.) Also, as I said before, with continuous-time systems, “one shot learning” is hard to pin down; if David Burns spends 3 seconds on the ladder feeling relaxed, before climbing down, that’s kinda one-shot in an intuitive sense, but it still allows the timescale of synapse changes to be much slower than the timescale of the circuit. Another consideration is that (I think) a synapse can get flagged quickly as “To do: make this synapse stronger / weaker / active / inactive / whatever”, and then it takes 20 minutes or whatever for the new proteins to actually be synthesized etc. so that the change really happens. So that’s “one-shot learning” in a sense, but doesn’t necessarily have the same short-term instabilities, I’d think.
To add to this a little: I think it likely that the gain would be dynamically tuned by some feedback system or another[1]. In order to tune said gain however you need a non-constant signal to be able to measure the gain to adjust it.
...Hm. That sounds a lot like delta waves during sleep. Switch to open-loop operation, disable learning[2], suppress output, input a transient, measure the response, and adjust gain accordingly. (Which would explain higher seizure risk with a lack of sleep...)
Too much variance to be able to hardcode the gain, I’d imagine.
Ideally.
This makes too much sense. I’m somewhat concerned that I’m off in the weeds as a result.
Do these comments make sense to people that aren’t me?
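To make that calibration idea slightly more concrete, here is a deliberately speculative sketch; every name and number in it is invented for illustration. The idea: open the loop, inject a probe transient, measure how much of it comes back one delay period later, and nudge the gain toward a just-under-unity target.

```python
def measure_loop_gain(gain, probe=1.0):
    """Open-loop measurement: inject a transient and see how much of it
    comes back one delay period later (learning and output suppressed)."""
    return (gain * probe) / probe

def calibrate(gain, target=0.95, rate=0.5, rounds=10):
    for _ in range(rounds):
        gain += rate * (target - measure_loop_gain(gain))   # nudge toward the target
    return gain

print(round(calibrate(gain=1.10), 3))   # gain that drifted over unity gets pulled back
print(round(calibrate(gain=0.70), 3))   # gain that drifted low gets pulled up
```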