It shouldn’t theoretically be the case that false beliefs lead to better predictions than true beliefs, so I guess when memory doesn’t optimize for accuracy, there has to be a different bias that it’s canceling out?
(edited to add something that needs to be said from time to time: when I say “theoretically” I don’t mean “according to the correct theory”, but “according to a simple and salient theory that isn’t exactly right”)
False beliefs lead to better predictions if they keep you safe. The probability of being attacked by a crocodile at the riverbank might be low, but this doesn’t mean you shouldn’t act as if you’re going to be attacked.
Perhaps I should have emphasized the part where the predictions are for the purpose of making decisions. Really, you could say that memory IS a decision-making system, or at least a decision-support database. What we store for later recall, and what we recall, are based on what evolutionarily “works”, rather than on theoretically correct probabilities. Evolution is a biased Bayesian, because some probabilities matter more than others.
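To make the crocodile case concrete, here is a minimal expected-utility sketch in Python; the payoff numbers are entirely made up for illustration, and only the asymmetry between them matters:

```python
# Minimal expected-utility sketch for the crocodile example.
# All numbers are invented for illustration; only the asymmetry matters.

p_attack = 0.01        # low probability of a crocodile attack at the riverbank
cost_attack = -1000.0  # huge cost (injury or death) if attacked while careless
cost_caution = -1.0    # small fixed cost of always acting as if an attack is coming

# Expected utility of ignoring the risk vs. always being cautious:
eu_careless = p_attack * cost_attack  # -10.0
eu_cautious = cost_caution            #  -1.0

print(f"careless: {eu_careless}, cautious: {eu_cautious}")
# Caution wins even though an attack is unlikely, because the payoffs are asymmetric.
```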
You may be able to afford to forget the sky’s color and not be able to afford to forget about poisonous snakes, but that doesn’t mean you should increase your probability estimate of encountering a poisonous snake, or decrease your estimate of the sky being blue. Some parts of the map are known to matter more than others, but that doesn’t make it a good idea to systematically distort the picture.
Er, what does “should” mean, here? My comments in this thread are about how brains actually work, not how we might prefer them to work.
Bear in mind that evolution doesn’t get to do “should”—it does “what works now”. If you have to evolve a working system, it’s easier to start by using memory as a direct activation system. To consider probabilities in the way you seem to be describing, you have to have something that then evaluates those probabilities. It’s a lot simpler to build a single mechanism that incorporates both the probabilities and the decision-making strategy, all rolled into one.
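A toy sketch of that contrast, with hypothetical situations and numbers rather than anything the brain literally computes: the rolled-into-one mechanism is just a lookup from situation to action, while the explicit version stores probabilities and needs a separate evaluator to combine them with utilities:

```python
# Toy contrast between the two architectures described above.
# Situations, probabilities, and utilities are all hypothetical placeholders.

# (1) "Rolled into one": memory maps a situation straight to an action,
#     with the probability and the stakes already baked into the mapping.
cached_reaction = {
    "rustling near riverbank": "back away",
    "clear blue sky": "ignore",
}

def act_from_cache(situation):
    return cached_reaction.get(situation, "ignore")

# (2) Explicit probabilities: memory stores estimates, and a separate
#     evaluator combines them with utilities before choosing an action.
probability = {"crocodile attack": 0.01}
utility = {"crocodile attack": -1000.0, "backing away": -1.0}

def act_from_model(event):
    expected_loss = probability[event] * utility[event]
    return "back away" if expected_loss < utility["backing away"] else "ignore"

print(act_from_cache("rustling near riverbank"))  # back away
print(act_from_model("crocodile attack"))         # back away
```

Both mechanisms produce the same behavior here; the difference is that only the second one keeps the probability around as something a further process could inspect and correct.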
Sure, but in this case you can’t easily interpret that strange combined decision-making mechanism in terms of probabilities. Probabilities-utilities is a mathematical model that we understand, unlike the semantics of the brain’s workings. The model can be used explicitly to correct intuitively drawn decisions, so it’s a good idea to learn, at least intuitively, to interface between these two modes.
In conclusion, the “should” refers to how you should strive to interpret your memory in terms of probabilities. If you know that in certain situations you are overestimating the probabilities of events, you should try to correct for the bias. If your mind tells you “often!”, and you know that in situations like this your mind lies, then “often!” means rarely.
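A minimal sketch of what that correction might look like once you are in the explicit mode; the tenfold inflation factor and the payoffs are hypothetical placeholders, not a calibrated model:

```python
# Sketch of correcting a known overestimation bias before deciding.
# The inflation factor and payoffs are hypothetical placeholders.

def corrected_probability(gut_estimate, known_inflation=10.0):
    # If your mind is known to exaggerate roughly tenfold in this kind of
    # situation, shrink the felt probability before using it.
    return min(1.0, gut_estimate / known_inflation)

def expected_utility(p, u_event, u_baseline=0.0):
    return p * u_event + (1 - p) * u_baseline

felt = 0.5                       # "often!" -- what the snap judgment says
p = corrected_probability(felt)  # 0.05 -- what "often!" actually means here
print(expected_utility(p, u_event=-100.0))  # -5.0 instead of -50.0
```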
Probabilities-utilities is a mathematical model that we understand, unlike the semantics of the brain’s workings.
I prefer to work harder on understanding the brain’s semantics, since we don’t really have the option of replacing them at the moment.
In conclusion, the “should” refers to how you should strive to interpret your memory in terms of probabilities.
That makes it sound like I have a choice. In practice, only when I have time to reflect do I have the option of “interpreting” my memory.
Under normal circumstances, we act in ways that are directly determined by the contents of our memories, without any intermediary. It’s only the verbal rationalizations of the Gossip that make it sound like we could have chosen differently.
Thus, I benefit more from altering the memories that generate my actions, in order to produce the desired behaviors automatically… instead of trying to run every experience in my life through a “rational” filtering process.
If your mind tells you “often!”, and you know that in situations like this your mind lies, then “often!” means rarely.
That’s only relevant insofar as it relates to my choice of actions. I don’t care what “right” is—I care what the right thing to do is. So in that at least, I agree with my brain. ;-)
But my concern is more with what goes in, and with changing what’s currently stored, than with trying to correct things on the fly as they come out. The way most of our biases manifest, they affect what goes into the cache more than they affect what comes out. And that means we have the option of implementing “software patches” for the bugs the hardware introduces, instead of needing to do manual workarounds or wait for a hardware upgrade capability.