Er, what does “should” mean, here? My comments in this thread are about how brains actually work, not how we might prefer them to work.
Bear in mind that evolution doesn’t get to do “should”—it does “what works now”. If you have to evolve a working system, it’s easier to start by using memory as a direct activation system. To consider probabilities in the way you seem to be describing, you have to have something that then evaluates those probabilities. It’s a lot simpler to build a single mechanism that incorporates both the probabilities and the decision-making strategy, all rolled into one.
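To make that contrast concrete, here is a toy sketch in Python; the function names, structure, and numbers are invented for illustration, not a claim about how any actual mechanism is implemented.

    # Illustrative sketch only; nothing here models a real brain mechanism.

    # Design A: a separate probability estimate, handed to a separate evaluator.
    def decide_two_stage(situation, estimate_probability, utilities):
        p = estimate_probability(situation)  # explicit "how likely?" step
        expected = {
            action: p * u_if_event + (1 - p) * u_if_not
            for action, (u_if_event, u_if_not) in utilities.items()
        }
        return max(expected, key=expected.get)  # pick the highest expected utility

    # Design B: one rolled-together mechanism -- memory activates an action directly,
    # with whatever "probability" information exists baked into the stored association.
    def decide_one_stage(situation, memory):
        return memory[situation]  # direct lookup, no intermediate estimate to inspect

The point of the contrast: in Design A there is an explicit probability you could examine and correct; in Design B there is nothing to interpret except the behavior itself.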
Sure, but in this case you can’t easily interpret that strange combined decision-making mechanism in terms of probabilities. Probabilities-utilities is a mathematical model that we understand, unlike the semantics of the brain’s workings. The model can be used explicitly to correct intuitively reached decisions, so it’s a good idea to learn, at least intuitively, to interface between the two modes.
In conclusion, the “should” refers to how you should strive to interpret your memory in terms of probabilities. If you know that in certain situations you are overestimating the probabilities of events, you should try to correct for the bias. If your mind tells you “often!”, and you know that in situations like this your mind lies, then “often!” means rarely.
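As a minimal sketch of that correction (the calibration factor and utilities are invented numbers, not a recommendation):

    # Illustrative sketch only; the numbers are made up.

    def corrected_probability(felt_probability, overestimation_factor):
        """Deflate a gut 'often!' estimate in situations where you know it runs high."""
        return min(max(felt_probability / overestimation_factor, 0.0), 1.0)

    def expected_utility(p, utility_if_event, utility_if_not):
        return p * utility_if_event + (1 - p) * utility_if_not

    # "Often!" feels like 0.6, but in situations like this intuition is known to
    # overstate the frequency badly, so treat it as roughly 0.06 before deciding.
    p = corrected_probability(0.6, overestimation_factor=10)
    print(expected_utility(p, utility_if_event=100, utility_if_not=-5))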
Probabilities-utilities is a mathematical model that we understand, unlike the semantics of the brain’s workings.
I prefer to work harder on understanding the brain’s semantics, since we don’t really have the option of replacing them at the moment.
In conclusion, the “should” refers to how you should strive to interpret your memory in terms of probabilities.
That makes it sound like I have a choice. In practice, only when I have time to reflect do I have the option of “interpreting” my memory.
Under normal circumstances, we act in ways that are directly determined by the contents of our memories, without any intermediary. It’s only the verbal rationalizations of the Gossip that make it sound like we could have chosen differently.
Thus, I benefit more from altering the memories that generate my actions, in order to produce the desired behaviors automatically… instead of trying to run every experience in my life through a “rational” filtering process.
If your mind tells you “often!”, and you know that in situations like this your mind lies, then “often!” means rarely.
That’s only relevant insofar as it relates to my choice of actions. I don’t care what “right” is—I care what the right thing to do is. So in that at least, I agree with my brain. ;-)
But my concern is more with what goes in, and with changing what’s currently stored, than with trying to correct things on the fly as they come out. The way most of our biases manifest, they affect what goes into the cache more than they affect what comes out. And that means we have the option of implementing “software patches” for the bugs the hardware introduces, instead of needing to do manual workarounds, or wait for a hardware upgrade capability.
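A toy sketch of the cache analogy (the cache, its contents, and the debias helper are all invented for illustration): correcting the stored entry once means ordinary lookups return the corrected value, without a filter on every read.

    # Illustrative sketch of the analogy only; not a model of memory.
    cache = {"asking for a raise": "often goes badly"}  # biased entry written earlier

    def debias(value):
        # stand-in for whatever explicit correction you would apply on reflection
        return value.replace("often", "rarely")

    # "Manual workaround": filter on the way out, every single time you recall it.
    def recall_with_filter(key):
        return debias(cache[key])

    # "Software patch": rewrite what is stored, so plain recall is already corrected.
    def patch_memory(key):
        cache[key] = debias(cache[key])

    patch_memory("asking for a raise")
    print(cache["asking for a raise"])  # later direct lookups need no extra filtering step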