However, it is not known whether stored memories (of either type) actually are stored individually or not; there are many competing models for how a memory is stored and recalled, down to the lowest level of neurons ("lowest" in quotes, since there may be no lowest level in reality).
How memories are actually stored doesn’t matter directly.
That said, I was only asking about other people’s intuitive sense of what works better.
Yes, and I don’t think that the distinction that the question makes is a useful one.
Anybody who has used Anki for a while actually has hard data on which information takes a lot of repetitions to remember and which doesn't. It's not a matter of intuition, or of guessing what's hard to remember based on some model of what neurons do.
Given how often objective Anki statistics differ from people's intuitions about what costs them time, I would also not put too much stock into pure intuition based on a few anecdotes you have in your mind, because the vast bulk of the information that you successfully remember never makes it into your anecdotes.
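For what it's worth, the "hard data" meant here falls straight out of a review log. A minimal sketch (the log format and card names below are illustrative, not Anki's actual schema, which keeps reviews in a `revlog` table):

```python
from collections import Counter

# Hypothetical review log: (card_id, passed) per review, in order.
review_log = [
    ("card_kanji_1", False), ("card_kanji_1", False), ("card_kanji_1", True),
    ("card_capital_fr", True),
    ("card_formula_2", False), ("card_formula_2", True),
]

# Total reviews and failed reviews (lapses) per card: cards with high
# counts are objectively hard, whatever intuition says about them.
reviews_per_card = Counter(card for card, _ in review_log)
lapses_per_card = Counter(card for card, passed in review_log if not passed)

hardest = max(reviews_per_card, key=reviews_per_card.get)
print(hardest, reviews_per_card[hardest], lapses_per_card[hardest])
# → card_kanji_1 3 2
```

The point being that "which information takes a lot of repetitions" is a counting exercise over logged reviews, not a judgment call.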
How memories are stored certainly matters; it is too much of an assumption that the levels are sealed off from one another. Such an assumption may be implicitly negated in a model, but obviously that doesn't mean anything has changed in the system itself; material systems have this issue, unlike mathematical ones.
Another poignant property of material systems is that at times there is a special status of observer for them. In the case of the mind, you have the consciousness of the person, and while it can certainly be juxtaposed with other instances of it, that is a different relation from the one which would allow carefree use of the term "anecdote". Note "special", which in no way means infallible or anything of that class, but which does connote a qualitative difference: apart from the other means of observation (those available to everyone else, like the tool you mentioned), there is also the sense through consciousness itself, which for brevity I referred to here as intuition.
Of course consciousness itself is problematic as an observer. But it is used, in a different capacity, in all other input procedures, since you need an observer to take those in as well. If one treats consciousness as a black box which acts with built-in biases, it is too much to believe those biases are cancelled out simply by using it as an observer of another type of input. It's because of this (particular) loop that posing a question about intuition is not without merit.
How memories are stored certainly matters; it is too much of an assumption that the levels are sealed off from one another.
It matters as much for this discussion as the physics of a transistor matters for programming computers.
If one treats consciousness as a black box which acts with built-in biases, it is too much to believe those biases are cancelled out simply by using it as an observer of another type of input.
Biases aren’t a black box. One can understand where human intuition is good and where it isn’t by looking at empirical feedback.
In general, people who engage with empirical reality don't spend much time talking about the problem of consciousness in their fields.
Machine language is a known lower level; neurons aren't known to be one. Perhaps in the future more microscopic building blocks will be examined; maybe there is no end to the division at all.
In a computer it would indeed make no sense for a programmer to examine anything below machine language, since you are compiling down to it or otherwise acting upon it. But there is no known isomorphism between that and the mind.
If you'd like a parallel to the above from the history of philosophy, you might be interested in comparing dialectic reasoning with Aristotelian logic. It's not by accident that Aristotle explicitly argued that for any system to include the means to prove something, it has to be set up with at least one axiom: the impossibility of anything simultaneously including and not including a quality (in logic you'd more often see this as ¬(A∧¬A)). Proof isn't there in dialectics, not past some level, exactly because no lowest level is built into the system. In dialectics (Parmenides, Zeno, etc.), this is explicitly argued against, the possibility of infinite division of matter being one of their premises.
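As a side note, the contrast can be made precise in a modern proof assistant: non-contradiction, the principle Aristotle insists on, is a theorem once you have any proof system at all, whereas excluded middle has to be assumed. A sketch in Lean 4 (the theorem name is mine):

```lean
-- Non-contradiction: nothing both has and lacks a property.
-- ¬(A ∧ ¬A) unfolds to (A ∧ ¬A) → False, so given h : A ∧ ¬A
-- we apply the refutation h.2 : ¬A to the witness h.1 : A.
theorem non_contradiction (A : Prop) : ¬(A ∧ ¬A) :=
  fun h => h.2 h.1

-- By contrast, excluded middle (A ∨ ¬A) is not derivable
-- constructively; Lean provides it as the axiom-backed Classical.em.
#check @Classical.em  -- ∀ (p : Prop), p ∨ ¬p
```

This is only an illustration of the asymmetry between the two principles, not a claim about what Aristotle himself formalized.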
Machine language is a known lower level; neurons aren’t; perhaps in the future there will be more microscopic building blocks examined; maybe there is no end to the division itself.
That doesn't change the fact that models of neither are of much use for most practical applications. If you do gene therapy with the goal of changing cognition, it helps to understand what neurons do. If you care about how to memorize information, it's irrelevant, and you should rather focus on the empirics of what happens when humans memorize information.
It’s not by accident that Aristotle explicitly argued that for any system to include the means to prove something
Aristotle knew little about how to do science and learn through empiricism, and today we have a much better idea of how to learn about the world than we had back then. Thinking in thousand-year-old terms while ignoring recent advances in how to gather knowledge is ineffective.