My goal is not to define a “true” model of the brain; my goals are about doing useful things with the brain. The model I have exists to serve the results, not the other way around.
Just to check, you agree that to be useful any model of the brain has to correspond to how the brain actually works? To that extent, you are seeking a true model. However, if I understand you correctly, your model is a highly compressed representation of how the mind works, so it might not superficially resemble a more detailed model. If this is correct, I can empathize with your position here: any practically useful model of the brain has to be highly compressed, but at this high level of compression, accurate models are mostly indistinguishable from bullshit at first glance.
I am still very unsure about the accuracy of what you are propounding, but anecdotally your comments here have been useful to me.
Just to check, you agree that to be useful any model of the brain has to correspond to how the brain actually works?
No, it only has to produce the same predictions that a “corresponding” model would, within the area of useful application.
Note, for example, that the original model of electricity is backwards: Benjamin Franklin thought the charge carriers flowed out of the “positive” end of a battery, but once electrons were discovered we found out it was the other way ’round.
Nonetheless, this mistake did not keep electricity from working!
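As an aside (this numerical sketch is mine, not part of the original comment): the Franklin example can be made concrete. Two models that disagree about which way the charge carriers move still yield identical predictions for anything you can actually observe, such as the power dissipated in a resistor.

```python
# Sketch: two sign conventions for current in a simple resistive circuit.
# They disagree about the (unobservable) direction the carriers move,
# yet predict the same observable quantity -- power dissipated.

def power_dissipated(voltage, resistance, carrier_sign):
    """Predicted power (watts) in a resistor under a chosen convention.

    carrier_sign = +1: Franklin's convention (carriers leave the "+" end)
    carrier_sign = -1: electron flow (carriers leave the "-" end)
    """
    current = carrier_sign * voltage / resistance  # signed current (amps)
    return current ** 2 * resistance               # P = I^2 * R; sign cancels

franklin_model = power_dissipated(9.0, 3.0, +1)
electron_model = power_dissipated(9.0, 3.0, -1)
assert franklin_model == electron_model == 27.0  # same prediction either way
```

Within the “area of useful application” (anything measurable), the wrong-sign model is indistinguishable from the right one; only the story about what the charges are doing differs.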
Now, let’s compare to the LoA (“Law of Attraction”) people, who claim that there is a mystical law of the universe that causes nice thoughts to attract nice things. This notion is clearly false… and yet some people are able to produce results that make it seem true.
So, while I would prefer to have a “true” model that explains the results (and I think I have a more-parsimonious model that does), this does not stop anyone from making use of the “false” model to produce a result, as long as they don’t allow their knowledge of its falsity to interfere with them using it.
See also dating advice, i.e., “pickup”—some schools of pickup have models of human behavior which may be false, yet still produce results. Others have refined those models to be more parsimonious, and produced improved results.
Yet all the models produce results for some people—most likely the people who devote their efforts to application first, critique second… rather than the other way around.
but at this high level of compression, accurate models are mostly indistinguishable from bullshit at first glance.
A model can actually BE bullshit and still produce valuable results! It’s not that the model is too compressed; it’s that it includes excessive description.
For example, the LoA is bullshit because it’s just a made-up explanation for a real phenomenon. If all the LoA people said was, “look, we found that if we take this attitude and think certain thoughts in a certain way, we experience increased perception of ways to exploit circumstances to meet our goals, and increased motivation to act on these opportunities”, then that would be a compressed model!
NLP is such a model over a slightly different sphere, in that it says, “when we act as if this set of ideas (the presuppositions) are true, we are able to obtain thus-and-such results.” It is more parsimonious than the LoA and pickup models, in that it explicitly disclaims being a direct description of “reality”.
In particular, NLP explicitly says that the state of mind of the person doing things must be taken into account: if you are not willing to commit to acting as-if the presuppositions are true, you will not necessarily obtain the same results. (However, this does not mean you need to believe the presuppositions are true, any more than the actor playing Hamlet on stage needs to believe his father has been murdered!)
Now, I personally do believe that portions of the NLP model, and most of mine, do in fact reflect reality in some way. But I don’t care much whether this is actually the case, or whether it has any bearing on whether the model is useful. It’s clearly useful to me and lots of other people, so it would be irrational for me to worry about whether it’s also “true”.
However, in the event that science discovers that NLP or I have the terminals labeled backwards, I’ll happily update, as I’ve already happily updated whenever any little bit of experimental data offers a better explanation for one of my puzzling edge cases, or a better evolutionary hypothesis for why something works in a certain way, etc.
But I don’t make these updates for the sake of truth; I make them for the sake of usefulness.
A more convincing evolutionary explanation is useful for my writing, as it gives a better reason for suspending disbelief. Better explanations of certain brain processes (e.g., the memory reconsolidation hypothesis, affective asynchrony, near/far, the somatic marker hypothesis, etc.) are also useful for refining procedural instructions and my explanations for why you have to do something in a particular way for it to work. (For example, memory reconsolidation studies explain why you need to access a memory to change it—a practical truth I discovered for myself in 2006.)
In a sense, these are less updates to the real model (do X to get Y), and more updates to the story or explanation that surrounds the model. The real model is that “if I act as if these things or something like them are true, and perform these other steps, then these other results reliably occur”.
And that model can’t be updated by somebody else’s experiment. All they can possibly change is the explanation for how I got the results to occur.
Meanwhile, if you’re looking for “the truth”, we don’t have the “real” model of what lies under NLP or hypnosis or LoA or my work, and I expect we won’t have it for at least another decade or two. Reconsolidation has been under study for about a decade now, I believe, likewise the roots of affective asynchrony and the SMH. A few of these are still in the “promising hypothesis, but still needs more support” stage.
But the things they’re trying to describe already exist, whether we have the words yet to describe them or not. And if you have something more important to protect than “truth”, you probably can’t afford to wait another decade or two for the research, any more than you’d wait that long for a reverse-engineered circuit diagram before you tried turning on your TV.
If all the LoA people said was, “look, we found that if we take this attitude and think certain thoughts in a certain way, we experience increased perception of ways to exploit circumstances to meet our goals, and increased motivation to act on these opportunities”, then that would be a compressed model!
By the way, the technique given in my thoughts-into-action video is based on extracting precisely the above notion, and reproducing the effect on a small scale, with a short timeframe, and without resorting to mysticism or “quantum physics”.
IOW, the people who successfully used the technique therein have already experienced an “increased perception of ways to exploit the circumstances (of a messy desk) to meet the goal (of a clean one), and increased motivation to act on those opportunities”.