It seems that our models are computationally equivalent. After all, a state machine with arbitrary extensible memory is Turing complete, and with adaptive response to the environment it is a complex adaptive system, whatever model you have of it.
I have spent a great deal of time and reasoning on developing models of people in such a way. So, with my cognitive infrastructure, it makes more sense to model appropriately complex adaptive systems as people-like systems. Obviously you are more comfortable with computer science models.
But the danger with models is that they are always limiting in what they can reveal.
In the case of this example, I find it unsurprising that while you have extended the lookup table to include the potential to reincorporate previously seen solutions, you avoid the subject of novel solutions being generated, even by a standard combinatorial rule. I suspect this is one particular shortcoming of the lookup-table basis for modeling the subconscious.
I suspect my models have similar problems, but it’s always hardest to see them from within.
After all, a state machine with arbitrary extensible memory is Turing complete, and with adaptive response to the environment it is a complex adaptive system, whatever model you have of it.
Of course. But mine is a model specifically oriented towards being able to change and re-program it—as well as understanding more precisely how certain responses are generated.
One of the really important parts of thinking in terms of a lookup table is that it simplifies debugging. That is, one can be taught to “single-step” the brain, and identify the specific lookup that is causing a problem in a sequence of thought-and-action.
How do you do that with a mind-projection model?
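To make the debugging idea concrete, here is a minimal sketch of the lookup-table metaphor in Python; the states, triggers, and responses are hypothetical illustrations, not anything from this exchange:

```python
# The "subconscious" as a table mapping (state, trigger) -> (response, next state).
TABLE = {
    ("calm", "criticism"): ("defend", "tense"),
    ("tense", "criticism"): ("withdraw", "tense"),
    ("tense", "praise"): ("relax", "calm"),
}

def single_step(state, trigger):
    """Perform one lookup and report it, so a problem entry can be identified."""
    response, next_state = TABLE[(state, trigger)]
    print(f"({state!r}, {trigger!r}) -> respond {response!r}, enter {next_state!r}")
    return next_state

# "Single-stepping" a sequence of thought-and-action:
state = "calm"
for trigger in ("criticism", "criticism", "praise"):
    state = single_step(state, trigger)
```

Tracing the sequence lookup by lookup makes it obvious which single entry (here, the hypothetical response to criticism) is generating the problem behavior.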
So, with my cognitive infrastructure, it makes more sense to model appropriately complex adaptive systems as people-like systems. Obviously you are more comfortable with computer science models.
The problem with modeling oneself as a “person” is that it gives you wrong ideas about how to change, and creates maladaptive responses to unwanted behavior.
Whereas, with my more “primitive” model:
1. I can solve significant problems for myself or others by changing a conceptually single “entry” in that table, and
2. the lookup-table metaphor depersonalizes undesired responses in my clients, allowing them to view themselves in a non-reactive way.
Personalizing one’s unconscious responses leads to all kinds of unhelpful carry-over from “adversarial” concepts: fighting, deception, negotiation, revenge, etc. This is very counterproductive, compared to simply changing the contents of the table.
Interestingly, this is one of the metaphors I hear back from my clients the most when they reference personal actions to change. That is, AFAICT, people find it tremendously empowering to realize that they can develop any skill or change any behavior if they can simply load or remove the right data from the table.
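As a minimal sketch of that “single entry” point (same hypothetical table shape as above):

```python
# Changing a conceptually single entry; the trigger/response pairs are hypothetical.
table = {
    ("calm", "criticism"): ("defend", "tense"),
    ("tense", "praise"): ("relax", "calm"),
}

# The intervention is a one-entry replacement, not a negotiation with a "person":
table[("calm", "criticism")] = ("ask for specifics", "calm")

assert table[("calm", "criticism")] == ("ask for specifics", "calm")
```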
In the case of this example, I find it unsurprising that while you have extended the lookup table to include the potential to reincorporate previously seen solutions, you avoid the subject of novel solutions being generated, even by a standard combinatorial rule.
Of course novel solutions can be generated—I do it all the time. You can pull data out of the system in all sorts of ways, and then feed it back in. For talking about that, I use search-engine or database metaphors.
I’m not talking about a mind-projection model; I’m talking about using information models, constructed and vetted to effectively model people, as a foundation for a different model of a part of a person.
I’ve modeled my subconscious in a similar manner before, and I’ve gained benefits from it not unlike some you describe. I’ve even gone so far as to model up to sub-processor levels of capabilities and multi-threading. At the same time I was developing the Other models I mentioned, but they were incomplete.
Then during adolescence I refined my Other models well enough for them to start working. I can go more into that later, but as time went on it became clear that computational models simply didn’t let me pack enough information into my interactions with my subconscious, so I needed a more information-rich model. That is what I’m talking about.
So bluntly, but honestly, I feel what you’re describing is, at best, what an eight year old should be doing to train their subconscious. But mostly I’m hoping you’ll be moving forward.
Of course novel solutions can be generated—I do it all the time. You can pull data out of the system in all sorts of ways, and then feed it back in. For talking about that, I use search-engine or database metaphors.
Search engines and databases don’t produce novel solutions on their own, even in the sense of a combinatorial algorithm, and certainly not in the sense of more creative innovation. There are many anecdotes claiming the subconscious can incorporate more dimensions in problem solving than the conscious can—some more poetic than others (answers coming in dreams or in showers). It seems dangerous to simply disregard them.
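For concreteness, here is a minimal sketch of what a “standard combinatorial rule” over stored entries could look like; the entries themselves are hypothetical:

```python
# Recombining parts of stored solutions into candidates never stored as wholes.
from itertools import product

stored = [("lever", "pull"), ("button", "press")]

objects = {obj for obj, _ in stored}
actions = {act for _, act in stored}

# Cross-combination yields e.g. ("lever", "press"): every part came from
# the table, but the pairing itself was never stored.
novel = set(product(objects, actions)) - set(stored)
print(sorted(novel))
```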
So bluntly, but honestly, I feel what you’re describing is, at best, what an eight year old should be doing to train their subconscious. But mostly I’m hoping you’ll be moving forward.
Bluntly, but honestly, I think you’d be better off describing more precisely what model you think I should be using, and what testable benefits it provides. I’m always willing to upgrade if a model lets me do something faster, easier, quicker to teach, etc. Just give me enough information to reproduce one of your techniques and I’ll happily try it.
I said what I meant there. It’s a feeling, which, combined with my lack of a personalized model of your cognitive architecture, makes it foolish for me to suggest a specific replacement model. My comment about deep innovation is intended to point you towards one of the blind spots of your current work (which may or may not be helpful).
I was somewhere similar a long time ago, but I was working on other areas at the same time, which have led me to the models I use now. I sincerely doubt that the same avenue will work for you. Instead, I suggest you cultivate a skepticism of your work, plan a line of retreat, and start delving into the dark corners.
As an aside: if you want a technique—using a model close to yours—consider volitionally placing a problem on your subconscious “backburner” to get an increased response rate. You tie the problem into the subconscious processing, set up an association trigger to check on it sometime later, and then remove all links that would pull it back to consciousness. You can then test the performance of this method against the standard alternatives of worrying at a problem or forgetting it, using a diary method.
Using a more nuanced model, you can get much better results, but this should suffice to show you something of what I mean.
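A minimal sketch of the diary method’s bookkeeping, assuming hypothetical records (each one noting the condition a problem was assigned to and whether an answer surfaced within a set window):

```python
# Compare solution rates across the three conditions from a simple diary log.
from collections import defaultdict

diary = [  # (condition, answer surfaced?) -- hypothetical entries
    ("backburner", True), ("backburner", False), ("backburner", True),
    ("worry", True), ("worry", False),
    ("forget", False), ("forget", False),
]

tally = defaultdict(lambda: [0, 0])  # condition -> [solved, total]
for condition, solved in diary:
    tally[condition][0] += int(solved)
    tally[condition][1] += 1

for condition, (solved, total) in sorted(tally.items()):
    print(f"{condition}: {solved}/{total} solved ({solved / total:.0%})")
```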
consider volitionally placing a problem on your subconscious “backburner” to get an increased response rate. You tie the problem into the subconscious processing, set up an association trigger to check on it sometime later, and then remove all links that would pull it back to consciousness. You can then test the performance of this method against the standard alternatives of worrying at a problem or forgetting it, using a diary method.
I’ve been doing that for about 24 years now. I fail to see how it has relevance to the model of mind I use for helping people change beliefs and behaviors. Perhaps you are assuming that I need to have ONE model of mind that explains everything? I don’t consider myself under such a constraint. Note, too, that autonomous processing isn’t inconsistent with a lookup-table subconscious. Indeed, autonomous processing independent of consciousness is the whole point of having a state-machine model of brain function. Consciousness is an add-on feature, not the point of having a brain.
Meanwhile, the rest of your comment was extraordinarily unhelpful; it reminds me of Eliezer’s parents telling him, “you’ll give up your childish ideas as soon as you get older”.
Good. It seemed the next logical step considering what you were describing as your model. It’s also very promising that you are not trying to have a singular model.
Meanwhile, the rest of your comment was extraordinarily unhelpful; it reminds me of Eliezer’s parents telling him, “you’ll give up your childish ideas as soon as you get older”.
Which is at least useful data for me. Developing meta-cognitive technology means having negative as well as positive results. I do appreciate you taking the time to discuss things, though.
Any computational process can be emulated by a sufficiently complicated lookup table. We could, if we wished, consider the “conscious mind” to be such a table.
Dismissing the unconscious because it’s supposedly a lookup table is thus wrong in two ways: firstly, it’s not implemented as such a table, and secondly, even if it were, that puts no limitations, restrictions, or reductions on what it’s capable of doing.
The original statement in question is not just factually incorrect but conceptually misguided, and the likely harm to the resulting model’s usefulness is incalculable.
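The emulation claim is easy to demonstrate over any finite domain; a hedged sketch, with an arbitrary stand-in for the computational process:

```python
# Tabulate a process once; thereafter, lookup is indistinguishable from computation.
def process(x):
    return x * x + 1  # arbitrary stand-in for any computational process

DOMAIN = range(256)
table = {x: process(x) for x in DOMAIN}

assert all(table[x] == process(x) for x in DOMAIN)
print("the table reproduces the process on all", len(table), "inputs")
```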