Any computational process can be emulated by a sufficiently complicated lookup table. We could, if we wished, consider the “conscious mind” to be such a table.
Dismissing the unconscious because it’s supposedly a lookup table is thus wrong in two ways: firstly, it’s not implemented as such a table, and secondly, even if it were, that puts no limitations, restrictions, or reductions on what it’s capable of doing.
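To make that second point concrete, here is a minimal sketch, assuming a plain Python dict stands in for the “sufficiently complicated lookup table” (the machine and its rules are invented for illustration): a tiny Turing machine that increments a binary number, with its entire “program” stored in the table and a driver loop that does nothing but look entries up.

```python
# A tiny Turing machine whose entire "program" is a lookup table.
# It increments a binary number on the tape; the driver loop knows
# nothing about arithmetic, it only does table lookups.
RULES = {
    # (state, symbol): (symbol to write, head move, next state)
    ("inc", "1"): ("0", -1, "inc"),   # 1 plus carry -> 0, keep carrying left
    ("inc", "0"): ("1", 0, "halt"),   # 0 plus carry -> 1, done
    ("inc", "_"): ("1", 0, "halt"),   # ran off the left edge: new leading digit
}

def run(tape, head, state="inc"):
    while state != "halt":
        symbol = tape[head] if head >= 0 else "_"
        write, move, state = RULES[(state, symbol)]
        if head < 0:                  # grow the tape on the left when needed
            tape.insert(0, write)
            head = 0
        else:
            tape[head] = write
        head += move
    return "".join(tape)

print(run(list("1011"), head=3))      # -> 1100  (11 + 1 = 12)
print(run(list("111"), head=2))       # -> 1000  (7 + 1 = 8)
```

Any richer computation only needs a bigger table and a longer run, which is exactly the point about the table placing no limit on capability.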
The original statement in question is not just factually incorrect, but conceptually misguided, and the likely harm to the resulting model’s usefulness is incalculable.
I didn’t say it was a simple lookup table. It’s indexed in lots of non-trivial ways; see e.g. my post here about “Spock’s Dirty Little Secret”. I just said that fundamentally, it’s a lookup table.
I also didn’t say it’s not capable of complex behavior. A state machine is “just a lookup table”, and that in no way diminishes its potential complexity of behavior.
When I say the subconscious doesn’t “think”, I specifically mean that if you point your built-in “mind projection” at your subconscious, you will misunderstand it, in the same way that people end up believing in gods and ghosts: projecting intention where none exists.
This is a major misunderstanding—if not THE major misunderstanding—of the other-than-conscious mind. It’s not really a mind, it’s a “Chinese room”.
That doesn’t mean we don’t have complex behavior or can’t do things like self-sabotage. The mistake is in projecting personhood onto our self-sabotaging behaviors, rather than seeing the state machine that drives them: condition A triggers appetite B leading to action C. There’s no “agency” there, no “mind”. So if you use an agency model (including Ainslie’s “interests” to some extent), you’ll take incorrect approaches to change.
But if you realize it’s a state machine, stored in a lookup table, then you can change it directly. And for that matter, you can use it more effectively as well. I’ve been far more creative and better at strategy since I learned to engage my creative imagination in a mechanical way, rather than waiting for the muse to strike.
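As a minimal sketch of that condition-to-appetite-to-action chain (the entries below are invented placeholders, not claims about anyone’s actual responses), the “table” can be a couple of plain dictionaries, and “changing it directly” is just reassigning one entry:

```python
# The "subconscious" as a chain of lookups: condition -> appetite -> action.
# All entries below are invented for illustration only.
appetite_for = {
    "deadline looming": "relief from anxiety",
    "criticism received": "reassurance",
}
action_for = {
    "relief from anxiety": "check email instead of working",
    "reassurance": "argue defensively",
}

def respond(condition):
    appetite = appetite_for[condition]   # lookup 1: condition -> appetite
    return action_for[appetite]          # lookup 2: appetite -> action

print(respond("deadline looming"))       # -> check email instead of working

# "Changing the table directly": overwrite one entry, and the same
# condition now yields a different action, with no negotiation involved.
action_for["relief from anxiety"] = "break the task into a five-minute step"
print(respond("deadline looming"))       # -> break the task into a five-minute step
```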
Meanwhile, it’d also be a mistake to think of it as a single lookup table; it includes many things that seem to me like specialized lookup tables. However, they are accessible through the same basic “API” of the senses, so I don’t worry about drawing too fine a distinction between the tables, except insofar as how they appear relates to specific techniques.
I look forward to seeing where your model goes as it becomes more nuanced. Among other things, I’m very curious about how your model takes into account actual computations (for example finding answers to combinatorial puzzles) that are performed by the subconscious.
What, you mean like Sudoku or something?
Sudoku would be one example. I meant generally puzzles or problems involving search spaces of combinations.
Well, I’ll use sudoku since I’ve experienced both conscious and unconscious success at it. It used to drive me nuts how my wife could just look at a puzzle and start writing numbers, on puzzles that were difficult enough that I needed to explicitly track possibilities.
Then I tried playing some easy puzzles on our TiVo, and found that the “ding” reward sound when you completed a box or line made it much easier to learn, once I focused on speed. I found that I was training myself to recognize patterns and missing numbers, combined with efficient eye movement.
I’m still a little slower than my wife, but it’s fascinating to observe that I can now tell the available possibilities for larger and larger numbers of spaces without consciously thinking about it. I just look at the numbers and the missing ones pop into my head. Over time, this happens less and less consciously, such that I can just glance at five or six numbers and know what the missing ones are without a conscious step.
This doesn’t require a complex subconscious; it’s sufficient to have a state machine that generates candidate numbers based on the numbers seen and drops candidates as they’re seen. It might be more efficient in some sense to cross candidates off a master list, except that the visualization would be more costly. One thing about how visualization works is that it takes roughly the same time to visualize something in detail as it does to look at it… which means that visualizing nine numbers would take about as long as scanning the boxes. Also, I can sometimes tell my brain is generating candidates while I scan… I hear them verbalized as the scan goes, although the point in the scan at which they pop up varies; sometimes it’s early, and my eyes scan forward or back to double-check.
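A minimal sketch of that candidate-dropping scan, assuming the digits already visible in a row, column, or box are fed in one at a time:

```python
# Candidate generation as a simple state machine: start with all nine
# digits as candidates and drop each one as it is "seen" during the scan.
def missing_digits(seen):
    candidates = set(range(1, 10))   # state: digits still possible
    for digit in seen:               # one transition per digit scanned
        candidates.discard(digit)
    return sorted(candidates)

# Glancing at a row already containing 5, 3, 9, 1, 6, 8:
print(missing_digits([5, 3, 9, 1, 6, 8]))   # -> [2, 4, 7]
```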
Is this the sort of thing you’re asking about?
It seems that our models are computationally equivalent. After all, a state machine with arbitrarily extensible memory is Turing complete, and with adaptive response to the environment it is a complex adaptive system, whatever model you have of it.
I have spent a great deal of time and reasoning developing models of people in just that way. So, with my cognitive infrastructure, it makes more sense to model appropriately complex adaptive systems as people-like systems. Obviously you are more comfortable with computer-science models.
But the danger with models is that they are always limiting in what they can reveal.
In the case of this example, I find it unsurprising that while you have extended the lookup table to include the potential to reincorporate previously seen solutions, you avoid the subject of novel solutions being generated, even by a standard combinatorial rule. I suspect this is one particular shortcoming of the lookup-table basis for modeling the subconscious.
I suspect my models have similar problems, but it’s always hardest to see them from within.
Of course. But mine is a model specifically oriented towards being able to change and re-program it—as well as understanding more precisely how certain responses are generated.
One of the really important parts of thinking in terms of a lookup table is that it simplifies debugging. That is, one can be taught to “single-step” the brain, and identify the specific lookup that is causing a problem in a sequence of thought-and-action.
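As a rough sketch of what “single-stepping” such a chain might look like (the table and its contents are invented for illustration), one can walk the lookups one at a time and print each transition, so the entry causing the problem stands out:

```python
# "Single-stepping" a sequence of lookups: walk the chain one entry at a
# time and print each transition, so the problem entry stands out.
# The table and its contents are invented for illustration.
chain = {
    "see unfinished report": "imagine boss's reaction",
    "imagine boss's reaction": "feel dread",
    "feel dread": "open a browser tab",
    "open a browser tab": "read news for an hour",
}

def single_step(start, table, max_steps=10):
    state = start
    for step in range(max_steps):
        if state not in table:
            break
        nxt = table[state]
        print(f"step {step}: {state!r} -> {nxt!r}")
        state = nxt

single_step("see unfinished report", chain)
# The trace points at 'feel dread' -> 'open a browser tab' as the single
# entry worth rewriting, rather than at a "self-saboteur" to argue with.
```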
How do you do that with a mind-projection model?
The problem with modeling oneself as a “person” is that it gives you wrong ideas about how to change, and creates maladaptive responses to unwanted behavior.
Whereas, with my more “primitive” model:
1. I can solve significant problems for myself or others by changing a conceptually single “entry” in that table, and
2. the lookup-table metaphor depersonalizes undesired responses in my clients, allowing them to view themselves in a non-reactive way.
Personalizing one’s unconscious responses leads to all kinds of unhelpful carry-over from “adversarial” concepts: fighting, deception, negotiation, revenge, etc. This is very counterproductive compared to simply changing the contents of the table.
Interestingly, this is one of the metaphors I hear back from my clients most often when they describe their own actions to change. That is, AFAICT, people find it tremendously empowering to realize that they can develop any skill or change any behavior if they can simply load the right data into the table, or remove it.
Of course novel solutions can be generated—I do it all the time. You can pull data out of the system in all sorts of ways, and then feed it back in. For talking about that, I use search-engine or database metaphors.
I’m not talking about a mind-projection model; I’m talking about using information models, constructed and vetted to effectively model people, as the foundation for a different model of a part of a person.
I’ve modeled my subconscious in a similar manner before, and I’ve gained benefits from it not unlike some you describe. I’ve even gone so far as to model up to sub-processor levels of capabilities and multi-threading. At the same time I was developing the Other models I mentioned, but they were incomplete.
Then during adolescence I refined my Other models well enough for them to start working. I can go more into that later, but as time went on it became clear that computational models simply didn’t let me pack enough information into my interactions with my subconscious, so I needed a more information-rich model. That is what I’m talking about.
So bluntly, but honestly, I feel what you’re describing is, at best, what an eight-year-old should be doing to train their subconscious. But mostly I’m hoping you’ll be moving forward.
Search engines and databases don’t produce novel solutions on their own, even in the sense of a combinatorial algorithm, and certainly not in the sense of more creative innovation. There are many anecdotes claiming the subconscious can incorporate more dimensions in problem solving than the conscious mind can, some more poetic than others (answers coming in dreams or in showers), and it seems dangerous to simply disregard them.
Bluntly, but honestly, I think you’d be better off describing more precisely what model you think I should be using, and what testable benefits it provides. I’m always willing to upgrade if a model lets me do something faster, easier, or quicker to teach. Just give me enough information to reproduce one of your techniques and I’ll happily try it.
I said what I meant there. It’s a feeling. That, combined with my lack of a personalized model of your cognitive architecture, makes it foolish for me to suggest a specific replacement model. My comment about deep innovation is intended to point you towards one of the blind spots of your current work (which may or may not be helpful).
I was somewhere similar a long time ago, but I was working on other areas at the same time, which led me to the models I use now. I sincerely doubt that the same avenue will work for you. Instead, I suggest you cultivate a skepticism of your work, plan a line of retreat, and start delving into the dark corners.
As an aside: if you want a technique—using a model close to yours—consider volitionally initiating a problem on your subconscious “back burner” to get an increased response rate. You tie the problem into subconscious processing, set up an association trigger to check on it some time later, and then remove all links that would pull it back to consciousness. You can then test the performance of this method against the standard approaches of worrying at a problem or simply forgetting it, using a diary method.
Using a more nuanced model, you can get much better results, but this should suffice to show you something of what I mean.
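As a minimal sketch of the proposed diary-method comparison (the record format and entries are invented placeholders), each problem could be logged with the condition it was assigned and whether an answer arrived, then the rates compared:

```python
# Diary-method comparison: each logged problem gets a condition
# ("backburner", "worry", or "forget") and, later, a note of whether an
# answer arrived. Entries below are invented placeholders.
from collections import defaultdict

diary = [
    {"problem": "name for the new project", "condition": "backburner", "solved": True},
    {"problem": "bug in the scheduling code", "condition": "worry", "solved": True},
    {"problem": "gift idea for the anniversary", "condition": "forget", "solved": False},
    {"problem": "how to phrase the refusal email", "condition": "backburner", "solved": True},
]

def response_rates(entries):
    tally = defaultdict(lambda: [0, 0])        # condition -> [solved, total]
    for entry in entries:
        tally[entry["condition"]][1] += 1
        tally[entry["condition"]][0] += entry["solved"]
    return {cond: f"{s}/{n}" for cond, (s, n) in tally.items()}

print(response_rates(diary))   # e.g. {'backburner': '2/2', 'worry': '1/1', 'forget': '0/1'}
```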
I’ve been doing that for about 24 years now. I fail to see how it has relevance to the model of mind I use for helping people change beliefs and behaviors. Perhaps you are assuming that I need to have ONE model of mind that explains everything? I don’t consider myself under such a constraint. Note, too, that autonomous processing isn’t inconsistent with a lookup-table subconscious. Indeed, autonomous processing independent of consciousness is the whole point of having a state-machine model of brain function. Consciousness is an add-on feature, not the point of having a brain.
Meanwhile, the rest of your comment was extraordinarily unhelpful; it reminds me of Eliezer’s parents telling him, “you’ll give up your childish ideas as soon as you get older”.
Good. It seemed the next logical step considering what you were describing as your model. It’s also very promising that you are not trying to have a singular model.
Which is at least useful data on my end. Developing meta-cognitive technology means having negative as well as positive results. I do appreciate you taking the time to discuss things, though.