I said what I meant there. It’s a feeling, which, combined with my lacking a personalized model of your cognitive architecture, makes it foolish for me to suggest a specific replacement model. My comment about deep innovation is intended to point you toward one of the blind spots of your current work (which may or may not be helpful).
I was somewhere similar a long time ago, but I was working on other areas at the same time, and those led me to the models I use now. I sincerely doubt that the same avenue will work for you. Instead, I suggest you cultivate a skepticism of your work, plan a line of retreat, and start delving into the dark corners.
As an aside: if you want a technique—using a model close to yours—consider volitional initiation of a problem on your subconscious “backburner” to get an increased response rate. You tie the problem into subconscious processing, set up an association trigger to check on it sometime later, and then remove all links that would pull it back to consciousness. You can then use a diary method to test the performance of this technique against both standard worrying at a problem and standard forgetting of a problem.
Using a more nuanced model, you can get much better results, but this should suffice to show you something of what I mean.
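If one wanted to score such a diary, a minimal sketch might look like the following. The data format and the example entries are entirely invented for illustration: each entry records which condition a problem was assigned to ("backburner", "worry", or "forget") and whether a useful response eventually surfaced.

```python
from collections import defaultdict

def response_rates(entries):
    """Fraction of problems in each condition that produced a response."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for condition, responded in entries:
        totals[condition] += 1
        if responded:
            hits[condition] += 1
    return {c: hits[c] / totals[c] for c in totals}

# Made-up diary data, purely illustrative:
diary = [
    ("backburner", True), ("backburner", True), ("backburner", False),
    ("worry", True), ("worry", False), ("worry", False),
    ("forget", False), ("forget", True), ("forget", False),
]

print(response_rates(diary))
```

With enough honestly recorded entries, comparing the per-condition rates (ideally with a significance test) would show whether the backburner method actually outperforms the two baselines for you.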
consider volitional initiation of a problem on your subconscious “backburner” to get an increased response rate. You tie the problem into subconscious processing, set up an association trigger to check on it sometime later, and then remove all links that would pull it back to consciousness. You can then use a diary method to test the performance of this technique against both standard worrying at a problem and standard forgetting of a problem.
I’ve been doing that for about 24 years now. I fail to see how it has relevance to the model of mind I use for helping people change beliefs and behaviors. Perhaps you are assuming that I need to have ONE model of mind that explains everything? I don’t consider myself under such a constraint. Note, too, that autonomous processing isn’t inconsistent with a lookup-table subconscious. Indeed, autonomous processing independent of consciousness is the whole point of having a state-machine model of brain function. Consciousness is an add-on feature, not the point of having a brain.
Meanwhile, the rest of your comment was extraordinarily unhelpful; it reminds me of Eliezer’s parents telling him, “you’ll give up your childish ideas as soon as you get older”.
Good. It seemed the next logical step considering what you were describing as your model. It’s also very promising that you are not trying to have a singular model.
Meanwhile, the rest of your comment was extraordinarily unhelpful; it reminds me of Eliezer’s parents telling him, “you’ll give up your childish ideas as soon as you get older”.
That at least is useful data for me. Developing meta-cognitive technology means having negative as well as positive results. I do appreciate you taking the time to discuss things, though.