I find this truly incredible. When you actually understand your emotions, it apparently makes you feel really good.
It would make some sense, from a design perspective, if emotions that indicated the presence of some problem would stick around while you didn’t understand the problem, and would evaporate once you understood it and knew for certain what you would do about it. This would fit with others’ writings about felt-sense introspection, also known as Gendlin’s Focusing.
Yes. It seems so ridiculous that I felt this for the first time only about two months ago. I wish somebody had told me this sooner. I basically started to understand this because I talked a bunch about it with @plex.
Nice, glad you’re getting value out of IDC and other mind stuff :)
Do you think an annotated reading list of mind stuff would be worth putting together?
I’m guessing IDC is short for internally-directed cognition.
Internal Double Crux, a CFAR technique.
It is short for internal double crux.
The thing is that I have not read about IDC, or the other mind stuff. I am not sure if I am doing the thing that other people have described. What I am doing is mainly based on doing an IDC once with you, and on things I have figured out by reflecting when feeling bad.
Right, it can be way easier to learn it live. My guess is you’re doing something quite IDC flavoured, but mixed with some other models of mind which IDC does not make explicit. Specific mind algorithms are useful, but exploring based on them and finding things which fit you is often best.
Is “mind algorithms” a known concept? I definitely have a concept in my head that matches this name, but I have never seen anybody else talk about it. Also, each time I tell somebody about this concept they don’t seem to get it; they tend to dismiss it as trivial and obvious, probably because they already have some model in their mind that fits the name “mind algorithm”. But I expect the concept in my head to be much more powerful. I expect that I can think certain thoughts that are inaccessible to them because their model is less powerful.
I would ask things like: to what extent can you run arbitrary algorithms on your brain? Certainly there are limits, but I am not sure where they are. E.g. it is definitely possible to temporarily become a different person by creating a separate personality. And that personality can be very different, e.g. it might not get upset at something that you normally get upset by.
It should not be too surprising that this is possible. It is normal to behave differently depending on who you talk to. I am just talking about a much stronger version of this, where you have more explicit control.
In my experience, you can also create an algorithm that arbitrarily triggers the reward circuitry in your brain. E.g. I can make it such that each time I tap on the top of my laptop it feels really good. I.e. I am creating a new algorithm that watches for an event and then triggers some reward circuitry.
It also shouldn’t be surprising that this is possible. Why do I feel good when I get a good weapon drop in a video game? That seems to be learned too. The thing I just described is likely doing a similar thing, except that you don’t rely on some subconscious process to set up the reward trigger; instead, you construct it explicitly. When you look at the reward trigger, it might be impossible to tell whether it was created by some subconscious process or explicitly.
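To make the analogy concrete, here is a loose sketch in Python (the EventBus, reward, and learn_association names are purely illustrative stand-ins, not a claim about how brains actually work): whether a trigger is wired up deliberately or installed by some learning routine, the registered trigger ends up being the same kind of thing, so inspecting it alone cannot tell you which process created it.

```python
class EventBus:
    """Dispatches named events to whatever triggers have been registered."""
    def __init__(self):
        self.triggers = {}

    def register(self, event, callback):
        # A "reward trigger" is just a callback attached to an event.
        self.triggers.setdefault(event, []).append(callback)

    def fire(self, event):
        for callback in self.triggers.get(event, []):
            callback()

def reward():
    print("feels good")

bus = EventBus()

# Explicitly constructed trigger: deliberately wire the event to the reward.
bus.register("tap_laptop", reward)

# Trigger set by a learning process: the same kind of registration,
# just made by some other routine (standing in for the subconscious).
def learn_association(bus, event):
    bus.register(event, reward)

learn_association(bus, "good_weapon_drop")

# Looking only at the registered triggers, the two are indistinguishable:
bus.fire("tap_laptop")        # feels good
bus.fire("good_weapon_drop")  # feels good
```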
I think not super broadly known, but many CFAR techniques fit into the category so it’s around to some extent.
And yeah, brains are pretty programmable.
I can’t do that.
I do it in a very convoluted way. Basically, I have created a subagent in my mind that somehow has access to this aspect, and then I can tell the subagent to make me feel good when I tap the laptop. If I just try to make tapping the laptop feel good directly, it does not work. It works best with discrete events that give you feedback, like tapping; throwing something in the trash does not work as easily. Strangely, I have almost never actually used this technique, even though it seems very powerful.