Maybe you could see the fixed points that OP is pointing towards as priors in the search process for frames.
Like, your search is determined by your priors, which are learnt through your upbringing. The problem is that they're often maladaptive and misleading, so working through these priors and generating new ones is a bit like recovering from overfitting.
Another nice thing about meditation is that it sharpens your mind's perception, which makes your new priors better. It also makes you less dependent on attractor states you may have fallen into before, since you become less emotionally dependent on past behaviour. (There's obviously more complexity here; I'm referring to dependent origination, for you meditators out there.)
It's like pruning the bad data from your dataset and retraining your model: you're much more likely to end up with better ontologies that way (or that's the hope, at least).
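To make the analogy concrete, here's a minimal toy sketch of the "prune bad data, retrain" idea. Everything in it (the nearest-class-mean classifier, the data, the corruption) is my own illustration, not anything from OP; mislabeled points stand in for the maladaptive priors.

```python
# Toy "prune bad data and retrain" sketch (illustrative, not OP's method).
# Classifier: predict whichever class has the nearest training mean.

def class_means(data):
    # data: list of (x, label) pairs with labels 0 or 1
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for x, y in data:
        sums[y] += x
        counts[y] += 1
    return {c: sums[c] / counts[c] for c in (0, 1)}

def predict(means, x):
    return min((0, 1), key=lambda c: abs(x - means[c]))

def accuracy(means, test):
    return sum(predict(means, x) == y for x, y in test) / len(test)

# Clean data: class 0 clusters near 0, class 1 near 10.
clean = [(0.0, 0), (1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1), (10.0, 1)]
# Corrupted copy: three mislabeled points (the "bad priors").
noisy = clean + [(8.5, 0), (9.5, 0), (10.5, 0)]

test = [(0.5, 0), (4.0, 0), (6.0, 1), (9.5, 1)]

noisy_acc = accuracy(class_means(noisy), test)  # mislabeled points drag
clean_acc = accuracy(class_means(clean), test)  # the class-0 mean upward
```

The mislabeled points pull the class-0 mean toward class 1, so the decision boundary shifts and the noisy model misclassifies borderline points that the pruned-and-retrained model gets right.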
This does seem kind of correct to me?