In particular, I would also add this warning: it’s (mildly) dangerous to try to convince yourself of this no-self stuff too deeply on a purely intellectual level.
There was one point where I had read intellectual descriptions of the no-self thing, but hadn’t had the experience of it. But I figured that maybe if I really thought it through and used a lot of compelling arguments, I could convince myself of it—after all, the intellectual argument seemed reasonable, but I clearly wasn’t believing it on an emotional level, so maybe if I tried really hard to make the intellectual argument sink in?
This does not work. (At least, it didn’t work for me, and I doubt it works for the average person.) The “no-self” thing was still getting interpreted in terms of my existing ontology, rather than the ontology updating. What I ended up with was some kind of a notion, temporarily and imperfectly believed on an emotional level, that every second of existence involved me dying and a new entity being created, and that every consciousness-moment would be my last.
That was not a healthy state of mind to be in; fortunately, my normal thinking patterns pretty quickly overrode it and I went back to normal. That is also not what the “kensho” experience that I described felt like. That experience felt calming and liberating, with none of the kind of discomfort that you’d get if you tried to force the assumption of no self existing into an ontology which always presupposed the existence of a self.
The “no-self” thing was still getting interpreted in terms of my existing ontology, rather than the ontology updating.
This.
I’ll finish reading the other comments and then, time permitting, I’ll add my own.
I’ll just note for now that there’s a kind of “being clear” that I think is dangerous for rationality, in a way analogous to what you describe here about no-self. The sketch is something like: if an epistemology is built on top of an ontology, then that epistemology is going to have a hard time with a wide swath of ontological updates. Getting around this seems to require Looking at one’s ontologies and somehow integrating Looking into one’s epistemology. Being required to explain that in terms of a very specific ontology seems to give an illusion of understanding that often becomes sticky.