I’m not sure that in reality the differences here are that great. We all have a tremendously human lens through which we view and experience the world, and some of the distinctions here strike me as pretty arbitrary.
Why should ‘loved ones’ enter into the causal reality (in order to motivate a desire to end death and suffering)? Why not view each person as an equal moral agent with equal moral worth? Are flowers a gift of value that bring colour and scent, or are they decaying plant matter? It seems to me there’s an implicit value set that’s been smuggled into the world of ‘how things really are’.
Often, an argument from causal reality is not enough, because arguments by themselves are not enough. I may grant that it is a fine argument and still remain unconvinced, because you are trying to convince me of something that I don’t think maps to reality, and I don’t have the ability, expertise, time or verbal skill to overcome your argument. The appeal to experts is a shortcut here: someone else may have some or all of those components.
The passage quoted above seems to me to be the expression of a quite rational, Bayesian analysis: ‘Death seems like a very fundamental component of life, and this has been so for a very, very long time. How sure can I reasonably be that this paradigm will change within a human generation, even taking into account my belief in the strong possibility of a transformatively intelligent AI?’
I think there’s something here in this post: some people are absolutely more able to go against the grain. But I’m not sure this phenomenon is as strongly siloed, as clearly valuable, or as unique to rationalism as presented.
Agreed re: the differences not being that great. I’ve seen this model around for a while, and I feel that while it does describe a distinction, that distinction is not clean in the territory.
I do think that the distinction between Kegan 3 and Kegan 4 is pointing at the same thing, and when you look at, for instance, the test-retest reliability of Kegan levels, you realize there does seem to be something real being pointed at in the territory.
However, I think it’s very easy to make a case for caring deeply about Social Reality from the perspective of Causal Reality (the point I was trying to make in my response to this post), so it’s not at all clear that you can cleanly separate the people who are doing that from the people who just haven’t realized from the inside that Causal Reality is a thing they can focus on.