I am conflicted about this post. On the one hand, it smells like new-agey nonsense. I worry that posts like this could hurt the credibility of rationalists trying to spread other non-obvious ideas into the mainstream.
On the other hand, even if the only mechanism of this idea is the placebo effect, it’s an emotionally satisfying story to trigger that effect. As someone who grew up with strong religious beliefs, I can appreciate it as… something more than mere art.
Ultimately, it’s not obvious to me whether this post was meant to convey a genuine psychological insight and was just unclear, or whether it’s more metaphorical and I’m being too pedantic.
This comment is probably confusing, but I think that merely reflects my own confusion here.
I will definitely attest that this post is not doing Grade A Rationality Qua Rationality, and I wouldn’t want most posts on LW to be like this. But, I do think Grade A Rationality needs to be able to handle things kinda like this.
My overall belief is that techniques and posts like this are often important, but one should have a long-term goal of writing publicly about the ideas in a way that transparently checks against empiricism, reductionism, etc. (This may take a while, though, and often isn’t worth doing for the first off-the-cuff version of a thing you do.) This is how I feel about things like Circling and the Multi-Agent Models of Mind concepts: I’m glad that weird pioneers looked into them without trying to justify themselves at every step, and I’m glad people eventually began making laborious efforts to get them into a state that could be properly evaluated.
The new meta-introduction (is there a better term of art for those italic bits at the top?) definitely helps one read it in the proper frame. Thank you for clarifying.
By the way this is exactly the kind of article I would want to write if I had more free time and better verbal skills. Also not directly on LW.
I think there is nothing intrinsically wrong with the article, but there is a risk that a blog containing more articles like this would attract the wrong kind of writer. (Someone writing a similar encouraging article, but with wrong metaphysics. And it would feel bad to downvote them.) If you publish on your personal blog, that risk does not exist.
I think there are two issues. On the one hand there’s the general topic that the article is about. Then there’s the issue that the post doesn’t feel very clear in approaching the subject.
I think one way to deal with cases like this is to start with an intellectual rigor status of: “Trying to put words on a subject that’s still a bit unclear to me”.
Do you think the current disclaimer is in fact not good enough?
The fact is the subject isn’t unclear to me at all; I just don’t know how useful it is for arbitrary people, or what the long-term side effects might be.
There are a few reasons why the thinking seems unclear to me.
Ontologically, there are two entities that you might call “the future self”. FutureSelf_A is a mental part that you can interact with using the whole toolbox for dealing with mental parts. FutureSelf_B doesn’t exist in the present, but only when the future actually takes place.
The post seems to me muddled about the distinction between FutureSelf_A and FutureSelf_B.
You say that “Your future self loves you unconditionally”. Many people don’t love themselves. They also don’t love their past selves. Making that unfounded assumption seems bad to me because it might hinder a person from having a more reality-aligned view of the relationship. You don’t have access to FutureSelf_B and can’t know to what extent it loves you. When it comes to FutureSelf_A, things get more interesting, because that’s a relationship you can actually investigate and work on.
You say, “And they will know exactly what you have experienced.” For FutureSelf_B, that’s not how human memory works. That’s practically relevant: if you make plans that you hope your future self will execute, it’s important to keep in mind that your future self won’t remember everything about the present moment in which you make the plan.
A lot of New Agey literature fails to distinguish between different entities, but I expect high-quality rationalist material to build on concepts that aren’t muddled together. That’s the level of rigour I’d expect at an NLP seminar; I want more from rationalists.
That said, I’m also okay with writing up a concept when one doesn’t yet have the clarity to distinguish the different entities involved, because the act of writing up concepts is a good way to gain more clarity about them. I’m also in favor of sharing write-ups publicly instead of letting them sit in a drawer. But I find it beneficial for such posts to have disclaimers, so that readers who don’t know much about the subject can distinguish them from more developed posts on LW.
So, I do think it was important to say “this was a post originally shared to FB, and not the sort of thing I normally post on LW”. I might also add a concrete disclaimer: “Poetry.” But, I think being Poetry is different from the idea being unclear.
(Yes, Poetry blurs the lines between things. That’s kind of the point of poetry. I think blurring lines between things is actively important for Poetry to accomplish particular goals. I also think it is important, if you’re using the poetry for significant mind-hacking, to be clear after the fact about what’s going on. But doing that pre-emptively would have ruined the original post.)
To be clear, this post is talking almost entirely about Future Self A (i.e. Mental Construct Best Future Self).
It’s useful to also reflect on B (i.e. Actual Theoretically Possible Best Future Self), because in order for A to do its job, it’s helpful (at least, I find it so) for it to be built on something real. Or at least, the more real its foundation, the easier it is to trust it.
Actual Best Future Self B (in the actual future) probably doesn’t directly remember the traumatic day when you asked for its help. (Although it can, if you do things like write things down, which might or might not be helpful to you.) But I’m mostly not talking to Self B; I’m talking to Self A (the mental construct), who is actually here in the moment. Self A is constructed to be a reflection of what Self B would do if they were actually here, telepathically connected.
(I’m not at all confident which combination of these is most useful to the average person, just that I found it helpful on two particular days during the most stressful year of my life)