Could you go into more detail on how exactly one sets these policy-based intentions? I don’t think I know how, nor did I ever even suspect that such a possibility exists.
It does slightly remind me of the KonMari thing where you go through all of your belongings to get a felt sense of where in the apartment they belong, and then automatically put things in their right places because any other place feels wrong.
I’ve used that for a few things like “where should I put my keys”, and phenomenologically it does feel like it just kinda… rewrites my brain to make that feel like the only right place to put them, and then it requires no willpower after that. But that procedure feels more like discovering the right place rather than consciously deciding it; if there’s a more general thing that I can do in this category, I’d be curious about hearing more.
That's interesting!

How do other people handle the tipping thing, whether for a driver or at a restaurant? Are you kind of deciding each time?
How do you handle the question of “who pays for a meal” with acquaintances / new people / on dates? My policy in this area is to always offer to split.
How do you handle whether to give money to homeless people or if someone is trying to offer you something on the street? My policy here is to always say no.
I’m curious what other people are doing here because I assumed most people use policies to handle these things.
We don’t really do tipping or paying-for-the-other here, so those don’t come up. I do have a general policy of not giving money, but the way I “installed” it doesn’t resemble your description: I ran into people asking for money on several occasions and made a fresh decision each time, until I got sufficiently frustrated with feeling manipulated after giving money that the emotion drove me to make a decision I’d remember in the future.
I guess the thing that sounded the most unusual to me was this part:
It basically costs no willpower to implement the policy. I’m not having to nudge myself, “Now remember I decided I’d do X in these situations.” I’m not having to consciously hold the intention in my mind. It’s more like I changed the underlying code—the old, default behavior—and now it just runs the new script automatically.
It’s not that I don’t have policies, it’s that this description sounds like you can just… decide to change a policy, and then have that happen automatically. And sure, sometimes it’s a small enough thing that I can do this. But as with the key thing, if I suddenly decided to put my keys somewhere other than where I’m used to, I bet I’d keep putting them in the old place until I gradually learned the new policy.
Or maybe your description just makes the concept sound more exotic than you intended and I’m misinterpreting. It sounded to me like you have some magic method of automatically instantiating policy changes that I would need to spend willpower on, without you needing to spend willpower on them. E.g. the Lyft thing sounded complicated to memorize, and I would probably need to consciously think about it several times while actually doing the tipping before I had it committed to memory. But you said that you can’t use this on everything, so maybe the policies that I would need willpower to install just happen to be different from the policies that you would need willpower to install.
I guess one thing that confuses me is that you say that it costs no willpower to implement a policy, and then you contrast it with willpower-based intentions, which do cost willpower. That makes it sound like costing willpower is a property of how you’ve made the decision? But to me it feels like costing willpower is more a property of the choice itself. In the other comment you gave the example of having a policy for how you arrange your kitchen, and this sounds to me like the kind of thing that everyone would have as a policy-that-costs-no-willpower, because you can just decide it and there’s no need to use willpower on it. Whereas (to again use your example) something like drinking water every day is more complicated, so you can’t just make it into a policy without using willpower.
So my model is that everyone already does everything as a policy-based intention, unless there’s some property of the act that forces them to use willpower, in which case they can’t do it as a policy and have to use willpower instead. But your post made it sound like people have an actual choice about which way they do it, as opposed to having it decided for them by the nature of the task?
Oh yeah. I do think the nature of the task is an important factor. It’s not like you can willy-nilly choose policy-based or willpower-based. I did not mean to present it as though you had a choice between them.
I was more describing that there are (at least) two different ways to create intentions, and these are two that I’ve noticed.
But you said that you can’t use this on everything, so maybe the policies that I would need willpower to install just happen to be different from the policies that you would need willpower to install.
This seems likely true.
It’s not that I don’t have policies, it’s that this description sounds like you can just… decide to change a policy, and then have that happen automatically.
It is true that I can immediately change certain policies such that I don’t need to practice the new way. I just install the new way, and it works. But I can’t install large complex policies all in one go. I will explain.
the Lyft thing sounded complicated to memorize, and I would probably need to consciously think about it several times while actually doing the tipping before I had it committed to memory.
With zero experience of Lyft tipping, I would not be able to just think up a policy and then implement it. Policy-driven intentions are collaborations between my S1 and S2 (System 1 and System 2), so S2 can’t do all the work alone. But maybe after a few Lyft rides, I notice confusion about how much to tip. Then maybe I think about that for a while or do some reading. Eventually I notice I need a policy, because deciding each time is tiring or effortful.
I notice I feel fine tipping a bit each time when I have a programming job. I feel I can afford it, and I feel better about it. So I create and install a policy to tip $1 each time and run with that; I make room for exceptions when I feel like it.
Later, I stop having a programming job, and now I feel bad about spending that money. So I create a new if-then clause. If I have good income, I will tip $1. If not, I will tip $0. That code gets rewritten.
Later, I notice my policy is inadequate for handling situations where I have heavy luggage (because I find myself in a situation where I’m not tipping people who help me with my bag, and it bothers me a little). I rewrite the code again to add a clause about adding $1 when that happens.
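To take the code metaphor literally: after those rewrites, the policy might look something like this toy sketch (the function and flag names are placeholders of my own, not anything precise):

```python
# Toy illustration of the tipping policy after all three rewrites.
# The flag names are made-up placeholders, not real parameters.

def lyft_tip(have_good_income: bool, helped_with_luggage: bool) -> int:
    """Return the tip in dollars for one Lyft ride."""
    tip = 1 if have_good_income else 0  # rewrite 2: $1 only while income is good
    if helped_with_luggage:             # rewrite 3: add $1 when the driver helps with bags
        tip += 1
    return tip
```

Each rewrite is a small patch to an existing function rather than a whole new program, which I think is part of why installing it can feel immediate.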
Policy rewrites are motivated by S1 emotions telling me they want something different. They knock on the door of S2. S2 is like, “I can help with that!” S2 suggests a policy. S1 is relieved and installs it. The change is immediate.
Thanks, that clarifies it considerably. That tipping example does sound like the kind of process that I might also go through.