I guess my way of thinking about virtue is a bit weird. Virtue is classically described as something like balancing values, but because I take an existentialist view, the only source of values I can have is my own preferences, so I might as well admit I’m trading off preferences against each other. And since I prefer for my preferences to reflect the available knowledge of what has worked for others (wisdom), my sense of virtue tends to flatten out into something pretty much like how ‘virtue’ is normally used.
As to doing unappealing things, I think that’s a weird way to ask the question. It’s not the case that I never do anything that, all else equal, I would prefer not to do; in that sense I do unappealing things all the time. But there is always some action that, on balance, is most preferred, so I always do that one (though I must freely admit my ability to determine what is most preferred is by no means perfect and I make mistakes all the time). Sometimes the most preferred action is to just wait and do nothing. Questions of long-term and short-term outcomes don’t really come into the picture here.
As to Maslow, I’m pretty sure I don’t stop being self-actualized when I’m hungry, I’m just now in need of food. Same for many other necessities. And I still don’t need to care very much about the specifics of satisfying my needs for food, shelter, etc. so long as I find a way to satisfy those needs. To me Maslow kind of gets it backwards in that self-actualization eliminates the ability of more “basic” needs to dominate your thinking, though to be fair because of where Maslow stops I basically have to lump all post-formal thinking about the self into “self-actualization”.
Finally, as to alignment of preferences for short, medium, and long term objectives, my best answer is lots of experience at being honest with myself. I can either want something enough to take actions to get it or not; there’s no need to have the form of regret we call akrasia if I end up not wanting to do something. I balanced the preferences, made my choice, and now I live with it. If it turns out I’m not getting the things I want and living the life I want to live, that’s pressure to change my preferences.
Let me make this concrete. I like donuts. All else equal, I’d eat donuts every morning until I felt sick of eating them and had to stop (after about 8 donuts, I’m guessing). So let’s say I do this. Donuts have a lot of calories but aren’t very filling, so I’m going to be hungry again soon after eating them. Over the course of weeks, I’m going to gain weight as a result of the excess calories. At some point I’ll notice and think “I’d like to be less fat” and think about how to achieve that goal. I’ll notice that eating donuts is adding lots of unnecessary calories, so then I’ll feel pressure not to eat donuts. Having experienced donuts making me fat, and not wanting to be fat, I’ll eat donuts less. If I fail to cut down my donut consumption and continue to gain weight, then fine, I apparently like donuts more than being skinny. I’m the only one responsible for how fat I am, so I’m the only one to whom it really matters how fat I am. I fully accept responsibility for myself: I’m the only person whose effect on the state of the world I can control, so I simply must act having accepted that responsibility, since no one else will.
So my alignment can’t really break for any longer than I can fail to update on the evidence. This is maybe the whole point: having accepted radical self responsibility, I’m the only person doing anything about my preferences, so the only sense I can “break” is to choose otherwise. I have no form to break from; there’s just being.
I understand how it works for you, but I have two associations that came up while reading your comment. One is taking the path of least resistance—you float to where the river carries you. The other is treating your own decision mechanism as a black box which you refuse to peer into. The box says that it weighed the alternatives and you should do X, so you nod and do X.
I think the critical point here is the one you mentioned: “seeing yourself as a single agent”. Most approaches to akrasia start with positing two yous: one which wants to find immediate satisfaction and avoid unpleasantness and effort right now, and one which is capable of planning and wants to sacrifice some utility now in the hopes of getting more utility tomorrow.
You say you transcended that, but I wonder if you just stuffed these yous into that black box and closed your eyes to their fights—as long as the winner (at the moment) tells you what to do, you don’t care about the process by which this decision was arrived at.
As far as floating where the river carries me, this is in fact my position and a metaphor I like, although what most people would mean by “path of least resistance” supposes a lack of complexity. If you could only reason about one preference at a time, you’d always do the single most preferred thing, but when you balance multiple preferences, what is “easiest” is often not obvious before composing them. I am of course limited by how much deliberation and memory (time and space complexity) I can devote to a decision, so I can make no claim to global optimality.
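To make “composing preferences” concrete, here is a toy sketch of the idea, entirely my own illustration: the actions, preference names, weights, and scores are all made up, not anything from this discussion. Each preference scores every action, and the chosen action is the one with the highest weighted total, so no single preference decides alone.

```python
# Toy model of composing multiple preferences into a single choice.
# All names, weights, and scores below are illustrative assumptions.

def compose_preferences(actions, preferences):
    """Pick the action with the highest weighted sum of preference scores."""
    def total(action):
        return sum(weight * score(action) for weight, score in preferences)
    return max(actions, key=total)

# Each preference is a (weight, scoring function) pair.
preferences = [
    (1.0, lambda a: {"eat donut": 0.9, "eat oatmeal": 0.3}[a]),  # taste
    (0.8, lambda a: {"eat donut": 0.1, "eat oatmeal": 0.8}[a]),  # health
]

choice = compose_preferences(["eat donut", "eat oatmeal"], preferences)
print(choice)  # prints "eat donut" (0.98 vs 0.94): taste still outweighs health
```

With these made-up weights the donut still wins, which matches the earlier story: the choice only flips once evidence (gaining weight) shifts the relative weights, not because a second “self” intervenes.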
I think this also addresses the black box concerns. There is a way in which you could take this position without awareness that you have competing preferences and some thought process by which you resolve them. This is similar to the popular notion of what Buddhist or Daoist practice looks like, and I have no doubt some people actually do it this way, since the traditions definitely admit that interpretation and present similarly if you ignore mental phenomena. But there’s a more nuanced position which sublimates unification and differentiation to each other to yield a complex, single “gray box” approach.
there’s a more nuanced position which sublimates unification and differentiation to each other to yield a complex, single “gray box” approach
And we’re off to Hegelian dialectics and the thesis—antithesis—synthesis triad :-)
But how is your position different from the trivial observation that everyone always does what he wants, even though he might be conflicted about it and experience regret afterwards?
Primarily, perhaps, it’s a difference in relationship to regret. Because we seem to live in a world where causality flows in one direction, there’s no way to go back and change the history of the world we find ourselves in. Literally every action, including “non” actions, results in finding oneself in one world or another. Thus no matter what we do, we can regret not finding ourselves in some other world. Regret is powered by a kind of evidence about counterfactuals that is perhaps worth considering for its own sake, but that evidence need not generate a feeling of regret at having found oneself in one world rather than another. Regret is a kind of self-imposed suffering, and one which evaporates upon accepting that all counterfactual worlds are a source of regret, so the feeling of regret itself provides no additional information to update on beyond the counterfactuals.
I’d perhaps describe regret as a kind of weighting function that causes you to more notice the evidence of some counterfactuals than others because they contain large losses to or from the world you find yourself in.
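As a toy model of that weighting-function idea, again purely my own illustration with made-up values: weight each counterfactual by the size of its gap from the world you actually got, so counterfactuals containing large losses or gains dominate attention while near-identical worlds are nearly invisible.

```python
# Toy sketch: regret as a weighting function over counterfactual evidence.
# The counterfactual names and values are illustrative assumptions.

def regret_weight(actual_value, counterfactual_value):
    """Attention weight on a counterfactual, growing with the gap (in
    either direction) between it and the world you actually got."""
    return abs(counterfactual_value - actual_value)

actual = 5.0
counterfactuals = {
    "studied instead": 9.0,    # big gain foregone
    "slept in": 4.5,           # barely different world
    "took the other job": 1.5, # big loss avoided
}
weights = {name: regret_weight(actual, v) for name, v in counterfactuals.items()}
most_salient = max(weights, key=weights.get)
print(most_salient)  # prints "studied instead"
```

The point of the sketch is just that the weighting selects which counterfactuals get noticed; it carries no extra information beyond the counterfactual values themselves.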
Primarily perhaps it’s a difference in relationship to regret.
Ah, I see.
and so the feeling of regret itself provides no additional information
I think it does: it provides information about yourself to you. You don’t necessarily know which actions and/or counterfactuals will lead to feelings of regret in the future, or how intense those feelings will be.
All in all, you seem to be operating in a somewhat different framework than the OP and so the question might need to be translated to something like “Do you deliberately manage the conflict between your different preferences, specifically short-term and long-term ones, and if you do, what kind of techniques do you use?”
Ah, then to that question I can give some more specific answers that will likely work even for people who don’t share my model.
Preference integration
Basically equivalent to what I think CFAR calls propagation, although with a lot of different “flavor” since there are no subagents.
+6. Generally works, but can be time consuming and is often limited by the availability of experiences to change relative preference weights on. It’s a trainable skill, though, so you get better at it over time. I don’t think there are any unwanted side effects with this one.
Write down future actions
Some version of GTD. I specifically write things down in emails I send to myself, which I then see later and act on. Since I also practice inbox zero, my inbox is a list of things that need immediate action. If I’m not going to do something immediately, then I use the email as a trigger to schedule doing it later.
+4. Again, generally works, but is limited to only those things you remember to write down. Documenting everything can be annoying, so it’s only for stuff I think I’d likely otherwise forget. Also trainable, and you get better at it over time (what to put in emails, when to send them, etc.). A possible negative side effect is that you get slightly less good at using your memory, since you are now using a memory enhancer.
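A minimal sketch of the email-to-self trigger described above; the address and SMTP host are placeholders, not real values, and this is my own illustration rather than anyone’s actual setup. With inbox zero, any reminder still sitting in the inbox is by definition pending action.

```python
# Hypothetical email-to-self reminder; "me@example.com" and "localhost"
# are placeholder assumptions.
import smtplib
from email.message import EmailMessage

def build_reminder(subject, body, addr="me@example.com"):
    """Build a one-line reminder addressed to your own inbox."""
    msg = EmailMessage()
    msg["From"] = addr
    msg["To"] = addr
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_reminder(msg, host="localhost"):
    """Send via a local SMTP server (assumed to be running)."""
    with smtplib.SMTP(host) as smtp:
        smtp.send_message(msg)

msg = build_reminder("Renew passport", "Expires soon; book an appointment.")
print(msg["Subject"])
# send_reminder(msg)  # uncomment once a real SMTP host is configured
```

Building and sending are split so the reminder text can be composed (or tested) without a mail server available.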
Get enough sleep
The best way to do this that I know of is to set a fixed time for waking up, then go to bed when you are tired. Unless you have a sleep disorder, your body will automatically regulate and make you tired at an appropriate time so that you can wake up at your fixed time. Even if you do have a sleep disorder this can work: I have narcolepsy and it works for me.
+2. Sleep is great in general, but it won’t do much about this specific issue other than give you more energy to deal with it. An unwanted side effect might be discovering that you need a lot more sleep than you would like to need, but then that was already the case before; you were just tired all the time.
Eat enough food
Your body won’t work if you’re hungry. If you are hungry, eat. Get enough protein, carbs, and fats to make your body go. Also get enough micronutrients or else you’ll still have a hard time though you won’t die.
+2. Like sleep, just a general enhancer that makes everything better, so will naturally help with aligning your preferences to your long term and short term objectives. Downside is you might turn out to have an eating homeostasis issue and get fat.
Exercise
Bodies evolved to do work. If your body doesn’t do work it seems to languish in various ways that affect your mental health.
+2. General enhancer again. Downside is time investment and possibly suffering if you can’t find exercise you enjoy.
You seem to be answering my question after all, even if you don’t ask it: the answer is that it doesn’t bother you. Consider the situation I mentioned, fleshed out a bit. Suppose someone is deciding whether to clean his room or browse the internet. “It would be better to clean the room,” he says, and then browses the net. The normal akratic person in this situation would be upset that he did not manage to clean the room. You would say, “Actually, I wanted to browse the net instead, obviously, since that’s what I did.”
But then suppose it happens again day after day. The akratic person will be upset again and again, in the same way. You will instead say, “Apparently my desire to browse the net is pretty strong.”
At long last the person cleans the room. He says, “I managed to overcome my akrasia.” You say, “The room was messy enough that I actually preferred to clean it.”
The main problem with your attitude is that it seems to depend on denying moral realism: right at the beginning, when you say that you must have preferred browsing the net, the other person may say, “Sure, I preferred browsing the net. But that’s bad, because it would have been objectively better to clean the room.”
Sure, I absolutely reject moral realism, and see no way to judge whether browsing the net or cleaning your room is better other than in terms of preference satisfaction. I’ve covered this position extensively elsewhere.
I disagree and have argued the point on LW at other times. But I think the most obvious problem is the fact that you talk about “ontology”, as though moral realism implies the existence of moral atoms, or something like that.
I’m seeing a deep division between our worldviews. Because I take the phenomenological and existentialist stances, ontology and metaphysics are separated rather than combined, since there is no one true ontology one might hold. Thus, although I may not have moral truth in my ontology (my model of the world), nothing prevents you from having it as a construct in your understanding of your lifeworld. So you’re right that, to me, your position looks like at least positing the existence of moral essence as a useful sense-making structure, although I assume you take a more nuanced view than proposing an equivalent of moral phlogiston.
But maybe we should discuss this issue somewhere other than this thread? I’ll just say that there are moral realist positions that seem sensible, but I believe they only stand by rejecting phenomenology or existentialism, which is why I don’t hold them and instead end up closer to what’s often called the moral constructivist position, where moral “facts” are derived from intersubjective experience yet remain false in the normal meaning of the word.
https://mapandterritory.org/nothing-is-forbidden-but-some-things-are-good-b57f2aa84f1b
Moral realism is simply at odds with the structure of the world as we find it, so I find it unhelpful to include it in my ontology.
I replied on the open thread.