Of course I can give away, sell, or discard things I don’t want. I don’t see what about the post gave the impression that I’m unaware of that.
You seem to suggest performing sub-optimal actions to put an item to use just because you happen to own it and to have spent a significant amount of money to acquire it: I’m assuming that if you had bought your cloak for the same price as a typical sweater, you would prefer to use sweaters rather than the cloak. If this assumption is correct, then you are committing the sunk (edited, thanks wedrifid) cost fallacy.
If the cloak had been the price of my typical sweater (which I get from thrift stores), not only would I already own five of them, I would choose between the individual cloaks and sweaters in the same way that I currently choose between just my sweaters (based on what would look best with my outfit). The fact that I give my cloak preference doesn’t actually imply sunk-costing, though. I made up my mind to do what I am doing, cloakwise, before I bought the cloak: I wanted the cloak so I came up with a way to ensure that I’d get the relevant amount of value out of it so I could sensibly acquire it.
And now I’m done engaging your criticisms; they are tiresome.
From the few comments of yours I’ve read, I’ve noticed that you have a pattern of taking criticism as personal attacks.
Nobody here is trying to teach you how to live your life, but if you engage in public discussion, you can’t expect your claims to remain unchallenged, whether for good or bad reasons. If you are offended when you receive criticism on examples taken from your real life, then don’t use them.
As army1987 pointed out, the advice you give appears to entail the sunken cost fallacy, and your last answer doesn’t seem to refute that: you first committed to acquire the cloak because you “wanted” it, and then you decided to give it a priority that is dystonic with your true preferences (which would be to use the cloak only when it matches the rest of your outfit) in order to retroactively justify your commitment.
Assuming that your cloak is not a Veblen good (which would gain you utility directly from its price due to status signalling), then, if you wanted to be instrumentally rational, you should have based your decision to buy the cloak on the utility you expected to get from it by using it according to your true preferences, irrespective of its price, and then compared it to the expected utility of other uses of the same amount of money (including savings or donations).
Read it again. Then keep reading it. Look up sunk cost fallacy again if necessary. You are just trivially wrong.
No, she called your criticisms tiresome because they were repetitive, inane and completely unresponsive to her actual words on the subject. Of course there isn’t any point in her trying to engage with them further.
Did you decide what you were going to do with the cloak before you bought it, or did you correctly predict what you would do with that cloak if you had it?
I endorse wedrifid’s reply. Thanks, wedrifid.
The post seems to be directly advocating deciding to do more things with your stuff, and choosing stuff to buy that you will be able to successfully make yourself do more stuff with. While it is conceivable that this is rationalisation all the way down, it would be polite to accept the explicit testimony that she made up her mind before the purchase. If this is to be denied, then it would need to be done by questioning whether some part of the post represents formalised rationalisation. Because whatever else, it is clear that Alicorn’s testimony in the replies is entirely consistent with the great big essay she wrote on the subject.
i.e. it can’t coherently be said of the post “If true, this represents execution of the sunk cost fallacy”. It would be coherent (albeit rather implausible) to claim of the post “This post represents a failure of luminosity (in the sense of Alicorn’s ‘Living Luminously’). You really just use stuff due to the sunk cost fallacy and are now writing a post that advocates deliberately buying stuff that you will be able to actively make yourself use more, as a mere rationalisation.”
Right, I’m just uncertain how to tell the difference between “I want a flannel cloak, and in order to justify the purchase of a flannel cloak, I will wear it when it is not the best thing in my wardrobe.” and “I believe that the addition of this flannel cloak to my wardrobe will have a positive net effect.”
This is because I know it would be awesome for me to have a cloak, but I would have to change my current patterns of behavior in order to wear it regularly. I am one of the people who would leave it in the back of the closet except for special events, and I would not be satisfied with the purchase of a cloak. I am not someone for whom the addition of a cloak would be a net positive.
I also know that I could decide to change, and instead wear a cloak often enough that the investment was justified. I could choose to change into the kind of person who gets value out of wearing a cloak. I don’t know what other changes would be associated with that, and without a lot more information on which to gauge the expected results of changing the kind of person I am, I won’t do it.
I was trying to determine if the particular anecdote I just encountered was an example of someone changing their habits knowingly and in advance, or someone who simply knew what their habits were already going to be, and made a correct value judgement.
I’m playing a game with my brain: I will let it make me spend $80 on a cloak, if it will produce enough impetus towards cloak-wearing and cloak-enjoying that I actually get $80 of value out of it.
...
If that doesn’t sound appealing to you—if you don’t want to bake muffins—then you shouldn’t have acquired a muffin tin.
The opening of that section starts with the idea of ‘change into the person who will benefit from the decisions you have made’, while the closing is very much ‘correctly evaluate whether you will benefit from the decisions you are about to make’.
Consider it as a basic optimisation algorithm. You evaluate the expected utility of your life if you buy the cloak and compare it to the expected utility when you do not buy the cloak. Answering the first question requires recursing to the optimisation of how you will live your life assuming you have the cloak. Because naturally the utility you wish to estimate is the utility of having the cloak assuming you use it well, not the utility of the cloak assuming you use the cloak to suffocate babies.
Here some of the principles of “Min-Max” come into play. That means not assuming that we have perfect control of all future actions, but instead making a realistic assessment of how we will behave, with other parts of ourselves partially modeled as agents with different incentives in a game.
For instance, if I happened to assign ten quintillion utility to making muffins every day for the next sixty years, and yet know that even if I buy a muffin tin I’m still not going to make muffins, I will choose not to buy a muffin tin. This is the same reasoning that applies when I am playing chess and white opens with f3: I don’t instantly move e5 under the expectation that white will move g4, because that isn’t likely to happen.
But if I know that if I buy a kettlebell I will do kettlebell swings regularly, as well as show it off to my sports-nerd friends and family, then I will choose to buy the kettlebell.
Overall this means “Make purchases where you predict that you will change into (or already be) a person who gets a net benefit from having made that purchase”.
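A minimal sketch of the comparison described above, assuming made-up utility numbers and two hypothetical buyers (only the $80 price comes from the thread):

```python
# Toy sketch: the value of "buy the cloak" is the value of your life given the
# cloak, which requires choosing how you would actually use it -- but only over
# usage policies you realistically predict you would follow.

def expected_utility_of_buying(price, feasible_policies):
    """Best achievable utility over the usage policies you predict you would
    actually follow, minus the purchase price."""
    if not feasible_policies:
        return -price
    return max(utility for _name, utility in feasible_policies) - price

UTILITY_OF_NOT_BUYING = 0  # baseline: keep the money, keep wearing sweaters

# A buyer who predicts she can make herself wear the cloak regularly:
keen_buyer = [("wear it whenever it suits the outfit", 130)]
# A buyer who predicts the cloak would stay in the back of the closet:
reluctant_buyer = [("special events only", 30)]

print(expected_utility_of_buying(80, keen_buyer) > UTILITY_OF_NOT_BUYING)       # True:  130 - 80 > 0
print(expected_utility_of_buying(80, reluctant_buyer) > UTILITY_OF_NOT_BUYING)  # False: 30 - 80 < 0
```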
If you assign some amount of utility to making muffins, and then choose not to make muffins, then you are either failing to optimize for utility, or assign some larger amount of utility to something which is mutually exclusive with making muffins.
What you have just described is correctly predicting your future actions, not deciding what your future actions will be for the purpose of reducing the negative effects of an action which would otherwise be a short-term benefit followed by a long-term harm. I predict that I would benefit greatly from buying a cloak, but I would be harmed more in the long term from having to keep and maintain a cloak that I rarely used. Without hacking myself, I would rarely use a cloak, and that is why I haven’t purchased one.
I thought I saw someone who had managed to hack their own future preferences for a purpose similar to my own, and was trying to confirm if that was the case and, if so, gather a data point from which to evaluate the expected results of imitation. It appears as though I just found someone who made the same decision based on simple evaluation of the expected results, without a prior attempt to alter the expected results.
And while my opinions regarding cloaks are literally true, my final goal isn’t to decide whether to buy a cloak, but to practice, in a low-risk environment like garb, the skills required for high-risk behavior with multiple complicating factors.
But she specifically stated that she would use that cloak less often if she had paid less for it. Since we are not talking about a luxury item that you can display in public for a status signalling effect that increases with its price, this means she is behaving suboptimally according to her own preferences in order to retroactively justify the expenditure.
She may have planned to do that before she actually bought the cloak, but this is irrelevant: in principle you could plan all your actions ahead of time and turn yourself into a lookup table, but you can still commit the sunk (edited, thanks wedrifid) cost fallacy.
I believe you are mistaken for the reasons previously explained in redundant detail. I have indicated my support of the decision algorithm suggested by the post and reasonable interpretations thereof in as much as it represents a coherent strategy for optimising the preferences the author indicates that she has. (I reserve judgement as to what extent the specific behavioral suggestions can be generalised to either all people or to myself in particular. I lack evidence regarding either.)
I do not believe this conversation will be able to progress beyond further repetitions of “She said use the sunk cost fallacy”, “No, that isn’t the strategy advocated at all, and you don’t understand what the term means”. I suspect readers would prefer that I did not spam them in that manner.
It’s not the sunk cost fallacy: the cloak IS the best thing in the wardrobe to wear. The cloak, minus $80, is a net positive; the question I am trying to answer is “Would the cloak and -$80 have been a net positive without the prior commitment?”
The point is whether it is the best thing to wear because of its price. If it is, then it’s a sunk cost fallacy; if it isn’t, then there is no fallacy.
According to my interpretation of what the OP said, in particular:
If the cloak had been the price of my typical sweater (which I get from thrift stores), not only would I already own five of them, I would choose between the individual cloaks and sweaters in the same way that I currently choose between just my sweaters (based on what would look best with my outfit).
the fallacy seems to be present.
Anyway, for the purpose of this discussion, the actual OP’s thought processes aren’t of particular interest: we are not here to decide whether to award Alicorn a rationality badge. What matters is whether the general form of the argument is correct or fallacious.
1) buy a new spatula and use it = positive utility
2) buy a new spatula, then forget it exists = negative utility
3) never buy the spatula = neutral
The ideal situation is #1. Alicorn is describing a fix to the common failure state, which is #2. Using this technique should result in either #3 (you won’t use the item and therefore don’t buy it) or #1 (you remember to use it).
The issue is that to get $80 worth of value from the cloak, you have to remember to use it. This will require changing habits that would previously have relied on other possessions (in Alicorn’s case, sweaters). For some people and some purchases, people forget to evaluate this. Alicorn has developed a technique that makes her aware of this requirement, helps her evaluate whether she really will remember to use it, and then further helps her to actually use it.
I think you’re confusing commitment with the sunk cost fallacy. Alicorn is committing to using a new wardrobe algorithm, which will properly value the cloak. Her old algorithm would undervalue it, because it wasn’t designed to handle “I have a cloak”. The sunk cost fallacy applies only if Alicorn continues wearing the cloak despite it being a reduction in utility; everything she has said here seems to indicate that wearing the cloak increases her utility; she just has to be careful to remember it as an option.
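A minimal numeric version of the three spatula outcomes above; all the numbers are invented, and the only point is that the purchase is worthwhile exactly when the buyer really will remember to use the item:

```python
# Outcome 1: buy and use     -> positive utility
# Outcome 2: buy and forget  -> negative utility
# Outcome 3: never buy       -> neutral (0)

def expected_value_of_purchase(price, use_value, clutter_cost, p_remember_to_use):
    buy_and_use = use_value - price             # outcome 1
    buy_and_forget = -(price + clutter_cost)    # outcome 2
    return (p_remember_to_use * buy_and_use
            + (1 - p_remember_to_use) * buy_and_forget)

NEVER_BUY = 0.0  # outcome 3

# Someone who really will reach for the cloak instead of a sweater:
print(expected_value_of_purchase(80, 150, 10, p_remember_to_use=0.9))  # 54.0  -> buy
# Someone who will forget it exists:
print(expected_value_of_purchase(80, 150, 10, p_remember_to_use=0.1))  # -74.0 -> don't buy
```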
If the cloak is the best thing to wear because of its price, it still isn’t sunk cost.
It’s when the cloak is worn because of its cost, despite not being the best thing in the wardrobe, that sunk cost applies.
In poker, if I raise $80 on a value bet, and an opponent raises that by $1 (causing me to update the chances that I have a better hand), it is sunk cost to figure “There is a 1% chance that I will win the pot, but I’ve already put in $80 so I might as well lose another dollar”. It is perfectly rational to figure “There is a 1% chance that I will win the pot, and the pot will contain $162 if I risk $1, and that call will end further betting. I expect +$0.62 from calling, and $0 from folding.”
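Working the poker arithmetic from that comment, with the only added assumption being that folding is worth exactly $0:

```python
# The $80 already in the pot is a sunk cost: it is not part of the cost of
# calling, though it is part of the pot that can be won.

P_WIN = 0.01          # estimated chance of winning after the opponent's raise
POT_IF_CALLED = 162   # dollars in the pot once the $1 call goes in
CALL_COST = 1

ev_call = P_WIN * POT_IF_CALLED - CALL_COST   # 0.01 * 162 - 1 = +0.62
ev_fold = 0.0

print(f"EV(call) = {ev_call:+.2f}, EV(fold) = {ev_fold:+.2f}")
# The fallacious version reasons "I've already put in $80, so I might as well
# lose another dollar", letting the sunk $80 drive the decision instead.
```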
That’s what I meant.
It’s almost exactly the opposite of what you wrote.
I’m not sure what we are exactly disagreeing about.
I’m assuming that the cloak is not a Veblen good, hence the utility of wearing it is not correlated with its price.
What do you mean by “best thing in the wardrobe”?
“The best thing in the wardrobe” is that which, when worn today, will have the highest expected utility.
If the price correlates with the amount of effort you put into convincing yourself to enjoy wearing it, and that effort correlates with how much you enjoy wearing the cloak later on, then yes the price is correlated with the utility of wearing the cloak.
If this price-causes-effort-causes-enjoyment chain brings the total net utility of the cloak from “not the best thing” to “the best thing”, then you’re still winning because there was a potential profit margin in the enjoyment compared to the price and effort costs, and you took the opportunity to make some utilitarian profit.
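A rough numeric sketch of that price-causes-effort-causes-enjoyment chain; every number except the $80 price is invented:

```python
price = 80                      # dollars, from the thread
effort_cost = 10                # disutility of deliberately building the cloak-wearing habit
enjoyment_without_effort = 40   # cloak mostly stays in the closet
enjoyment_with_effort = 150     # cloak becomes the thing you reach for

net_without_effort = enjoyment_without_effort - price            # -40: not worth buying
net_with_effort = enjoyment_with_effort - price - effort_cost    # +60: worth buying

print(net_without_effort, net_with_effort)
# If the chain lifts the cloak from "not the best thing" to "the best thing",
# the purchase plus the effort still come out ahead of not buying at all.
```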
Can you please explain this to me, in PM if necessary? I’ve read all your comments on this thread, and I still seem to be fairly convinced the sunk cost fallacy is in play. I might just be pattern matching “I spent X on it, so...”, which seems to be the only requirement for the sunk cost fallacy.
If the difference is planning, it still seems like the plan involves “My brain will fall prey to the sunk cost fallacy, so let me trick it into doing what I want using sunk costs”.
Making commitments doesn’t mean that you are engaging in the sunk cost fallacy.
Let’s say I want to exercise for 30 minutes every day next week. I could promise to pay you $100 if I don’t fulfill my goal. Making that promise will increase the likelihood that I will exercise every day.
You have to ask yourself two things: “What’s the value of the increased likelihood of exercising for yourself?” and “What’s the likelihood of having to pay the $100? What’s the expected cost of making the commitment?”
If being healthy is really valuable to you, the benefit of the increased likelihood of exercising might be $75 for you. The chance of losing the money might be 0.25 and therefore cost you an expected $25. The net value of making that commitment is $50. It’s a good idea to make the commitment contract.
Alicorn advocates making a commitment when buying an item. This commitment is supposed to have two advantages: 1) increasing the expected utility in the case of buying the item, and 2) decreasing the chance that she makes a bad buying decision.
It should be fairly trivial to see that 2) is true. Determining whether 1) is true is more complicated. It’s about far mode vs. near mode and about the value of focusing attention.
Alicorn doesn’t use a formal commitment contract for her robe. She would feel a bit of emotional pain if she violated her commitment. For illustration purposes, let’s imagine she used a commitment contract. She would go to a friend and say: “If I don’t wear the robe 10 times in the next three months I will pay you $100”.
Let’s say it’s the last day of the three months and she has worn the robe 9 times. She needs to wear the robe today or lose the $100. Today, she’s interviewing for a new job. If she wears the robe to the interview she expects to have lower chances of getting the job; she estimates the cost of wearing the robe to the interview as $200. If she still decides to wear the robe, she’s committing a fallacy.
That doesn’t mean that it was a mistake to make the commitment. Without it she might have worn the robe only 3 times, and the 6 added times of wearing the robe might be worth $100.
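The commitment-contract arithmetic from that comment, using its own numbers:

```python
benefit_of_increased_exercise = 75   # value of the higher chance of actually exercising
p_having_to_pay = 0.25               # chance of failing and owing the $100
penalty = 100

expected_cost = p_having_to_pay * penalty                    # $25
net_value = benefit_of_increased_exercise - expected_cost    # $50

print(net_value)  # 50.0 -> making the commitment contract is worth it
```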
The scenario you describe seems relevantly different from the one Alicorn described. I completely understand how commitment contracts are helpful, and even though they rely on (or are necessitated by) human cognitive flaws, the sunk cost fallacy is not among them.
In your scenario, at time #10 you’re making the rational decision between wearing the robe today and $100. In a parallel scenario to this one based on what Alicorn did, you instead would pay the $100 ahead of time and then (for some reason) be committed to wearing it 10 times. Now, what decision are you making at time #10 that is similar to the case above?
I also believe the same thing with respect to you. I suppose we can’t resolve this disagreement, so we can close this discussion.
The opening of that section starts with the idea of ‘change into the person who will benefit from the decisions you have made’, while the closing is very much ‘correctly evaluate whether you will benefit from the decisions you are about to make’.
These aren’t really in tension. Do you want to bake muffins? Then arrange to bake muffins. Do you not want to bake muffins? Then don’t buy a muffin tin. Do you want to have and benefit from a cloak? Then acquire and arrange to wear a cloak. Do you not want to have and benefit from a cloak? Then don’t acquire a cloak. The failure mode we wish to avoid is “want to use thing, wind up not actually using thing”.
Thank you. What you have just described, in terms of my original question, is that you correctly predicted that you would wear that cloak and acquire significant value from it.
Perhaps it might be better described as “Get into the habit of doing the things that you believe are optimal.” If you get into the habit of not baking muffins because you don’t have a muffin tin, then getting a muffin tin will not break the habit of not baking. The step ‘break the habit of not doing something you want to do’ is pretty important in acquiring value out of an item which enables you to do something that you want to do.
I suspected, however, that your actual advice was ‘change so that you want to do things so that you can acquire value from things which enable you to do those things’.
Planning ahead of time to perform sub-optimal actions would seem to require a different label to “Sunk Cost Fallacy”, assuming said actions would, in fact, be sub-optimal.
Throwing The Cost In The Sea Fallacy?
We could customize it to “Muffin Making Modification”!
Madness?
“People are crazy, the world is mad!” does tend to work as a generic catch-all. It perhaps lacks a little a specific descriptor.
Planned sunk (edited, thanks wedrifid) cost fallacy?
Or perhaps the “It’s Just Not A Sunk Cost Fallacy When You Have Not Sunk Cost And For Crying Out Loud It Is ‘Sunk’ Not ‘Sunken’ I’ve Been Trying To Correct That Subtly” fallacy?
The sunk cost fallacy is taking into account costs that have already been incurred and are unrecoverable when making your current decision.
Planning your decisions ahead of time is merely an optimization: in order to do it correctly, you have to plan each decision according to the world state that you expect to occur when that decision is to be executed.
In the cloak example, in world states where a decision between wearing the cloak and wearing a sweater is to be executed, the price of the cloak and the price of the sweaters are sunk costs, thus they should not affect the decision.
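A minimal sketch of that last point: once both garments are owned, today’s choice depends only on forward-looking utilities, so the purchase prices never enter the comparison. The utility numbers are invented:

```python
def choose_outfit(options):
    """options maps garment -> utility of wearing it today; prices do not appear."""
    return max(options, key=options.get)

today = {"cloak": 7, "sweater": 9}   # the sweater happens to suit today's outfit better
print(choose_outfit(today))          # sweater

# Adding a bonus for "getting your money's worth" out of the $80 cloak at this
# point is exactly the sunk cost reasoning described above.
```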
I’m assuming that if you had bought your cloak for the same price as a typical sweater, you would prefer to use sweaters rather than the cloak.
Instead, just assume that if she had not found excuses to wear the cloak, she would use sweaters rather than the cloak. This could be chosen by habit rather than considered preference.
So why did she mention the price of the cloak as a relevant factor?