Cleverness-related failure mode (that actually came up in the trial unit):
One shouldn’t try too hard to rescue non-consequentialist reasons. This probably has to be emphasized especially with new audiences who associate “rationality” with Spock and university professors, or audiences who’ve studied pre-behavioral economics, and who think they score extra points if they come up with amazingly clever ways to rescue bad ideas.
Any decision-making algorithm, no matter how stupid, can be made to look like expected utility maximization through the transform “Assign infinite negative utility to departing from decision algorithm X”. This in essence is what somebody is doing when they say, “Aha! But if I stop my PhD program now, I’ll have the negative consequence of having abandoned a sunk cost!” (Sometimes I feel like hitting people with a wooden stick when they do this, but that act just expresses an emotion rather than having any discernible positive consequences.) This is Cleverly Failing to Get the Point if “not wanting to abandon a sunk cost”, i.e., the counterintuitive feel of departing from the brain’s previous decision algorithm, is treated as an overriding consideration, i.e., an infinite negative utility.
It’s a legitimate future consequence only if the person says, “The sense of having abandoned a sunk cost will make me feel sick to my stomach for around three days, after which I would start to adjust and adapt a la the hedonic treadmill”. In this case they have weighed the intensity and the duration of the future hedonic consequence, rather than treating it as an instantaneous infinite negative penalty, and are now ready to trade that off against other and probably larger considerations like the total amount of work required to get a PhD.
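(A minimal sketch of the point above, for readers who like to see it spelled out: the transform makes any fixed rule look like expected utility maximization, whereas the legitimate move prices the sunk-cost feeling as a finite, time-limited cost. The Python framing and all utility numbers are my own illustration, not anything from the original comment.)

```python
# Illustrative sketch only: how the "assign infinite negative utility to
# departing from decision algorithm X" transform vindicates any fixed rule,
# versus weighing a finite, time-bounded hedonic cost. All numbers invented.

def rescue_as_utility(previous_choice):
    """Build a 'utility function' that rescues an arbitrary prior decision."""
    def utility(action):
        # Departing from the old decision algorithm is treated as infinitely bad.
        return 0.0 if action == previous_choice else float("-inf")
    return utility

def best(actions, utility):
    """Pick the action with the highest utility."""
    return max(actions, key=utility)

actions = ["stay in the PhD program", "leave the PhD program"]

# Whatever was already chosen is now, trivially, the "utility-maximizing" act.
print(best(actions, rescue_as_utility("stay in the PhD program")))
# -> stay in the PhD program, by construction

# The legitimate version prices the sunk-cost feeling as a finite consequence
# (roughly three bad days) and trades it off against everything else.
def utility_with_finite_regret(action):
    if action == "leave the PhD program":
        regret = 3 * 10            # ~3 bad days at, say, 10 units of displeasure each
        freed_time = 2 * 365 * 5   # ~2 reclaimed years valued at 5 units per day
        return freed_time - regret
    return 0.0                     # baseline: grind out the remaining years

print(best(actions, utility_with_finite_regret))
# -> leave the PhD program (with these made-up numbers)
```

The first decision rule is vacuous by construction; the second actually lets the trade-off come out either way depending on the numbers, which is the whole point of converting the reason into a finite consequence.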
One of the other models people have for the rationalizing sort of “rationality” is that of lawyers.
Lawyers are very good at logic — the LSAT, the entrance examination for U.S. law schools, leans heavily on logic puzzles — but the whole point of being a trial or appeals lawyer is to come up with clever (and socially respectable) arguments for whatever position your client may have at the moment.
This extends past real-world lawyerhood. The tabletop role-playing game crowd have the expression “rules lawyer” for a person who comes up with clever arguments for why their character should get away with whatever they want to at the moment.
Indeed I think this is the central problem with the way most people use their powers of reasoning. (It even has a name: “the argumentative theory of reason”.) They start with a conclusion, and work backwards to find rational (or at least rational-sounding) ways of supporting that conclusion.
We all do this automatically; it may be the very thing our brains evolved to do. We have to work very hard to get ourselves to do the opposite: start with evidence and use reasoning based on that evidence to decide on our conclusion. I’d say most scientists manage to do this right maybe half the time, and most laypeople almost never manage it.
(Sometimes I feel like hitting people with a wooden stick when they do this, but that act just expresses an emotion rather than having any discernible positive consequences.)

My normal response is to ask, “So what’s bad about that?” and go a few rounds until the person has to struggle for an answer… the teachable moment where I can say, “You see what you’re doing? You’re just making stuff up. What’s actually going to happen?”
(That being said, it would definitely have been helpful for me in the past if I had thought to confine questions of consequences to things happening at a point in time. I eventually figured out that I needed to ask that for things people were thinking about or remembering, but there was a long time when I also had the hit-them-with-a-stick frustration in response to this kind of answer.)
The only suggestion I have for exercises is to make people write down their own thinking (or state their thinking out loud), and then read it back as a kind of grammar-checking exercise. Are these abstract nouns or concrete nouns? Do they describe a point in time or some sort of vague non-timey thing?
I’ve done some similar things with small groups, though, and one thing that becomes quickly apparent is that everybody already knows when somebody else is doing it wrong. The part of the exercise that’s hard is learning to apply it to your own thoughts or utterances, and for that, it helps to externalize them first, then treat them as input.
To put it another way, the prerequisite 5-second skill for consequence checking is reflecting on what you just said or thought. If people don’t reflect on their utterances, no further debiasing skills can be applied.
This is perhaps ironic, because I have been going through precisely this PhD sunk-cost problem myself for the past few months, but regret bias is a serious part of behavioral psychology. I’ve been dissatisfied for a while with the direction that publication standards are moving in my current field (computer vision), and as a result have had a tough time finding an adviser/project match that would let me do things at a more abstract mathematical level. No one is very interested in those papers. Ultimately, over a two-year period, I reasoned that it was better for me to leave the PhD program, find a job that allowed me to pursue certain goals, and leave research ideas to my own spare time. The single most difficult hurdle in reaching this decision was feeling very worried that I would regret leaving my institution (Harvard), because everyone tells me that a PhD from Harvard “opens lots of doors”, and lots of people who I trust and think are non-trivially intelligent have insisted that unpleasantly sticking it out in the PhD program just to obtain the credential is absolutely the best thing.
My own assessment is that I will do just fine without that particular credential, and that being able to use personal time to pursue the research I care about, even if I ultimately am not talented enough to publish any of it on my own, will be more fulfilling. But this was a damn hard conclusion to come by. I felt stressed and nervous, concerned that I would hate my future job’s working conditions and beat myself up over not sticking it out at Harvard. I largely made it into Harvard through a sheer, stupid ability to work unreasonably long hours to self-teach, that is, by stubbornly never quitting; it’s not easy, however rational I wish to be, to feel free of these kinds of self-identity stigmas (e.g., “don’t be a quitter”).
I guess what I’m trying to say is that the perceived future pain of regretting a decision is a legitimate consequence to consider. And sometimes that is absolutely a consequence that one should wish to avoid. To offer another example from my own life, a family member became unexpectedly pregnant while she was an unmarried 19-year-old college student. After many talks about the situation in general, I was asked what my own opinion was about the option of getting an abortion. I said it seemed like a reasonable option and might ultimately be the best thing, obviously modulo the person’s personal beliefs. Ultimately, however, this family member chose not to get the abortion because of the regret she anticipated feeling over having terminated a potential life.
The person said, “if I get an abortion, then in the future I will remember that I did that thing and (as far as I can tell right now) I will always feel visceral pain about that.” That is a legitimate future consequence.
I think the problem that you want to isolate is different from just regret bias. I think the problem you want to address is the fact that a person’s current self is usually slavishly in the service of the remembering self, as Kahneman puts it. We buy things because we think they will provide lots of utility, but then a few months later we don’t even use them any more. We prefer to keep our hands in painfully cold water for a longer total time as long as the last stretch of that time includes slightly warmer water (and thus leaves a less unpleasant final memory). And we think we will regret something a lot more than we really will.
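(To make the cold-water claim concrete, here is a rough sketch of the peak-end pattern Kahneman describes: remembered discomfort tracks the worst moment and the final moment while largely neglecting duration. The simple averaging rule and all the numbers below are illustrative assumptions, not data from the actual experiments.)

```python
# Rough illustration of the peak-end pattern: remembered discomfort tracks the
# worst moment and the last moment, largely ignoring duration. Numbers invented.

def remembered_discomfort(per_minute_discomfort):
    peak = max(per_minute_discomfort)
    end = per_minute_discomfort[-1]
    return (peak + end) / 2            # crude, duration-neglecting memory score

def total_discomfort(per_minute_discomfort):
    return sum(per_minute_discomfort)  # what the experiencing self actually endured

short_trial = [8, 8, 8]        # three minutes of quite cold water
long_trial = [8, 8, 8, 6, 5]   # the same three minutes, plus two slightly warmer ones

print(total_discomfort(short_trial), total_discomfort(long_trial))            # 24 vs 35
print(remembered_discomfort(short_trial), remembered_discomfort(long_trial))  # 8.0 vs 6.5
```

The longer trial contains strictly more total discomfort, yet scores better in memory, which is why the remembering self picks it.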
You want to design exercises that set up a stark comparison between how you think you will remember something and the actual facts of the matter. Then focus on situations where the first component (how you think you will remember something) actually should matter (perhaps an abortion is a good example of that), and show how the cognitive machinery appropriate for problems like that one is completely inappropriate for problems like “should I upgrade to the new iPad 2 because of the shinier screen”, even though we use the same cognitive software for both.
This sort of thing has been looked at w.r.t. dietary decisions before. I believe the results showed that when you are under a cognitive load, you’ll make consistently poorer snack choices than when you are not being asked to answer hard questions. Imagine how much more this would be influenced by the stresses of a situation like anguishing about whether to leave a PhD program.
I’m not optimistic that there is an easy way to address this. It seems to fit in with Hanson’s near/far mode ideas as well. When in near mode, we’ll be more capable of isolating practical constraints and consequences of a decision. But if a question immediately puts us in far mode, it’s much harder.
Consider the difference between “Should I leave my PhD program?” and “Should Jeff leave Jeff’s PhD program?” As much as people fail to pick out the consequences in their own decisions, I would suspect it’s far worse when migrated to another person’s issue. We tend to give advice in far mode, but always expect to receive advice in near mode.
There’s just a lot to disentangle about this. My opinion is that it would be better to break up the problem of “Why don’t people make sound consequentialist decisions?” into a bunch of smaller, domain-specific sub-problems, and then to build the small tests around recognizing those sub-issues. Once people are good at dealing with any given sub-issue, conditional on recognizing that they are in it, then move on to exercises that teach them how to recognize potential sub-issues.
Actually, I think it’s important to try to convert the reason into a consequentialist reason every time; it’s just that one isn’t done at that point: you still have to step back and decide whether the reason is strong enough. As with the murder example, one needs to avoid dismissing reasons merely for being in the wrong format.
“I don’t want to tell my boyfriend because he should already know” translates to: in the universe in which I tell my boyfriend, he learns to rely on me a little more to tell him these things, and his chance of doing this sort of thing without my asking decreases in the future. You then have to ask whether this supposed effect is really true and whether the negative consequence is strong enough, which depends on things like the chances that he’ll eventually figure it out on his own. But converting the reason gets you answering the right questions.
Sunk cost fallacy could be a sign that you don’t trust your present judgement compared to when you made the original decision to put the resources in. The right question to ask is why you changed your mind so strongly that the degree isn’t worth it even at significantly less additional cost. Is it because of new information, new values, new rationality skills, or just being in a bad mood right now?
An advantage is that you feel just as clever for coming up with the right questions whatever you decide, which ought to make it a bit easier to motivate yourself to implement this.
Sunk cost fallacy could be a sign that you don’t trust your present judgement compared to when you made the original decision to put the resources in.

Definitely useful. I personally find the two have a very different emotional/internal “flavor”—I can tell when I want to avoid a sunk cost vs. when I’m in a bad mood and just don’t want to deal with a cost—but that’s not necessarily always true of me, much less anyone else.
It’s a legitimate future consequence only if the person says, “The sense of having abandoned a sunk cost will make me feel sick to my stomach for around three days, after which I would start to adjust and adapt a la the hedonic treadmill”

I wouldn’t even allow that. I much prefer to treat such a sense as a (misguided) signal about the map, rather than a piece of territory that I intrinsically care about. Seeing things with this framing allows you to explore the signals with less distortion, and allows them to go away more easily once you take them into account. If you start treating them as things to worry about, then you get sadness about sadness, fear about fear, and other information cascades that can be quite destructive.
Additionally, in the cases where the irrational discomfort actually sways your decision over the threshold, you’re training yourself to listen to things that should not exist in the first place, which just reinforces the problem.
This strikes me as a perfect lead-in to Spock-style “Bah, my emotions SHOULDN’T exist, therefore I will just IGNORE them”. This does not work well.
If we ignore a REAL negative consequence in our planning, we’re going to get frustrated when the consequence happens anyway, because now it’s an UNEXPECTED negative consequence of our decision. If we further decide that we’re not REALLY having that negative consequence, then it will get further exacerbated by our unwillingness to accept the situation, and therefore our inability to actually do anything to fix it. It’s entirely possible that we’re now miserable for two weeks instead of three days.
Heck, it’s entirely possible the whole thing could have been fixed by thinking about it and saying “I would normally feel bad, but since I’m aware of this, I can instead just remind myself of the awesome rational decision I’m making, and how cool my life is because of this Rationality thing!”, possibly supplemented by a celebratory slice of cake to reinforce that this is a positive, not a negative, event. (And cake makes everyone happy!)
This strikes me as a perfect lead-in to Spock-style “Bah, my emotions SHOULDN’T exist, therefore I will just IGNORE them”. This does not work well.

No no no, not that. That’s terrible!
“listen” is ambiguous—oops. You want to acknowledge the feeling, but not act on it. Once you can acknowledge it, you can realize that it doesn’t make sense, and then release that feeling and be done with it.
If I’m hungry, I can’t just ignore that and continue to function at 100%. I can go eat and restore my blood sugar, or I can delay that hunger and function at less-than-peak efficiency because my body does not have all the resources it needs.
Emotions are the same way—if I feel upset or a sense of loss, I have to address that emotion. This is not always a simple “acknowledge and release” 5 minute process.
Believing otherwise will screw me up just as badly as believing I can cure hunger by “acknowledging and releasing” it instead of eating lunch.
I think we agree a lot more than you realize. Pretending that you aren’t feeling emotion that you are feeling is a recipe for disaster. In your analogy, I recommend the equivalent of eating.
However, this doesn’t mean that you yield to the emotions when they’re pushing you towards bad decisions. It also doesn’t mean you pretend that it has to be some big ordeal to fix the problem right. Those are both very bad ideas for more reasons than are obvious.
“Eating” can be anywhere from a split-second automatic response to an extended ordeal. If you know what you’re doing, the PhD example is not more than a 5-minute process—I’ve walked people through worse things in about that time.
Please elaborate!
I “cheated” a bit, in that I had them spend ~15-20 minutes with a chat bot that taught them some skills for getting in touch with those parts of their mind. Actually working through the problem was a few minutes of text chat that basically pointed out that there was no magic option and that they needed to let go of the problem emotions. All the real magic was in putting them in the state of mind to shut up and listen.
I talk about it a bit here
I suppose the best analogy I could offer here is getting robbed. It takes maybe 5 minutes to get robbed. There’s (usually) nothing you can do to fix the situation or recoup the cost. But people still feel bad about it for a while.
Your link seems to suggest, more or less, using hypnosis to just wipe out this guilt—except the examples you give don’t really seem to address that emotional side at all. You’re focusing on the intellectual acceptance of “yes, I should drop the PhD”, which isn’t what I’m talking about. I’m talking about the emotional baggage that comes with that, the sense that you’ve wasted 2 years as a sunk cost that isn’t even doing you any good at this point.
If you’re really hypnotizing away that guilt, that emotional response, then I guess I am misunderstanding you. Is that the case? Because I would say that, based on my personal experience, that is seriously dangerous territory. Not to say you shouldn’t trust yourself with it—I do it myself. But it is a technique I have seen cause a lot of people serious problems, and definitely not one I’d teach casually.
If you’re really hypnotizing away that guilt, that emotional response, then I guess I am misunderstanding you. Is that the case?

Haha! YES!

...people still feel bad about it for a while.

Yep. They’re doing it wrong.

You’re focusing on the intellectual acceptance of “yes, I should drop the PhD”

No no no no no! I’m talking about the emotional acceptance. It is a very different thing than intellectual acceptance, but that does not mean they can’t track each other. If your mind is organized well, they do track.
Have you read Kaj’s post on overcoming suffering and suffering as attention allocation conflicts? This is basically what I’m talking about.
I would say that, based on my personal experience, that is seriously dangerous territory

There is a very important distinction between suppressing emotion (perhaps successfully) and eliminating the cause of the emotion directly by coming up with a better way of handling the conflict. The latter is healthy and quite low-risk compared to the null option. This is what I do, with or without “hypnosis”.
Suppressing emotion is a recipe for disaster.
(Sometimes I feel like hitting people with a wooden stick when they do this, but that act just expresses an emotion rather than having any discernible positive consequences.)

It would have the consequence of conditioning in the subject’s mind an association between a particular thought process and being hit with a stick. Most people don’t like being hit with sticks, so the association is likely to make them avoid that particular thought process. Do you not consider “teaching people to avoid a dangerously stupid thought process” a positive consequence?
Actually they would associate the stick with a number of things, including but not limited to the stupid thought process. They would be quite likely to associate the stick with their encounter with Eliezer, and to their (failed) attempt to converse with and/or follow his thought processes. Mind: They associate the stick with all aspects of the attempt, not only with the failure.
It might work in a Master/Apprentice scenario where the stick-hitting victim is bindingly pre-committed to a year of solitude with Stick-Happy!Eliezer in order to learn from him the art of Cognitive Kung Fu. This is the only scenario I can immediately visualize in which the stick-hitting victim would not immediately decide that Stick-Happy!Eliezer is a person they can get away with avoiding, and possibly with reporting to the police for assault.
EDIT01: This is assuming that the experiential sample size is 1.
I was only pointing out that arguably-positive consequences would be present. I agree that they most likely would not predominate outside controlled conditions, and the overall decision not to engage in spontaneous armed assault was a wise one.
Rationalization is an important skill and should be rewarded, not punished. If you never try to rationalize others’ decisions then you won’t notice when they actually do have a good justification, and if you never practice rationalization then you’ll never get good enough at it to find their justifications when they exist. The result is gross overconfidence in the stupidity of the opposing side and thus gross overconfidence in one’s own rationality. That leads to tragedies and atrocities, both personal and societal.
Perspective-taking is a separate “skill” from rationalizing one’s own behavior.
Hm, is perspective-taking the same skill that I was talking about? I can’t tell. Also I thought that Eliezer’s examples were phrased in the hypothetical, and thus it’d be rationalizing others’ beliefs/behavior, not one’s own. I’m not sure to what extent rationalizing a conclusion and rationalizing one’s own behavior are related. Introspectively, the defensiveness and self-justifying-ness inherent to the latter makes it a rather different animal.
“Coming up with explanations” is a good skill.
“Coming up with a single, stupid explanation, failing to realize it is stupid, and then using it as an excuse to cease all further thought” is a very, very bad skill.
Thinking “well, but abandoning a sunk cost actually IS a negative future event” is smart IFF you then go “I’d be miserable for three days. How does that weigh against years spent in the program?”
It’s very, very bad, however, if you stop there and continue to spend 2 years on a PhD just because you don’t want to even THINK about those three days of misery.
I think understanding this dichotomy is critical. If you stop even thinking “well, but abandoning a sunk cost IS a negative future event” because you’re afraid of falling into the trap of then avoiding all sunk costs, then you’re ignoring real negative consequences to your decisions.