This is perhaps ironic because I have been going through precisely this PhD sunk-cost problem for the past few months, but regret bias is a serious part of behavioral psychology. I’ve been dissatisfied with the direction that publication standards are moving in my current field (computer vision) for a while, and as a result have had a tough time finding an adviser/project match that would let me do things at a more abstract mathematical level. No one is very interested in those papers. Ultimately, over a two-year period, I reasoned that it was better for me to leave the PhD program, find a job that allowed me to pursue certain goals, and leave research ideas to my own spare time. The single most difficult hurdle in reaching this decision was the worry that I would regret leaving my institution (Harvard): everyone tells me that a PhD from Harvard “opens lots of doors,” and lots of people whom I trust and consider non-trivially intelligent have insisted that unpleasantly sticking it out in the PhD program just to obtain the credential is absolutely the best thing.
My own assessment is that I will do just fine without that particular credential, and that being able to use personal time to pursue the research I care about, even if I ultimately am not talented enough to publish any of it on my own, will be more fulfilling. But this was a damn hard conclusion to come by. I felt stressed and nervous, concerned that I would hate my future job’s working conditions and would beat myself up over not sticking it out at Harvard. I largely made it into Harvard through a sheer, stupid ability to work unreasonably long hours to self-teach, that is, by stubbornly never quitting; however rational I wish to be, it’s not easy to feel free of these kinds of self-identity stigmas (e.g. “don’t be a quitter”).
I guess what I’m trying to say is that the anticipated pain of regretting a decision is a legitimate consequence to consider, and sometimes it is absolutely a consequence that one should wish to avoid. To offer another example from my own life, a family member became unexpectedly pregnant while she was an unmarried 19-year-old college student. After many talks about the situation in general, I was asked what my own opinion was about the option of getting an abortion. I said it seemed like a reasonable option and might ultimately be the best thing, obviously modulo the person’s personal beliefs. Ultimately, however, this family member chose not to get the abortion because of the regret she anticipated feeling over having terminated a potential life.
She said, “If I get an abortion, then in the future I will remember that I did that thing and (as far as I can tell right now) I will always feel visceral pain about that.” That is a legitimate future consequence.
I think the problem you want to isolate is different from just regret bias. I think the problem you want to address is that a person’s current self is usually slavishly in the service of the remembering self, as Kahneman puts it. We buy things because we think they will provide lots of utility, but a few months later we don’t even use them any more. We prefer to keep our hands in painfully cold water for a longer total time as long as the last stretch includes slightly warmer water (and thus leaves a more pleasant closing memory). And we think we will regret something a lot more than we really will.
You want to design exercises that set up a stark comparison between how you think you will remember something and the actual facts of the matter. Then focus on situations where the first component (how you think you will remember something) actually should matter (perhaps an abortion is a good example of that), and show how the cognitive machinery appropriate for problems like that one is completely inappropriate for problems like “should I upgrade to the new iPad 2 because of the shinier screen?”, even though we use the same cognitive software for both.
This sort of thing has been looked at with respect to dietary decisions before. I believe the results showed that when you are under a cognitive load, you make consistently poorer snack choices than when you are not being asked to answer hard questions. Imagine how much more this would be influenced by the stresses of a situation like agonizing over whether to leave a PhD program.
I’m not optimistic that there is an easy way to address this. It seems to fit in with Hanson’s near/far mode ideas as well. When in near mode, we’ll be more capable of isolating practical constraints and consequences of a decision. But if a question immediately puts us in far mode, it’s much harder.
Consider the difference between “Should I leave my PhD program?” and “Should Jeff leave Jeff’s PhD program?” As badly as people fail to pick out the consequences in their own decisions, I suspect it’s far worse when the question is about someone else’s situation. We tend to give advice in far mode, yet always expect to receive advice in near mode.
There’s just a lot to disentangle here. My opinion is that it would be better to break the problem of “Why don’t people make sound consequentialist decisions?” into a bunch of smaller domain-specific sub-problems, and then to build the small tests around recognizing those sub-issues. Once people are good at handling any given sub-issue, conditional on recognizing that they are in it, then move on to exercises that teach them how to recognize potential sub-issues in the first place.