Many of the pain points listed have a common trait: the decision would seem easier with less information. For example, the PhD decision is easier if you didn’t know about the costs which have been sunk, the identity decisions are easier if you’re not sure of your own identity, cached thought problems are easier without having that thought cached, etc…
But we know that information should never have negative value. So why not highlight that dissonance? Imagine the following exercise:
Handout: “You spent the last 3 years working toward a PhD. You passed up a $90k job to stay in the program. Now you have 2 years left, and another $90k job offer has come your way. Do you take it?” (I don’t know much about PhD programs, so feel free to imagine more plausible numbers here and add narrative).
Exercise 1: Is there any information you would prefer to not know?
Exercise 2: How much would you pay to not know it?
If you really want to have fun, give people monopoly money and let them bid to remove information from a range of scenarios. Note that we’re not offering to change the facts, just to not know them.
Personally, I think this would be a lot easier if I could just forget about all that time spent in the PhD program.
At least in this case, the exercise highlights the difference between consequentialist and non-consequentialist reasons/excuses for doing things. The “how much would you pay to not know it” is especially handy, since it puts a number on that mental pain. Then we can ask whether the mental pain is worth all the money you’d lose by, in this example, staying in the PhD program.
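To make that concrete, here is a minimal sketch of the arithmetic, using the handout’s made-up numbers; the stipend, post-PhD salary, time horizon, and “pain price” below are all placeholder assumptions, not anyone’s real figures:

```python
# Rough sketch only: every number here is a placeholder from the hypothetical.
years_left = 2                # time remaining in the PhD program
salary_now = 90_000           # the job offer on the table
stipend = 30_000              # assumed grad-student stipend
salary_with_phd = 90_000      # assumed post-PhD salary (same as the offer, per the handout)
horizon = 10                  # years over which we compare, counting from today

# Option A: take the job now.
value_take_job = salary_now * horizon

# Option B: finish the PhD, then work at the post-PhD salary.
value_stay = stipend * years_left + salary_with_phd * (horizon - years_left)

# Note that the three years already spent appear in neither option;
# the only place they enter is the "pain price" the exercise elicits.
forgone = value_take_job - value_stay
pain_price = 5_000            # "I'd pay this much to forget the sunk costs"

print(f"Staying forgoes roughly ${forgone:,} over {horizon} years.")
print(f"The bid to 'not know' the sunk costs prices the mental pain at ${pain_price:,}.")
print("The exercise then asks: is that pain really worth the forgone money?")
```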
I’m not sure this works well—last time “I” made a decision, “I” preferred five years of work for a PhD title to a $90k job now. It would seem unlikely that I’d prefer a $90k job now over two years of work for a PhD title, especially given that I’m now more sure that there are good jobs waiting for me.
Thanks, Joachim. Like I said, I don’t know much about PhD programs. What would be some better numbers to make the point?
I’m sorry, but I have no idea—I’m in the Netherlands, which has a different academic/economic structure than the US.
The problem is, that hypothetical doesn’t really have any weight, unless you specify that having a PhD will still only produce a job worth $90K, at which point the audience has to wonder why this hypothetical fool started on their degree in the first place.
I do like the point about paying to remove information—there are times I would happily have paid to remove information from my awareness, because I was aware it was biasing me in very annoying ways. I think learning to deal with that separately would be very useful, and would probably help a lot with Consequentialism (maybe they’re even the same issue? My intuition tells me they feel different internally, but I don’t have a lot of good examples available right now).
The money isn’t necessarily the only factor. Don’t forget about location, working hours, stress levels, and job satisfaction. I’d take a $70k job that’s intrinsically rewarding over a $100k job that “isn’t really my type of environment” any day.
Of course, I’d have to KNOW that the $70k job was intrinsically rewarding and that the $100k job wouldn’t be, but if the hypothetical fool does know this about their PhD job prospects (for example, they want to be an academic and the job offers so far are in unintellectual labor, or in the family business, or in a city they’d like to avoid settling down in, or involve 50% more hours than the target job at the same wage) --
I don’t know if that’s useful or not, but I’ll err on the side of opening my mouth.
Research suggests that once you have sufficient income to meet your basic needs, travel time is one of the biggest factors in job satisfaction. I think we tend to focus on income because it’s much easier to evaluate the actual pay rate of a job—if you’re promised $100K, you can expect $100K. If you’re promised 40 hours and no overtime, then you’ll often find that tested. If you’re promised low stress and high job satisfaction, well, good luck suing for breach of contract on that.
I read the first sentence of this comment three times, with increasing incredulity, before my brain finally parsed “travel time” in the correct order.
I think perhaps my expectations of LW discourse are being unduly skewed by all the HPMOR discussion.
Being promised low stress/high satisfaction and having a rough idea of what kind of work or work environment is (more or less) enjoyable to you are quite different things. A given idea of which work is enjoyable won’t be 100% accurate; there are always going to be surprises from both inside the mind and out. But most people have a rough idea what kind of work they prefer to do. That’s where the low stress/high satisfaction predictions come from in this scenario.
Obviously one can only expect so much “enjoyment” in a work environment (and no “work” is fun and enjoyable 100% of the time), but if one type of work feels worthwhile to a given person, and the other doesn’t, even if this is on the basis of inference, then for some people this is going to be a significant factor in how good/bad they feel about passing up those $90k jobs for the PhD program that might now be in question.
Fair point. I’m fairly young, so most of my social group is still trying to figure out what sort of work environment they want, and how to actually identify it—a lot of entry-level jobs outright lie about the work environment (“we value employee feedback, overtime only when necessary” → “we are going to be doing another 80-hour death march this week because of an arbitrary release deadline”).
In game theory, there are a number of situations where it is rational to handicap your own rationality: Reduce your number of choices, take away information, etc.
Now, in game theory you’re competing against someone else, whereas in this case you’re only competing against (time-indexed versions of?) yourself; but it could be that the same rules apply. Maybe it really is rational to pay to not know something.
Or maybe it’s rational for a bounded agent to pay to be counter-biased: Knowing that I have this bias toward sunk costs, make me ignorant of all sunk costs.
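As a toy illustration (all numbers invented, and this is nobody’s actual decision theory), here is the kind of situation where a sunk-cost-biased agent comes out ahead by paying a small fee to forget:

```python
# Toy model: an agent that wrongly adds sunk costs back onto whatever it has
# already invested in. All figures are invented for illustration.

def biased_choice(options, sunk, sunk_weight=1.0):
    """Pick the option with the highest future value, except that sunk costs
    are (incorrectly) credited to the option already invested in."""
    return max(options, key=lambda o: o["future_value"]
               + (sunk_weight * sunk if o["already_invested"] else 0))

options = [
    {"name": "finish PhD", "future_value": 700_000, "already_invested": True},
    {"name": "take job",   "future_value": 760_000, "already_invested": False},
]
sunk = 180_000          # e.g. three years of forgone salary (made-up number)
fee_to_forget = 1_000   # the price of the "remove this information" bid

with_info = biased_choice(options, sunk)       # bias wins: picks "finish PhD"
without_info = biased_choice(options, sunk=0)  # sunk costs unknown: picks "take job"

gain = without_info["future_value"] - with_info["future_value"]
print(f"Paying to forget changes the choice and nets ~${gain - fee_to_forget:,}.")
```

An unbiased agent, of course, gains nothing from paying the fee; the payment is only worth it because there is a bias to neutralize.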
TDT is intended to eliminate this. A TDT-agent—one that’s correctly modeled by the environment, not one that some other agent thinks is a CDT-agent—is supposed to never benefit from having any option taken away from it, and will never pay to avoid learning a piece of information.
Er, this is assuming that the information revealed is not intentionally misleading, correct? Because certainly you could give a TDT agent an extra option which would be rational to take on the basis of the information available to the agent, but which would still be rigged to be worse than all other options.
Or in other words, the TDT agent can never be aware of such a situation.
Amendment accepted.
Agreed. I think one could assert “Given a perfect decision theory AND a perfect implementation, additional information is never a negative”, but it’s silly to live as though that were true. If you know your decision theory doesn’t handle X information correctly (say, sunk costs), then it’s in your best interests to either eliminate the information or fix the decision theory.
Of course, eliminating information seems to be by far the easier option...
If I know the class of errors my decision theory tends to make given the kinds of Xes I most commonly run into, I can also adopt a third option… for want of a better term, I can patch my decision theory. E.g., “Well, I want to finish this project, but I suspect that part of that desire stems from an invalid weighting of sunk costs, so I won’t take that desire at face value… I’ll apply some kind of rough-and-ready discounting factor to it.” This is clearly not as good as actually fixing my decision theory, but isn’t as hard either, and is sometimes more practical than eliminating the information.
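For what it’s worth, a minimal sketch of what such a patch might look like, with made-up weights standing in for the rough-and-ready discounting:

```python
# Illustrative only: keep the sunk-cost information, but discount the part of
# the motivation suspected to come from it. The weights are placeholders.

raw_desire_to_finish = 0.9       # how strongly I want to finish, on a 0-1 scale
suspected_sunk_cost_share = 0.5  # fraction of that desire I attribute to sunk costs
discount = 0.8                   # how aggressively I deflate the suspect component

legit_part = raw_desire_to_finish * (1 - suspected_sunk_cost_share)
suspect_part = raw_desire_to_finish * suspected_sunk_cost_share * (1 - discount)
patched_desire = legit_part + suspect_part

print(f"Desire taken at face value: {raw_desire_to_finish:.2f}")
print(f"After the rough-and-ready discount: {patched_desire:.2f}")
```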
Very true. However, “avoid X information, since it biases me” is actually an example of such a patch. Especially if the information doesn’t otherwise have any useful value. How often does knowledge of sunk costs actually move you towards ideal action, rather than biasing you away from it?
Sure, avoiding information is an example of patching a decision theory, agreed.
So I guess what I’m saying is that “either eliminate the information, or fix the decision theory” is a misleading way to phrase the choice. My real choice is between fixing it and patching it, where eliminating the information is one of several ways to patch it, and not always the best.
Making choices about future investments in ignorance of the existing data I have about previous investments and their ROI is probably less ideal than taking those data into consideration and applying some other patch to compensate for sunk-costing.
I like the idea of phrasing it as “patching vs long-term fixes” :)