How worthwhile something is to do depends on the product of its chance of success and its payoff, but it’s not clear that anticipations of goodness scale as much as consequences of goodness do, which could lead to plans that are predictably unmotivating (even though they ‘should be’ motivating).
This is a reasonable point.
However, I have a question. How would you distinguish a case where anticipations of goodness fail to match expected goodness of consequences (aside: I think “goodness of consequences” is a less awkward and more accurate formulation here) from a case where the expected goodness of consequences differs from the claimed expected goodness of consequences?
In other words:
Alice: You should work on Project X!
Bob: Why?
Alice: Project X is very important! If accomplished, the consequences will be [stuff]!
Bob: Really?
Alice: Yeah! Because of [reasons]!
Bob, thinking: That sounds dubious but I can’t really explain why…
Bob: I am convinced.
Bob, thinking: I am not convinced…
Alice: Great! Then you’ll work on Project X, right? Because it’s so important?
Bob, thinking: There’s no good reason for me to say no…
Bob: Of course I’ll work on Project X.
Bob, thinking: I won’t work on Project X.
Later:
Alice: Bob, why haven’t you been working on Project X?!
Bob, thinking: If I tell her that I was never convinced in the first place, that will look bad…
Bob: Uh, motivation…al… problems. My, uh, System 1. And stuff. You know how it is.
Alice: Confounded System 1! Don’t worry, Bob, I’ll figure out a way around this problem!
Bob: Great! I look forward to being able to work on Project X, which is important.
Bob, thinking: Phew…
Edit: See also “epistemic learned helplessness” (which, as Scott points out, is exactly the correct response much of the time).