Our conceptual understanding of ‘motivated cognition’, and why it’s defective as a cognitive algorithm—the “Bottom Line” insight.
“Defective” isn’t quite enough; you want a prescription to replace it with. Saying “this is a bad habit” seems less useful than saying “here is a good habit.”
There are two obvious prescriptions I see: provide correct rationales for decisions, or do not provide rationales for decisions. Which prescription you shoot for has a radically different impact on which exercises you do, and so deserves a fair amount of discussion. It may be desirable to try to wipe out rationalization first, and then progress to correct rationales.
One exercise might be asking “who will this convince?” and “whose desires do I want to maximize?”. Lucy probably doesn’t actually expect Marvin to be swayed by the plight of Big Sugar, and probably doesn’t actually suspect that Marvin will believe she’s motivated by the plight of Big Sugar, and so that deflection may be the wrong play because it isn’t credible.
It seems to me that social incentives will swallow most internal incentives here. If I can get more out of others by rationalizing, then it may be a losing move for me to not rationalize, and so it may be more profitable to focus specifically on internal desire-desire conflicts. If Marvin will buy the cake for Lucy when she gives Marvin-optimized reasons, then Lucy should first determine whether she wants the cake for Lucy-optimized reasons, and only then present the case to Marvin in terms of Marvin-optimized reasons.
When Lucy notices a desire to eat a whole chocolate cake, and comes up with the sugar-industry reason, perhaps Lucy should ask which Lucy that represents (Altruist Lucy) and which other Lucys want a say on the issue. Thin Lucy and Cheap Lucy might both think that Lucy shouldn’t buy the cake, while Sweet Tooth Lucy wants that cake.
And when Lucy simulates their internal discussion, she quickly realizes that Altruist Lucy doesn’t actually care much about the cake issue, compared to the other three. If Altruist Lucy were fully modeled here, she’d probably side with Cheap Lucy (as those dollars can do more good elsewhere). And so the question is what tradeoff Lucy wants to make between the preferences of Thin Lucy, Cheap Lucy, and Sweet Tooth Lucy.
Notice that the rationalization is an explicit call for alliances or disguise in this model. Only three Lucys are really interested, and the weight is against the cake, but Sweet Tooth Lucy can call in other Lucys by constructing arguments that tangentially involve them. That should be a costly move: at the beginning of a decision, Lucy should determine which Lucys are most relevant to the decision, and then be skeptical of attempts to bring in other Lucys.
The first exercise would be labeling the desires involved in a decision. I suspect there will generally be at least three, but in some decisions one or two will dominate. It might be useful to start with decisions where one desire dominates, then move to decisions where two desires agree, then three, and then start introducing conflicting desires.
Jack tripped, and is falling. He notices a desire to stop his fall.
Healthy Jack wants to not get hurt.
Jack tripped, and is falling, within sight of his girlfriend. He notices a desire to stop his fall.
Healthy Jack wants to not get hurt, and Impressive Jack wants to not make a fool of himself. They agree on the recommended action.
On a lazy Saturday afternoon, Jack notices a desire to do a mildly dangerous trick in front of his girlfriend.
Impressive Jack wants to show off, and Healthy Jack wants to not get hurt. They disagree on the recommended action.
The second exercise would be declaring other desires invalid (or valid). This could be done either as a worksheet—“does Cheap Jack have anything important to say about Jack tripping, conditioned on Healthy Jack and Impressive Jack already being in the discussion?”—or, better yet, socially: someone describes a recent decision they faced and the three desires they thought were most important, and then their partner or the other members of the group argue for the inclusion of other desires. It’s not yet clear how to strike a good balance between suggestions that should be shot down and suggestions that deserve deeper consideration, and assigning any sort of points to performance in this exercise could itself cause motivated cognition, which is bad.
The third exercise would be finding a quick way to resolve this competition between desires. This seems the area where it’s hardest to be prescriptive; different methods will fit different minds. Here are a few I can think of:
Summarize each desire’s case in a single sentence, put all the sentences next to each other, and choose one side or the other.
Summarize each desire’s case in a single sentence, then go with the one that’s most compelling.
Summarize each desire’s case in a single sentence, assign each a weight, and then randomly determine which desire to go with (using the weights).
Take the proposed courses of action, and then find compromises along the axis of each desire. Cheap Lucy could be satisfied more and Sweet Tooth Lucy only a little less if Lucy just bought a bag of sugar and ate some of it. Thin Lucy could be satisfied more and Sweet Tooth Lucy only a little less if Lucy bought a cake made with Splenda instead of sugar. Imagine the expanded alternative set and choose one of the options in it.
I’m sure there are more.
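To make the weight-based methods above concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption rather than part of the proposal: the desire names are borrowed from the Lucy example, and the weights and satisfaction scores are made up for the sake of the demonstration.

```python
import random

# A minimal sketch, assuming each desire ("Lucy") is modeled as a
# one-sentence case plus a weight. All numbers here are illustrative.
desires = {
    "Thin Lucy":        ("A whole cake blows the diet.",          0.4),
    "Cheap Lucy":       ("Those dollars do more good elsewhere.", 0.3),
    "Sweet Tooth Lucy": ("That cake would taste wonderful.",      0.3),
}

def strongest(desires):
    """Go with the single most compelling (heaviest) desire."""
    return max(desires, key=lambda name: desires[name][1])

def weighted_lottery(desires):
    """Pick one desire at random, proportional to its weight."""
    names = list(desires)
    weights = [weight for _, weight in desires.values()]
    return random.choices(names, weights=weights, k=1)[0]

# The compromise method: expand the alternative set and score each
# option by how well it satisfies every desire (scores in [0, 1],
# again purely illustrative).
alternatives = {
    "buy the cake":       {"Thin Lucy": 0.0, "Cheap Lucy": 0.2, "Sweet Tooth Lucy": 1.0},
    "buy a bag of sugar": {"Thin Lucy": 0.3, "Cheap Lucy": 0.9, "Sweet Tooth Lucy": 0.6},
    "buy a Splenda cake": {"Thin Lucy": 0.8, "Cheap Lucy": 0.2, "Sweet Tooth Lucy": 0.8},
    "buy nothing":        {"Thin Lucy": 1.0, "Cheap Lucy": 1.0, "Sweet Tooth Lucy": 0.0},
}

def best_compromise(desires, alternatives):
    """Pick the option with the highest weighted total satisfaction."""
    def total(option):
        return sum(desires[name][1] * alternatives[option][name]
                   for name in desires)
    return max(alternatives, key=total)

print("Strongest desire:", strongest(desires))
print("Lottery winner:  ", weighted_lottery(desires))
print("Best compromise: ", best_compromise(desires, alternatives))
```

Of course, the weights and scores are exactly where rationalization could sneak back in, so the value of any of these methods is presumably in committing to the numbers before seeing which option they favor.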
“Break down what your parts have to say into parts” would be an interesting counter to rationalization—I think I’ll have to call this an immediate $50 award on the grounds that I intend to test the skill itself, never mind how to teach it.
I thought that was your reason for writing HJPEV’s internal 4-way dialog.
Awesome!