You: People can’t be reflectively consistent.
Me: Yes they can, sometimes.
You: Of course they can.
Me: I’m confused.
You: Of course people can be reflectively consistent. But only in the dreamland. If you are still confused, it’s probably because you are still thinking about the dreamland, while I’m talking about reality.
I think pjeby’s point was that reflective consistency is a way of thinking—so if you commit to thinking in a reflectively consistent way, you will think that way whenever you do think, but you may still wind up not acting on those thoughts every time you would want to, because you’re not all that likely to notice that you need to think them in the first place.
Reflective consistency is not about a way of thinking. Decision theory, considered in the simplest case, talks about properties of actions, including future actions, while ignoring properties of the algorithm generating the actions.
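The distinction being drawn here — that decision theory, in the simplest case, constrains the mapping from situations to actions while ignoring the algorithm that computes it — can be illustrated with a toy sketch (this is my own illustration, not anything from the thread; the situations and agents are made up):

```python
# Two different "algorithms" that induce the same situation -> action
# mapping. From decision theory's point of view (in this simplest case),
# they are the same agent: only the mapping from situations to actions
# matters, not how the action is computed.

def agent_lookup(situation):
    # Decides by table lookup.
    table = {"one-box problem": "one-box", "cooperate problem": "cooperate"}
    return table[situation]

def agent_deliberate(situation):
    # Decides by (mock) deliberation, arriving at the same answers.
    if "one-box" in situation:
        return "one-box"
    return "cooperate"

situations = ["one-box problem", "cooperate problem"]
assert all(agent_lookup(s) == agent_deliberate(s) for s in situations)
print("same action policy, different algorithms")
```

On this view, a property like reflective consistency attaches to the action policy itself, which is why the two implementations above are interchangeable as far as the theory is concerned.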
Basically, our conversation went like this:
You: People can’t be reflectively consistent.
Me: Yes they can, sometimes.
You: Of course they can.
Me: I’m confused.
No, it went like this:
Me: People can’t be reflectively consistent.
You: But they can precommit to be
Me: But that won't *actually make them so*
You: But they could precommit to acting as if they were
Me: Of course they can, but it still won't actually make them so.
See also Abraham Lincoln’s, “If you call a tail a leg, how many legs does a dog have? Four, because calling a tail a leg doesn’t make it so.”
This is a diversion, but this has always struck me as a stupid answer to an even stupider question. I don’t really understand why people think it’s supposed to reveal some deep wisdom.
That’s Zen for you. ;-)
Seriously, the point (for me, anyhow) is that System 2 thinking routinely tries to call a tail a leg, and I think there’s a strong argument to be made that this is an important part of what System 2 reasoning “evolved for”.
Huh? Reflective consistency is a property of behavior. If you behave as if you are reflectively consistent, you are.
And I am saying that a single precommitment to behaving in a reflectively consistent way will not result in you actually behaving the same way as you would if you individually committed to all of the specific decisions recommended by your abstract decision theory. Your perceptions and motivation will differ, and therefore your actual actions will differ.
People try to precommit in this fashion all the time, by adopting time management or organizational systems that purport to provide them with a consistent decision theory over some subdomain of decisions. They hope to then simply commit to that system, and thereby somehow escape the need for making (and committing to) the individual decisions. This doesn’t usually work very well, for reasons that have nothing to do with which decision theory they are attempting to adopt.
In my original comment, I specified that I only consider situations “where the calculations are available”, that is, you know (theoretically!) exactly what to do to be reflectively consistent in such situations and don’t need to achieve great artistic feats to pull that off.
You need to qualify what you are asserting, otherwise everything looks gray.
I’m asserting that people don’t actually do what they “decide” to do on the abstract level of System 2, unless certain System 1 processes are engaged with respect to the concrete, “near” aspects of the situation where the behavior is to be executed, and that merely precommitting to follow a certain decision theory is not a substitute for the actual, concrete, System 1 commitment processes involved.
Now, could you commit to following a certain behavior under certain circumstances, where that commitment included the steps needed to also obtain System 1 commitment for the decision?
That I do not know; maybe you could. It would depend, I think, on how concretely you could define the circumstances under which these steps would be taken… and doing that in a way that was both concrete and comprehensive would likely be difficult, which is why I’m not so sure about its feasibility.
Your model of human behavior doesn’t look in the least realistic to me, with its prohibition of reason and its requirement for difficult rituals of baptizing reason into action.
Well, I suppose all the experiments that have been done on construal theory, and how concrete vs. abstract construal affects action and procrastination must be unrealistic, too, since that is a major piece of what I’m talking about here.
(If people were generally good at turning their reasoning into action, akrasia wouldn’t be such a hot topic here and in the rest of the world.)
Akrasia happens, but it’s not a universal mode. I object to you implying that akrasia is inevitable.
I never said it was inevitable. I said it happens when there are conflicts, and you haven’t really decided what to do about those conflicts, with enough detail and specificity for System 1 to automatically make the “right” choice in context. If you want different results, it’s up to you to specify them for yourself.