In many contexts the law of the excluded middle can certainly be brought to bear, but I would argue that reasoning about Doublethink about driving ability is not one of them. A driver who is realistically pessimistic about their driving will drive less confidently, more hesitantly, with more constant vigilance, and be a hazard to other drivers’ abilities to model what is going on. The same is also true of the irrationally over-optimistic driver, but modelling what a driver who seems to think they know what they are doing will do is more straightforward. So mutual modelling among drivers is best supported when almost all drivers drive with somewhat more confidence than is warranted, with the proviso that drivers in general not only learn to drive better with experience, but become more accurate about how well they drive … and, even taking that into account, still drive with more confidence than is fully warranted. A truly good experienced driver will be modelling both how experienced the drivers around them are AND how much those drivers are driving within their ability, without having to think about it. That is a lot of shades-of-gray estimating going on in real time, and treating any of those continuum estimates, including that of one’s own relative ability, as a dichotomy will not improve the overall modelling. Nor will too much detail. The truth is, every driver discovers at times that they hadn’t known what they hadn’t known about driving safely in particular situations, but keeping that constantly in mind will not make them a better driver.
And when the consequences of a modelling mismatch can easily be multi-fatal, though they are more usually merely instructive, keeping modelling updates tractable has great value, and simplifying one’s understanding of one’s own driving ability supports that.
So far this may sound like another way of talking about what “Against Against Doublethink” is saying, but I think there is more going on.
“Without any principled algorithm determining which of the contradictory propositions to use as an input for the simulation at hand, you’ll fail as often as you succeed. So it makes no sense to anticipate more positive outcomes from believing contradictions.”
It looks to me like the second sentence doesn’t follow from the first as stated. The phrase “you’ll fail as often as you succeed” describes a crapshoot centered on 50:50, but often minimal checking is enough to get a sense of whether naive estimates in a domain where Doublethink is known to be active really are that inaccurate, or are in fact pretty accurate, maybe more than good enough for realtime use: quick, that aggressive driver weaving up fast behind you, is that driver going to wait for a principled model update?
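To make that objection concrete, here is a minimal sketch (purely my own illustration, not anything from either post): if the choice between two contradictory beliefs is driven by even a weakly informative context cue, outcomes do not center on 50:50. The 0.7 cue accuracy is an assumed number picked for the example.

```python
import random

# Illustrative sketch (mine, not from either post): an observer holds two
# contradictory reads on a nearby driver ("knows what they're doing" vs.
# "menace") and, with no principled algorithm, applies whichever one a quick
# context cue (weaving, closing speed) suggests.

random.seed(0)
TRIALS = 100_000
CUE_ACCURACY = 0.7  # assumed: the glance-level cue matches reality 70% of the time

successes = 0
for _ in range(TRIALS):
    driver_is_menace = random.random() < 0.5
    cue_matches_reality = random.random() < CUE_ACCURACY
    cue_says_menace = driver_is_menace if cue_matches_reality else not driver_is_menace
    # Apply whichever of the contradictory beliefs the cue points to.
    prediction = cue_says_menace
    successes += prediction == driver_is_menace

print(f"success rate: {successes / TRIALS:.3f}")  # ~0.70, not a 50:50 crapshoot
```

Even a crude cue pushes outcomes well away from even odds, which is the sense in which minimal checking can tell you whether the naive estimates are usable.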
So in practice even someone doing their best to eschew Doublethink is going to benefit from some meta-estimating of its effects, if only to model the other actors present. To eschew that modelling too, on principle, invites much the same Doublethink back in through the back door, this time about the modelling itself.
There’s no kicking Doublethink to the curb, there is only managing it.