I mean, I see why a party would want their members to perceive the other party’s candidate as having a blind spot. But I don’t see why they’d be typically able to do this, given that the other party’s candidate would rather not be perceived this way, the other party would rather their candidate not be perceived this way, and, naively, one might expect voters to wish not to be deluded. It isn’t enough to know there’s an incentive in one direction; there’s gotta be more like a net incentive across capacity-weighted players, or else an easier time creating appearance-of-blindspots vs creating visible-lack-of-blindspots, or something. So, I’m somehow still not hearing a model that gives me this prediction.
To be pedantic, my model is pretty obvious, and clearly gives this prediction, so you can’t really say that you don’t see a model here, you just don’t believe the model. Your model with extra assumptions doesn’t give this prediction, but the one I gave clearly does.
You can’t find a person this can’t be done to, because there is something obviously wrong with everyone. Things can be twisted easily enough. (Offense is stronger than defense here.) If you didn’t find it, you just didn’t look hard or creatively enough. Our intuitions against people tricking us aren’t really suitable defense against sufficiently optimized searching. (Luckily, this is actually hard to do, so most of the time it is confined to major arenas like politics.) Also, very clearly, you don’t actually have to convince all that many people for this to work! If even 20% of people really bought it, those people would probably vote and give you an utter landslide if the other side didn’t do the same thing (which we know they do—just look at how divisive candidates obviously are!)