So, my model isn’t about them making their candidate that way; it is the much more obvious political move… make your opponent as controversial as possible. There is something weird / off / wrong about your opponent’s candidate, so find the things that could plausibly make the electorate think that, and push them as hard as possible. I think they’re good enough at it. Or, in other words, try to find the best scissors statements about your opponent, where ‘best’ is determined both in terms of not losing your own supporters and in terms of costing your opponent possible supporters.
This is often done as a psyop on your own side, to make them not understand why anyone could possibly support said person.
That said, against the simplified explanation I presented in my initial comment, there is also the obvious fact I didn’t mention that the parties themselves have a certain culture, and that culture will have blind spots it doesn’t select along, though the other party does. Since the selection optimizes hard for what the party can see, the selected candidate ends up bad on the metric the party can’t see, and the process even pushes out the people who can see the issue, making the party blinder still.
I mean, I see why a party would want their members to perceive the other party’s candidate as having a blind spot. But I don’t see why they’d typically be able to do this, given that the other party’s candidate would rather not be perceived this way, the other party would rather their candidate not be perceived this way, and, naively, one might expect voters to wish not to be deluded. It isn’t enough to know there’s an incentive in one direction; there’s got to be something more like a net incentive across capacity-weighted players, or an easier time creating the appearance of blind spots than creating a visible lack of blind spots, or something. So, I’m somehow still not hearing a model that gives me this prediction.
To be pedantic, my model is pretty obvious and clearly gives this prediction, so you can’t really say that you don’t see a model here; you just don’t believe the model. Your model, with its extra assumptions, doesn’t give this prediction, but the one I gave clearly does.
You can’t find a person this can’t be done to, because there is something that can be made to look obviously wrong with everyone; things can be twisted easily enough. (Offense is stronger than defense here.) If you didn’t find it, you just didn’t look hard or creatively enough. Our intuitions against people tricking us aren’t really a suitable defense against sufficiently optimized searching. (Luckily, this is actually hard to do, so it is mostly confined to major arenas like politics.) Also, very clearly, you don’t have to convince all that many people for this to work! If even 20% of people really bought it, those people would probably turn out to vote and give you an utter landslide, if the other side didn’t do the same thing (which we know they do; just look at how divisive candidates obviously are!)
I should perhaps have added something I thought of slightly later that isn’t really part of my original model: an intentional blind spot can be a sign of loyalty in certain cases.