While there are legitimate differences between the sides that matter quite a bit, I believe a lot of the reason why candidates are like ‘scissors statements’ is that the median voter theorem actually kind of works: the parties see the need to move their candidates pretty far toward the current center, but they also know they will lose the extremists to not voting or to third parties if they don’t give them something to focus on. So both sides are literally optimizing for this effect to keep their extremists engaged.
I don’t follow this model yet. I see why, under this model, a party would want the opponent’s candidate to enrage people / have a big blind spot (and how this would keep the extremes on their side engaged), but I don’t see why this model would predict that they would want their own candidate to enrage people / have a big blind spot.
It sounds to me like the model is ‘the candidate needs to have a (party-aligned) big blind spot in order to be acceptable to the extremists(/base)’. (Which is what you’d expect, if those voters are bucketing ‘not-seeing A’ with ‘seeing B’.)
(Riffing off from that: I expect there’s also something like, Motive Ambiguity-style, ‘the candidate needs to have some, familiar/legible(?), big blind spot, in order to be acceptable/non-triggering to people who are used to the dialectical conflict’.)
It seems I was not clear enough, but this is not my model. (I explained it to the person who asked, if you want to see what I meant; I was talking about parties turning their opponents into scissors statements.)
That said, I do believe that it is a possible partial explanation that sometimes having an intentional blind spot can be seen as a sign of loyalty by the party structure.
So, my model isn’t about parties making their own candidate that way; it is the much more obvious political move: make your opponent as controversial as possible. There is something weird / off / wrong about your opponent’s candidate, so find the things that could plausibly make the electorate think that, and push as hard as possible. I think they’re good enough at it. In other words, try to find the best scissors statements about your opponent, where ‘best’ is judged both by not losing your own supporters and by costing your opponent potential supporters.
This is often done as a psyop on your own side, to make them not understand why anyone could possibly support said person.
That said, beyond the simplified explanation I presented in my initial comment, there is also an obvious fact I didn’t mention: the parties themselves each have a certain culture, and that culture will have blind spots they don’t select against, but the other party does. Since the selection optimizes hard for what the party can see, the selected candidate ends up bad on the metrics the party can’t see, and the process even pushes out the people who can see the issue, making the party blinder still.
I mean, I see why a party would want their members to perceive the other party’s candidate as having a blind spot. But I don’t see why they’d be typically able to do this, given that the other party’s candidate would rather not be perceived this way, the other party would rather their candidate not be perceived this way, and, naively, one might expect voters to wish not to be deluded. It isn’t enough to know there’s an incentive in one direction; there’s gotta be more like a net incentive across capacity-weighted players, or else an easier time creating appearance-of-blindspots vs creating visible-lack-of-blindspots, or something. So, I’m somehow still not hearing a model that gives me this prediction.
To be pedantic, my model is pretty obvious and clearly gives this prediction, so you can’t really say that you don’t see a model here; you just don’t believe the model. Your model, with its extra assumptions, doesn’t give this prediction, but the one I gave clearly does.
You can’t find a person this can’t be done to, because there is something obviously wrong with everyone. Things can be twisted easily enough (offense is stronger than defense here); if you didn’t find it, you just didn’t look hard or creatively enough. Our intuitions against people tricking us aren’t really a suitable defense against sufficiently optimized searching. (Luckily, this is actually hard to do, so it is mostly confined to major arenas like politics.) Also, very clearly, you don’t actually have to convince all that many people for this to work: if even 20% of people really bought it, those people would probably vote and hand you an utter landslide if the other side didn’t do the same thing (which we know they do; just look at how divisive candidates obviously are).
I should perhaps have added something I thought of slightly later that isn’t really part of my original model: an intentional blind spot can be a sign of loyalty in certain cases.