It’s easier to explain things in terms of overarching narratives: if some movement with a well-defined, explicit ideology was responsible for Trump, it becomes easier to explain his appeal, and we feel better able to make predictions about Trump and his support base. This relieves some anxiety: maybe we didn’t win, but at least we have a good model of our enemy. And the narrative may even tell us that we can win in the future, as long as we keep trying, stay on the morally right side according to the narrative, and so forth.
A lot of liberals, I think, were quite unsettled by the Trump phenomenon, and part of that reaction was a turn to quasi-conspiratorial thinking, positing a huge but largely silent mass of people with a coherent right-wing, racist, nationalist ideology. The appeal of that kind of thinking is largely the same as the appeal of conspiracy theories in general: it is actually more comforting to believe that the world is a highly predictable place, even if you are literally powerless within it.
The truth, though, is that most voters do not have a coherent ideology. Most vote on a small number of issues they hold to be most important, and their viewpoints may be very malleable and susceptible to the biases of various information sources. This makes outcomes harder to predict, because the things that are easiest to measure become less valuable. We don’t have a good model that predicts which issues people feel most strongly about, what would change people’s opinions, or even just what kind of personality would appeal to the largest number of people.
Now this is where human behavior gets strange. When our predictions fail in a big way, this is usually evidence that our model needs to be updated in the direction of more complexity. Yes, sometimes we over-complicate things (as when we force the data to fit a theory by adding a lot of exceptions), but in general our models of the universe, as they gain in scope and explanatory power, gain in complexity. But most people, when they find out that they were really, really wrong, seem to update in the opposite direction. Oh, a loud-mouthed, lying, sexist, bad-hair, anti-intellectual guy-with-horrible-opinions-about-everything just won the election? I guess that means half the country are virulent racists!
When our predictions fail in a big way, this is usually evidence that our model needs to be updated in the direction of more complexity.
Well, no. When predictions fail in a big way, that is usually evidence that your model is wrong and needs to be discarded. Adding epicycles helps only if you already have the basic things right, and failing in a big way shows that you do NOT have the basic things right.
Well, what I meant by complexity was specifically not adding epicycles, excuses, or special cases to a model we really like, but rather replacing the model with one that has more power and precision. Yes, sometimes that means switching to a simpler theory (heliocentrism initially required fewer parameters than geocentrism to make it work), but in general the long-term trend seems to be towards models with more parameters. That doesn’t mean throwing away Occam’s razor, just that more accurate predictions usually require a model with more knobs and levers. And that may only be because we now need to model more interactions than were there originally. Maybe our system has become entangled with another system it wasn’t interacting with before.
If your previous model crashed and burned, you do not need another one with “more power and precision”; you need one that works.
It’s common to speak of two axes of development, called something like evolutionary/revolutionary, continuous/discontinuous, horizontal/vertical, etc. One is incremental improvement, the other a radical jump. “More power and precision” implies you want to take the incremental-improvement route. I’m arguing for the radical jump.
You may not know a priori whether your theory needs a “radical jump” or an “incremental improvement.” But it still seems to be the historical case that theories gain in complexity over the long term. General Relativity is more complex than Newton’s laws, which are more complex than heliocentrism or geocentrism, and string theory is more complex than all of those. Multiverse theories add a whole new layer of parameters.
If you have a model that has worked pretty well in one regime but completely fails in a different regime, then you probably need a new theory that is both a “radical jump” from the previous one and likely to be more complex. You are now asking one model to cover more scenarios and regimes, and it will typically have more parameters than the previous model. As long as you have been applying Occam’s razor reasonably well from the beginning, then as you add more variables, or as the support of their distributions grows, your model has to work not only as well as it did before in the old regime but also well in the new regime. Think of an exponential growth model: it works just fine in the beginning, when your population is small, then fails dramatically in the regime where the population saturates. You update your model to a logistic growth function, which captures both regimes better, but it adds a new parameter, namely the carrying capacity.
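To make that step concrete (these are the standard textbook forms, not anything specific to this discussion): the exponential model is

$$\frac{dP}{dt} = rP \quad\Longrightarrow\quad P(t) = P_0 e^{rt},$$

with a single rate parameter $r$, while the logistic model is

$$\frac{dP}{dt} = rP\left(1 - \frac{P}{K}\right) \quad\Longrightarrow\quad P(t) = \frac{K}{1 + \left(K/P_0 - 1\right)e^{-rt}},$$

which preserves the old behavior (for $P \ll K$ it reduces to exponential growth) while adding exactly one new parameter, the carrying capacity $K$.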
I’m not saying this pattern holds in literally every situation in which a theory fails. Your model may just suck, or have been overcomplicated from the start. But if we have been fairly principled about how we design our models, gradually expanding them to explain more things, then the pattern should generally hold.
A lot of liberals, I think, were quite unsettled by the Trump phenomenon, and part of that reaction was a turn to quasi-conspiratorial thinking, positing a huge but largely silent mass of people with a coherent right-wing, racist, nationalist ideology.
And it’s quite ironic if that ideology is supposedly led by a gay man who talks about his Black boyfriends and has a Jewish mother.
Wait, who claims that Milo Yiannopoulos is The Leader Of The Alt-Right? He’s a journalist and professional shit-stirrer who associates with the alt-right, that’s all.
In case anyone was curious, here is a collection of how different publications have referred to Milo:
The New York Times: “a prominent figure in the white nationalist ‘alt-right’”
Bloomberg Businessweek: “the most notorious spokesman for the alt-right” and “the alt-right’s mouthpiece”
Politico: “alt-right journalist and firebrand”
USA Today: “white nationalist and alt-right poster boy” (in an earlier version)
news.com.au: “Alt-right star”
The Guardian: “a spokesman for the ‘alt-right’”
BBC: “a figurehead for the alt-right”
The following publications have literally said “leader”:
Mother Jones: “Alt-Right Leader” and “alt-right lightning rod”
NPR: “a self-proclaimed leader of the movement”
ABC15 (Arizona) and also other ABC outlets: “Alt-right leader”
The Australian: “Alt-right leader”
The Hill: “alt-right leader”
CBS (Seattle): “alt-right leader” (in an earlier version)
You can even find the literal words “the leader of the alt-right” in that order:
Salon: “the leader of the alt-right”
I have not yet found it in titlecase (“The Leader Of The Alt-Right”) in a mainstream publication.
Interesting. Most of those (including some with the word “leader” in them) seem perfectly consistent with what I said, and the only one calling Yiannopoulos the leader of anything is the last, which is an obvious joke from start to finish. But I agree that several of them are at any rate claiming that Yiannopoulos is at least A Leader Of The Alt-Right, which is much more defensible but still rather silly.
To be explicit, my primary goal was to collect empirical data on how publications introduce Milo (as opposed to contradicting you).
Milo is an agent provocateur, a professional troll, and, cough, a pain in the ass X-D
And more importantly, he never stops talking about how gay he is and how his boyfriends are black.
Because there are many gay people, for example, who disagree with SJWs, but they can easily be reframed as, e.g., “privileged white males”. If you focus attention on the aspects of them that fit the narrative, it is much easier to ignore the aspects that don’t (i.e. “a gay man disagrees with SJWs” is a paradox, but “a cis white male disagrees with SJWs” is a confirmation of the worldview, so the key is to make you think only about the latter). With Milo such a strategy is impossible, because if you let him talk for 10 seconds, he will remind you that he is gay and that his boyfriends are black. It will be the first and the last thing he says, and everyone in the audience will remember it. His frame is unshakeable. The only way to stop people associating him with gayness and black boyfriends is to prevent him from being seen or heard at all. Which is quite difficult, considering he works in media.