This lack of motivation is connected to another important psychology – the willingness to fail conventionally. Most people in politics are, whether they know it or not, much more comfortable with failing conventionally than risking the social stigma of behaving unconventionally. They did not mind losing so much as being embarrassed, as standing out from the crowd. (The same phenomenon explains why the vast majority of active fund management destroys wealth and nobody learns from this fact repeated every year.)
We plebs can draw a distinction between belief and action, but political operatives like him can’t. For “failing conventionally”, read “supporting the elite consensus”.
Now, ‘rationalists’, at least in the LW sense (as opposed to the broader sense of Kahneman et al.), have a vague sense that this is true, though I’m not sure it’s been elaborated on explicitly. “People are more interested in going through the conventional symbolic motions of doing a thing than they are in actually doing the thing” (e.g. “political actors are more interested in going through the conventional symbolic motions of working out which side they ought to be on than in actually working it out”) is widespread enough in the community that it’s been blamed for the failure of MetaMed. (Reading that post, it sounds to me like MetaMed failed because it didn’t have enough sales/marketing talent, but that’s beside the point.)
Something worth noting: the alternative take on this is that, while most people are more interested in the conventional symbolic motions of doing a thing than in actually doing the thing, those motions are still usually good enough. Sometimes they aren’t, but usually they are—which allows the Burkean reading that the conventional symbolic motions have actually been selected for effectiveness, to an extent that may surprise the typical LW reader.
It should also be pointed out that, while we praise people or institutions that behave unconventionally to try to win when it works (e.g. Eliezer promoting AI safety by writing Harry Potter fanfiction, the Trump campaign), we don’t really blame people or institutions that behave conventionally and lose. So going through the motions could be modeled purely by calculation of risk, at least in the political case: if you win, you win, but if you support an insurgency and lose, that’s a much bigger deal than if you support the consensus and lose—at least for the right definition of ‘consensus’. But that can’t be a complete account of it, because MetaMed.
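The risk calculus above can be sketched as a toy expected-payoff model. All numbers, names, and the stigma term here are made up for illustration; this is a sketch of the asymmetry being claimed, not anything from the source.

```python
# Toy model of "failing conventionally": losing while backing the
# consensus costs you little, but losing while backing an insurgency
# carries an extra stigma penalty. Numbers are purely illustrative.

def expected_payoff(p_win, win, lose, stigma=0.0):
    """Expected value of a strategy: win with probability p_win,
    otherwise take the losing payoff minus any stigma cost."""
    return p_win * win + (1 - p_win) * (lose - stigma)

# Even if the unconventional route wins a bit MORE often...
conventional = expected_payoff(p_win=0.50, win=1.0, lose=-1.0)
insurgent = expected_payoff(p_win=0.55, win=1.0, lose=-1.0, stigma=2.0)

print(conventional)  # 0.0
print(insurgent)     # ~ -0.8: the stigma term flips the ranking
```

Under these made-up numbers, the insurgent bet is better on raw win probability yet worse in expectation, which is the shape of the argument: supporting the consensus can dominate even when it is less likely to be right. The MetaMed counterexample is precisely that this payoff structure was absent there, yet the symbolic-motions behavior persisted.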