Normal folks don’t let politics overtake their minds; concerned folks get into huge flamewars; but we know exactly why this is counterproductive.
Trouble is, the question remains open: how do you understand politics well enough to be reasonably sure you’ve grasped its implications for your personal life and destiny? Too often, LW participants seem to me to take it for granted that throughout the Western world, something resembling the modern U.S. regime will continue into the indefinite future, right up until a technological singularity kicks in. But this seems to me a completely unwarranted assumption, and if it turns out to be false, then the ability to understand where the present political system is heading and to plan for the consequences will be a highly valuable intellectual asset, something that a self-proclaimed “rationalist” should definitely take into account.
Now, for full disclosure, there are many reasons why I could be biased about this. I lived through a time and place (ex-Yugoslavia in the late 1980s and early 1990s) where most people were blissfully unaware of the storm just beyond the horizon, even though any cool-headed, objective observer should have been able to foresee it. My own life was very negatively affected by my family’s inability to understand the situation before all hell broke loose. This has perhaps made me so paranoid that I’m unable to see why the present political situation in the Western world is guaranteed to be so stable that I can safely forget about it. Yet I have yet to see arguments for that conclusion that would pass the standards LW people normally apply to other topics.
I agree with you on this, but honestly, it’s a difficult enough topic that semi-specialists are needed. Trying, as a non-specialist, to figure out how stable your political system is, rather than trying to find a specialist you can trust, will get you about as far as it would in law, etc.
Trickier than the ‘how stable’ question is the question of what is likely to result from a failure. To the extent that such knowledge is missing, the problem of what to do about it takes on faint hints of Pascal’s Mugging.
Now, for full disclosure, there are many reasons why I could be biased about this.
With emphasis on “could be” as opposed to “am”. Different past experiences leading to different conclusions isn’t necessarily “bias”. This is a bit of a pet peeve of mine: I often see the naive, the inexperienced, and quite often the young dismiss the views of the more experienced as “biased” or with some broad synonym.
The implicit reasoning seems to be as follows: “Here is the evidence. The evidence plus a uniform prior distribution leads to conclusion A. Yet this person sees the evidence and draws conclusion B different from A. Therefore he is letting his biases affect his judgment.”
One problem with the reasoning is that “the evidence” is not the (only) evidence. There is, rather, “evidence I’m aware of” and “evidence I’m not aware of but the other person might be aware of”. It’s entirely possible for that other evidence to be decisive.
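To make the point concrete, here is a toy illustration with invented numbers (not taken from anyone’s actual argument): two observers start from the same uniform prior, but one has seen an extra piece of evidence, so they reach opposite conclusions without either of them being “biased”.

```python
# Toy illustration with made-up numbers: two observers share a uniform prior
# over hypotheses A and B, but observer 2 has seen one extra piece of evidence.
# Their posteriors differ even though neither is "biased".

def posterior_of_a(prior_a, likelihood_pairs):
    """P(A | evidence) after updating on each (P(e|A), P(e|B)) pair."""
    p_a, p_b = prior_a, 1.0 - prior_a
    for like_a, like_b in likelihood_pairs:
        p_a *= like_a
        p_b *= like_b
    return p_a / (p_a + p_b)

shared = [(0.6, 0.4)]            # evidence both observers have seen
extra = shared + [(0.1, 0.9)]    # observer 2's additional evidence

print(posterior_of_a(0.5, shared))  # ~0.60 -> observer 1 leans toward A
print(posterior_of_a(0.5, extra))   # ~0.14 -> observer 2 leans toward B
```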
Your comment is an instance of the “forcing fallacy”, which really deserves a post of its own: claiming that we should spend resources on a problem because a lot of utility depends, or could depend, on the answer. There are many examples of this on LW, but to choose an uncontroversial one from elsewhere: why aren’t more physicists working on teleportation? The general counter to the pattern is noting that problems may be difficult, and may or may not have viable attacks right now, so we may be better off ignoring them after all. I don’t see a viable attack for applying LW-style rationality to political prediction, do you?
The general counter to the pattern is noting that problems may be difficult, and may or may not have viable attacks right now, so we may be better off ignoring them after all.
This is valid where there are experts who can confidently estimate that there are no attacks. There are lots of expert physicists, so if steps towards teleportation were feasible, someone would’ve noticed. Where there are no experts to produce such confidence, the correct course of action is to create them (perhaps from more general experts, by giving them a research focus).
The rule “If it’s an important problem, and we haven’t tried to understand it, we should” holds in any case; it’s just that in the case of teleportation, we have already tried to understand what we presently can, as a side effect of the widespread knowledge of physics.
This is one of the reasons I actually rather like the politics in Heinlein’s writing; while it occasionally sounds preachy, and I routinely disagree with the implicit statement that the proposed system has higher utility than current ones, it does expose some really interesting ideas. This has led me to wonder, on occasion, about other potential government systems and to attempt to determine their utility compared to what we have.
Of course, I’m not really a student of political science and am therefore ill-equipped for this purpose, and I estimate insufficient utility in undertaking the scholarship needed to correct this (mostly due to opportunity cost: I am active in a field where I can contribute significant utility today, and it’s more efficient to update and expand my knowledge there than to branch into a completely different field in any depth). Nonetheless, inefficient though it may be, it’s an open question that I find my mind wandering to on occasion.
The conclusion I’ve reached is that if the US government (as we currently recognize it) continues until the technological singularity, it will be because the singularity comes soon (within ~50 years at a low-confidence estimate; at 150 years I’m 90% confident the US government either won’t exist or won’t be recognizable). There are too many problems with the system; it wasn’t optimized for the modern world, to the extent it was optimized at all, and of course the “modern world” keeps advancing too. The US has tried to keep up (universal adult suffrage, several major changes to how political parties are organized (nobody today seriously proposes a split ticket), the increasing authority of the federal government over the states, etc.), but such change is reactive and takes time. It will always lag behind the bleeding edge, and if it gets too far behind, the then-current institution will either be overthrown or will lose its significance and become something like the 21st century’s serious implementations of the feudal system (rare, somewhat different from how it was a few hundred years back, and nonetheless mostly irrelevant).
That sounds plausible, but should probably have a time frame added.