For example, if somewhere in the thread there were a post “Could you clarify?” and that post got −2 karma, you would just assume that two people block-downvoted everything and add 2 karma to every post in the thread.
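A minimal sketch of that correction heuristic, in Python; the thread scores and the index of the neutral post are invented for illustration:

```python
# Toy sketch of the block-downvote correction described above.
def correct_for_block_downvotes(scores, neutral_index):
    # If an innocuous post ("Could you clarify?") sits at -2, assume two
    # people block-downvoted the whole thread and add that offset back
    # to every post.
    offset = max(0, -scores[neutral_index])
    return [score + offset for score in scores]

thread = [-3, -2, 1]  # post 1 is the neutral "Could you clarify?" post
print(correct_for_block_downvotes(thread, neutral_index=1))  # [-1, 0, 3]
```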
So, that seems like a plausible method, and it suggests a −2 to −3 block-downvote correction applies to Eli’s posts. But that’s a lot of effort, and it means that people reading the thread are going to get a false impression of consensus on LW unless they are aware enough to make that correction; moreover, it is, simply put, highly discouraging. Daenerys and TimS have both stated that, due to this sort of thing (and to be clear, it is coming disproportionately from a specific end of the political spectrum), they are posting on LW less frequently. That means people are actively using the karma system to force a political narrative. Aside from the obvious reasons why that’s bad, it’s also unhelpful if one is actually trying to have a discussion with any decent chance of finding out information about reality, rather than simply seeing which “side” has won in any given context. I’d rather LW not turn into the political equivalent of /r/politics on reddit, where, despite the nominal goals, certain political opinions drown out almost all dissent. The fact that the drowning-out here would come from the other end of the political spectrum doesn’t help matters, and it can be particularly damaging given that LW’s long-term goals are about rationality, not politics.
For my own trying-to-shut-up part, I do find one thing about “politics is the mind-killer” distinctly weird: the notion that we can seriously discuss morality, ethics, meta-ethics, and Taking Over The World thereby, and somehow expect never to arrive at a matter of political controversy.
For one example, an FAI would likely severely reduce the resource-income and social status of every single currently-active politician, left or right, up or down.
For another, more difficult, example, I can’t actually think of how you would do, say, CEV without some kind of voting and weighting system over the particular varieties of human values. Once you’ve got some notion of having to measure the values of at least a representative sample of everyone in the world and extrapolate those, you are innately in “political” territory. Once you need to talk of resource tradeoffs between values, you are innately in “economic” territory. Waving your arms and saying, “Friendly Superintelligence!” won’t actually tell us anything about what algorithm that thing is actually running.
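To make that concrete, here is a deliberately naive sketch of a weighting-and-aggregation scheme over named values. Every name, weight, and the averaging rule itself are invented for illustration, not a claim about how CEV would actually work:

```python
# Toy illustration only: each person weights a handful of named values,
# and the "extrapolation" is just a per-value average.
profiles = {
    "alice": {"liberty": 0.7, "equality": 0.2, "tradition": 0.1},
    "bob":   {"liberty": 0.2, "equality": 0.5, "tradition": 0.3},
    "carol": {"liberty": 0.3, "equality": 0.3, "tradition": 0.4},
}

def aggregate(profiles):
    totals = {}
    for weights in profiles.values():
        for value, weight in weights.items():
            totals[value] = totals.get(value, 0.0) + weight
    # Averaging is itself a contestable choice: one person, one vote.
    return {value: total / len(profiles) for value, total in totals.items()}

print(aggregate(profiles))
# {'liberty': 0.4, 'equality': 0.333..., 'tradition': 0.266...}
```

Even this toy forces political and economic choices: who counts as a voter, whether everyone’s weights count equally, and how scarce resources get allocated across the averaged values.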
If I may don my Evil Hansonian hat for a moment, conventional politics isn’t so much about charting the future of our society as about negotiating the power relationships between tribal alignments. Values and ethical preferences and vague feelings of ickiness go into those alignments (and then proceed to feed back out of them), but it’s far rarer for people to support political factions out of de novo ethical reasoning than you’d guess from talking to them about it. The mind-killer meme is fundamentally an encouragement to be mindful of that, especially of the nasty ideological feedback loops it tends to imply, and a suggestion to focus on object-level issues where the feedback isn’t quite so intense.
One consequence of this is that political shifts happen at or above human timescales, as their subjects become things that established tribes notice they can fight over. If you happen to be a singularitarian, then, you probably believe that the kinds of technological and social changes that LW talks about will at some point—probably soon, possibly already—be moving faster than politics can keep up with. Speaking for myself, I expect anything that conventional legislatures or political parties say about AI to matter about as much as the RIAA did when they went after Napster, and still less once we’re in a position to be talking seriously about strong, friendly artificial intelligence.
More importantly from our perspective, though, anything conventional politics doesn’t care about yet is also something that we have a considerably better chance of talking about sanely. We may be—in fact, we’re certainly—in the territory of politics in the sense of subjects relevant to the future of the polis, but as long as identity considerations and politics-specific “conventional wisdom” stay relatively distant from our reasoning, we can expect our minds to remain relatively happy and unkilled.
Yeah, this comes up from time to time. My own approach is to (attempt, as best I can, to) address the underlying policy question while avoiding language that gets associated with particular partisan groups.
For example, I might discuss how a Blue politician might oppose FAI because they value their social status, or how a Green politician might expect a Blue politician to oppose FAI for such a reason even though the Green politician is not driven purely by such motives, or whatever… rather than using other word-pairs like (Republican/Democrat), (liberal/conservative), (reactionary/progressive), or whatever.
If I get to a point in that conversation where the general points are clearly understood, and to make further progress I need to get into specifics about particular politicians and political parties, well, OK, I decide what to do when that happens. But that’s not where I start.
And I agree that the CEV version of that conversation is more difficult and needs to be approached with more care to avoid being derailed by largely irrelevant partisan considerations. The same is true more generally of questions about specific value tradeoffs and, more generally still, of questions about where human values conflict with one another in the first place.
I don’t think the usual mantra of “politics is the mind-killer” is meant to counsel avoiding all political issues, although that is one interpretation. Rather, there are two distinct observations. One is purely descriptive: politics can be a mind-killer. The second is proscriptive: refrain, when possible, from discussing politics until our general level of rationality improves. Unfortunately, that’s fairly difficult, because many of these issues matter. Moreover, it connects with certain problems where counter-intuitive or contrarian ideas are seen as somehow less political than more mainstream ones.
That’s… an excellent way of putting it. Non-mainstream political “tribes” are considered “less political” precisely because they don’t stand any chance of actually winning elections in the real world, so they get a Meta-Contrarian Boost on the internet. The usual ones I see are anarchists, libertarians, and neo-reactionaries.
Empirically I don’t think this is true. Minority political tribes sometimes get a pass for organizing themselves around things that aren’t partisan issues, or are only minor partisan issues, in the mainstream—the Greens sometimes benefit from this in US discourse, although they’re a complicated and very regionally dependent case—but as soon as you stake out a position on a mainstream claim, even if your reasoning is very different from the norm, you should expect to be attacked as viciously as any mainstream wonk. I expect neoreaction, for example, would have met with a much less heated reception if it weren’t for its views on race.
Minority views do get a boost on the Internet, but I think that has more to do with the echo-chamber effects that it encourages. It’s far easier to find or collect a group of people that all agree with you on Reddit or Tumblr than it is out there in the slow, short-range world of blood and bone.
That’s… an excellent way of putting it. Non-mainstream political “tribes” are considered “less political” precisely because they don’t stand any chance of actually winning elections in the real world, so they get a Meta-Contrarian Boost on the internet. The usual ones I see are anarchists, libertarians, and neo-reactionaries.
Considered where, and by whom? Because that is completely unlike my experience. On the Usenet groups rec.arts.sf.*, it was (I have not read Usenet for many years) absolutely standard that Progressive ideas were seen as non-political, while the merest hint of disagreement would immediately be piled on as “introducing politics to the discussion”. And the reactosphere is intensely aware that what they are talking about is politics.