Have you considered that you lost your karma not because you argued typical social-democratic positions, but because you argued them badly?
That is entirely possible. However, in that case, I would expect that other people would argue social-democratic positions well (assuming we hold that social-democratic positions have the same prior probability as those of any other ideology of equivalent complexity), and receive upvotes for it. Instead, I just saw an overwhelmingly neoliberal consensus in which I was actually one of the two or three people explaining or advocating left-wing positions at all.
Think of the Talmud’s old heuristic for a criminal court: a clear majority ruling is reliable, but a unanimous or nearly unanimous ruling indicates a failure to consider alternatives.
Now, admittedly, neoliberal positions often appear appealingly simple, even when counterintuitive. The problem is that they appear simple because the complexity is hiding in unexamined assumptions, assumptions often concealed in neat little parables like “money, markets, and businesses arise as a larger-scale elaboration of primitive barter relations”. These parables are simple and sound plausible, so we give them very large priors. Problem is, they are also completely ahistorical, and only sound simple for anthropic reasons (that is: any theory about history which neatly leads to us will sound simpler than one that leads to some alternative present, even if real history was in fact more complicated and our real present less genuinely probable).
So overall, it seems that for LessWrong, any non-neoliberal position (i.e., a position based on refuting those parables) is going to have a larger inferential distance and take a nasty complexity penalty compared to simply accepting the parables and never going looking for historical evidence. This may be a fault of anthropic bias, or even possibly a fault of Bayesian thinking itself (i.e., large priors lead to very confident belief even in the absence of definite evidence).
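To make that last parenthetical concrete, here is a minimal sketch in Python (the numbers are invented purely for illustration):

```python
def posterior(prior, likelihood_ratio):
    """Posterior P(H|E) given a prior P(H) and the likelihood
    ratio P(E|H) / P(E|not-H), via Bayes' rule on odds."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# A parable that "sounds simple" gets a large prior; the actual
# historical evidence is weak (likelihood ratio barely above 1).
print(posterior(0.90, 1.2))  # ~0.915 -- still very confident
# The same weak evidence applied to a rival, "complex" hypothesis:
print(posterior(0.10, 1.2))  # ~0.118 -- still very unconfident
```

The weak evidence barely moves either number; nearly the whole gap in confidence comes from the priors, which is exactly where the alleged anthropic bias lives.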
This particular example doesn’t seem troublesome to me, because I’m comfortable with the idea of bartering for debt. That is, my neighbor gives me a cow, and now I owe him one; then I defend his home from raiders and give him a chicken, and we’re even. A tinker comes to town, and I trade him a pot of alcohol for a knife because there’s no real trust of future exchanges, and so on. Coinage eventually makes it much easier to keep track of these things, because then, instead of weighing my neighbor’s subjective estimate of how much I owe him against my own, we can count pieces of silver.
Now, suppose I’m explaining to a child how markets work. There are simply fewer moving pieces in “twenty chickens for a cow” than in “a cow now for something roughly proportional to the value of the cow in the future,” and so that’s the explanation I’ll use, but the theory still works for what actually happened. (Indeed, no doubt you can explain the preference for debt over immediate bartering as having lower frictional costs for transactions.)
In general, it’s important to keep “this is an illustrative example” separate from “this is how it happened,” and I don’t know whether various neoliberals have done that. Adam Smith, for example, claims that barter would be impractical, and thus that people immediately moved to currency, which was sometimes things like cattle but generally something metal.
In this particular thread or on LW in general?
In the particular thread, it’s likely that such people didn’t have time or inclination to argue, or maybe just missed this whole thing altogether. On LW in general, I don’t know—I haven’t seen enough to form an opinion.
In any case, the survey results do not support your thesis that LW is dominated by neoliberals.
Haven’t seen much unanimity on sociopolitical issues here.
On the other hand there is that guy Bayes… hmm… what did you say about unanimity? :-D
Graeber’s views are not quite mainstream consensus ones. And, as you say, *any* historical narrative will sound simple for anthropic reasons—it’s not something specific to neo-liberalism.
Not sure what you are proposing as an alternative to historical narratives leading to what actually happened. Basing theories of reality on counterfactuals doesn’t sound like a good idea to me.
The survey results are out? Neat!
I’m not saying we should base theories on counterfactuals. I’m saying that we should account for anthropic bias when giving out complexity penalties. The real path reality took to produce us is often more complicated than the idealized or imagined path.
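One way to make that concrete (a sketch, assuming we write $E$ for the present we actually observe, and $H_1$, $H_2$ for the simple parable and the messier history respectively):

$$\frac{P(H_1 \mid E)}{P(H_2 \mid E)} \;=\; \frac{P(E \mid H_1)}{P(E \mid H_2)} \cdot \frac{P(H_1)}{P(H_2)}$$

The anthropic worry is that we estimate the prior ratio $P(H_1)/P(H_2)$ by how neatly each narrative leads to $E$, which smuggles the likelihood ratio into the prior and effectively counts it twice. Accounting for the bias means penalizing that double-counting, not reasoning from counterfactual histories.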
The question is: are they non-mainstream in economics, anthropology, or both? I wouldn’t trust him to make any economic predictions, but if he tells me that the story of barter is false, I’m going to note that his training, employment, and social proof are as an academic anthropologist working with pre-industrial tribal cultures.
Previous years’ survey results: 2012, 2011, 2009. The 2013 survey is currently ongoing.
How would that work?
I am not sure what the mainstream consensus in anthropology looks like, but I have the impression that Graeber’s research is quite controversial.
At minimum, it does seem like many anthropologists see Graeber’s work as much more tied into his politics than is usual even for that field, and anthropology as a whole has serious issues with that.
Considering how many of their comments have been downvoted, including simple inquiries like this one, and considering other recent events, such as those discussed by Ialdabaoth and others here, my guess is that that’s not what is going on here.
To be clear, I don’t think someone’s net-stalking me. That would be ridiculous. But I do think there’s a certain… tone and voice that’s preferred in a LessWrong post, and I haven’t learned it yet. There’s a way to “sound more rational”, and votes are following that.
I hope you realize the epistemic dangers of automatically considering all negative feedback as the malicious machinations of your dastardly enemies...
While I take your point, it seems unlikely that that’s what’s motivating the response here. eli_sennesh and Eugine_Nier are about as far apart from each other politically as you can get without going into seriously fringe positions, with ialdabaoth in the middle, but there’s evidence of block downvoting for all of them. You’d need a pretty dastardly enemy to explain all of that.
(I don’t think block downvoting’s responsible for most of eli’s recent karma loss, though.)
Block, meaning an organized effort? Definitely not. But I do find a −100 karma hit surprising, considering that even very hiveminded places like Reddit are very slow to accumulate comment votes in one direction or the other.
EDIT: And now I’m at +13 karma, which from −48 is simply absurd again. Is the system intended to produce dramatic swings like that? Have I invoked the “complain about downvoting, get upvoted like mad” effect seen normally on Reddit?
There’s a fairly common pattern where someone says something that a small handful of folks downvote, then other folks come along and upvote the comment back to zero because they don’t feel it deserves to be negative, even though they would not have upvoted it otherwise. You’ve been posting a lot lately, so getting shifts of several dozen karma back and forth due to this kind of dynamic is not unheard of, though it’s certainly extreme.
Concerted, not necessarily organized. It’s possible for one person to put a pretty big dent in someone else’s karma if they’re tolerant of boredom and have a reasonable amount of karma of their own; you get four possible downvotes to each upvote of your own (upvotes aren’t capped), which is only rate-limiting if you’re new, downvoting everything you see, or heavily downvoted yourself.
This just happens to have been a sensitive issue recently, as the links in JoshuaZ’s ancestor comment might imply.
Well, I’m sorry for kvetching, then.
I understand block downvoting as a user (one, but possibly more) just going through each and every post by a certain poster and downvoting each one without caring about what it says.
It is not an “organized effort” in the sense of a conspiracy.
Block downvoting may or may not be going on in this case, but at this point I also assign a high probability that there are people here who downvote essentially all posts that seem to be arguing for positions generally seen as being on the left end of the political spectrum. That seems to include posts which are purely giving data and statistics.
Ah, well. I blame Clippy, then.
As I mentioned, I accept that block downvoting exists; it’s pretty obvious. However, the question is what remains after you filter it out. And as you yourself point out, in this case the remainder is still negative.
Of course that would be epistemically dangerous. Dare I say it, as assuming that all language used by people one doesn’t like is adversarial?
More to the point, I haven’t made any such assumption. There are contexts where negative feedback and discussion are genuine and useful; some of eli’s comments have been unproductive, and I’ve actually downvoted some of them. That doesn’t alter the fact that there’s nothing automatic going on: in the here and now, we have a problem involving at least one person, and likely more, downvoting primarily out of disagreement rather than over anything substantive, and that downvoting is coming from a specific end of the political spectrum. That says nothing about “dastardly enemies”: it simply means that karma results on these specific issues are, in this context, highly likely to be unrepresentative, especially when people are apparently downvoting comments of Eli’s that are literal answers to questions, such as here, simply because they don’t like them.
The possibilities that Eli’s comments were downvoted “politically” and that they were downvoted “on merits” are not mutually exclusive. It’s likely that both things happened.
Block down- and up-voting certainly exists. However, as has been pointed out, you should treat this as noise (or, rather, the zero-information “I don’t like you” message) and filter it out to the degree that you can.
Frankly, I haven’t looked carefully at votes in that thread, but some of Eli’s posts were silly enough to downvote on their merits, IMHO. I have a habit of not voting on posts in threads that I participate in, but if I were just an observer, I would have probably downvoted a couple.
I agree that both likely happened. But if a substantial fraction was due to the first, what does that suggest?
And how do you suggest one do so in this context?
Look at short neutral “utility” posts and add back the missing karma to all the rest.
For example if somewhere in the thread there were a post “Could you clarify?” and that post got −2 karma, you would just assume that two people block-downvoted everything and add 2 karma to every post in the thread.
If you want to be more precise about it, you can look at the “% positive” number which will help you figure out how much karma to add back.
I am not sure it’s worth the bother, though.
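As a sketch, the correction described above might look like this in Python (the function names and sample numbers are illustrative; there is no actual LW API for this):

```python
def corrected_karma(thread_scores, baseline_score, baseline_expected=0):
    """Estimate the block-voting offset from a short neutral 'utility'
    post that ought to sit near zero, then add the missing karma back
    to every post in the thread."""
    offset = baseline_expected - baseline_score  # e.g. 0 - (-2) = +2
    return [score + offset for score in thread_scores]

# A "Could you clarify?" post sitting at -2 suggests two block-downvoters:
print(corrected_karma([-5, -1, -3], baseline_score=-2))  # [-3, 1, -1]
```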
So, that seems like a plausible method, and it suggests there’s a −2 to −3 offset being applied to Eli’s stuff. But that’s a lot of effort, and it means that people reading the thread are going to get a false impression of a consensus on LW unless they are aware enough to make the correction. Moreover, it is, simply put, highly discouraging. Daenerys and TimS have both stated that, due to this sort of thing (and to be clear, it is coming disproportionately from a specific end of the political spectrum), they are posting less frequently on LW. That means that people are actively using the karma system to force a political narrative. Aside from the obvious reasons why that’s bad, it’s also unhelpful if one is actually trying to have a discussion with any decent chance of finding out information about reality, rather than simply seeing which “side” has won in a given context.

I’d rather LW not turn into the political equivalent of /r/politics on reddit, where despite the nominal goals certain political opinions drown out almost all dissent. The fact that here it would be occurring at the other end of the political spectrum doesn’t help matters, and it can be particularly damaging given that LW’s long-term goals are about rationality, not politics.
For my own trying-to-shut-up part, I do find one thing about “politics is the mind-killer” distinctly weird: the notion that we can seriously discuss morality, ethics, meta-ethics, and Taking Over The World thereby, and somehow expect never to arrive at a matter of political controversy.
For one example, an FAI would likely severely reduce the resource-income and social status of every single currently-active politician, left or right, up or down.
For another, more difficult, example, I can’t actually think of how you would do, say, CEV without some kind of voting and weighting system over the particular varieties of human values. Once you’ve got some notion of having to measure the values of at least a representative sample of everyone in the world and extrapolate those, you are innately in “political” territory. Once you need to talk of resource tradeoffs between values, you are innately in “economic” territory. Waving your arms and saying, “Friendly Superintelligence!” won’t actually tell us anything about what algorithm that thing is actually running.
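To illustrate the structural point, here is a deliberately toy sketch; the value names, weights, and proportional-allocation rule are all invented for illustration and are not a proposal for CEV itself:

```python
# Each surveyed person assigns weights to a few values; resources are
# then split in proportion to the aggregate weights. Even this trivial
# scheme forces "political" and "economic" choices: whose weights count,
# whether they are comparable across people, and how big the budget is.
people = [
    {"security": 0.7, "liberty": 0.2, "novelty": 0.1},
    {"security": 0.3, "liberty": 0.5, "novelty": 0.2},
]

totals = {}
for weights in people:
    for value, w in weights.items():
        totals[value] = totals.get(value, 0.0) + w

budget = 100.0  # arbitrary units of some scarce resource
grand_total = sum(totals.values())
allocation = {v: budget * w / grand_total for v, w in totals.items()}
print(allocation)  # {'security': 50.0, 'liberty': 35.0, 'novelty': 15.0}
```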
If I may don my Evil Hansonian hat for a moment, conventional politics isn’t so much about charting the future of our society as about negotiating the power relationships between tribal alignments. Values and ethical preferences and vague feelings of ickiness go into those alignments (and then proceed to feed back out of them), but it’s far rarer for people to support political factions out of de-novo ethical reasoning than you’d guess from talking to them about it. The mind-killer meme is fundamentally an encouragement to be mindful of that, especially of the nasty ideological feedback loops that it tends to imply, and a suggestion to focus on object-level issues where the feedback isn’t quite so intense.
One consequence of this is that political shifts happen at or above human timescales, as their subjects become things that established tribes notice they can fight over. If you happen to be a singularitarian, then, you probably believe that the kinds of technological and social changes that LW talks about will at some point—probably soon, possibly already—be moving faster than politics can keep up with. Speaking for myself, I expect anything that conventional legislatures or political parties say about AI to matter about as much as the RIAA did when they went after Napster, and still less once we’re in a position to be talking seriously about strong, friendly artificial intelligence.
More importantly from our perspective, though, anything conventional politics doesn’t care about yet is also something that we have a considerably better chance of talking about sanely. We may be—in fact, we’re certainly—in the territory of politics in the sense of subjects relevant to the future of the polis, but as long as identity considerations and politics-specific “conventional wisdom” stay relatively distant from our reasoning, we can expect our minds to remain relatively happy and unkilled.
Yeah, this comes up from time to time. My own approach to it is to (attempt as best as I can to) address the underlying policy question while avoiding language that gets associated with particular partisan groups.
For example, I might discuss how a Blue politician might oppose FAI because they value their social status, or how a Green politician might expect a Blue politician to oppose FAI for such a reason even though the Green politician is not driven purely by such motives, or whatever… rather than using other word-pairs like (Republican/Democrat), (liberal/conservative), (reactionary/progressive), or whatever.
If I get to a point in that conversation where the general points are clearly understood, and to make further progress I need to actually get into specifics about specific politicians and political parties, well, OK, I decide what to do when that happens. But that’s not where I start.
And I agree that the CEV version of that conversation is more difficult, and needs to be approached with more care to avoid being derailed by largely irrelevant partisan considerations, and that the same is true more generally about specific questions related to value tradeoffs and, even more generally, questions about where human values conflict with one another in the first place.
I don’t think the usual mantra of “politics is the mind-killer” is meant to avoid all political issues, although that would be one interpretation. Rather, there are two distinct observations: one is purely descriptive, that politics can be a mind-killer. The second is proscriptive: to refrain, when possible, from discussing politics until our general level of rationality improves. Unfortunately, that’s fairly difficult, because many of these issues matter. Moreover, it connects with certain problems where counterintuitive or contrarian ideas are seen as somehow less political than more mainstream ones.
That’s… an excellent way of putting it. Non-mainstream political “tribes” are considered “less political” precisely because they don’t stand any chance of actually winning elections in the real world, so they get a Meta-Contrarian Boost on the internet. The usual ones I see are anarchists, libertarians, and neo-reactionaries.
Empirically I don’t think this is true. Minority political tribes sometimes get a pass for organizing themselves around things that aren’t partisan issues, or are only minor partisan issues, in the mainstream—the Greens sometimes benefit from this in US discourse, although they’re a complicated and very regionally dependent case—but as soon as you stake out a position on a mainstream claim, even if your reasoning is very different from the norm, you should expect to be attacked as viciously as any mainstream wonk. I expect neoreaction, for example, would have met with a much less heated reception if it weren’t for its views on race.
Minority views do get a boost on the Internet, but I think that has more to do with the echo-chamber effects that it encourages. It’s far easier to find or collect a group of people that all agree with you on Reddit or Tumblr than it is out there in the slow, short-range world of blood and bone.
Considered where, and by whom? Because that is completely unlike my experience. On the Usenet groups rec.arts.sf.*, it was (I have not read Usenet for many years) absolutely standard that Progressive ideas were seen as non-political, while the merest hint of disagreement would immediately be piled on as “introducing politics to the discussion”. And the reactosphere is intensely aware that what they are talking about is politics.