Aren’t there a bunch of Litanies (Tarski, Gendlin, Hodgell) denouncing precisely this kind of self-deception?
If they engage in censorship and believe they are the kind of people who don’t, they ought to either stop or change their belief.
Yeah, that’s what I was trying to say. If you want a debate better than 4chan’s, but also feel bad whenever someone accuses you of censorship, you need to think it through and find a solution you would be satisfied with (while accepting that it may be imperfect), weighing both sides of the risk.
Disabling the voting system, or giving someone a dozen “balancing” upvotes whenever they accuse you of censorship / manipulation / hive mind, only incentivizes people to keep accusing you of censorship / manipulation / hive mind. And maybe I am overreacting, but I think I already see a pattern:
Zack cannot convince us of his opinions on the object level, so he instead keeps writing about how the rationalists are not sufficiently rational to accept his politically incorrect opinions (if you disagree with him, that only proves his point);
Trevor keeps writing about how secret services are trying to manipulate the AI safety community, and how they like to use “clown attacks”, i.e., manipulating people to associate the beliefs they want to suppress with low status (if you tell him this is probably crazy, that only proves his point);
now Roko has joined the group by writing a few comments that got downvoted (possibly rightly), and then complaining that if you downvote him, you are participating in the system of censorship (so if you downvote him, that only proves his point).
We have a long history of content critical of Less Wrong getting highly upvoted on Less Wrong. Which is, by itself, a good thing, provided that the criticism makes sense and that the readers understand the paradoxes involved (such as: more tolerant groups will often get accused of intolerance more frequently, simply because they do not suppress such speech). Famously, Holden Karnofsky’s criticism of the Singularity Institute (the previous name of Yudkowsky’s organization) was among the top-upvoted articles of 2012. And that was a good thing, because it allowed an honest and friendly debate between the two sides.
But recently this seems to be devolving into people upvoting cheap criticism optimized to exploit this pattern. Instead of writing a well-reasoned article whose central idea disagrees with the current LW consensus, and letting the readers appreciate the nuance that such an article was posted on LW, the posts are lower-effort and directly include some form of “if you disagree with me, that only proves my point”. And… it works.
I would like to see less of this.
Zack cannot convince us [...] if you disagree with him, that only proves his point

I don’t think I’m doing this! It’s true that I think it’s common for apparent disagreements to be explained by political factors, but I think that claim is itself something I can support with evidence and arguments. I absolutely reject “If you disagree, that itself proves I’m right” as an argument, and I think I’ve been clear about this. (See the paragraph in “A Hill of Validity in Defense of Meaning” starting with “Especially compared to normal Berkeley [...]”.)
If you’re interested, I’m willing to write more words explaining my model of which disagreements with which people on which topics are being biased by which factors. But I get the sense that you don’t care that much, and that you’re just annoyed that my grudge against Yudkowsky and a lot of people in Berkeley is too easily summarized as a grudge against an abstracted “community” that you also happen to be in, even though this has nothing to do with you? Sorry! I’m not totally sure how to fix this. (It’s useful to sometimes be able to talk about general cultural trends, and being specific about which exact sub-sub-clusters are and are not guilty of the behavior being criticized would be a lot of extra wordcount that I don’t think anyone is interested in.)
Sorry for making this personal: I had only three examples in mind and couldn’t leave one out.
Would you agree with the statement that your meta-level articles are more karma-successful than your object-level articles?
Because if that is a fair description, I see it as a huge problem. (Not exactly in the sense of “you are doing the wrong thing”, but rather “the voting of LW users provides you a weird incentive landscape”.) Because the object level is where the ball is! The meta level is ultimately there only to make us more efficient at the object level, by indirect means. If you succeed at the meta level, then you should also succeed at the object level; otherwise, what exactly was the point?
(Yours is a different situation from Roko’s: he got lots of karma for an object-level article, and then wrote a few negative-karma comments, which is what triggered the censorship engine.)
The thing I am wondering about is basically this: if you write an article effectively saying “Yudkowsky is silly for denying X”, and you get hundreds of upvotes, what would happen if you subsequently abandoned the meta level entirely and just wrote an article saying “X” directly? Would it also get hundreds of upvotes? What is your guess?
Because if the article saying “X” would also get hundreds of upvotes, then my annoyance is with you. Why don’t you write the damned article and bask in the warmth of rationalist social approval? Sounds like a win/win for everyone concerned (except perhaps Yudkowsky, but I doubt he is happy about the meta articles either, so I guess this doesn’t make it worse for him). Then the situation gets resolved and we can all move on to something else.
On the other hand, if the article saying “X” would not get so many upvotes, then my annoyance is with the voters. I mean, what is the point of blaming someone for not supporting X if you do not support X yourself? In that case, I suspect the actual algorithm behind the votes was something like “ooh, this is so edgy, and I identify as edgy, have my upvote, brother”, without the voter actually having a specific opinion on X. Contrarianism for contrarianism’s sake.
(My guess is that the article saying “X” would indeed get much less karma, and that you are aware of that, which is why you didn’t write it. If that is right, I blame the voters for pouring gasoline on the fire: encouraging you to fight for something they don’t themselves believe in, just because watching you fight is fun.)
Of course, as is usual when psychologising, all of this is merely my guess and could be horribly wrong.
Would you agree with the statement that your meta-level articles are more karma-successful than your object-level articles? Because if that is a fair description, I see it as a huge problem.

I don’t think this is a good characterization of my posts on this website.
If by “meta-level articles”, you mean my philosophy of language work (like “Where to Draw the Boundaries?” and “Unnatural Categories Are Optimized for Deception”), I don’t think success is a problem. I think that was genuinely good work that bears directly on the site’s mission, independently of the historical fact that I had my own idiosyncratic (“object-level”?) reasons for getting obsessed with the philosophy of language in 2019–2020.[1]
If by “object-level articles”, you mean my writing on my special-interest blog about sexology and gender, well, the overwhelming majority of that never got a karma score because it was never cross-posted to Less Wrong. (I only cross-post specific articles from my special-interest blog when I think they’re plausibly relevant to the site’s mission.)
If by “meta-level articles”, you mean my recent memoir sequence, which talks about sexology and the philosophy of language and various autobiographical episodes of low-stakes infighting among community members in Berkeley, California, well, those haven’t been karma-successful: parts 1, 2, and 3 are currently[2] sitting at 0.35, 0.08 (!), and 0.54 karma-per-vote, respectively.
If by “meta-level articles”, you mean posts that reply to other users of this website (such as “Contra Yudkowsky on Epistemic Conduct for Author Criticism” or “‘Rationalist Discourse’ Is Like ‘Physicist Motors’”), I contest the “meta level” characterization. I think it’s normal and not particularly meta for intellectuals to write critiques of each other’s work, where Smith writes “Kittens Are Cute”, and Jones replies in “Contra Smith on Kitten Cuteness”. Sure, it would be possible for Jones to write a broadly similar article, “Kittens Aren’t Cute”, that ignores Smith altogether, but I think that’s often a worse choice, if the narrow purpose of Jones’s article is to critique the specific arguments made by Smith, notwithstanding that someone else might have better arguments in favor of the Cute Kitten theory that have not been heretofore considered.
You’re correct to notice that a lot of my recent work has a cult-infighting drama angle to it. (This is very explicit in the memoir sequence, but it noticeably leaks into my writing elsewhere.) I’m pretty sure I’m not doing it for the karma. I think I’m doing it because I’m disillusioned and traumatized by the events described in the memoir, and will hopefully get over it after I’ve got it all written down and out of my system.
There are another couple of posts coming in that sequence (including one this coming Saturday, probably). If you don’t like them, I hereby encourage you to strong-downvote them. I write because I selfishly have something to say; I don’t think I’m entitled to anyone’s approval.
[1] In some of those posts, I referenced the work of conventional academics like Brian Skyrms and others, which I think provides some support for the notion that the nature of language and categories is a philosophically rich topic that someone might find significant in its own right, rather than being some sort of smokescreen for a hidden agenda.

[2] Pt. 1 actually had a much higher score (over 100 points) shortly after publication, but got a lot of downvotes later, after being criticized on Twitter.