Literally speaking not at all, since it was an exaggeration. 10 pages of praise is clearly not necessary.
That said, I strongly believe that posts containing criticism of Less Wrong on average get many more downvotes (and fewer upvotes) than posts which remark on how great Less Wrong is. For example, I have seen “joke” posts about how Yudkowsky is god that got about +50 points (it was a while ago; I would need to check to confirm my memory). On the other hand, every time I post a criticism of Less Wrong, it gets a lot of downvotes (though usually some upvotes as well), and as for criticism posted by other people… well, I don’t see a lot of that, do you?
Maybe your criticisms of Less Wrong just aren’t all that well-reasoned. Plenty of Less Wrong criticism gets upvoted here. The most-upvoted post of all time is a criticism of MIRI, and several of my own most-upvoted comments are direct criticisms of Eliezer, e.g. this and this. See also this much-upvoted post.
Thanks for the reply. When you suggest that maybe the problem is on my end, are you offering that merely as a possibility, or do you believe it is actually the case? I’m asking because, while it is of course entirely reasonable to say that the fault lies with me, nobody has yet told me what specifically is wrong with my posts (other than “not enough facts” or “you sound left-wing”). If you do think the fault is mine, please tell me what specifically I could improve.
The first post you link to is the one by Holden that I specifically referred to above as the only type of criticism that does get upvoted. The reasons for this are varied:
1) Holden is high status: nobody is going to tell Holden to shut up and go away (as I’ve been told to), because the mere fact that he is taking MIRI seriously is good for MIRI and Less Wrong.
2) Holden is exceedingly polite and says nothing that could even be taken as an excuse to be offended.
3) Holden goes out of his way to praise Less Wrong as a community, which of course makes people here feel good.
4) Holden has spent a ridiculous amount of time and effort writing and supporting that exceedingly lengthy post, well beyond normal standards.
5) Holden doesn’t actually say anything that is considered taboo here on Less Wrong. His post defends the proposition that donating to MIRI isn’t the best possible expenditure of money. That’s hardly going to rile people up.
Holden’s post is the equivalent of James Randi going to a dowsers’ forum and writing a 10-page thesis on why he thinks dowsing isn’t 100% effective, while repeatedly saying how he might be wrong, and he really wants to be able to change his mind, and isn’t the idea of dowsing wonderful and aren’t dowsers great people. Of course the dowsers would be very happy with a post like that: it only validates them to have someone like James Randi say all that. This does NOT mean that dowsers are all rational individuals who are happy to receive criticism of their ideas.
The same point holds for your own posts criticizing Eliezer, albeit to a lesser extent. And again, criticizing Eliezer is not taboo here. Criticizing Less Wrong itself, more so.
posts containing criticism of Less Wrong on average get many more downvotes (and fewer upvotes) than posts which remark on how great Less Wrong is.
(nods) That’s a far more defensible statement. It might even be true.
as for criticism posted by other people… well, I don’t see a lot of that, do you?
I’m not sure what you mean by “a lot”. I’ve seen more criticism of LessWrong here than I’ve seen criticism of RationalWiki, for example, and less than I’ve seen criticism of the Catholic Church. More than I’ve seen criticism of Dan Dennett. I’m not sure if I’ve seen more criticism of Less Wrong than of Richard Dawkins, or less. What’s your standard?
We could instead ask: should there be more of it? Should there be less? I suspect that’s a wrong question as well though. Mostly, I think the criticism should be of higher quality. Most of what I see is tedious and redundant. Of course, Sturgeon’s Law applies in this as in everything.
All of that said, if I were to list off the top of my head the top ten critics of LessWrong who post on LW, your name would not even come up; so if you are attempting to suggest that you are somehow the singular contrarian voice on this site, I can only conclude that you haven’t read much of the site’s archives.
There is also more criticism of Less Wrong here than there is criticism of people who think that the world is run by lizard-people. This is because Less Wrong is more relevant to Less Wrong than lizard-people are, not because the lizard-believers are actually considered more credible.
The only reasonable standard to me is comparing the amount of criticism with the amount of praise. I see many more posts talking about how great Less Wrong is than I see criticism of Less Wrong. More worryingly, the criticism of Less Wrong that I do see is on other forums, where it is widely agreed that Less Wrong is subject to groupthink, and that criticism is summarily ignored here.
I assume you aren’t actually suggesting that RationalWiki, the Catholic Church, Dan Dennett and Richard Dawkins are as irrelevant to Less Wrong as lizard-people. I picked a few targets that seemed vaguely relevant; if you think I should pick different targets, let me know what they are.
The only reasonable standard to me is comparing the amount of criticism with the amount of praise.
Why is that? This doesn’t seem true to me at all.
More worryingly, the criticism of Less Wrong that I do see is on other forums
Why does this worry you?
it is widely agreed that Less Wrong is subject to groupthink, and that criticism is summarily ignored here.
This might be true. Can you unpack what you mean by “groupthink”? (Or what you think those other people on other forums whose statements you’re reporting mean by it, if that’s more relevant?)
No, I am saying that comparing criticism of Less Wrong with criticism of other websites/people is not a valid metric at all, since the total amount written about each subject differs. You can’t look at absolute amounts of criticism here; it has to be relative, or the total amount of posts would simply determine the answer.
It worries me that a lot of the criticism of Less Wrong is made outside of Less Wrong because this indicates that the criticism is not accepted here and Less Wrong exists in a bubble.
The exact criticism of Less Wrong usually isn’t very good, since people tend not to spend a lot of time writing thoughtful criticisms of websites that they aren’t affiliated with. It usually amounts to “gives off a bad vibe”, “uses their own little language”, “copies Yudkowsky in everything they believe” or “disproportionately holds extreme views without thinking this is odd.” All of this indicates what I call groupthink, which is the act of paying too much attention to what others in the in-group believe and being isolated from the rest of the world.
Imagine that you have a community X, which is perfectly rational and perfectly updating. (I am not saying LW is that community; this is just an example.) Of course there would be many people who disagree with X; some of them would be horribly offended by the views of X. Those people would criticize X a lot. So even with a perfectly updating super rationalist community, the worst criticism would come from outside.
Also, most criticism would come from outside simply because there are more non-members than members, and if the group is not secret and is somehow interesting, many non-members will express their opinions about the group.
Therefore, “a lot of the criticism of Less Wrong is made outside of Less Wrong” is not evidence against the rationality of LessWrong, because we would expect the same result both in universes where LW is rational and in universes where LW is irrational.
So even with a perfectly updating super rationalist community, the worst criticism would come from outside.
You write “so”, but that doesn’t follow. You are tacitly assuming that a community has to be held together by shared beliefs, but that does not match genuine rationality, since one cannot predetermine where rational enquiry will lead—to attempt to do so is to introduce confirmation bias. You also seem to think that the “worst” criticism is some kind of vitriolic invective. But what is of concern to genuine rationalists is the best—best argued, most effective—criticism.
Also, most criticism would come from outside simply because there are more non-members than members, and if the group is not secret and is somehow interesting, many non-members will express their opinions about the group.
If the group is discussing specialised topics, then good criticism can only come from those who are familiar with those topics.
Therefore, “a lot of the criticism of Less Wrong is made outside of Less Wrong” is not evidence against the rationality of LessWrong, because we would expect the same result both in universes where LW is rational and in universes where LW is irrational.
You are still missing the point that a genuine rationalist community would invite criticism.
You are still missing the point that a genuine rationalist community would invite criticism.
How specifically?
For example, should we ask all the critics from outside to publish an article on LW about what they think is wrong with LW? Do we also need to upvote such articles, regardless of their merit? Do we also have to write supporting comments to such articles, regardless of whether we agree with their points? Do we have to obsess about the same points again and again and again, never stopping? … What exactly should a community do to pass the “invites criticism” test?
I made the strawman suggestions because I wasn’t sure what your point was, and I also wanted to have an “upper bound” on what the community is supposed to do to pass the “invites criticism” test. Because defining only the lower bound could easily lead to later responses of the type: “Sure, you did X, Y and Z, but you are still not inviting criticism.”
The simplest solution would be to contact people already criticizing LW and invite them to write and publish a single article (without having to create an account, collect karma, learn markdown formatting, and all the other trivial inconveniences), assuming the article passes at least some basic filter (no obvious insanity; claims of LW doing something backed up by hyperlinks). There is always a possibility that we would simply not notice some critics, but that can be partially solved by asking “have you noticed any new critics?” in the Open Thread.
Somehow I don’t like the “behave like a dick and be rewarded with greater publicity” aspect this would inevitably have, since the most vocal critics of LW are the two or three people from RationalWiki whose typical manner of discussion is, uhm, less than polite. But if we don’t choose them, it could seem from the outside like avoiding the strongest arguments. Let’s suppose this is a price we are willing to pay in the name of properly checking our beliefs—especially if it only happens once in a long while.
Seems like a good idea to me; at least worth trying once.
inviting opposing views regularly happens in, e.g., academic philosophy
I guess the invited opponents in this situation are other academic philosophers, not e.g. a random blogger who built their fame by saying “philosophers are a bunch of idiots” and inserting ad hominems about specific people.
So if we tried in a similar manner to speak with polite equals, the invited critics would be people from other organizations (like Holden Karnofsky from GiveWell). Which kinda already happened. And it seems like not enough; partially because of the polite argumentation, but also because it only happened once.
Perhaps what we should aim for is something between Holden Karnofsky and our beloved stalkers at RationalWiki. Perhaps we should not ask people to express their opinion about whole LW (unless they volunteer to), but only about some specific aspect. That way they wouldn’t have to read everything to form an opinion (e.g. someone could review only the quantum physics part, ignoring the rest of the sequences).
Do you have a specific suggestion of people who could be invited to write their criticism of LW here?
Your article would have been much better received if you hadn’t mentioned LessWrong so much in your last main paragraph. If you had subtly avoided calling direct fault on LessWrong, I think this could have been very well received. Just look at the comments here. Despite the karma on the article, this post is getting a lot of attention.
I’ve been probing LessWrong’s reactions to various things since Inferential Silence motivated me to bother with LessWrong. I can give you a discrete bullet point list of what LessWrong likes, loves, and hates. It’s hard to pinpoint the groupthink because the one special topic that LessWrong “never” disagrees on is so hard to find. You’re perfectly allowed to disagree with cryonics, SAI, Yudkowsky, you can discuss politics if you’re quiet about it, you can discuss any of a number of things and not suffer intense downvoting so long as you express your thoughts perfectly clearly. In this way LessWrong skillfully avoids noticing that it is participating in groupthink.
So what is it? It’s simple, recursive, ironic, and intensely obvious in hindsight. LessWrong is the focus of LessWrong. It’s not any given subject, topic, person, or method. It is the LessWrong collective itself. LessWrong is the one thing you cannot hate, while also being a part of LessWrong. To challenge LessWrong is to challenge rationality. Challenging Yudkowsky? Sure. Not like he’s the avatar of rationality or anything. Go ahead, disagree with him. Most people here disagree with him on some subject or another. I’m probably one of the few people who does understand and agree with Yudkowsky nearly entirely. The best advice I could give LessWrong is that, if it were down to the two of them as to which was a better fit for being the avatar of rationality, it would be Yudkowsky. LessWrong disagrees. LessWrong is totally content to disavow credence to Yudkowsky. No, in LessWrong’s view, the title of avatar of rationality belongs to itself. Not to any particular person in the collective, but to the collective itself. So long as you avoid hitting that node and make your thoughts clear in LessWrong’s memetic language, you’re fine. Fall outside that boundary? Hell no. Not on LessWrong. Not while I’m still here. (For each member of the hive mind in turn.)
There is a counterargument here, in your and others’ increasing disallegiance to LessWrong. The problem is that most of you aren’t equipped to skillfully reform LessWrong, so you just end up leaving and the problem goes ignored. This effectively “removes” you from the hive, so despite the fact that you hold the counter-stance, you’re not really part of LessWrong to the point where it can be claimed that LessWrong values anything other than LessWrong. Well, suppose you don’t leave LessWrong. Despite your contrary view, you can barely identify the problem enough to voice it, making you question whether your viewpoint is rationally legitimate. Right now you’re on the border of LessWrong, deciding if you’re going to be a part of the collective or not. In this way, you can construct a very precise measure of LessWrong with a small bit of introspection.
Judging by the reception of your comments here, I’d say you’re well equipped to speak the LessWrong language, so all you need is sufficient understanding of the hive’s mind to begin reforming it. I’d further suggest starting with something other than the ban on politics, but if this was the subject you picked, then I must assume you’re not hive-aware (compare to: self-aware) enough to formally recognize the other flaws.
I am not sure I follow your argument completely. It feels to me as if you suggested that discussing everything, as long as it is polite and rational, is proof of the LessWrong hivemind.
Well, I would call that “culture”, and I am happy to have it here. I am not sure what benefit exactly we would get by dismantling it. (A well-kept garden that committed suicide because it loved contrarianism too much?) I mean, it’s not like none of us ever goes beyond the walls of LessWrong.
I think Less Wrong is a failure at instilling abilities-in-practice, and designed in a way that detracts from people’s real-world performance.
I’m going to try to explain what LW is, why that’s bad, and sketch what a tool to actually help people become more rational would look like.
This post is based on the assumption that a) the creators of Less Wrong wish Less Wrong to result in people becoming better at achieving their goals (instrumental rationality, aka “efficient productivity”), and b) some (perhaps many) readers read it towards that goal. It is this that I think is self-deception.
So what is Less Wrong? It is a blog, a succession of short fun posts with comments, most likely read when people wish to distract or entertain themselves, and tuned for producing shiny ideas which successfully distract and entertain people. As Merlin Mann says: “Joining a Facebook group about creative productivity is like buying a chair about jogging”. Well, reading a blog to overcome akrasia IS joining a Facebook group about creative productivity.
Many (most?) participants are allowing LW to grab their attention because it is fun and easy, and thus simultaneously distracting themselves from Work (reducing their overall Work time) while convincing themselves that this distraction is helping them to become more rational. This reduces the chance that they will consciously Work towards rationality, since they feel they are already working towards that goal with their LW reading time.
So, what kind of observation specifically does your hypothesis disallow?
Trying (admittedly only for a very short time) to steelman your position, I’d say the “dogma” of LessWrong is that having an aspiring rationalist community is a good thing. Because LW is an aspiring rationalist community, people who think such a community is stupid obviously filter themselves out of LW. In other words, the shared opinion of LW members is that LW should exist.
Everything being polite and rational is informational; the point is to demonstrate that those qualities are not evidence of the hive mind quality. Something else is, which I clearly identify. Incidentally, though I didn’t realize it at the time, I wasn’t actually advocating dismantling it, or claiming that it was a bad thing to have at all.
I mean, it’s not like none of us ever goes beyond the walls of LessWrong.
That’s the perception that LessWrong would benefit from correcting; it is as if LessWrongers never go outside the walls of LessWrong. Obviously you physically do, but there are strict procedures and social processes in place that prevent planting outside seeds in the fertile soil within the walls. When you come inside the walls, you quarantine yourself to only those ideas which LessWrong already accepts as being discussable. The article you link is three years old; what has happened in that time? If it was so well received, where are the results? There is learning happening that is advancing human rationality far more qualitatively than LessWrong will publicly acknowledge. It’s in a stalemate with itself over accomplishing its own mission statement; a deadlock of ideas enforced by a self-reinforcing social dynamic against ideas that are too far outside the very narrow norm.
Insofar as LessWrong is a hive mind, that same mind is effectively afraid of thinking and doing everything it can to not do so.
That’s odd and catches me completely off guard. I wouldn’t expect someone who seems to be deeply inside the hive to both cognize my stance as well as you have and be judging that my heretofore unstated arguments might be worth hearing. Your submission history reflects what I assume: that you are on the outer edges of the hive despite an apparently deep investment.
With the forewarning that my ideas may well be hard to rid yourself of and that you might lack the communication skills to adequately convey the ideas to your peers, are you willing to accept the consequences of being rejected by the immune system? You’re risking becoming a “carrier” of the ideas here.
Why don’t you just post them explicitly? As long as they don’t involve modeling a vengeful far-future AI, everyone will be fine. Plus, then you can actually test to see if they will be rejected.
Why are you convinced I haven’t posted them explicitly? Or otherwise tested the reactions of LessWrongers to my ideas? Are you under the impression that they were going to be recognized as worth thinking about and that they would be brought to your personal attention?
Let’s say I actually possess ideas with future light cones on the order of strong AI. Do you earnestly expect me to honestly send that signal and bring a ton of attention to myself? In a world of fools that want nothing more than to believe in divinity? (Beliefs about strong AI are pretty qualitatively similar to religious ideas of god, up to and including “Works in mysterious ways that we can’t hope to fathom.”)
I have every reason not to share my thoughts and every reason to play coy and try to get LessWrong thinking for itself. I’m getting pretty damn close to jumping ship and watching the aftermath here as it is.
I’m just trying to encourage you to make your contributions moderately interesting. I don’t really care how special you think you are.
Beliefs about strong AI are pretty qualitatively similar to religious ideas of god, up to and including “Works in mysterious ways that we can’t hope to fathom.”
Wow, what an interesting perspective. Never heard that before.
I don’t really care how special you think you are.
See, that’s the kind of stance I can appreciate. Straight to the point without any wasted energy. That’s not the majority response LessWrong gives, though. If people really wanted me to post about this, as the upvotes on the posts urging me to post about this would suggest, why is each and every one of my posts getting downvoted? How am I supposed to actually do what people are suggesting when they are actively preventing me from doing so?
...Or is the average voter simply not cognizant enough to realize this...?
Worst effect of having sub-zero karma? Having to wait ten minutes between comments.
Wow, what an interesting perspective. Never heard that before.
Sarcasm. We get the “oh this is just like theism!” position articulated here every ten months or so. Those of us who have been here a while are kind of bored with it. (Yes, yes, yes, no doubt that simply demonstrates our inadequate levels of self-awareness and metacognition.)
No, I suppose you’ll need a fuller description to see why the similarity is relevant.
LessWrong is sci-fi. Check what’s popular. Superintelligent AI, space travel, suspended animation, hyper-advanced nanotech...
These concepts straight out of sci-fi have next to zero basis. Who is to say there even are concepts that the human mind simply can’t grasp? I can’t visualize in n-dimensional space, but I can certainly understand the concept. Grey goo? Sounds plausible, but then again, there is zero evidence that physics can create anything like stable nanites. How fragile will the molecular bonds be? Are generation ships feasible? Is there some way to warp space to go fast enough that you don’t need an entire ecosystem on board? If complex information processing nanites aren’t feasible, is reanimation? These concepts aren’t new, they’ve been around for ages. It’s Magic 2.0.
If it’s not about evidence, what is it about? I’m not denying any of these possibilities, but aside from being fun ideas, we are nowhere close to proving them legitimate. It’s not something people are believing in because “it only makes sense.” It’s fantasy at its base, and if it turns out to be halfway possible, great. What if it doesn’t? Is there going to be some point in the future where LessWrong lets go of these childish ideas of simulated worlds and supertechnological abilities? 100 years from now, if we don’t have AI and utility fog, is LessWrong going to give up these ideas? No. Because that just means that we’re closer to finally realizing the technology! Grow up already. This stuff isn’t reasonable, it’s just plausible, and our predictions are nothing more than mere predictions. LessWrong believes this stuff because LessWrong wants to believe in this stuff. At this moment in time, it is pure fiction.
If it’s not rationa—No, you’ve stopped following along by now. It’s not enough to point out that the ideas are pure fiction that humanity has dreamed about for ages. I can’t make an argument within the context that it’s irrational because you’ve heard it all before. What, do you just ignore it? Do you have an actual counter-point? Do you just shrug it off because “it’s obvious” and you don’t like the implications?
Seriously. Grow up. If there’s a reason for me to think LessWrong isn’t filled with children who like to believe in Magic 2.0, I’m certainly not seeing it.
LessWrong is sci-fi. Check what’s popular. Superintelligent AI, space travel, suspended animation, hyper-advanced nanotech...
It is true that people have written unrealistic books about these things. People also wrote unrealistic books about magicians flying through the air and scrying on each other with crystal balls. Yet we have planes and webcams.
Who is to say there even are concepts that the human mind simply can’t grasp? I can’t visualize in n-dimensional space, but I can certainly understand the concept
The human mind is finite, and there are infinitely many possible concepts. If you’re interested in the limits of human intelligence and the possibilities of artificial intelligence, you might want to read The Hanson-Yudkowsky Debate .
Grey goo? Sounds plausible, but then again, there is zero evidence that physics can create anything like stable nanites. How fragile will the molecular bonds be?
People also wrote unrealistic books about magicians flying through the air and scrying on each other with crystal balls. Yet we have planes and webcams.
Naturally, some of the ideas fiction holds are feasible. In order for your analogy to apply, however, we’d need a comprehensive run-down of how many and which fictional concepts have become feasible to date. I’d love to see some hard analysis across the span of human history. While I believe there is merit in nano-scale technology, I’m not holding my breath for femtoengineering. Nevertheless, if such things were as readily predictable as people seem to think, you have to ask why we don’t have the technology already. The answer is that actually expressing our ideas onto physical reality is non-trivial, and by direct consequence, potentially non-viable.
The human mind is finite, and there are infinitely many possible concepts.
I need backing on both of these points. As far as I know, there isn’t enough verified neuroscience to determine whether our brains are conceptually limited in any way, primarily because we don’t actually know how abstract mental concepts map onto physical neurons. And even ignoring that (contrary to memetic citation) the brain does grow new neurons and repair itself in adults: even if the number of neurons is finite, the number of potential connections between them is astronomical. We simply don’t know the maximum conceptual complexity of the human brain.
As far as there being infinitely many concepts, “flying car” isn’t terribly more complicated than “car” and “flying.” Even if something in the far future is given a name other than “car,” we can still grasp the concept of “transportation device,” paired with any number of accessory concepts like, “cup holder,” “flies,” “transforms,” “teleports,” and so on. Maybe it’s closer to a “suit” than anything we would currently call a “car;” some sort of “jetpack” or other. I’d need an expansion on “concept” before you could effectively communicate that concept-space is infinite. Countably infinite or uncountably infinite? All the formal math I’m aware of indicates that things like conceptual language are incomputable or give rise to paradoxes or some other such problem that would make “infinite” simply be inapplicable/nonsensical.
This doesn’t actually counter my argument, for two main reasons:
That wasn’t my argument.
That doesn’t counter anything.
Please don’t bother replying to me unless you’re going to actually explain something. Anything else is disuseful and you know it. I want to know how you justify to yourself that LessWrong is anything but childish. If you’re not willing to explain that, I’m not interested.
I want to know how you justify to yourself that LessWrong is anything but childish.
I don’t.
I often have conversations here that interest me, which is all the justification I need for continuing to have conversations here. If I stopped finding them interesting, I would stop spending time here.
Perhaps those conversations are childish; if so, it follows that I am interested in childish conversations. Perhaps it follows that I myself am childish. That doesn’t seem true to me, but presumably if it is my opinions on the matter aren’t worth much.
All of that would certainly be a low-status admission, but denying it or pretending otherwise wouldn’t change the fact if it’s true. It seems more productive to pursue what interests me without worrying too much about how childish it is or isn’t, let alone worrying about demonstrating to others that I or LW meet some maturity threshold.
I have every reason not to share my thoughts and every reason to play coy and try to get LessWrong thinking for itself. I’m getting pretty damn close to jumping ship and watching the aftermath here as it is.
Few places online appreciate drama-queening, you know.
How specifically can you be surprised to hear “be specific” on LessWrong? (Because that’s more or less what Nancy said.) If nothing else, this suggests that your model of LessWrong is seriously wrong.
Giving specific examples of “LessWrong is unable to discuss X, Y, Z” is so much preferable to saying “you know… LessWrong is a hivemind… there are things you can’t think about...” without giving any specific examples.
How specifically? Easy. Because LessWrong is highly dismissive, and because I’ve been heavily signalling that I don’t have any actual arguments or criticisms. I do, obviously, but I’ve been signalling that that’s just a bluff on my part, up to and including this sentence. Nobody’s supposed to read this and think, “You know, he might actually have something that he’s not sharing.” Frankly, I’m surprised that with all the attention this article got, I haven’t been downvoted a hell of a lot more. I’m not sure where I messed up such that LessWrong isn’t hammering me and is actually bothering to ask for specifics, but you’re right; it doesn’t fit the pattern I’ve seen prior to this thread.
I’m not yet sure where the limits of LessWrong’s patience lie, but I’ve come too far to stop trying to figure that out now.
I’m not yet sure where the limits of LessWrong’s patience lie, but I’ve come too far to stop trying to figure that out now.
I do not represent Less Wrong, but you have crossed a limit with me. The magic moment came when I realized that BaconServ means spambot. Spammers are the people I most love to hate. I respond to their provocations with a genuine desire to find them and torture them to death. If you were any more obnoxious, I wouldn’t even be telling you this, I would just be trying to find out who you are.
So wake the fuck up. We are all real people with lives, stop wasting our time. Try to keep the words “I”, “Less Wrong”, and “signalling” out of your next two hundred comments.
ETA: This angry comment was written under pressure and without a study of BaconServ’s full posting history, and should not be interpreted as a lucid assessment.
Intriguing and plausible. Does this forum really have a hive mind with a self-preservation instinct? Since the comments you linked to are downvoted below the default visibility level, do you mind writing a Discussion post (or maybe a Main post, if you are adventurous enough) on the subject? These remain visible unless deleted. I wish I could incentivise you with a few hundred karma points, but alas, there is no karma transfer/donation mechanism on the site.
My guess is that most “criticism of LessWrong itself” is not well-received because “LessWrong itself” is not very specific, and so criticism of this vague idea typically isn’t able to take the form of clearly expressed thoughts.
The thoughts are crystal clear in my mind and can be expressed—and communicated—perfectly accurately with existing language. The problem lies in the inability of LessWrong to accept that there are concepts it simply does not have concise language to express. It’s not that they can’t be communicated or are unclear in any way; it’s that LessWrong’s collective membership is not having the thoughts. They are dismissed as vague because they are not recognized as blatantly and obviously true. We’d need new vocabulary to concretize the concepts to the point of making the argument effectively believable from the raw communication.
LessWrong lacks the capacity to think beyond what’s written in a useful way.
Tell me you’ve actually thought about this for a full five minutes before you’ve bothered responding or voting. (Note: Five minutes will not be enough time to overcome your cognitive biases, even with the implied challenge that it will not be enough time to think clearly. (You do not have the ability to detect if your thought processes are clear or muddled with bias. “I don’t feel like I’m missing something,” isn’t a valid counter argument.))
My question isn’t “is this happening?”—my question is, “how big is the effect, and does it matter?” I suspect that’s the case for a lot of LW readers.
This is a recurring theme that I find over and over. These sorts of biases and problems are obvious; they are the kind of thing that is pretty much guaranteed to exist, the kind of thing you could hardly hope to escape from. But that does not in any way mean that the effects are large enough to be relevant, or that the time spent fixing them cannot be better spent elsewhere. It is not enough to say that it is worthwhile; you must show that it is worthwhile enough to compete with other options.
This implies that your article, should you decide to write it, would in fact be understood, and that a good proportion of the LW readership has in fact considered your platform. For your article to be effective, it may be necessary for you to lay out the extent to which these issues are an actual problem, instead of simply pointing out the obvious.
Let me put it this way: The effect is big enough that I have no qualms calling it a blanket inability. This should be implied by the rules of common speech, but people who consider themselves intelligent find it easier to believe that such confidence is evidence of irrationality.
What’s interesting is that you think such an article can actually be written. (Let’s ignore that I earned sub-zero karma with my posts in this thread today.)
Consider the premise:
LessWrong doesn’t think about what’s written beyond what’s written.
(Obviously there are a few stray thoughts that you’ll find in the comments, but they are non-useful and do not generally proliferate into more descriptive articles.)
Let’s make it clear that the purpose of such an article would be to get LessWrong to think about what’s written beyond what is written. This is necessary to make LessWrong useful beyond any other internet forum. Such an article would be advocating independent and bold thinking, and then voicing any compelling realizations back to LessWrong to spark further thought by others. A few short passes of this process and you could see some pretty impressive brainstorming—all while maintaining LessWrong’s standards of rationality. Recall that thought being a very cheap and very effective resource is what makes machine intelligence so formidable. If the potential for communal superintelligence isn’t sufficient payoff here, nothing will be.
Keep in mind that this is only possible insofar as a significant portion of LessWrong is willing to think beyond what is written.
If we suppose that this is actually possible, that superintelligent-quality payoffs are possible here with only slight optimization of LessWrong, then why isn’t LessWrong already trying to do this? Why weren’t they trying years ago? Why weren’t they trying when That Alien Message was published? You might want to say that the supposing is what’s causing the apparent question; that if LessWrong could really trivially evolve into such a mechanism, it most definitely would, and that the reason we don’t see it doing this is that many consider it irrational and not worth trying for.
Okay.
Then what is the point of thinking beyond what’s written?
If there aren’t significant payoffs to self-realizations that increase rationality substantially, then what is the point? Why be LessWrong? Why bother coming here? Why bother putting in all this effort if you’re only going to end up performing marginally better? I can already hear half the readers thinking, “But marginally better performance can have significant payoffs!” Great, then that supports my argument that LessWrong could benefit tremendously from very minor optimization towards thought sharing. But that’s not what I was saying. I was saying, after all the payoffs are calculated, if they aren’t going to have been any more than marginally better even with intense increases in rationality, then what is the point? Are we just here to advertise the uFAI pseudo-hypothesis? (Not being willing to conduct the experiment makes it an unscientific hypothesis, regardless of however reasonable it is to not conduct the experiment.) If so, we could do a lot better by leaving people irrational as they are and spreading classic FUD on the matter. Write a few compelling stories that freak everyone out—even intelligent people.
That’s not what LessWrong is. Even if that was what Yudkowsky wanted out of it in the end, that’s not what LessWrong is. If that were all LessWrong was, there wouldn’t be nearly as many users as there are. I recall numerous times Yudkowsky himself stated that in order to make LessWrong grow, he would need to provide something legitimate beyond his own ulterior motives. By Yudkowsky’s own assertion, LessWrong is more than FAI propaganda.
LessWrong is what it states on the front page. I am not here writing this for my own hubris. (The comments I write under that premise sound vastly different.) I am writing this for one single purpose. If I can demonstrate to you that such an article and criticism cannot currently be written, that there is no sequence of words that will provoke a thinking-beyond-what’s-written response in a significant portion of LessWrongers, then you will have to acknowledge that there is a significant resource here that remains significantly underutilized. If I can’t make that argument, I have to keep trying with others, waiting for someone to recognize that there is no immediate path to a LessWrong awakening.
I’ve left holes in my argument. Mostly because I’m tired and want to go to bed, but there’s nothing stopping me from simply not sending this and waiting until tomorrow. Sleepiness is not an excuse or a reason here. If I were more awake, I’d try writing a more optimum argument instead of stream-of-consciousness. But I don’t need to. I’m not just writing this to convince you of an argument. I’m writing this as a test, to see if you can accept (purely on principle) that thought is inherently useful. I’m attempting to convince you not of my argument, but to use your own ability to reason to derive your own stance. I’m not asking you to agree and I’d prefer if you didn’t. What I want is your thoughts on the matter. I don’t want knee-jerk sophomoric rejections to obvious holes that have nothing to do with my core argument. I don’t want to be told I haven’t thought about this enough. I don’t want to be told I need to demonstrate an actual method. I don’t want you to repeat what all other LessWrongers have told me after they summarily failed to grasp the core of my argument. The holes I leave open are intentional. They are tripholes for sophomores. They are meant to weed out impatient fools, even if it means getting downvoted. It means wasting less of my time on people who are skilled at pretending they’re actually listening to my argument.
LessWrong, in its current state, is beneath me. It performs marginally better than your average internet forum. There are non-average forums that perform significantly better than LessWrong in terms of not only advancing rationality, but just about everything. There is nothing that makes LessWrong special aside from the front-page potential to form a community whose operations represent a superintelligent process.
I’ve been slowly giving out slightly more detailed explanations of this here and there for the past month or so. I’ve left fewer holes here than anywhere else I’ve made similar arguments. I have put the idea so damn close to the finish line for you that for you to not spend two minutes reflecting on your own, past what’s written here, indicates to me exactly how strong the cognitive biases are that prevent LessWrong from recursive self-improvement.
Even in the individuals who signal being the most open minded and willing to hear my argument.
If you felt really motivated, you could systematically upvote all my posts, but I’d prefer that you didn’t; it would interfere with my current method of collecting information on the LessWrong collective itself.
What content does “LessWrong” have here? If anything other than the “LessWrong” monad can be criticized… that sounds totally great. I mean, even if you mean something like “this group’s general approach to the world” or something that still sounds like a much better place than Less Wrong actually is.
Really, if it weren’t for the provocative tone, I would have no idea you weren’t paying a compliment.
Hah, interesting. I didn’t notice I was making an interpretation duality here.
I suppose that only further clarifies that I haven’t actually listed a criticism. Indeed, I think it is a potentially good thing that LessWrong has hive mind qualities to it. There are ways this can be used, but it requires a slight bit more self-awareness, both on the individual and collective levels. Truth be told, the main problem I have with LessWrong is that, at my level of understanding, I can easily manipulate it in nearly any way I wish. My personal criticism is that it’s not good enough to match me, but that implies I think I’m better than everyone here combined. This is something I do think, of course, so asking for evidence of the extraordinary claim is fair. Trouble is, I’m not quite “super”-intelligent enough to know exactly how to provide that evidence without first interacting with LessWrong.
My personal complaint is really that LessWrong hasn’t activated yet.
Thank you for the question though; it tells me you know exactly which questions to ask yourself.
Interesting. I’d have expected at least a few intrinsic contrarians, having seen them nearly everywhere else, and group identification usually isn’t one of the things they’ll avoid slamming.
The thing about collections of people that exceed Dunbar’s number is that no one person can perfectly classify the boundaries of the group. There’s a fuzzy boundary, but the people that “dissent LessWrong” causatively tend towards opting out of the group, while the people that don’t vocally dissent causatively tend towards losing the dissent. Each of these is suffering social biases in thinking that the group is unitary; that you’re either “part of” LessWrong or else you’re not. A while ago I formally serialized one of my thoughts about society as, “Groups of people live or die based on their belief in the success of the group.” If nobody believes the group is going to accomplish anything or be useful in any way, they’ll stop showing up. This self-fulfills the prophecy. If they continue attending and see all the members that are still attending and think, “Imagine the good we can do together,” they’ll be intensely motivated to keep attending. This self-fulfills the prophecy.
Did I mention this is something I realized without any help or feedback from anyone?
There’s a fuzzy boundary, but the people that “dissent LessWrong” causatively tend towards opting out of the group, while the people that don’t vocally dissent causatively tend towards losing the dissent.
That’s usually not the case, or at least not to a hard extent. Even sub-Dunbar’s-number groups tend to have at least a few internal contrarians: there’s something in human psychology that encourages at least a few folk to keep the identity while simultaneously attacking the identity.
I’ve been that person for a while on a different board, although it took a while to recognize it.
True, the trend for the average person is to strongly identify within the group or not, but I’ve never seen that include all of them.
If nobody believes the group is going to accomplish anything or be useful in any way, they’ll stop showing up.
That assumes the only point of group membership is to see what the group accomplishes. There are other motivations, such as individual value fulfillment, group knowledge, ingroup selection preferences, even the knowledge gained from dissenting. There are some sociology experiments suggesting that groups can only remain successful in the long term if they sate those desires for newcomers. ((And even some arguments suggesting that these secondary, individual motivations are more important than the primary, group motivation.))
Did I mention this is something I realized without any help or feedback from anyone?
The demographic analysis is fascinating, especially if correct. The rough group analysis is not especially fresh outside of the possible lack of contrarians in this instance, though independent rediscovery is always a good thing, particularly given the weakness of social identity theory (and general sociology) research.
I’d caution that many of the phrases you’ve used here signal strongly enough for ‘trolling’ that you’re likely to have an effect on the supergroup even if you aren’t intending it.
Oh, troll is a very easy perception to overcome, especially in this context. Don’t worry about how I’ll be perceived beyond delayed participation in making posts. There is much utility in negative response. In a day I’ve lost a couple dozen karma, and I’ve learned a lot about LessWrong’s perception. I suspect there is a user or two participating in political voting against my comments, possibly in response to my referencing the concept in one of my comments. Something like a grudge is a thing I can utilize heavily.
I’d expect more loss than that if someone really wanted to disable you; systematic karma abuse would result in karma loss equal to either some multiple of your total post count, or a multiple of the number of posts displayed per user history page (by default, 10).
Actually, I think I found the cause: commenting on comments below the display threshold costs five karma. I believe this might actually be retroactive, so that downvoting a comment below the display threshold takes five karma from each user possessing a comment under it.
I iterated through my entire comment history to find the source of an immediate −15 spike in karma; I couldn’t find anything. My main hypothesis was moderator reprimand until I put the pieces together on the cost of replying to downvoted comments. Further analysis today seems to confirm my suspicion. I’m unsure whether the retroactive quality of it is immediate or on a timer, but I don’t see any reason it wouldn’t be immediate. Feel free to test on me; I think the voting has stabilized.
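To make the arithmetic behind my hypothesis concrete, here is a minimal sketch; the five-point charge per reply and its retroactive application are my conjecture, not documented site mechanics:

```python
# Conjectured penalty arithmetic: each reply sitting under a comment that has
# fallen below the display threshold is assumed to cost its author 5 karma.
PENALTY_PER_REPLY = 5  # assumed value, not a documented LessWrong constant

def conjectured_drop(replies_under_hidden_comments: int) -> int:
    """Total karma lost if the penalty were applied retroactively to each reply."""
    return replies_under_hidden_comments * PENALTY_PER_REPLY

# Three replies under newly-hidden comments would account for the -15 spike:
assert conjectured_drop(3) == 15
```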
I’m utterly unclear on what evidence you were searching for (and failing to find) to indicate a source of an immediate 15-point karma drop. For example, how did you exclude the possibility of 15 separate downvotes on 15 different comments? Did you remember the previous karma totals of all your comments?
More or less, yeah. The totaled deltas weren’t of the necessary order of magnitude in my approximation. It’s not that many pages if you set the relevant preference to 25 per page and have iterated all the way back a couple of times before.
Gotcha; I understand now. If that’s actually a reliable method of analysis for you, I’m impressed by your memory, but lacking the evidence of its reliability that you have access to, I hope you’ll forgive me if it doesn’t significantly raise my confidence in the retroactive-karma-penalty theory.
Great post, thank you for taking the time to write it. It’s insightful and I think it clearly identifies the problem with Less Wrong. It could probably do with being a little bit less provocative, i.e. by removing references to Less Wrong being a “hive”. Upvoted for valid criticism.
You’re misunderstanding; I am not here to gain karma or approval. I am not reformed by downvoting; I am merely informed. I know well the language LessWrong likes and hates; I’ve experimented, controls and everything. I didn’t say this because I was willing to write more about it; I wrote it because I’d already pre-determined you’d be sympathetic. I am not part of the hive and I will not be dissecting or reforming it; that is yours if you should see value in LessWrong. My job is not to sugar-coat it and make it sound nice; I can derive utility from downvotes and disapproval—and in more ways than the basic level of information it provides. I’m not going to call it something it isn’t and use terms that make it seem like anything less than a hive mind. I am giving my full opinion and detecting agents worth interacting with; I am not here to participate as part of the hive. It is not yours to defer to me as if I was going to resolve the problem in my capacity as a member of the hive; I highlighted your being on the border in my capacity as an agent beyond the concept of being inside or outside the hive. I can enter and leave freely; one moment inside, one moment outside. I am here to facilitate LessWrong, not to advance it.
I’m not sure how to act on this information or the corresponding downvoting. Is there something I could have done to make it more interesting? I’d really appreciate knowing.
To be clear: I replied before you edited the comment to make it a question about downvotes. Before your edit you were asking for an explanation of the inferential silence. That is what I explained. The downvotes are probably a combination of the boringness, the superiority you were signalling, and left-over bad feeling from other comments you’ve made tonight. But I didn’t downvote.
Given the subject and content of the comment it probably couldn’t have been substantially less boring. It could, however, have been substantially shorter.
I agree that Yudkowsky hero worship is extremely creepy and should stop.
Fair enough. What’s the most recent example of Yudkowsky hero worship you’ve observed here?
(nods) That’s a far more defensible statement. It might even be true.
I’m not sure what you mean by “a lot”. I’ve seen more criticism of LessWrong here than I’ve seen criticism of RationalWiki, for example, and less than I’ve seen criticism of the Catholic Church. More than I’ve seen criticism of Dan Dannett. I’m not sure if I’ve seen more criticism of Less Wrong than of Richard Dawkins, or less. What’s your standard?
We could instead ask: should there be more of it? Should there be less? I suspect that’s a wrong question as well though. Mostly, I think the criticism should be of higher quality. Most of what I see is tedious and redundant. Of course, Sturgeon’s Law applies in this as in everything.
All of that said, if I were to list off the top of my head the top ten critics of LessWrong who post on LW , your name would not even come up, so if you are attempting to suggest that you are somehow the singular contrarian voice on this site I can only conclude that you haven’t read much of the site’s archives.
There is also more criticism of Less Wrong here than there is criticism of people who think that the world is run by lizard-people. This is because Less Wrong is more relevant to Less Wrong than Lizard-people, not because the lizard-believers are actually considered more credible.
The only reasonable standard to me is comparing the amount of criticism with the amount of praise. I see much more posts talking about how great Less Wrong is than I see criticism of Less Wrong. More worryingly, the criticism of Less Wrong that I do see is on other forums, where it is widely agreed that Less Wrong is subject to group think, but which is summarily ignored here.
I assume you aren’t actually suggesting that RationalWiki, the Catholic Church, Dan Dannett and Richard Dawkins are as irrelevant to Less Wrong as lizard-people. I picked a few targets that seemed vaguely relevant; if you think I should pick different targets, let me know what they are.
Why is that? This doesn’t seem true to me at all.
Why does this worry you?
This might be true. Can you unpack what you mean by “group think”? (Or what you think those other people on other forums whom you’re reporting the statements of mean by it, if that’s more relevant?)
No, I am saying that comparing criticism of Less Wrong with criticism of other websites/people is not a valid metric at all, since the total amount written on each subject differs. You can't look at absolute amounts of criticism here; it has to be relative, or the sheer volume of posts on a subject would determine the answer.
It worries me that a lot of the criticism of Less Wrong is made outside of Less Wrong, because this indicates that the criticism is not accepted here and that Less Wrong exists in a bubble.
The specific criticism of Less Wrong usually isn't very good, since people tend not to spend a lot of time writing thoughtful criticisms of websites that they aren't affiliated with. It usually amounts to "gives off a bad vibe", "uses their own little language", "copies Yudkowsky in everything they believe", or "disproportionately holds extreme views without thinking this is odd." All of this indicates what I call groupthink, which is the act of paying too much attention to what others in the in-group believe while being isolated from the rest of the world.
All right. Thanks for clarifying.
You realize this is still true if one replaces “Less Wrong” with any other community.
Which would mean there is no genuinely rationalist (inviting updates) community anywhere.
How specifically would it mean that?
Imagine that you have a community X, which is perfectly rational and perfectly updating. (I am not saying LW is that community; this is just an example.) Of course there would be many people who disagree with X; some of them would be horribly offended by the views of X. Those people would criticize X a lot. So even with a perfectly updating super rationalist community, the worst criticism would come from outside.
Also, most criticism would come from outside simply because there are more non-members than members, and if the group is not secret and is somehow interesting, many non-members will express their opinions about the group.
Therefore, "a lot of the criticism of Less Wrong is made outside of Less Wrong" is not evidence against the rationality of LessWrong, because we would expect the same result both in universes where LW is rational and in universes where LW is irrational.
You write "so", but that doesn't follow. You are tacitly assuming that a community has to be held together by shared beliefs, but that does not match genuine rationality, since one cannot predetermine where rational enquiry will lead; to attempt to do so is to introduce confirmation bias. You also seem to think that the "worst" criticism is some kind of vitriolic invective. But what is of concern to genuine rationalists is the best criticism: the best argued, the most effective.
If the group is discussing specialised topics, then good criticism can only come from those who are familiar with those topics.
You are still missing the point that a genuine rationalist community would invite criticism.
How specifically?
For example, should we ask all the critics from outside to publish an article on LW about what they think is wrong with LW? Do we also need to upvote such articles, regardless of their merit? Do we also have to write supporting comments to such articles, regardless of whether we agree with their points? Do we have to obsess about the same points again and again and again, never stopping? … What exactly should a community do to pass the "invites criticism" test?
Why not? Your other comments are strawmen. But inviting opposing views regularly happens in, e.g., academic philosophy.
Thank you for the specific suggestion!
I made the strawman suggestions because I wasn't sure what your point was, and I also wanted an "upper bound" on what the community is supposed to do to pass the "invites criticism" test. Defining only the lower bound could easily lead to later responses of the type: "Sure, you did X, Y and Z, but you are still not inviting criticism."
The simplest solution would be to contact people already criticizing LW and invite them to write and publish a single article (without having to create an account, collect karma, learn markdown formatting, and deal with all the other trivial inconveniences), assuming the article passes at least some basic filter (no obvious insanity; claims about LW doing something backed up by hyperlinks). There is always a possibility that we would simply not notice some critics, but that can be partially solved by asking "have you noticed any new critics?" in the Open Thread.
Somehow I don’t like the “behave like a dick and be rewarded by greater publicity” aspect this would inevitably have, since the most vocal critics of LW are the two or three people from RationalWiki whose typical manner of discussion is, uhm, less than polite. But if we don’t choose them, it could seem from outside like avoiding the strongest arguments. Let’s suppose this is a price we are willing to pay in the name of properly checking our beliefs—especially if it only happens once in a long time.
Seems like a good idea to me; at least worth trying once.
I guess the invited opponents in this situation are other academic philosophers, not e.g. a random blogger who built their fame by saying "philosophers are a bunch of idiots" and inserting ad hominems about specific people.
So if we tried, in a similar manner, to speak with polite equals, the invited critics would be people from other organizations (like Holden Karnofsky from GiveWell). Which kind of already happened. And it seems like not enough: partly because of the polite argumentation, but also because it only happened once.
Perhaps what we should aim for is something between Holden Karnofsky and our beloved stalkers at RationalWiki. Perhaps we should not ask people to express their opinion about whole LW (unless they volunteer to), but only about some specific aspect. That way they wouldn’t have to read everything to form an opinion (e.g. someone could review only the quantum physics part, ignoring the rest of the sequences).
Do you have a specific suggestion of people who could be invited to write their criticism of LW here?
Your article would have been a lot better received if you hadn't mentioned LessWrong so much in your last main paragraph. If you had subtly avoided calling direct fault on LessWrong, I think this could have been very well received. Just look at the comments here. Despite the karma on the article, this post is getting a lot of attention.
I've been probing LessWrong's reactions to various things since Inferential Silence motivated me to bother with LessWrong. I can give you a discrete bullet-point list of what LessWrong likes, loves, and hates. It's hard to pinpoint the groupthink because the one special topic that LessWrong "never" disagrees on is so hard to find. You're perfectly allowed to disagree with cryonics, SAI, or Yudkowsky; you can discuss politics if you're quiet about it; you can discuss any number of things and not suffer intense downvoting, so long as you express your thoughts perfectly clearly. In this way LessWrong skillfully avoids noticing that it is participating in groupthink.
So what is it? It's simple, recursive, ironic, and intensely obvious in hindsight. LessWrong is the focus of LessWrong. It's not any given subject, topic, person, or method. It is the LessWrong collective itself. LessWrong is the one thing you cannot hate while also being a part of LessWrong. To challenge LessWrong is to challenge rationality. Challenging Yudkowsky? Sure. Not like he's the avatar of rationality or anything. Go ahead, disagree with him. Most people here disagree with him on some subject or another. I'm probably one of the few people who does understand and agree with Yudkowsky nearly entirely. The best advice I could give LessWrong is that, if it came down to the two of them as to which is a better fit for being the avatar of rationality, it is Yudkowsky. LessWrong disagrees. LessWrong is totally content to deny Yudkowsky that credence. No, in LessWrong's view, the title of avatar of rationality belongs to itself. Not to any particular person in the collective, but to the collective itself. So long as you avoid hitting that node and make your thoughts clear in LessWrong's memetic language, you're fine. Fall outside that boundary? Hell no. Not on LessWrong. Not while I'm still here. (For each member of the hive mind in turn.)
There is a counter-argument here, in your and others' increasing disaffection with LessWrong. The problem is that most of you aren't equipped to skillfully reform LessWrong, so you just end up leaving and the problem goes ignored. This effectively "removes" you from the hive, so even though you hold the counter-stance, you're not really part of LessWrong to the point where it could be claimed that LessWrong values anything other than LessWrong. Well, suppose you don't leave LessWrong. Despite your contrary view, you can barely identify the problem enough to voice it, making you question whether your viewpoint is rationally legitimate. Right now you're on the border of LessWrong, deciding whether you're going to be a part of the collective or not. In this way, you can construct a very precise measure of LessWrong with a small bit of introspection.
Judging by the reception of your comments here, I’d say you’re well equipped to speak the LessWrong language, so all you need is sufficient understanding of the hive’s mind to begin reforming it. I’d further suggest starting with something other than the ban on politics, but if this was the subject you picked, then I must assume you’re not hive-aware (compare to: self-aware) enough to formally recognize the other flaws.
I am not sure I follow your argument completely. It feels to me as if you suggested that discussing everything, as long as it is polite and rational, is proof of a LessWrong hive mind.
Well, I would call that “culture”, and I am happy to have it here. I am not sure what benefit exactly would we get by dismantling it. (A well-kept garden that committed suicide because it loved contrarianism too much?) I mean, it’s not like none of us ever goes beyond the walls of LessWrong.
But then you say it's okay to criticize anything, as long as one doesn't criticize LessWrong itself. Well, this is from the article "Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality", which has 92 karma at this moment:
So, what kind of observation specifically does your hypothesis disallow?
Trying (admittedly only for a very short time) to steelman your position, I'd say the "dogma" of LessWrong is that having an aspiring rationalist community is a good thing. LW is an aspiring rationalist community, so obviously people who think such a community is stupid filter themselves out of LW. In other words, the shared opinion of LW members is that LW should exist.
Everything being polite and rational is informational; the point is to demonstrate that those qualities are not evidence of the hive-mind quality. Something else is, which I clearly identify. Incidentally, though I didn't realize it at the time, I wasn't actually advocating dismantling it, or saying that it was a bad thing to have at all.
That's the perception that LessWrong would benefit from correcting; it is as if LessWrongers never go outside the walls of LessWrong. Obviously you physically do, but there are strict procedures and social processes in place that prevent planting outside seeds in the fertile soil within the walls. When you come inside the walls, you quarantine yourself to only those ideas which LessWrong already accepts as discussable. The article you link is three years old; what has happened in that time? If it was so well received, where are the results? There is learning happening that is advancing human rationality far more qualitatively than LessWrong will publicly acknowledge. It's in a stalemate with itself over accomplishing its own mission statement: a deadlock of ideas enforced by a self-reinforcing social dynamic against ideas that are too far outside the very narrow norm.
Insofar as LessWrong is a hive mind, that same mind is effectively afraid of thinking and doing everything it can to not do so.
I wouldn’t mind seeing some of the ideas you think are worthwhile but would be rejected by the LW memetic immune system.
That's odd and catches me completely off guard. I wouldn't expect someone who seems to be deeply inside the hive both to cognize my stance as well as you have and to judge that my heretofore unstated arguments might be worth hearing. Your submission history reflects what I assume: that you are on the outer edges of the hive despite an apparently deep investment.
With the forewarning that my ideas may well be hard to rid yourself of, and that you might lack the communication skills to adequately convey the ideas to your peers, are you willing to accept the consequences of being rejected by the immune system? You're risking becoming a "carrier" of the ideas here.
Why don't you just post them explicitly? As long as they don't involve modeling a vengeful far-future AI, everyone will be fine. Plus, then you can actually test to see if they will be rejected.
Why are you convinced I haven’t posted them explicitly? Or otherwise tested the reactions of LessWrongers to my ideas? Are you under the impression that they were going to be recognized as worth thinking about and that they would be brought to your personal attention?
Let's say I actually possess ideas with future light cones on the order of strong AI. Do you earnestly expect me to honestly send that signal and bring a ton of attention to myself? In a world of fools that want nothing more than to believe in divinity? (Beliefs about strong AI are pretty qualitatively similar to religious ideas of god, up to and including, "Works in mysterious ways that we can't hope to fathom.")
I have every reason not to share my thoughts and every reason to play coy and try to get LessWrong thinking for itself. I’m getting pretty damn close to jumping ship and watching the aftermath here as it is.
I'm just trying to encourage you to make your contributions moderately interesting. I don't really care how special you think you are.
Wow, what an interesting perspective. Never heard that before.
See, that's the kind of stance I can appreciate. Straight to the point without any wasted energy. That's not the majority response LessWrong gives, though. If people really wanted me to post about this, as the upvotes on the posts urging me to do so would suggest, why is each and every one of my posts getting downvoted? How am I supposed to actually do what people are suggesting when they are actively preventing me from doing so?
...Or is the average voter simply not cognizant enough to realize this...?
Worst effect of having sub-zero karma? Having to wait ten minutes between comments.
Not sure if sarcasm or...
Sarcasm.
We get the “oh this is just like theism!” position articulated here every ten months or so.
Those of us who have been here a while are kind of bored with it.
(Yes, yes, yes, no doubt that simply demonstrates our inadequate levels of self-awareness and metacognition.)
What, and you just ignore it?
No, I suppose you’ll need a fuller description to see why the similarity is relevant.
LessWrong is sci-fi. Check what’s popular. Superintelligent AI, space travel, suspended animation, hyper-advanced nanotech...
These concepts straight out of sci-fi have next to zero basis. Who is to say there even are concepts that the human mind simply can’t grasp? I can’t visualize in n-dimensional space, but I can certainly understand the concept. Grey goo? Sounds plausible, but then again, there is zero evidence that physics can create anything like stable nanites. How fragile will the molecular bonds be? Are generation ships feasible? Is there some way to warp space to go fast enough that you don’t need an entire ecosystem on board? If complex information processing nanites aren’t feasible, is reanimation? These concepts aren’t new, they’ve been around for ages. It’s Magic 2.0.
If it's not about evidence, what is it about? I'm not denying any of these possibilities, but aside from being fun ideas, we are nowhere near proving them legitimate. It's not something people believe in because "it only makes sense." It's fantasy at its base, and if it turns out to be halfway possible, great. What if it doesn't? Is there going to be some point in the future where LessWrong lets go of these childish ideas of simulated worlds and supertechnological abilities? 100 years from now, if we don't have AI and utility fog, is LessWrong going to give up these ideas? No. Because that just means that we're closer to finally realizing the technology! Grow up already. This stuff isn't reasonable, it's just plausible, and our predictions are nothing more than mere predictions. LessWrong believes this stuff because LessWrong wants to believe in this stuff. At this moment in time, it is pure fiction.
If it’s not rationa—No, you’ve stopped following along by now. It’s not enough to point out that the ideas are pure fiction that humanity has dreamed about for ages. I can’t make an argument within the context that it’s irrational because you’ve heard it all before. What, do you just ignore it? Do you have an actual counter-point? Do you just shrug it off because “it’s obvious” and you don’t like the implications?
Seriously. Grow up. If there’s a reason for me to think LessWrong isn’t filled with children who like to believe in Magic 2.0, I’m certainly not seeing it.
It is true that people have written unrealistic books about these things. People also wrote unrealistic books about magicians flying through the air and scrying on each other with crystal balls. Yet we have planes and webcams.
The human mind is finite, and there are infinitely many possible concepts. If you're interested in the limits of human intelligence and the possibilities of artificial intelligence, you might want to read The Hanson-Yudkowsky Debate.
Drexler wrote a PhD thesis which probably answers this. For discussion on LessWrong, see Is Molecular Nanotechnology “Scientific”? and How probable is Molecular Nanotech?.
Naturally, some of the ideas fiction holds are feasible. In order for your analogy to apply, however, we’d need a comprehensive run-down of how many and which fictional concepts have become feasible to date. I’d love to see some hard analysis across the span of human history. While I believe there is merit in nano-scale technology, I’m not holding my breath for femtoengineering. Nevertheless, if such things were as readily predictable as people seem to think, you have to ask why we don’t have the technology already. The answer is that actually expressing our ideas onto physical reality is non-trivial, and by direct consequence, potentially non-viable.
I need backing on both of these points. As far as I know, there isn't enough verified neuroscience to determine whether our brains are conceptually limited in any way, primarily because we don't actually know how abstract mental concepts map onto physical neurons. Even ignoring the fact that (contrary to memetic citation) the brain does grow new neurons and repair itself in adults: even if the number of neurons is finite, the number of and potential for connections between them is astronomical. We simply don't know the maximum conceptual complexity of the human brain.
As far as there being infinitely many concepts, “flying car” isn’t terribly more complicated than “car” and “flying.” Even if something in the far future is given a name other than “car,” we can still grasp the concept of “transportation device,” paired with any number of accessory concepts like, “cup holder,” “flies,” “transforms,” “teleports,” and so on. Maybe it’s closer to a “suit” than anything we would currently call a “car;” some sort of “jetpack” or other. I’d need an expansion on “concept” before you could effectively communicate that concept-space is infinite. Countably infinite or uncountably infinite? All the formal math I’m aware of indicates that things like conceptual language are incomputable or give rise to paradoxes or some other such problem that would make “infinite” simply be inapplicable/nonsensical.
(nods) IOW, it merely demonstrates our inadequate levels of self-awareness and meta-cognition.
This doesn't actually counter my argument, for two main reasons:
1) That wasn't my argument.
2) That doesn't counter anything.
Please don’t bother replying to me unless you’re going to actually explain something. Anything else is disuseful and you know it. I want to know how you justify to yourself that LessWrong is anything but childish. If you’re not willing to explain that, I’m not interested.
I don’t.
I often have conversations here that interest me, which is all the justification I need for continuing to have conversations here. If I stopped finding them interesting, I would stop spending time here.
Perhaps those conversations are childish; if so, it follows that I am interested in childish conversations. Perhaps it follows that I myself am childish. That doesn’t seem true to me, but presumably if it is my opinions on the matter aren’t worth much.
All of that would certainly be a low-status admission, but denying it or pretending otherwise wouldn’t change the fact if it’s true. It seems more productive to pursue what interests me without worrying too much about how childish it is or isn’t, let alone worrying about demonstrating to others that I or LW meet some maturity threshold.
Few places online appreciate drama-queening, you know.
Hypothesis: the above was deliberate downvote-bait.
I’m willing to take the risk. PM or public comment as you prefer.
I would prefer public comment, or to be exposed to the information as well.
How specifically can you be surprised to hear “be specific” on LessWrong? (Because that’s more or less what Nancy said.) If nothing else, this suggests that your model of LessWrong is seriously wrong.
Giving specific examples of “LessWrong is unable to discuss X, Y, Z” is so much preferable to saying “you know… LessWrong is a hivemind… there are things you can’t think about...” without giving any specific examples.
How specifically? Easy. Because LessWrong is highly dismissive, and because I've been heavily signalling that I don't have any actual arguments or criticisms. I do, obviously, but I've been signalling that that's just a bluff on my part, up to and including this sentence. Nobody's supposed to read this and think, "You know, he might actually have something that he's not sharing." Frankly, I'm surprised that with all the attention this article got I haven't been downvoted a hell of a lot more. I'm not sure where I messed up that LessWrong isn't hammering me and is actually bothering to ask for specifics, but you're right; it doesn't fit the pattern I've seen prior to this thread.
I’m not yet sure where the limits of LessWrong’s patience lie, but I’ve come too far to stop trying to figure that out now.
I do not represent Less Wrong, but you have crossed a limit with me. The magic moment came when I realized that BaconServ means spambot. Spammers are the people I most love to hate. I respond to their provocations with a genuine desire to find them and torture them to death. If you were any more obnoxious, I wouldn’t even be telling you this, I would just be trying to find out who you are.
So wake the fuck up. We are all real people with lives, stop wasting our time. Try to keep the words “I”, “Less Wrong”, and “signalling” out of your next two hundred comments.
ETA: This angry comment was written while under pressure and without a study of BaconServ's full posting history, and should not be interpreted as a lucid assessment.
Intriguing and plausible. Does this forum really have a hive mind with a self-preservation instinct? Since the comments you linked to are downvoted below the default visibility level, do you mind writing a Discussion post (or maybe a Main post, if you are adventurous enough) on the subject? These remain visible unless deleted. I wish I could incentivise you with a few hundred karma points, but alas, there is no karma transfer/donation mechanism on the site.
My guess is that most “criticism of LessWrong itself” is not well-received because “LessWrong itself” is not very specific, and so criticism of this vague idea typically isn’t able to take the form of clearly expressed thoughts.
The thoughts are crystal clear in my mind and can be expressed, and communicated, perfectly accurately with existing language. The problem lies in the inability of LessWrong to accept that there are concepts that it simply does not have concise language to express. It's not that they can't be communicated or are unclear in any way; it's that LessWrong's collective membership is not having the thoughts. They are dismissed as vague because they are not recognized as blatantly and obviously true. We'd need new vocabulary to concretize the concepts to the point of making the argument effectively believable from the raw communication.
LessWrong lacks the capacity to think beyond what’s written in a useful way.
Tell me you’ve actually thought about this for a full five minutes before you’ve bothered responding or voting. (Note: Five minutes will not be enough time to overcome your cognitive biases, even with the implied challenge that it will not be enough time to think clearly. (You do not have the ability to detect if your thought processes are clear or muddled with bias. “I don’t feel like I’m missing something,” isn’t a valid counter argument.))
My question isn’t “is this happening?”—my question is, “how big is the effect, and does it matter?” I suspect that’s the case for a lot of LW readers.
This is a recurring theme that I find over and over. These sorts of biases and problems are obvious; they are the kind of thing that is pretty much guaranteed to exist, the kind of thing you could hardly hope to escape from. But that does not in any way mean that the effects are large enough to be relevant, or that the time spent fixing them could not be better spent elsewhere. It is not enough to say that it is worthwhile; you must show that it is worthwhile enough to compete with other options.
This implies that your article, should you decide to write it, would in fact be understood, and that a good proportion of the LW readership has in fact considered your platform. For your article to be effective, it may be necessary for you to lay out the extent to which these issues are an actual problem, instead of simply pointing out the obvious.
Let me put it this way: The effect is big enough that I have no qualms calling it a blanket inability. This should be implied by the rules of common speech, but people who consider themselves intelligent find it easier to believe that such confidence is evidence of irrationality.
What’s interesting is that you think such an article can actually be written. (Let’s ignore that I earned sub-zero karma with my posts in this thread today.)
Consider the premise:
(Obviously there are a few stray thoughts that you’ll find in the comments, but they are non-useful and do not generally proliferate into more descriptive articles.)
Let's make it clear: the purpose of such an article would be to get LessWrong to think about what's written beyond what is written. This is necessary to make LessWrong useful beyond any other internet forum. Such an article would be advocating independent and bold thinking, and then voicing any compelling realizations back to LessWrong to spark further thought by others. A few short passes of this process and you could see some pretty impressive brainstorming, all while maintaining LessWrong's standards of rationality. Recall that thought being a very cheap and very effective resource is what makes machine intelligence so formidable. If the potential for communal superintelligence isn't sufficient payoff here, nothing will be.
Keep in mind that this is only possible insofar as a significant portion of LessWrong is willing to think beyond what is written.
If we suppose that this is actually possible, that superintelligent-quality payoffs are available here with only slight optimization of LessWrong, then why isn't LessWrong already trying to do this? Why weren't they trying years ago? Why weren't they trying when That Alien Message was published? You might want to say that the supposing is what's causing the apparent question: that if LessWrong could really trivially evolve into such a mechanism, it most definitely would be doing so, and that the reason we don't see it doing this is that many consider it irrational and not worth trying for.
Okay.
Then what is the point of thinking beyond what’s written?
If there aren't significant payoffs to self-realizations that increase rationality substantially, then what is the point? Why be LessWrong? Why bother coming here? Why bother putting in all this effort if you're only going to end up performing marginally better? I can already hear half the readers thinking, "But marginally better performance can have significant payoffs!" Great, then that supports my argument that LessWrong could benefit tremendously from very minor optimization towards thought-sharing. But that's not what I was saying. I was saying that if, after all the payoffs are calculated, they turn out to be no more than marginally better even with intense increases in rationality, then what is the point? Are we just here to advertise the uFAI pseudo-hypothesis? (Not being willing to conduct the experiment makes it an unscientific hypothesis, regardless of how reasonable it is not to conduct the experiment.) If so, we could do a lot better by leaving people irrational as they are and spreading classic FUD on the matter. Write a few compelling stories that freak everyone out, even intelligent people.
That’s not what LessWrong is. Even if that was what Yudkowsky wanted out of it in the end, that’s not what LessWrong is. If that were all LessWrong was, there wouldn’t be nearly as many users as there are. I recall numerous times Yudkowsky himself stated that in order to make LessWrong grow, he would need to provide something legitimate beyond his own ulterior motives. By Yudkowsky’s own assertion, LessWrong is more than FAI propaganda.
LessWrong is what it states on the front page. I am not here writing this for my own hubris. (The comments I write under that premise sound vastly different.) I am writing this for one single purpose. If I can demonstrate to you that such an article and such criticism cannot currently be written, that there is no sequence of words that will provoke a thinking-beyond-what's-written response in a significant portion of LessWrongers, then you will have to acknowledge that there is a significant resource here that remains significantly underutilized. If I can't make that argument, I have to keep trying with others, waiting for someone to recognize that there is no immediate path to a LessWrong awakening.
I’ve left holes in my argument. Mostly because I’m tired and want to go to bed, but there’s nothing stopping me from simply not sending this and waiting until tomorrow. Sleepiness is not an excuse or a reason here. If I were more awake, I’d try writing a more optimum argument instead of stream-of-consciousness. But I don’t need to. I’m not just writing this to convince you of an argument. I’m writing this as a test, to see if you can accept (purely on principle) that thought is inherently useful. I’m attempting to convince you not of my argument, but to use your own ability to reason to derive your own stance. I’m not asking you to agree and I’d prefer if you didn’t. What I want is your thoughts on the matter. I don’t want knee-jerk sophomoric rejections to obvious holes that have nothing to do with my core argument. I don’t want to be told I haven’t thought about this enough. I don’t want to be told I need to demonstrate an actual method. I don’t want you to repeat what all other LessWrongers have told me after they summarily failed to grasp the core of my argument. The holes I leave open are intentional. They are tripholes for sophomores. They are meant to weed out impatient fools, even if it means getting downvoted. It means wasting less of my time on people who are skilled at pretending they’re actually listening to my argument.
LessWrong, in its current state, is beneath me. It performs marginally better than your average internet forum. There are non-average forums that perform significantly better than LessWrong in terms of not only advancing rationality, but just about everything. There is nothing that makes LessWrong special aside from the front-page potential to form a community whose operations represent a superintelligent process.
I’ve been slowly giving out slightly more detailed explanations of this here and there for the past month or so. I’ve left fewer holes here than anywhere else I’ve made similar arguments. I have put the idea so damn close to the finish line for you that for you to not spend two minutes reflecting on your own, past what’s written here, indicates to me exactly how strong the cognitive biases are that prevent LessWrong from recursive self-improvement.
Even in the individuals who signal being the most open minded and willing to hear my argument.
The forum promotes a tribal identity. All the usual consequences apply.
If you felt really motivated, you could systematically upvote all my posts, but I’d prefer that you didn’t; it would interfere with my current method of collecting information on the LessWrong collective itself.
I'd write such a post, but my intention isn't really to successfully write such a post.
I’m more or less doing my best to stand just outside the collective so I can study it without having to divide myself out of the input data.
What do you mean by a measure here?
A class of measurements that are decidedly quantifiable despite lacking formal definition/recognition.
What content does "LessWrong" have here? If anything other than the "LessWrong" monad can be criticized… that sounds totally great. I mean, even if you mean something like "this group's general approach to the world" or something, that still sounds like a much better place than Less Wrong actually is.
Really, if it weren’t for the provocative tone I would have no idea you weren’t making a compliment.
Hah, interesting. I didn’t notice I was making an interpretation duality here.
I suppose that only further clarifies that I haven’t actually listed a criticism. Indeed, I think it is a potentially good thing that LessWrong has hive mind qualities to it. There are ways this can be used, but it requires a slight bit more self-awareness, both on the individual and collective levels. Truth be told, the main problem I have with LessWrong is that, at my level of understanding, I can easily manipulate it in nearly any way I wish. My personal criticism is that it’s not good enough to match me, but that implies I think I’m better than everyone here combined. This is something I do think, of course, so asking for evidence of the extraordinary claim is fair. Trouble is, I’m not quite “super”-intelligent enough to know exactly how to provide that evidence without first interacting with LessWrong.
My personal complaint is really that LessWrong hasn’t activated yet.
Thank you for the question though; it tells me you know exactly which questions to ask yourself.
Interesting. I'd have expected at least a few intrinsic contrarians, having seen them nearly everywhere else, and group identification usually isn't one of the things they'll avoid slamming.
Absolutely no disagreement?
The thing about collections of people that exceed Dunbar's number is that no one person can perfectly classify the boundaries of the group. There's a fuzzy boundary, but the people who "dissent LessWrong" causatively tend towards opting out of the group, while the people who don't vocally dissent causatively tend towards losing the dissent. Each of these is suffering social biases in thinking that the group is unitary; that you're either "part of" LessWrong or else you're not. A while ago I formally serialized one of my thoughts about society as, "Groups of people live or die based on their belief in the success of the group." If nobody believes the group is going to accomplish anything or be useful in any way, they'll stop showing up. This self-fulfills the prophecy. If they continue attending and see all the members that are still attending and think, "Imagine the good we can do together," they'll be intensely motivated to keep attending. This self-fulfills the prophecy.
Did I mention this is something I realized without any help or feedback from anyone?
That's usually not the case, or at least not to a hard extent. Even sub-Dunbar's-number groups tend to have at least a few internal contrarians: there's something in human psychology that encourages at least a few folk to keep the identity while simultaneously attacking the identity.
I’ve been that person for a while on a different board, although it took a while to recognize it.
True, the trend for the average person is to strongly identify within the group or not, but I’ve never seen that include all of them.
That assumes the only point of group membership is to see what the group accomplishes. There are other motivations, such as individual value fulfillment, group knowledge, ingroup selection preferences, even the knowledge gained from dissenting. There are some sociology experiments suggesting that groups can only remain successful in the long term if they sate those desires for newcomers. ((And even some arguments suggesting that these secondary, individual motivations are more important than the primary, group motivation.))
The demographic analysis is fascinating, especially if correct. The rough group analysis is not especially fresh outside of the possible lack of contrarians in this instance, though independent rediscovery is always a good thing, particularly given the weakness of social identity theory (and general sociology) research.
I’d caution that many of the phrases you’ve used here signal strongly enough for ‘trolling’ that you’re likely to have an effect on the supergroup even if you aren’t intending it.
Oh, "troll" is a very easy perception to overcome, especially in this context. Don't worry about how I'll be perceived, beyond delayed participation in making posts. There is much utility in negative response. In a day I've lost a couple dozen karma, and I've learned a lot about LessWrong's perception. I suspect there is a user or two engaging in political voting against my comments, possibly in response to my referencing the concept in one of my comments. Something like a grudge is a thing I can utilize heavily.
I'd expect more loss than that if someone really wanted to disable you; systematic karma abuse would end up resulting in karma loss equal to either some multiple of your total post count, or a multiple of the number of posts displayed per user-history page (by default, 10).
Actually, I think I found the cause: commenting on comments below the display threshold costs five karma. I believe this might actually be retroactive, so that downvoting a comment below the display threshold takes five karma from each user who has a comment under it.
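To pin down what "retroactive" would mean here, a minimal sketch in Python; the names, the five-point charge, and the −3 display threshold are my own assumptions for illustration, not actual LessWrong code:

```python
from dataclasses import dataclass, field
from typing import List

# Assumed values, purely illustrative.
REPLY_PENALTY = 5        # karma charged for replying under a hidden comment
DISPLAY_THRESHOLD = -3   # score at or below which a comment is hidden

@dataclass
class User:
    name: str
    karma: int = 0

@dataclass
class Comment:
    author: User
    score: int = 0
    replies: List["Comment"] = field(default_factory=list)

def downvote(comment: Comment) -> None:
    """Apply one downvote. Under the retroactive reading hypothesized above,
    the moment the comment crosses the display threshold, every existing
    reply's author is charged the penalty as well."""
    was_visible = comment.score > DISPLAY_THRESHOLD
    comment.score -= 1
    if was_visible and comment.score <= DISPLAY_THRESHOLD:
        for reply in comment.replies:
            reply.author.karma -= REPLY_PENALTY

# Two users have replied to a comment sitting at -2; one more downvote
# hides it, and under this reading both reply authors lose 5 karma.
parent = Comment(author=User("A"), score=-2)
parent.replies = [Comment(author=User("B")), Comment(author=User("C"))]
downvote(parent)
print(parent.replies[0].author.karma)  # -5 under the retroactive hypothesis
```

If something like this were the mechanism, a single downvote pushing a parent comment under the threshold could explain a sudden multi-point drop (e.g., −15 for three replies) without any new downvotes on one's own comments.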
It wasn’t retroactive when I did this test a while back. Natch, code changes over time, and I haven’t tested recently.
I iterated through my entire comment history to find the source of an immediate −15 spike in karma; I couldn't find anything. My main hypothesis was moderator reprimand until I put the pieces together on the cost of replying to downvoted comments. Further analysis today seems to confirm my suspicion. I'm unsure whether the retroactive quality of it is immediate or on a timer, but I don't see any reason it wouldn't be immediate. Feel free to test on me; I think the voting has stabilized.
I’m utterly unclear on what evidence you were searching for (and failing to find) to indicate a source of an immediate 15-point karma drop. For example, how did you exclude the possibility of 15 separate downvotes on 15 different comments? Did you remember the previous karma totals of all your comments?
More or less, yeah. The totaled deltas weren't of the necessary order of magnitude in my approximation. It's not that many pages if you set the relevant preference to 25 per page and have iterated all the way back a couple of times before.
Gotcha; I understand now. If that's actually a reliable method of analysis for you, I'm impressed by your memory, but lacking the evidence of its reliability that you have access to, I hope you'll forgive me if it doesn't significantly raise my confidence in the retroactive-karma-penalty theory.
Certainly; I wouldn’t expect it to.
Great post, thank you for taking the time to write it. It's insightful and I think clearly identifies the problem with Less Wrong. It could probably do with being a little bit less provocative, i.e. by removing references to Less Wrong being a "hive". Upvoted for valid criticism.
You're misunderstanding; I am not here to gain karma or approval. I am not reformed by downvoting; I am merely informed. I know well the language LessWrong likes and hates; I've experimented, controls and everything. I didn't say this because I was willing to write more about it; I wrote it because I'd already pre-determined you'd be sympathetic. I am not part of the hive and I will not be dissecting or reforming it; that is yours, if you should see value in LessWrong. My job is not to sugar-coat it and make it sound nice; I can derive utility from downvotes and disapproval, and in more ways than the basic level of information they provide. I'm not going to call it something it isn't and use terms that make it seem like anything less than a hive mind. I am giving my full opinion and detecting agents worth interacting with; I am not here to participate as part of the hive. It is not yours to defer to me as if I were going to resolve the problem in my capacity as a member of the hive; I highlighted your being on the border in my capacity as an agent beyond the concept of being inside or outside the hive. I can enter and leave freely; one moment inside, one moment outside. I am here to facilitate LessWrong, not to advance it.
I appreciate the sentiment though. :D
I’m really at a loss for reasons as to why this is being downvoted. Would anyone like to help me understand what’s so off-putting here?
It’s boring.
I’m not sure how to act on this information or the corresponding downvoting. Is there something I could have done to make it more interesting? I’d really appreciate knowing.
To be clear: I replied before you edited the comment to make it a question about downvotes. Before your edit you were asking for an explanation of the inferential silence; that is what I explained. The downvotes are probably a combination of the boringness, the superiority you were signalling, and leftover bad feeling from other comments you've made tonight. But I didn't downvote.
Given the subject and content of the comment it probably couldn’t have been substantially less boring. It could, however, have been substantially shorter.