I’ve banned all of eridu’s recent comments (except a few voted above 0) as an interim workaround, since hiding-from-Recent-Comments and charge-fee-to-all-descendants are still in progress for preventing future threads like these.
I respectfully request that you all stop doing this, both eridu and those replying to him.
I think Eridu’s downvotes were mostly well-deserved.
I don’t think this is a good idea.
I wonder if we could solve this problem from another direction. The issue from your perspective, as I understand it, is that you want to be able to follow every interesting discussion on this site, in semi-real time, but can’t. You can’t because your only view into “all comments everywhere” is only 5 items long, so fast-moving pointless discussions drown out the stuff you’re interested in. An RSS feed presumably isn’t sufficient either, since it pushes comments as they occur and doesn’t give the community a chance to filter them.
So if I’ve reasoned all this out correctly, you’d prefer a view of all comments, sorted descending by post time and configurably tree-filtered by karma and maybe username. But we haven’t the dev resources to build that, and measures like the ones you describe are a cheap, good-enough approximation.
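For concreteness, here’s a toy sketch of the sort of view I have in mind (Python, purely illustrative; the field names and the function are invented for this comment and bear no relation to LW’s actual code):

```python
# Toy illustration of the requested view: all comments, newest first,
# tree-filtered by karma and optionally by username. The field names
# ('id', 'parent', 'time', 'karma', 'author') are invented for this
# sketch and have nothing to do with LW's real schema.

def tree_filtered_recent_comments(comments, min_karma=0, exclude_users=()):
    """Hide a comment, and everything below it, when it falls under the
    karma threshold or comes from an excluded user; return the rest,
    sorted descending by post time."""
    hidden = set()
    # Walk comments oldest-first so each parent is seen before its children.
    for c in sorted(comments, key=lambda c: c["time"]):
        if (c["parent"] in hidden
                or c["karma"] < min_karma
                or c["author"] in exclude_users):
            hidden.add(c["id"])
    visible = [c for c in comments if c["id"] not in hidden]
    return sorted(visible, key=lambda c: c["time"], reverse=True)
```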
The issue from your perspective, as I understand it, is that you want to be able to follow every interesting discussion on this site, in semi-real time, but can’t. You can’t because your only view into “all comments everywhere” is only 5 items long, so fast-moving pointless discussions drown out the stuff you’re interested in.
I think it’s more than that—he also doesn’t want other people to notice the pointless discussions, so that
1) people stop fanning the flames and feeding the trolls
2) people post in the worthwhile threads, resulting in more quality there
I realize that we want to get rid of trolls, and I agree that this is a worthy goal, but one single person shouldn’t be in charge of deciding who’s a troll and who isn’t.
Now that everyone knows that downvotes can cause a person to lose their ability to comment (I assume that’s what “ban” means, could be wrong though), unscrupulous community members (and we must have some, statistically speaking, as unpleasant as that thought is) can use their downvotes offensively—sort of like painting a target with a laser, allowing the Eliezer-nuke to home in.
Downvoting a comment does not always imply that the commenter is a troll. People also use downvotes to express things like “your argument is weak and unconvincing”, and “I disagree with you strongly”. We want to discourage the latter usage, and IMO we should encourage the former, but Eliezer’s new policy does nothing to achieve these goals, and in fact harms them.
If the problem is differentiating between trolls and simply weak, airy, or badly formed comments/arguments, I think the obvious simple solution would be to do what has worked elsewhere and add a “Report” or “Troll-Alert” option to bring the comment/post to the attention of moderators or send it to a community-review queue.
It certainly seems easier to control for abuse of a Report feature than to control for trolling and troll-feeding using a single linear score that doesn’t even tell you whether that −2 is just 2 * (-1) (two people think the poster is evil) or whether it’s +5 −7 (five cultists approve, seven rationalists think it’s a troll) (unless moderators can see a breakdown of this?).
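To make that arithmetic concrete, here’s a minimal sketch (Python, purely illustrative; it assumes nothing about how LW actually stores or exposes votes) of the information a single net score throws away:

```python
# Minimal sketch: a net karma score collapses very different vote
# distributions into one number, while the raw (upvotes, downvotes)
# pair keeps them distinct. Not how LW actually stores votes.

from dataclasses import dataclass

@dataclass
class VoteBreakdown:
    upvotes: int
    downvotes: int

    @property
    def net(self) -> int:
        return self.upvotes - self.downvotes

quiet_disapproval = VoteBreakdown(upvotes=0, downvotes=2)  # 2 * (-1)
controversial = VoteBreakdown(upvotes=5, downvotes=7)      # +5 - 7

assert quiet_disapproval.net == controversial.net == -2  # same net score
assert quiet_disapproval != controversial                # different breakdowns
```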
There is a Report button when I view comments that are replies to my comments, or when I view private messages. There is no Report button when I view comments normally.
Oh, you’re right! Didn’t remember that, but the inbox does have “Context” and “Report” links instead of the standard buttons.
Edit: I suppose a clever bit of scripting could probably fix it browser-side, then, but that’s a very hacky solution and there’s still value in having a built-in report button for, say, people who don’t have the script or often access lesswrong from different browsers/computers.
See Issue 272. The report button was removed during a past redesign, as (I gather) redesigners didn’t feel it was motivated sufficiently to bother preserving it. The issue’s been in accepted/contributions-welcome mode since Sep 2011.
I agree that there are downsides; they just don’t seem that terrible…
What about the never-ending meta discussions, or are you counting on those dying down soon? Because I wouldn’t, unless the new policy is either dropped, or an extensive purge of the commentariat is carried out.
3) Newcomers who arrive at the site see productive discussion of new ideas, not a flamewar, in the Recent Comments section.
4) Trolls are not encouraged to stay; people who troll do not receive attention-reward for it and do not have their brain reinforced to troll some more. Productive discussion is rewarded by attention.
The discussion with eridu was probably worth ending, but I saw someone say it was the best discussion of those issues they’d ever seen, and I’d said so myself independently in a location that I’ve promised not to link to.
I am very impressed with LW that we managed to make that happen.
I am very impressed with LW that we managed to make that happen.
Did you learn something useful or interesting, or were you just impressed that the discussion remained relatively civil? If the former, can you summarize what you learned?
I learned something that might turn out to be useful.
I got a bit of perspective on the extent to which I amplify my rage and distrust at SJ-related material (I had a very rough time just reading a lot of racefail)-- I’m not sure what I want to do with this, but it’s something new at my end.
The civility of the discussion is very likely to have made this possible.
I got a bit of perspective on the extent to which I amplify my rage and distrust at SJ-related material (I had a very rough time just reading a lot of racefail)-- I’m not sure what I want to do with this, but it’s something new at my end.
I’m having trouble understanding this sentence. First, I guess SJ = “social justice” and racefail = “a famously controversial online discussion that was initially about writing fictional characters who are people of color”? But what does it mean to amplify your rage and distrust at some material? Do you mean some parts of the SJ-related materials made you angry and distrustful? Distrustful of who? Which parts made you feel that way? Why? And how did the eridu discussion help you realize the extent?
Did you learn something useful or interesting, or were you just impressed that the discussion remained relatively civil? If the former, can you summarize what you learned?
I’m curious myself. I honestly didn’t see anything useful said. (Perhaps I just took all the valid points for granted as obvious?)
You ask for “exist”, “true”, etc. to be tabooed, which is hard. Assuming they even try, it would take a while to wade through all the philosophical muck and actually get to something, by which point the moment has passed.
My usual response to requests for “X exists” to be tabooed is to start talking about reliably predicting future experiences E2 in a range of contexts C (as C approaches infinity) consistent with the past experiences E1 which led me to put X in my model in the first place. If someone wants to talk about E2 being reliably predictable even though X “doesn’t really exist”, it’s not in the least bit clear to me what they’re talking about.
Thanks! This is a very useful explanation / reduction / taboo.
It also sheds some light on this whole “instrumentalism” business some people here seem to really want to protect, and helped me understand it quite a bit more, I believe.
(link is just in case someone misunderstands this as an accusation of “Politics!”)
You’re welcome. I vaguely remember being involved in an earlier discussion that covered this idea at greater length, wherein I described myself as a compatibilist when it comes to instrumentalism, but the obvious google search doesn’t find it so perhaps I’m deluded.
the “right” probability distribution is the one that maximizes the expected utility of an expected utility maximizer using that probability distribution.
reliably predicting future experiences E2 in a range of contexts C (as C approaches infinity) consistent with the past experiences E1 which led me to put X in my model in the first place.
I wholeheartedly approve of this approach. If more people used it, we would avoid the recurrent unproductive discussions of QM interpretations, qualia and such.
EDIT: Just to clarify, the part saying “put X in my model” is the essential bit, preempting discussions of “but does it exist outside your model?”, since the latter would violate this definition of “exist”. An example is this statement by our esteemed Kaj Sotala:
why those beings actually have qualia, and don’t merely act like it.
Unfortunately, the last sensible (to me) exchange in it was around:
“Mark, I don’t think you understand the art of bucketcraft,” I say. “It’s not about using pebbles to control sheep. It’s about making sheep control pebbles. In this art, it is not necessary to begin by believing the art will work. Rather, first the art works, then one comes to believe that it works.”
After that the instrumentalist argument got heavily strawmanned:
“Ah! Now we come to the root of the problem,” says Mark. “What’s this so-called ‘reality’ business? I understand what it means for a hypothesis to be elegant, or falsifiable, or compatible with the evidence. It sounds to me like calling a belief ‘true’ or ‘real’ or ‘actual’ is merely the difference between saying you believe something, and saying you really really believe something.”
It gets worse after that, until EY kills the offending in-strawman-talist with some gusto.
Upvoted entirely for “in-strawman-talist”, which I will be giggling about at unpredictable intervals for days, probably requiring me to come up with some entirely false but more easily explained answer to “What’s so funny?”.
There are lots of words that I don’t know how to taboo, because I only have a partial and largely intuitive understanding of the concepts I’m referring to by them, and can’t fully explain those concepts. Examples: “exist”, “truth”, “correct”, “right”, “moral”, “rational”, “should”, “mathematical”. I don’t think anyone has asked me directly to taboo any of these words, but if someone did, I might ignore the request because I think my time could be better spent trying to communicate with others who seem to already share my understandings of these words.
In the case of “exist”, I think that “something exists” implies that I can care about it and not be irrational. (“care about”: for example, have a term for it in my utility function) This seems to at least capture a large part of what I mean when I say something exists, but I’m not sure if “exists” just means (something like) the correct decision theory allows a utility function to have a term for something, or if existence is somehow more fundamental than that and our ability to rationally care about something derives from its existence in that more fundamental sense. Does this make sense?
There are lots of words that I don’t know how to taboo, because I only have a partial and largely intuitive understanding of the concepts I’m referring to by them, and can’t fully explain those concepts. Examples: “exist”, “truth”, “correct”
Well, apparently TheOtherDave is bold enough to give a meaningful definition of “exist”. Would you agree with it? If not, what would be a counterexample?
I disagree with it because an agent (such as one using UDT) does not necessarily have memory and the associated concepts of “future experiences” and “past experiences”, but “exist” still seems meaningful even for such an agent.
I confess that I cannot make sense of this without learning more about UDT and your definition of agency. I thought this definition was more basic and independent of the decision theory models one adopts.
TheOtherDave’s approach makes a lot more sense to me.
Well, it would, given that you’re an instrumentalist. Since I’m not an instrumentalist, TheOtherDave’s suggestion (in so far as I understand it) clearly differs from what I mean when I talk about existence. Surely you wouldn’t maintain that the only possible tabooings of “existence” are instrumentalist-friendly ones.
But why do you think my formulation is a “fake formalization”? It captures what I mean by existence pretty well, I think. Is the worry that I haven’t provided an empirical criterion for existence?
TheOtherDave’s suggestion (in so far as I understand it) clearly differs from what I mean when I talk about existence
Awesome! I love clear differences. Can you give me an example of some thing that exists, for which my proposed tabooing of “existence” doesn’t apply? Or, conversely, of something for which my proposed tabooing applies, but which doesn’t exist?
With the caveat that I might not fully understand your proposed tabooing, here’s my concern with it. There are models which are empirically equivalent, yet disagree on the furniture of the world. As far as I can see, your tabooing, with its emphasis on predictive success, cannot distinguish between the ontological claims made by these models. I think one can. For instance, even if two theories make identical predictions, I would say the right move would be adopt the ontology of the simpler of the two.
Perhaps I can expand on my proposed tabooing. Instead of just “The set of Fs is non-empty”, make it “The set of Fs is non-empty according to our best physical theory”, where the “best physical theory” is determined not just by empirical success but by extra-empirical virtues such as simplicity.
Wrt your revised tabooing… that has the odd property that entities come into existence and cease existing as our physical theories change. I guess I’m OK with that… e.g., if you really want to say that quarks didn’t exist in 1492, but that quarks in 1492 now existed, I won’t argue, but it does seem like an odd way to talk.
Wrt your concern… hrm. Let me try to be more specific.
So, I have two empirically equivalent models M1 and M2, which make different ontological claims but predict the same experiences in a range of contexts C (as C approaches infinity). Let us say that M1 asserts the existence of X, and M2 asserts instead the existence of Y, and X is simpler than Y. I also have a set of experiences E1, on the basis of which I adopt M1 as my model (for several reasons, including the fact that my experiences have led me to prefer simpler models). Based on this, I predict that my future experiences E2 will be consistent with the past experiences E1 which led me to put X in my model in the first place, which include the experiences that led me to endorse Occam’s Razor. If that prediction proves false—that is, if I have experiences that are inconsistent with that—I should reduce my confidence in the existence of X. If it proves true—that is, I have no experiences that are inconsistent with that—I should remain confident.
Is that example consistent with your understanding of how my proposed tabooing works?
If so, can you say more about your concern? Because it seems to me I am perfectly able to distinguish between M1 and M2 (and choose M1, insofar as I embrace Occam’s Razor) with this understanding of existence.
Wrt your revised tabooing… that has the odd property that entities come into existence and cease existing as our physical theories change. I guess I’m OK with that… e.g., if you really want to say that quarks didn’t exist in 1492, but that quarks in 1492 now existed, I won’t argue, but it does seem like an odd way to talk.
The tabooing is not supposed to be an analysis of what makes things exist; it is an analysis of when we are justified in believing something exists. It’s a criterion for ontological commitment, not ontology. I took it that this was what your tabooing was supposed to convey as well, since surely there can be things that exist that don’t feature in our models. Or maybe you don’t think so?
To get an actual criterion of ontology rather than just a criterion of ontological commitment, replace “our best physical theory” with “the best physical theory”, which may be one that nobody ever discovers.
Based on this, I predict that my future experiences E2 will be consistent with the past experiences E1 which led me to put X in my model in the first place, which include the experiences that led me to endorse Occam’s Razor.
Ah, I see. This makes your view more congenial to me. Although it still depends on what you mean by consistent. If one of my future experiences is the discovery of an even simpler empirically adequate theory, then presumably you would say that that experience is in some sense inconsistent with E1? If yes, then I don’t think there is much of a difference between your proposal and mine.
I took it that this was what your tabooing was supposed to convey as well,
I understood the point to be to replace the phrase “X exists” with an expression of what we’re trying to convey about the world when we say “X exists.” Which might conceivably be identical to what we’re trying to convey about the world when we say “I’m justified in believing X exists”, depending on what we want to say about when a belief is justified, but if we allow for things that happen to be true but are nevertheless not justified beliefs (which I do) then they aren’t identical.
But, sure, if we’re talking about epistemology rather than ontology, then my objection about quarks is irrelevant.
If one of my future experiences is the discovery of an even simpler empirically adequate theory, then presumably you would say that that experience is in some sense inconsistent with E1? If yes, then I don’t think there is much of a difference between your proposal and mine.
If E2 includes experiences (such as that theory) that lead you to reject the model E1 led you to embrace, then yes, I would say E2 and E1 are inconsistent. (In the sense that they require that the world be two mutually exclusive ways. I’m not really sure what other sense of “inconsistent” there is.)
If yes, then I don’t think there is much of a difference between your proposal and mine.
What does “The set of all Fs is non-empty” mean? Surely it means “There exists at least one F”, and we are back to what “exist” means. So your definition does not taboo “exist”, it just rewords it without adding anything to the understanding of the issue.
Surely you wouldn’t maintain that the only possible tabooings of “existence” are instrumentalist-friendly ones.
Usually it’s just a postulate. I’ve yet to come across a different definition that is not a simple rewording or obfuscation. I would be very interested in seeing something non-instrumentalist that is.
I’ve banned all of eridu’s recent comments (except a few voted above 0)
Bravo. I have no idea whether that was someone pretending to be ignorant and toxic for the purpose of discrediting a group he was impersonating or whether it was sincere (and ignorant and toxic). Fortunately I don’t need to know and don’t care either way. Good riddance!
as an interim workaround, since hiding-from-Recent-Comments and charge-fee-to-all-descendants are still in progress for preventing future threads like these.
Is it just me, or do others also find Eliezer coming off as a tad petulant in the way he is handling people systematically opposing and downvoting his proposal? Every time he got downvoted to oblivion he just came back with a new comment seemingly crafted to be more belligerent, whiny, condescending and cynical about the community than the last. (That’s hyperbole—in actuality it peaked in the middle somewhere.) Now we just keep getting reminded about it at every opportunity as noise in unrelated threads.
Mostly he’s coming across to me as having lost patience with the community not being what he wants it to be, and having decided that he can fix that by changing the infrastructure, and not granting much importance to the fact that more people express disapproval of this than approval.
Keep in mind that it’s not “more people”, it’s more “people who participate in meta threads on Less Wrong”. I’ve observed a tremendous divergence between the latter set, and “what LWers seem to think during real-life conversations” (e.g. July Minicamp private discussions of LW, which is where the anti-troll-thread ideas were discussed, and asking what people thought about recent changes at Alicorn’s most recent dinner party). I’m guessing there’s some sort of effect where only people who disagree bother to keep looking at the thread, hence bother to comment.
Some “people” were claiming that we ought to fix things by moderation instead of making code changes, which does seem worth trying; so I’ve said to Alicorn to open fire with all weapons free, and am trying this myself while code work is indefinitely in progress. I confess I did anticipate that this would also be downvoted even though IIRC the request to do that was upvoted last time, because at this point I’ve formed the generalization “all moderator actions are downvoted”, either because only some people participate in meta threads, and/or the much more horrifying hypothesis “everyone who doesn’t like the status quo has already stopped regularly checking LessWrong”.
I’m diligently continuing to accept feedback from RL contact and attending carefully to this non-filtered source of impressions and suggestions, but I’m afraid I’ve pretty much written-off trying to figure out what the community-as-a-whole wants by looking at “the set of people who vigorously participate in meta discussions on LW” because it’s so much unlike the reactions I got when ideas for improving LW were being discussed at the July Minicamp, or the distribution of opinions at Alicorn’s last dinner party, and I presume that any other unfiltered source of reactions would find this conversation similarly unrepresentative.
Let me see if I understand you correctly: if someone cares about how Less Wrong is run, what they should do is not comment on Less Wrong—least of all in discussions on Less Wrong about how Less Wrong is run (“meta threads”). Instead, what they should do is move to California and start attending Alicorn’s dinner parties.
Let me see if I understand you correctly: if someone cares about how Less Wrong is run, what they should do is not comment on Less Wrong—least of all in discussions on Less Wrong about how Less Wrong is run (“meta threads”). Instead, what they should do is move to California and start attending Alicorn’s dinner parties.
I don’t see what this has to do with “loss aversion” (the phenomenon where people think losing a dollar is worse than failing to gain a dollar they could have gained), though that’s of course a tangential matter.
The point here is—and I say this with all due respect—it looks to me like you’re rationalizing a decision made for other reasons. What’s really going on here, it seems to me, is that, since you’re lucky enough to be part of a physical community of “similar” people (in which, of course, you happen to have high status), your brain thinks they are the ones who “really matter”—as opposed to abstract characters on the internet who weren’t part of the ancestral environment (and who never fail to critique you whenever they can).
That doesn’t change the fact that this is an online community, and as such, is for us abstract characters, not your real-life dinner companions. You should be taking advice from the latter about running this site to about the same extent that Alicorn should be taking advice from this site about how to run her dinner parties.
Consider eating Roman-style to increase the intimacy / as a novel experience. Unfortunately, this is made way easier with specialized furniture, but you should be able to improvise with pillows. As well, it is a radically different way to eat that predates the invention of the fork (and so will work fine with hands or chopsticks, but not modern implements).
Consider seating logistics, and experiment with having different people decide who sits where (or next to whom). Dinner parties tend to turn out differently with different arrangements, but different subcultures will have different algorithms for establishing optimal seating, so the experimentation is usually necessary (and having different people decide serves both as a form of blinding and as a way to turn up evidence to isolate the algorithm faster).
Huh, I haven’t been assigning seats at all except for reserving the one with easiest kitchen access for myself. I’ve just been herding people towards the dining table.
since you’re lucky enough to be part of a physical community of “similar” people (in which, of course, you happen to have high status), your brain thinks they are the ones who “really matter”—as opposed to abstract characters on the internet who weren’t part of the ancestral environment (and who never fail to critique you whenever they can).
Was Eliezer “lucky” to have cofounded the Singularity Institute and Overcoming Bias? “Lucky” to have written the Sequences? “Lucky” to have founded LessWrong? “Lucky” to have found kindred minds, both online and in meatspace? Does he just “happen” to be among them?
Or has he, rather, searched them out and created communities for them to come together?
That doesn’t change the fact that this is an online community, and as such, is for us abstract characters, not your real-life dinner companions. You should be taking advice from the latter about running this site to about the same extent that Alicorn should be taking advice from this site about how to run her dinner parties.
The online community of LessWrong does not own LessWrong. EY owns LessWrong, or some combination of EY, the SI, and whatever small number of other people they choose to share the running of the place with. To a limited extent it is for us, but its governance is not at all by us, and it wouldn’t be LessWrong if it was. The system of government here is enlightened absolutism.
since you’re lucky enough to be part of a physical community of “similar” people
Was Eliezer “lucky” to have cofounded the Singularity Institute and Overcoming Bias?
The causes of his being in such a happy situation (is that better?) were clearly not the point here, and, quite frankly, I think you knew that.
But if you insist on an answer to this irrelevant rhetorical question, the answer is yes. Eliezer_2012 is indeed quite fortunate to have been preceded by all those previous Eliezers who did those things.
EY owns LessWrong
Then, like I implied, he should just admit to making a decision on the basis of his own personal preference (if indeed that’s what’s going on), instead of constructing a rationalization about the opinions of offline folks being somehow more important or “appropriately” filtered.
Eliezer_2012 is indeed quite fortunate to have been preceded by all those previous Eliezers who did those things.
Eliezer only got to be Eliezer_2012 by doing all those things. Now, maybe Eliezer_201209120 did wake up this morning, as every morning, and think, “how extraordinarily, astoundingly lucky I am to be me!”, and there is some point to that thought—but not one that is relevant to this conversation.
Then, like I implied, he should just admit to making a decision on the basis of his own personal preference (if indeed that’s what’s going on), instead of constructing a rationalization about the opinions of offline folks being somehow more important or “appropriately” filtered.
It is tautologically his preference. I see no reason to think he is being dishonest in his stated reasons for that preference.
I’m afraid the above comment does not contribute any additional information to this discussion, and so I have downvoted it accordingly. Any substantive reply would consist of the repetition of points already made.
It’s easier to leave a forum than a country. Forum-dictators who abuse their power end up with empty forums.
Real world dictators who abuse their power often end up dead. (But perhaps not as much as real world dictators who do not abuse their power enough to secure it.)
Perhaps I misunderstood what ArisKatsaris was saying. I thought he meant something like this:
Dictators in countries tend to make living conditions in those countries less desirable. Dictators in forums tend to make posting in those forums (and/or reading them) more desirable.
If this is true, your objection is somewhat tangential to the topic (though an empty forum is less desirable than an active one). But perhaps he meant something else?
Just my own personal experience of how moderated vs non-moderated forums tend to go, and as for countries, likewise my impression of what countries seem nice to live in.
You’re probably right about modern countries; however, as far as I understand, historically some countries did reasonably well under a dictatorship. Life under Hammurabi was far from being all peaches and cream, but it was still relatively prosperous, compared to the surrounding nations. A few Caesars did a pretty good job of administering Rome; of course, their successors royally screwed the whole thing up. Likewise, life in Tsarist Russia went through its ups and downs (mostly downs, to be fair).
Unfortunately, the kind of a person who seeks (and is able to achieve) absolute power is usually exactly the kind of person who should be kept away from power if at all possible. I’ve seen this happen in forums, where the unofficial grounds for banning a user inevitably devolve into “he doesn’t agree with me”, and “I don’t like his face, virtually speaking”.
Right, but that doesn’t mean they tend to be beneficial, either. We’re not arguing over which dictator is the worst, but whether dictators in forums are diametrically opposed to their real-world cousins.
I’d like to point out that Overcoming Bias, back in the day, was a dictatorship: Robin and Eliezer were explicitly in total control. Whereas Less Wrong was explicitly set up to be community-moderated, with voting taking the place of moderator censorship. And the general consensus has always been that LW was an improvement over OB.
Freedom is never a terminal value. If you dig a bit, you should be able to explain why freedom is important/essential in particular circumstances.
Ironically, the appearance of freedom can be a default terminal value for humans and some other animals, if you take evolutionary psychology seriously. Or, to be more accurate, the appearance of absence of imposed restrictions can be a default terminal value that receives positive reinforcement cookies in the brain of humans and some other animals. Claustrophobia seems to be a particular subset of this that automates the jump from certain types of restrictions through the whole mental process that leads to panic-mode.
The abstract concept of freedom and its reality referent pattern, however, would be extremely unlikely to end up as a terminal value, if only even for its sheer mathematical complexity.
I’d be cautious about saying something’s never a terminal value. Given my model of the EEA, it wouldn’t be terribly surprising to me if some set of people did have poor reactions to certain types of external constraint independently of their physical consequences, though “freedom” and its various antonyms seem too broad to capture the way I’d expect this to work.
Someone’s probably studied this, although I can’t dig up anything offhand.
I take back the “never” part, it is way too strong. What I meant to say is that the probability that someone who proclaims freedom as her terminal value has not dug deep enough to find her true terminal values is extremely high.
(...) it wouldn’t be terribly surprising to me if some set of people did have poor reactions to certain types of external constraint independently of their physical consequences, (...)
Yes, I was commenting on this at the same time. The mental perception of restrictions, or the mental perception of absence of restrictions, can become a direct brainwired value through evolution, and is a simple step enough from other things already in there AFAICT. Freedom itself, however, independent of perception/observation and as a pattern of real interactions and decision choices and so on, seems far too complex to be something the brain would just randomly stumble upon in one go, especially only in some humans and not others.
See if you can replace “freedom” with its substance, and then evaluate whether that substance is something the human brain would be likely to just happen to, once in a while, find as a terminal, worth-in-itself value for some humans but not others, considering the complexity of this substance.
Yes, the mental node/label “freedom” can become a terminal value (a single mental node is certainly simple enough for evolution to stumble upon once in a while), but that’s directly related to a perception of absence of constraints or restrictions within a situation or context.
More complex values will not spontaneously form as terminal, built-in-brain values for animals that came into being through evolution. Evolution just doesn’t do that. Humans don’t rewire their brains and don’t reach into the Great Void of Light from the Beyond to randomly pick their terminal values.
Basically, the systematic absence of conceptual incentives and punishment-threats organized such as to funnel the possible decisions of a mind or set of minds towards a specific subset of possible actions (this is a simplified reduction of “freedom” which is still full of giant paintbrush handles) is not something a human mind would just accidentally happen to form a terminal value around (barring astronomical odds on the order of sun-explodes-next-second) without first developing terminal values around punishment-threats (which not all humans have, if any), decision tree sizes, and various other components of the very complex pattern we call “lack of freedom” (because lack of freedom is much easier to describe than freedom, and freedom is the absence or diminution of lack(s) of freedom).
I don’t see any evidence that a sufficient number of humans happen to have most of the prerequisite terminal values for there to be any specimen which has this complex construct as a terminal value.
As I said in a different comment, though, it’s very possible (and very likely) that the lighting-up of the mental node for freedom could be a terminal value, which feels from inside like freedom itself is a terminal value. However, the terminal value is really just the perception of things that light up the “freedom!” mental node, not the concept of freedom itself.
Once you try to describe “freedom” in terms that a program or algorithm could understand, you realize that it becomes extremely difficult for the program to even know whether there is freedom in something or not, and that it is an abstraction of multiple levels interacting at multiple scales in complex ways far, far above the building blocks of matter and reality, and which requires values and algorithms for a lot of other things. You can value the output of this computation as a terminal value, but not the whole “freedom” business.
A very clever person might be capable of tricking their own brain by abusing an already built-in terminal value on a freedom mental-node by hacking in safety-checks that will force them to shut up and multiply, using best possible algorithms to evaluate “real” freedom-or-no-freedom, and then light up the mental node based on that, but it would require lots of training and mind-hacking.
Hence, I maintain that it’s extremely unlikely that someone really has freedom itself as a terminal value, rather than feeling from inside like they value freedom. A bit of Bayes suggests I shouldn’t even pay attention to it in the space of possible hypotheses, because of the sheer number of values that get false positives as being terminal due to feeling as such from inside, versus the number of known terminal values that have such a high level of complexity and interconnections between many patterns, reality-referents, indirect valuations, etc.
because lack of freedom is much easier to describe than freedom, and freedom is the absence or diminution of lack(s) of freedom
“Lack of freedom” can’t be significantly easier to describe than freedom—they differ by at most one bit.
No opinion on whether the mental node representing “freedom” or actual freedom is valued—that seems to suffer/benefit from all of the same issues as any other terminal value representing reality.
If someone tries to manacle me in a dungeon, I will perform great violence upon that person. I will give up food, water, shelter, and sleep to avoid it. I will sell prized possessions or great works of art if necessary to buy weapons to attack that person. I can’t think of a better way to describe what a terminal value feels like.
Manacling you in a dungeon also triggers your mental node for freedom and also triggers the appearance of restrictions and constraints, all the more so since you are the direct subject yourself. It lacks a control group and feels like a confirmation-biased experiment.
If I simply told you (and you have easy means of confirming that I’m telling the truth) that I’m restricting the movements of a dozen people you’ve never heard of, and the restriction of freedom is done in such a way that the “victims” will never even be aware that their freedoms are being restricted (e.g. giving a mental imperative to spend eight hours a day in a certain room with a denial-of-denial clause for it), would you still have the same intense this-is-wrong terminal value for no other reason than that their freedom is taken from them in some manner?
If so, why are employment contracts not making you panic in a constant stream of negative utility? Or compulsory education? Or prison? Or any other form of freedom reduction which you might not consider to be about “freedom” but which certainly fits most reductions of it?
Yes, I meant “freedom for me”—I thought that was implied.
If I simply told you (and you have easy means of confirming that I’m telling the truth) that I’m restricting the movements of a dozen people you’ve never heard of, and the restriction of freedom is done in such a way that the “victims” will never even be aware that their freedoms are being restricted (e.g. giving a mental imperative to vote republican with a denial-of-denial clause for it), would you still have the same intense this-is-wrong terminal value for no other reason than that their freedom is taken from them in some manner?
I would not want to be one of those people. If you convincingly told me that I was one of those people, I’d try to get out of it. If I was concerned about those people and thought they also valued freedom, I’d try to help them.
employment contracts
My employment can be terminated at will by either party. There are some oppressive labor laws that make this less the case, but they mostly favor me and neither myself nor my employer is going to call on them. What’s an “employment contract” and why would I want one?
compulsory education
Compulsory education is horrible. It’s profoundly illiberal and I believe it’s a violation of the constitutional amendment against slavery. I will not send my children to school and “over my dead body” is my response to anyone who intends to take them. I try to convince my friends not to send their children to school either.
prison
I don’t intend to go to prison and would fight to avoid it. If my friends were in prison, I’d do what I could to get them out.
I would not want to be one of those people. If you convincingly told me that I was one of those people, I’d try to get out of it. If I was concerned about those people and thought they also valued freedom, I’d try to help them.
...therefore, if you are never aware of your own lack of freedom, you do not assign value to this. Which loops around back to the appearance of freedom being your true value. This would be the most uncharitable interpretation.
It seems, however, that in general you will be taking the course of action which maximizes the visible freedom that you can perceive, rather than a course of action you know to be optimized in general for widescale freedom. It seems more like a cognitive alert to certain triggers, and a high value being placed on not triggering this particular alert, than valuing the principles.
Edit: Also, thanks for indulging my curiosity and for all your replies on this topic.
Would you sell possessions to buy weapons to attack a person who runs an online voluntary community and changes the rules without consulting anyone?
If the two situations are comparable, I think it’s important to know exactly why.
Also note that manacling you to a dungeon isn’t just eliminating your ability to freely choose things arbitrarily, it’s preventing you from having satisfying relationships, access to good food, meaningful life’s work and other pleasures. Would you mind being in a prison that enabled you to do those things?
Would you mind being in a prison that enabled you to do those things?
Yes. If this were many years ago and I weren’t so conversant on the massive differences between the ways different humans see the world, I’d be very confused that you even had to ask that question.
Would you sell possessions to buy weapons to attack a person who runs an online voluntary community and changes the rules without consulting anyone?
No. There are other options. At the moment I’m still vainly hoping that Eliezer will see reason. I’m strongly considering just dropping out.
I feel like asking this question is wrong, but I want the information:
If I know that letting you have freedom will be hurtful (like, say, I tell you you’re going to get run over by a train, and you tell me you won’t, but I know that you’re in denial-of-denial and subconsciously seeking to walk on train tracks, and my only way to prevent your death is to manacle you in a dungeon for a few days), would you still consider the freedom terminally important? More important than the hurt? Which other values can be traded off? Would it be possible to figure out an exchange rate with enough analysis and experiment?
Yes. If this were many years ago and I weren’t so conversant on the massive differences between the ways different humans see the world, I’d be very confused that you even had to ask that question.
Regarding this, what if I told you “Earth was a giant prison all along. We just didn’t know. Also, no one built the prison, and no one is actively working to keep us in here—there never was a jailor in the first place, we were just born inside the prison cell. We’re just incapable of taking off the manacles on our own, since we’re already manacled.”? In fact, I do tell you this. It’s pretty much true that we’ve been prisoners of many, many things. Is your freedom node only triggered at the start of imprisonment, the taking away of a freedom once had? What if someone is born in the prison Raemon proposes? Is it still inherently wrong? Is it inherently wrong that we are stuck on Earth? If no, would it become inherently wrong if you knew that someone is deliberately keeping us here on Earth by actively preventing us from learning how to escape Earth?
The key point being: What is the key principle that triggers your “Freedom” light? The causal action that removes freedoms? The intentions behind the constraints?
It seems logical to me to assume that if you have freedom as a terminal value, then being able to do anything, anywhere, be anything, anyhow, anywhen, control time and space and the whole universe at will better than any god, without any possible restrictions or limitations of any kind, should be the Ultimately Most Supremely Good maximal possible utility optimization, and therefore reality and physics would be your worst possible Enemy, seeing as how it is currently the strongest Jailer that restricts and constrains you the most. I’m quite aware that this is hyperbole and most likely a strawman, but it is, to me, the only plausible prediction for a terminal value of yourself being free.
You’re right, this does answer most of my questions. I had made incorrect assumptions about what you would consider optimal.
After updates based on this, it now appears much more likely for me that you use terminal valuation of your freedom node such that it gets triggered by more rational algorithms that really do attempt to detect restrictions and constraints in more than mere feeling-of-control manner. Is this closer to how you would describe your value?
I’m still having trouble with the idea of considering a universe optimized for one’s own personal freedom as a best thing (I tend to by default think of how to optimize for collective sum utilities of sets of minds, rather than one). It is not what I expected.
True, and I don’t quite see where I implied this. If you’re referring to the optimal universe question, it seems quite trivial that if the universe literally acts according to your every will with no restrictions whatsoever, any other terminal values will instantly be fulfilled to their absolute maximal states (including unbounded values that can increase to infinity) along with adjustment of their referents (if that’s even relevant anymore).
No compromise is needed, since you’re free from the laws of logic and physics and whatever else might prevent you from tiling the entire universe with paperclips AND tiling the entire universe with giant copies of Eliezer’s mind.
So if that sort of freedom is a terminal value, this counterfactual universe trivially becomes the optimal target, since it’s basically whatever you would find to be your optimal universe regardless of any restrictions.
Sometimes freedom is a bother, and sometimes it’s a way to die quickly, and sometimes it’s essential for survival and that “good life” of yours (depending on what you mean by it). You can certainly come up with plenty of examples of each. I recommend you do before pronouncing that freedom is a terminal value for you.
This is a community blog. If your community has a dictator, you should overthrow him.
With the caveats:
If the dictator isn’t particularly noticed to be behaving in that kind of way, it is probably not worth enforcing the principle. I.e. it is fine for people to have the absolute power to do whatever they want regardless of the will of the people as long as they don’t actually use it. A similar principle would also apply if the President of the United States started issuing pardons for whatever he damn well pleased. If US television informs me correctly (and it may not) then he is technically allowed to do so, but I don’t imagine that power would remain if it was used frequently for his own ends. (And I doubt the reaction against excessive abuse of power would be limited to just not voting for him again.)
The ‘should’ is weak. I.e. it applies all else being equal, but with a huge “if it is convenient to do so and you haven’t got something else you’d rather do with your time” implied.
“If you see someone about to die and can save them, you should.”
Now, you might agree or disagree with this. But “If you see someone about to die and can save them, you should, if it is convenient to do so and you haven’t got something else you’d rather do with your time” seems more like disagreement to me.
I don’t think so. I agree with that statement, with the same caveats. If there are also 100 people about to die and I can save them instead, I should probably do so. I suppose it depends how morally-informed you think “something else you’d rather do with your time” is supposed to be.
it’s subject to the Loss Aversion effect where the dissatisfied speak up in much greater numbers
But Eliezer Yudkowsky, too, is subject to the loss aversion effect. Just as those dissatisfied with changes overweight change’s negative consequences, so does Eliezer Yudkowsky overweight his dissatisfaction with changes initiated by the “community.” (For example, increased tolerance of responding to “trolling.”)
Moreover, if you discount the result of votes on rules, why do you assume votes on other matters are more rational? The “community” uses votes on substantive postings to discern a group consensus. These votes are subject to the same misdirection through loss aversion as are procedural issues. If the community has taken a mistaken philosophical or scientific position, people who agree with that position will be biased to vote down postings that challenge that position, a change away from a favored position being a loss. (Those who agree with the newly espoused position will be less energized, since they weight their potential gain less than their opponents weigh their potential loss.)
If you think “voting” is so highly distorted that it fails to represent opinion, you should probably abolish it entirely.
True. For that to be an effective communication channel, there would need to be a control group. As for how to create that control group or run any sort of blind (let alone double-blind) testing… yeah, I have no idea. Definitely a problem.
ETA: By “I have no idea”, I mean “Let me find my five-minute clock and I’ll get back to you on this if anything comes up”.
So I thought for five minutes, then looked at what’s been done in other websites before.
The best I have is monthly surveys with randomized questions from a pool of stuff that matters for LessWrong (according to the current or then-current staff, I would presume) with a few community suggestions, and then possibly later implementation of a weighting algorithm for diminishing returns when multiple users with similar thread participation (e.g. two people that always post in the same thread) give similar feedback.
The second part is full of holes and horribly prone to “Death by Poking With Stick”, but an ideal implementation of this seems like it would get a lot more quality feedback than what little gets through low-bandwidth in-person conversations.
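To be a bit more concrete about that weighting idea, here’s a rough sketch (Python; the similarity measure, the 0.5 threshold, and the function names are all made up for illustration, not an existing LW feature):

```python
# Rough sketch of "diminishing returns when similar participants agree".
# Everything here (the Jaccard similarity measure, the 0.5 threshold,
# the function names) is invented for illustration, not an actual feature.

def thread_similarity(threads_a, threads_b):
    """Jaccard similarity between the sets of threads two users post in."""
    if not threads_a and not threads_b:
        return 0.0
    return len(threads_a & threads_b) / len(threads_a | threads_b)

def weighted_feedback(responses, participation, threshold=0.5):
    """responses: {user: numeric survey answer};
    participation: {user: set of thread ids they post in}.

    Each user's weight shrinks with the number of earlier respondents
    whose thread participation heavily overlaps theirs, so clusters of
    users who always post together don't dominate the aggregate."""
    weights = {}
    seen = []
    for user in responses:
        overlaps = sum(
            1 for other in seen
            if thread_similarity(participation[user], participation[other]) > threshold
        )
        weights[user] = 1.0 / (1 + overlaps)
        seen.append(user)
    total = sum(weights.values())
    return sum(responses[u] * weights[u] for u in responses) / total
```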
There are other, less practical (but possibly more accurate) alternatives, of course. Like picking random LW users every so often, appearing at their front door, giving them a brain-scan headset (e.g. an Emotiv Epoc), and having them wear the headset while being on LW so you can collect tons of data.
I’d stick with live feedback and simple surveys to begin with.
I’ve moderated a few forums before, and with that experience in mind I’d have to agree that there’s a huge, and generally hugely negative, selection bias at play in online response to moderator decisions. It’d be foolish to take those responses as representative of the entire userbase, and I’ve seen more than one forum suffer as a result of such a misconception.
That being said, though, I think it’s risky to write off online user feedback in favor of physical. The people you encounter privately are just as much a filtered set as those who post feedback here, though the filters point in different directions: you’re selecting people involved in the LW interpersonal community, for one thing, which filters out new and casual users right off the bat, and since they’re probably more likely to be personally friendly to you we can also expect affect heuristics to come into play. Skepticism toward certain LW norms may also be selected against, which could lead people to favor new policies reinforcing those norms. Moreover, I’ve noticed a trend in the Bay Area group—not necessarily an irrational one, but a noticeable one—toward treating the online community as low-quality relative to local groups, which we might expect to translate into antipathy towards its status quo.
I don’t know what the weightings should be, but if you’re looking for a representative measure of user preferences I think it’d be wise to take both groups into account to some extent.
I will be starting another Less Wrong Census/Survey in about three weeks; in accordance with the tradition I will first start a thread asking for question ideas. If you can think of a good list of opinions you want polled in the next few weeks, consider posting them there and I’ll stick them in.
You… know I don’t optimize dinner parties as focus groups, right? The people who showed up that night were people who like chili (I had to swap in backup guests for some people who don’t) and who hadn’t been over too recently. A couple of the attendees from that party barely even post on LW.
You… know I don’t optimize dinner parties as focus groups, right?
Perhaps more importantly, dinner parties are optimised for status and social comfort. Actually giving honest feedback rather than guessing passwords would be a gross faux pas.
Getting feedback at dinner parties is a good way to optimise the social experience of getting feedback and translate one’s own status into the agreement of others.
If I were to guess, I’d guess that the main filter criterion for your dinner parties is geographical; when you have a dinner party in the Bay area, you invite people who can be reasonably expected to be in the Bay area. This is not entirely independent of viewpoint—memes which are more common local to the Bay area will be magnified in such a group—but the effect of that filter on moderation viewpoints is probably pretty random (similarly, the effect of the filter of ‘people who like chili’ on moderation viewpoints is probably also pretty random).
So the dinner party filter exists, but it is less likely to pertain to the issue at hand than the online self-selection filter.
The problem with the dinner party filter is not that it is too strong, but that it is too weak: it will for example let through people who aren’t even regular users of the site.
That’s fair, and your strategy makes sense. I also agree with DaFranker, below, regarding meta-threads.
This said, however, at the time when I joined Less Wrong, my model of the site was something like, “a place where smart people hold well-reasoned discussions on a wide range of interesting topics” (*). TheOtherDave’s comment, in conjunction with yours, paints a different picture of what you’d like Less Wrong to be; let’s call it Less Wrong 2.0. It’s something akin to, “a place where Eliezer and a few of his real-life friends give lectures on topics they think are important, with Q&A afterwards”.
Both models have merit, IMO, but I probably wouldn’t have joined Less Wrong 2.0. I don’t mean that as any kind of an indictment; if I were in your shoes, I would definitely want to exclude people like this Bugmaster guy from Less Wrong 2.0, as well.
Still, hopefully this one data point was useful in some way; if not, please downvote me !
EY has always seemed to me to want LW to be a mechanism for “raising the sanity waterline”. To the extent that wide-ranging discussion leads to that, I’d expect him to endorse it; to the extent that wide-ranging discussion leads away from that, I’d expect him to reject it. This ought not be a surprise.
Nor ought it be surprising that much of the discussion here does not noticeably progress this goal.
That said, there does seem to be a certain amount of non-apple selling going on here; I don’t think there’s a cogent model of what activity on LW would raise the sanity waterline, so attention is focused instead on trying to eliminate the more blatant failures: troll-baiting, for example, or repetitive meta-threads.
Which is not a criticism; it is what it is. If I don’t know the cause, that’s no reason not to treat the symptoms.
This said, however, at the time when I joined Less Wrong, my model of the site was something like, “a place where smart people hold well-reasoned discussions on a wide range of interesting topics” (*). TheOtherDave’s comment, in conjunction with yours, paints a different picture of what you’d like Less Wrong to be; let’s call it Less Wrong 2.0. It’s something akin to, “a place where Eliezer and a few of his real-life friends give lectures on topics they think are important, with Q&A afterwards”.
No; you’re conflating “Eliezer considers he should have the last word on moderation policy” and “Eliezer considers LessWrong’s content should be mostly about what he has to say”.
The changes of policy Eliezer is pushing have no effect on the “main” content of the site, i.e. posts that are well-received, and upvoted. The only disagreement seems to be about sprawling threads and reactions to problem users. I don’t know where you’re getting “Eliezer and a few of his real-life friends give lectures on topics they think are important” out of that, it’s not as if Eliezer has been posting many “lectures” recently.
I was under the impression that Eliezer agreed with TheOtherDave’s comment upthread:
Mostly [Eliezer is] coming across to me as having lost patience with the community not being what he wants it to be...
Combined with Eliezer’s rather aggressive approach to moderation (f.ex. deleting downvoted comments outright), this did create the impression that Eliezer wants to restrict LessWrong’s content to a narrow list of specific topics.
Me too. Troll posts and really wrong people are too distracting without some form of intervention. I’m not sure the current solution is optimal (though this point has been extensively argued elsewhere), but I applaud the effort to actually stick one’s neck out and try something.
People who agree are more likely to keep quiet than people who disagree. Rewarding them for speaking up reduces that effect, which means comments get closer to accurately representing consensus.
It’s the impression I’ve got from informal observation, and it’s true when talking about myself specifically. (If I disagree, I presumably have something to say that has not yet been said. If I agree, that’s less likely to be true. I don’t know if that’s the whole reason, but it feels like a substantial part of it.)
My own experience is that while people are more likely to express immediate disagreement than agreement in contexts where disagreement is expressed at all, they are also more likely to express disagreement with expressed disagreement in such forums, from which agreement can be inferred (much as I can infer your agreement with EY’s behavior from your disagreement with Will_Newsome). The idea that they are more likely to keep quiet in general, or that people are more likely to anonymously downvote what they disagree with than upvote what they agree with, doesn’t jibe with my experience.
And in contexts where disagreement is not expressed, I find the Asch results align pretty well with my informal expectations of group behavior.
I am confused by your confusion. The claim wasn’t that content people whine less, it was that they’re more likely to keep quiet. The only way I can make sense of your comments is if you’re equating the two—that is, if you assume that the only options are “keep quiet” or “whine”—but that seems an uncharitable reading. Still, if that is what you mean, I simply disagree.
if you assume that the only options are “keep quiet” or “whine”
Yeah, I phrased it quite poorly. It should have been “speak up less”. The point I was (unsuccessfully) making is that both groups have the option of acting (expensive) or not acting (cheap). Acting is what people generally do when they want to change the current state of the world, and not acting when they are happy with it. Thus any expensive reaction is skewed toward the negative. I should probably look up some sources on that, but I will just tap out instead, due to rapidly waning interest.
Sometimes AKA the “Forum Whiners” effect, well known in the PC games domain:
When new PC games are released, almost inevitably the main forums for the game will become flooded with a large surge of complaints, negative reviews, rage, rants, and other negative stuff. This is fully expected and the absence of such is actually a bad sign. People that are happy with the product are playing the game, not wasting their time looking for forums and posting comments there—while people who have a problem or are really unhappy often look for an outlet or a solution to their issues (though the former in much greater numbers, usually). If no one is bothering to post on the forums, then that’s evidence that no one cares about the game in the first place.
I see a lot of similarities here, so perhaps that’s one thing worth looking into? I’d expect some people somewhere to have done the math already on this feedback (possibly by comparing to overall sales, survey results and propagation data), though I may be overestimating the mathematical propensity of the people involved.
Regarding the stop-watching-threads thing, I’ve noticed that I pretty much always stop paying attention to a thread once I’ve gotten the information I wanted out of it, and will only come back to it if someone directly replies to one of my comments (since it shows up in the inbox). This has probably been suggested before, but maybe a “watchlist” that marks some threads so their new comments show up visibly somewhere, and/or a way to have grandchild replies to one of your own comments show up somehow, could help? I often miss it when someone replies to a reply to my comment.
In case you need assurance from the online sector. I wholeheartedly welcome any increase in the prevalence of the banhammer, and the “pay 5 karma” thing seems good too.
During that Eridu fiasco, I kept hoping a moderator would do something like “this thread is locked until Eridu taboos all those nebulous affect-laden words.”
Benevolent dictators who aren’t afraid of dissent are a huge win, IMO.
At risk of failing to JFGI: can someone quickly summarize what remaining code work we’d like done? I’ve started wading into the LW code, and am not finding it quite as impenetrable as last time, so concrete goals would be good to have.
Fair enough. All I see is the vote-counts and online comments, but the real-life commenters are of course also people, and I can understand deciding to attend more to them.
Yeah, exactly. Which is why I took it to mean a simple preference for considering the community of IRL folks. Which is not meant as a criticism; after all, I also take more seriously input from folks in my real life than folks on the internet.
Well, I don’t do that, clearly, since I don’t run such an Internet forum.
Less trivially, though… yeah, I suspect I would do so. The tendency to take more seriously people whose faces I can see is pretty strong. Especially if it were a case like this one, where what the RL people are telling me synchronizes better with what I want to do in the first place, and thus gives me a plausible-feeling justification for doing it.
I suspect you’re not really asking me what I do, though, so much as implicitly suggesting that what EY is doing is the wrong thing to do… that the admins ought to attend more to commenters and voters who are actually participating on the thread, rather than attending primarily to the folks who attend the minicamp or Alicorn’s dinner parties.
If so, I don’t think it’s that simple. Fundamentally it depends on whether LW’s sponsors want it to be a forum that demonstrates and teaches superior Internet discourse, or whether they want it to be a forum for people interested in rational thinking to discuss stuff they like to discuss. If it’s the latter, then democracy is appropriate. If it’s the former, then purging stuff that fails to demonstrate superior Internet discourse is appropriate.
LW has seemed uncertain about which role it is playing for as long as I’ve been here.
LW has seemed uncertain about which role it is playing for as long as I’ve been here.
Yes, that’s certainly the single largest problem. If the LW moderators decided on their goals for the site, and committed to a plan for achieving those goals, the meta-tedium would be significantly reduced. The way it’s currently being done, there’s too much risk of overlap between run of the mill moderation squabbles and the pernicious Eliezer Yudkowsky cult/anticult squabbles.
Sure. I can’t speak for EY, clearly, but there are many things (including what other people think) that I find myself caring about, often a lot, but I don’t think are important. This is inconsistent, I know, but I find it pretty common among humans.
Is it just me, or do others also find Eliezer coming off as a tad petulant in the way he is handling people systematically opposing and downvoting his proposal? Every time he got downvoted to oblivion he just came back with a new comment seemingly crafted to be more belligerent, whiny, condescending and cynical about the community than the last. (That’s hyperbole—in actuality it peaked in the middle somewhere.) Now we just keep getting reminded about it at every opportunity as noise in unrelated threads.
I observe that wedrifid has taken advantage of this particular opportunity to remind everyone that he thinks I am belligerent, whiny, condescending, and cynical.
(So noted because I was a bit unhappy at how the conversation suddenly got steered there.)
I observe that wedrifid has taken advantage of this particular opportunity to remind everyone that he thinks I am belligerent, whiny, condescending, and cynical.
I notice that my criticism was made specifically regarding the exhibition of those behaviors in the comments he has made about the subject he has brought up here. We can even see that I made specific links. Eliezer seems to be conflating this with a declaration that he has those features as part of his innate disposition.
By saying that wedrifid is reminding people that he (supposedly) believes Eliezer has those dispositions he also implies that wedrifid has said this previously. This is odd because I find myself to be fairly open with making criticisms of Eliezer whenever I feel them justified and from what I recall “belligerent, whiny, condescending, and cynical [about the lesswrong community]” isn’t remotely like a list of weaknesses that I actually have described Eliezer as having in general or at any particular time that I recall.
Usually when people make this kind of muddled accusation I attribute it to a failure of epistemic rationality and luminosity. Many people just aren’t able to separate in their minds a specific criticism of an action from a belief about innate traits. Dismissing Eliezer as merely being incompetent at the very skills he is renowned for would seem more insulting than simply concluding that he is being deliberately disingenuous.
So noted because I was a bit unhappy at how the conversation suddenly got steered there.
My suggestion is that Eliezer would be best served by not bringing the conversation here repeatedly. It sends all sorts of signals of incompetence. That ‘unhappy’ feeling is there to help him learn from his mistakes.
I also observe that wedrifid’s opinion of you doesn’t appear to be steered with equal expected posterior probability in light of how you react versus his predictions of your reactions.
I’m curious as to whether I’m on to something there, or whether I just pulled something random and my intuitions are wrong.
I also observe that wedrifid’s opinion of you doesn’t appear to be steered with equal expected posterior probability in light of how you react versus his predictions of your reactions.
I can’t even decipher what it is you are accusing wedrifid of here. Apart from being wrong and biased somehow.
On pain of paradox, a low probability of seeing strong evidence in one direction must be balanced by a high probability of observing weak counterevidence in the other direction.
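For reference, the rule being invoked here is conservation of expected evidence; a minimal sketch in standard notation (H for the hypothesis, E for the strong evidence):

```latex
% The prior must equal the expectation of the posterior over the possible observations:
P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E)
% If observing E would shift you strongly toward H but E is unlikely, then observing
% \neg E must shift you (only weakly) away from H, or the identity above fails.
```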
This rule did not seem respected in what little I’ve seen of interactions between you and Eliezer, and I was looking for external feedback and evidence (one way or another) for this hypothesis, to see if there is a valid body of evidence justifying the selection of this hypothesis for consideration or if that simply happened out of bias and inappropriate heuristics.
I suspect that, if the latter, then there was probably an erroneous pattern-matching to the examples given in the related blogpost on the subject (and other examples I have seen of this kind of erroneous thinking).
I don’t know how to submit this stuff for feedback and review without using a specific “accusation” or wasting a lot of time creating (and double-checking for consistency) elaborate, complex counterfactual scenarios.
I (and any other casual visitor) now have only indirect evidence regarding whether eridu’s comments were really bad or were well-meaning attempts to share feminist insights into the subject, followed by understandable frustration as everything she^Whe said was quoted out of context (if not misquoted outright) and interpreted in the worst possible way.
Agreed. I would prefer that a negative contributor be prospectively banned (that is, “prevented from posting further”) rather than retrospectively expunged (that is, “all their comments deleted from the record”), so as to avoid mutilating the record of past discussions.
For precedent, consider Wikipedia: if a contributor is found to be too much trouble (starting flamewars, edit-warring, etc.) they are banned, but their “talk page” discussion comments are not expunged. However, specific comments that are merely flaming, or which constitute harassment or the like, can be deleted.
Agreed. In this case, what I read of the discussion which included eridu indicated that they weren’t worth engaging with, but I’m actually rather impressed with what I saw of the community’s patience.
While the discussion arguably veered off-topic with respect to the original article, I don’t think we actually have a rule against that. And I don’t think eridu was actually trolling, though they do seem to have an overly-dismissive attitude towards the community. I do think there’s a place for social constructivist / radical feminist views to be aired where they apply on this site, and I don’t think eridu was doing a particularly bad job of it.
If we have a diversity of views, then people will disagree about fundamental sorts of things and we’ll end up with people thinking each other are “not even wrong” about some issues, which certainly seems downvote-worthy at the time. But we do want a diversity of views (it’s one of the primary benefits of having multiple people interacting in the first place), and so banning comments which are merely unpopular is not called-for, and will simply shunt out potential members of the community.
Of course, I’m basically guessing about your rationale in banning these comments, so if you’d like to provide some specific justification, that would be helpful.
I do think there’s a place for social constructivist / radical feminist views to be aired where they apply on this site, and I don’t think eridu was doing a particularly bad job of it.
Right now that sounds like one of the most brutal criticisms you could have made of radical feminism.
While the discussion arguably veered off-topic with respect to the original article,
I disagree. It was a perfect example of how the Worst Argument In The World (rather, an especially irritating subtype of the same) is often deployed in the field.
Thanks. I’m impressed with the story in the link, but also more convinced that he might as well be treated as a troll because he criticized someone for being a man explaining feminism to women.
Eh, that’s a relatively minor sin of argument, all things considered. It’s pretty easy to think that you’re excused from such a thing thanks to greater relative knowledge or better subcultural placement.
I’ve banned all of eridu’s recent comments (except a few voted above 0) as an interim workaround, since hiding-from-Recent-Comments and charge-fee-to-all-descendants is still in progress for preventing future threads like these.
I respectfully request that you all stop doing this, both eridu and those replying to him.
I think Eridu’s downvotes were mostly well-deserved.
I don’t think this is a good idea.
I wonder if we could solve this problem from another direction. The issue from your perspective, as I understand it, is that you want to be able to follow every interesting discussion on this site, in semi-real time, but can’t. You can’t because your only view into “all comments everywhere” is only 5 items long, so fast-moving pointless discussions drown out the stuff you’re interested in. An RSS feed presumably isn’t sufficient either, since it pushes comments as they occur and doesn’t give the community a chance to filter them.
So if I’ve reasoned all this out correctly, you’d prefer a view of all comments, sorted descending by post time and configurably tree-filtered by karma and maybe username. But we haven’t the dev resources to build that, and measures like the ones you describe are a cheap, good-enough approximation.
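To make that concrete, here is a minimal sketch of the view I have in mind (the Comment record and its field names are made up for illustration; they are not the actual LW schema or codebase):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Comment:
    id: int
    author: str
    karma: int
    posted_at: float                  # unix timestamp; a reply is always later than its parent
    parent_id: Optional[int] = None   # None for top-level comments

def recent_comments_view(comments, min_karma=0, hidden_users=()):
    """All comments, newest first; a comment below the karma threshold or by a
    hidden user is dropped along with its entire subtree of replies."""
    dropped = set()
    visible = []
    for c in sorted(comments, key=lambda c: c.posted_at):  # parents processed before children
        if c.karma < min_karma or c.author in hidden_users or c.parent_id in dropped:
            dropped.add(c.id)
        else:
            visible.append(c)
    return sorted(visible, key=lambda c: c.posted_at, reverse=True)

# e.g. recent_comments_view(all_comments, min_karma=0, hidden_users={"some_user"})
```

The existing 5-item Recent Comments widget would then just be the first five elements of that list, with the filtering doing the work the current widget can’t.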
Do I have that right?
I think it’s more than that—he also doesn’t want other people to notice the pointless discussions, so that
1) people stop fanning the flames and feeding the trolls
2) people post in the worthwhile threads, resulting in more quality there
(and I agree with this point of view)
I dislike this solution, for several reasons.
I realize that we want to get rid of trolls, and I agree that this is a worthy goal, but one single person shouldn’t be in charge of deciding who’s a troll and who isn’t.
Now that everyone knows that downvotes can cause a person to lose their ability to comment (I assume that’s what “ban” means, could be wrong though), unscrupulous community members (and we must have some, statistically speaking, as unpleasant as that thought is) can use their downvotes offensively—sort of like painting a target with a laser, allowing the Eliezer-nuke to home in.
Downvoting a comment does not always imply that the commenter is a troll. People also use downvotes to express things like “your argument is weak and unconvincing”, and “I disagree with you strongly”. We want to discourage the latter usage, and IMO we should encourage the former, but Eliezer’s new policy does nothing to achieve these goals, and in fact harms them.
If the problem is differentiating between trolls and simply weak, airy, or badly formed comments/arguments, I think the obvious simple solution would be to do what has worked elsewhere and add a “Report” or “Troll-Alert” option to bring the comment/post to the attention of moderators or send it to a community-review queue.
It certainly seems easier to control for abuse of a Report feature than to control for trolling and troll-feeding using a single linear score that doesn’t even tell you whether that −2 is just 2 * (-1) (two people think the poster is evil) or whether it’s +5 −7 (five cultists approve, seven rationalists think it’s a troll) (unless moderators can see a breakdown of this?).
Do you not see a Report button? There at least used to be one; I can’t tell, because I only see a Ban button.
There is a Report button when I view comments that are replies to my comments, or when I view private messages.
There is no Report button when I view comments normally.
Oh, you’re right! Didn’t remember that, but the inbox does have “Context” and “Report” links instead of the standard buttons.
Edit: I suppose a clever bit of scripting could probably fix it browser-side, then, but that’s a very hacky solution and there’s still value in having a built-in report button for, say, people who don’t have the script or often access lesswrong from different browsers/computers.
I do not see a Report button.
See Issue 272. The report button was removed during a past redesign, as (I gather) redesigners didn’t feel it was motivated sufficiently to bother preserving it. The issue’s been in accepted/contributions-welcome mode since Sep 2011.
Okay, if there’s no longer a Report button, I at least am willing to field PMs from people who think I should consider banning specific comments.
Nope, no report button here. Upvote/downvote on the left, Parent/Reply/Permalink on the right (+Edit/Retract when own posts).
I see no such button, FWIW.
There are several moderators, I don’t think Eliezer is the most active.
It doesn’t; “ban” just means the comment is hidden.
I agree that there are downsides; they just don’t seem that terrible.
I am aware of this, but Eliezer came off as being particularly invested in personally combating people whom he perceives as trolls.
Ah, I stand corrected then, thanks for the info.
What about the never-ending meta discussions, or are you counting on those dying down soon? Because I wouldn’t, unless the new policy is either dropped, or an extensive purge of the commentariat is carried out.
Above all:
3) Newcomers who arrive at the site see productive discussion of new ideas, not a flamewar, in the Recent Comments section.
4) Trolls are not encouraged to stay; people who troll do not receive attention-reward for it and do not have their brain reinforced to troll some more. Productive discussion is rewarded by attention.
The discussion with eridu was probably worth ending, but I saw someone say it was the best discussion of those issues they’d ever seen, and I’d said so myself independently in a location that I’ve promised not to link to.
I am very impressed with LW that we managed to make that happen.
Did you learn something useful or interesting, or were you just impressed that the discussion remained relatively civil? If the former, can you summarize what you learned?
I learned something that might turn out to be useful.
I got a bit of perspective on the extent to which I amplify my rage and distrust at SJ-related material (I had a very rough time just reading a lot of racefail)-- I’m not sure what I want to do with this, but it’s something new at my end.
The civility of the discussion is very likely to have made this possible.
I’m having trouble understanding this sentence. First, I guess SJ = “social justice” and racefail = “a famously controversial online discussion that was initially about writing fictional characters who are people of color”? But what does it mean to amplify your rage and distrust at some material? Do you mean some parts of the SJ-related materials made you angry and distrustful? Distrustful of who? Which parts made you feel that way? Why? And how did the eridu discussion help you realize the extent?
I’m curious myself. I honestly didn’t see anything useful said. (Perhaps I just took all the valid points for granted as obvious?)
That discussion sucked. I was appalled at LW when I came back after a few hours and “patriarchy”, “abuse”, etc. still hadn’t been tabooed.
You could have asked for them to be tabooed.
I did. Multiple times.
Thanks.
That’s interesting—as I recall, requests for words to be tabooed are usually at least somewhat honored.
Not in my experience.
You ask for “exist”, “true”, etc. to be tabooed, which is hard. Assuming they even try, it would take a while to wade through all the philosophical muck and actually get to something, by which point the moment has passed.
My usual response to requests for “X exists” to be tabooed is to start talking about reliably predicting future experiences E2 in a range of contexts C (as C approaches infinity) consistent with the past experiences E1 which led me to put X in my model in the first place. If someone wants to talk about E2 being reliably predictable even though X “doesn’t really exist”, it’s not in the least bit clear to me what they’re talking about.
Thanks! This is a very useful explanation / reduction / taboo.
It also sheds some light on this whole “instrumentalism” business some people here seem to really want to protect, and helped me understand it quite a bit more, I believe.
(link is just in case someone misunderstands this as an accusation of “Politics!”)
You’re welcome. I vaguely remember being involved in an earlier discussion that covered this idea at greater length, wherein I described myself as a compatibilist when it comes to instrumentalism, but the obvious google search doesn’t find it so perhaps I’m deluded.
Was it from a couple days ago?
(I found this with Wei Dai’s lesswrong_user.php script.)
Ayup, that’s the one. Thanks!
Yes. I recently described it as this:
I wholeheartedly approve of this approach. If more people used it, we would avoid the recurrent unproductive discussions of QM interpretations, qualia and such.
EDIT. Just to clarify, the part saying “put X in my model” is the essential bit to preempt the discussion of “but does it exist outside your model?” (such as this statement by our esteemed Kaj Sotala), since the latter would violate this definition of “exist”.
Oh, I very much doubt that. But I’d like to think so.
EDIT: I wrote the above before your edit, and don’t really understand your edit.
Instrumentalism is pretty unproductive when it comes to answering questions about what really exists.
Or at least unusual enough to be brushed aside as “wtf”.
I’d say that asking people to taboo “true” is very common, in certain circles outside Less Wrong. That’s why Eliezer wrote The Simple Truth.
Unfortunately, the last sensible (to me) exchange in it was around
After that the instrumentalist argument got heavily strawmanned:
It gets worse after that, until EY kills the offending in-strawman-talist with some gusto.
Upvoted entirely for “in-strawman-talist”, which I will be giggling about at unpredictable intervals for days, probably requiring me to come up with some entirely false but more easily explained answer to “What’s so funny?”.
There are lots of words that I don’t know how to taboo, because I only have a partial and largely intuitive understanding of the concepts I’m referring to by them, and can’t fully explain those concepts. Examples: “exist”, “truth”, “correct”, “right”, “moral”, “rational”, “should”, “mathematical”. I don’t think anyone has asked me directly to taboo any of these words, but if someone did, I might ignore the request because I think my time could be better spent trying to communicate with others who seem to already share my understandings of these words.
In the case of “exist”, I think that something exists implies that I can care about it and not be irrational. (“care about”: for example, have a term for it in my utility function) This seems to at least capture a large part of what I mean when I say something exists, but I’m not sure if “exists” just means (something like) the correct decision theory allows a utility function to have a term for something, or if existence is somehow more fundamental than that and our ability to rationally care about something derives from its existence in that more fundamental sense. Does this make sense?
ETA: See also this relevant post.
Well, apparently TheOtherDave is bold enough to give a meaningful definition of “exist”. Would you agree with it? If not, what would be a counterexample?
I disagree with it because an agent (such as one using UDT) does not necessarily have memory and the associated concepts of “future experiences” and “past experiences”, but “exist” still seems meaningful even for such an agent.
Would you say that when I say “X exists,” and an agent A without memory says “X exists,” that I and A are likely expressing the same belief about X?
I confess that I cannot make sense of this without learning more about UDT and your definition of agency. I thought this definition was more basic and independent of the decision theory models one adopts.
Would you be satisfied if I tabooed “Fs exist” as “The set of all Fs is non-empty”?
I dislike fake formalizations. TheOtherDave’s approach makes a lot more sense to me.
Well, it would, given that you’re an instrumentalist. Since I’m not an instrumentalist, TheOtherDave’s suggestion (in so far as I understand it) clearly differs from what I mean when I talk about existence. Surely you wouldn’t maintain that the only possible tabooings of “existence” are instrumentalist-friendly ones.
But why do you think my formulation is a “fake formalization”? It captures what I mean by existence pretty well, I think. Is the worry that I haven’t provided an empirical criterion for existence?
Awesome! I love clear differences.
Can you give me an example of some thing that exists, for which my proposed tabooing of “existence” doesn’t apply? Or, conversely, of something for which my proposed tabooing applies, but which doesn’t exist?
With the caveat that I might not fully understand your proposed tabooing, here’s my concern with it. There are models which are empirically equivalent, yet disagree on the furniture of the world. As far as I can see, your tabooing, with its emphasis on predictive success, cannot distinguish between the ontological claims made by these models. I think one can. For instance, even if two theories make identical predictions, I would say the right move would be adopt the ontology of the simpler of the two.
Perhaps I can expand on my proposed tabooing. Instead of just “The set of Fs is non-empty”, make it “The set of Fs is non-empty according to our best physical theory”, where the “best physical theory” is determined not just by empirical success but by extra-empirical virtues such as simplicity.
Wrt your revised tabooing… that has the odd property that entities come into existence and cease existing as our physical theories change. I guess I’m OK with that… e.g., if you really want to say that quarks didn’t exist in 1492, but that quarks in 1492 now existed, I won’t argue, but it does seem like an odd way to talk.
Wrt your concern… hrm. Let me try to be more specific.
So, I have two empirically equivalent models M1 and M2, which make different ontological claims but predict the same experiences in a range of contexts C (as C approaches infinity). Let us say that M1 asserts the existence of X, and M2 asserts instead the existence of Y, and X is simpler than Y. I also have a set of experiences E1, on the basis of which I adopt M1 as my model (for several reasons, including the fact that my experiences have led me to prefer simpler models). Based on this, I predict that my future experiences E2 will be consistent with the past experiences E1 which led me to put X in my model in the first place, which include the experiences that led me to endorse Occam’s Razor. If that prediction proves false—that is, if I have experiences that are inconsistent with that—I should reduce my confidence in the existence of X. If it proves true—that is, I have no experiences that are inconsistent with that—I should remain confident.
Is that example consistent with your understanding of how my proposed tabooing works?
If so, can you say more about your concern? Because it seems to me I am perfectly able to distinguish between M1 and M2 (and choose M1, insofar as I embrace Occam’s Razor) with this understanding of existence.
The tabooing is not supposed to be an analysis of what makes things exist; it is an analysis of when we are justified in believing something exists. It’s a criterion for ontological commitment, not ontology. I took it that this was what your tabooing was supposed to convey as well, since surely there can be things that exist that don’t feature in our models. Or maybe you don’t think so?
To get an actual criterion of ontology rather than just a criterion of ontological commitment, replace “our best physical theory” with “the best physical theory”, which may be one that nobody ever discovers.
Ah, I see. This makes your view more congenial to me. Although it still depends on what you mean by consistent. If one of my future experiences is the discovery of an even simpler empirically adequate theory, then presumably you would say that that experience is in some sense inconsistent with E1? If yes, then I don’t think there is much of a difference between your proposal and mine.
I understood the point to be to replace the phrase “X exists” with an expression of what we’re trying to convey about the world when we say “X exists.” Which might conceivably be identical to what we’re trying to convey about the world when we say “I’m justified in believing X exists”, depending on what we want to say about when a belief is justified, but if we allow for things that happen to be true but are nevertheless not justified beliefs (which I do) then they aren’t identical.
But, sure, if we’re talking about epistemology rather than ontology, then my objection about quarks is irrelevant.
If E2 includes experiences (such as that theory) that lead you to reject the model E1 led you to embrace, then yes, I would say E2 and E1 are inconsistent. (In the sense that they require that the world be two mutually exclusive ways. I’m not really sure what other sense of “inconsistent” there is.)
All right.
What does “The set of all Fs is non-empty” mean? Surely it means “There exists at least one F”, and we are back to what “exist” means. So your definition does not taboo “exist”; it just rewords it without adding anything to the understanding of the issue.
Usually it’s just a postulate. I’ve yet to come across a different definition that is not a simple rewording or obfuscation. I would be very interested in seeing something non-instrumentalist that is.
If you click on the recent comments link you get a longer view.
Bravo. I have no idea whether that was someone pretending to be ignorant and toxic for the purpose of discrediting a group he was impersonating or whether it was sincere (and ignorant and toxic). Fortunately I don’t need to know and don’t care either way. Good riddance!
Is it just me, or do others also find Eliezer coming off as a tad petulant in the way he is handling people systematically opposing and downvoting his proposal? Every time he got downvoted to oblivion he just came back with a new comment seemingly crafted to be more belligerent, whiny, condescending and cynical about the community than the last. (That’s hyperbole—in actuality it peaked in the middle somewhere.) Now we just keep getting reminded about it at every opportunity as noise in unrelated threads.
It’s not just you.
I’m starting to think there should be community-elected moderators or something, and Eliezer should stop being allowed to suggest things.
Mostly he’s coming across to me as having lost patience with the community not being what he wants it to be, and having decided that he can fix that by changing the infrastructure, and not granting much importance to the fact that more people express disapproval of this than approval.
Keep in mind that it’s not “more people”, it’s more “people who participate in meta threads on Less Wrong”. I’ve observed a tremendous divergence between the latter set and “what LWers seem to think during real-life conversations” (e.g. July Minicamp private discussions of LW, which is where the anti-troll-thread ideas were discussed, and asking what people thought about recent changes at Alicorn’s most recent dinner party). I’m guessing there’s some sort of effect where only people who disagree bother to keep looking at the thread, hence bother to comment.
Some “people” were claiming that we ought to fix things by moderation instead of making code changes, which does seem worth trying; so I’ve said to Alicorn to open fire with all weapons free, and am trying this myself while code work is indefinitely in progress. I confess I did anticipate that this would also be downvoted even though IIRC the request to do that was upvoted last time, because at this point I’ve formed the generalization “all moderator actions are downvoted”, either because only some people participate in meta threads, and/or the much more horrifying hypothesis “everyone who doesn’t like the status quo has already stopped regularly checking LessWrong”.
I’m diligently continuing to accept feedback from RL contact and attending carefully to this non-filtered source of impressions and suggestions, but I’m afraid I’ve pretty much written-off trying to figure out what the community-as-a-whole wants by looking at “the set of people who vigorously participate in meta discussions on LW” because it’s so much unlike the reactions I got when ideas for improving LW were being discussed at the July Minicamp, or the distribution of opinions at Alicorn’s last dinner party, and I presume that any other unfiltered source of reactions would find this conversation similarly unrepresentative.
Let me see if I understand you correctly: if someone cares about how Less Wrong is run, what they should do is not comment on Less Wrong—least of all in discussions on Less Wrong about how Less Wrong is run (“meta threads”). Instead, what they should do is move to California and start attending Alicorn’s dinner parties.
Have I got that right?
That’s how politics usually works, yes.
Can we call this the social availability heuristic?
Also, you have to attend dinner parties on a day when Eliezer is invited and doesn’t decline due to being on a weird diet that week.
Don’t worry, I’m sure that venue’s attendees are selected neutrally.
All you have to do is run into me in any venue whatsoever where the attendees weren’t filtered by their interest in meta threads. :)
But now that you’ve stated this, you have the ability to rationalize any future IRL meta discussion...
Can “Direct email, skype or text-chat communications to E.Y.” count as a venue? Purely out of curiosity.
The problem is that if you initiate it, it’s subject to the Loss Aversion effect where the dissatisfied speak up in much greater numbers.
I don’t see what this has to do with “loss aversion” (the phenomenon where people think losing a dollar is worse than failing to gain a dollar they could have gained), though that’s of course a tangential matter.
The point here is—and I say this with all due respect—it looks to me like you’re rationalizing a decision made for other reasons. What’s really going on here, it seems to me, is that, since you’re lucky enough to be part of a physical community of “similar” people (in which, of course, you happen to have high status), your brain thinks they are the ones who “really matter”—as opposed to abstract characters on the internet who weren’t part of the ancestral environment (and who never fail to critique you whenever they can).
That doesn’t change the fact that this is an online community, and as such, is for us abstract characters, not your real-life dinner companions. You should be taking advice from the latter about running this site to about the same extent that Alicorn should be taking advice from this site about how to run her dinner parties.
Do you have advice on how to run my dinner parties?
Vaniver and DaFranker have both offered sensible, practical, down-to-earth advice. I, on the other hand, have one word for you: Airship.
Not plastics?
Consider eating Roman-style to increase the intimacy / as a novel experience. Unfortunately, this is made way easier with specialized furniture, but you should be able to improvise with pillows. As well, it is a radically different way to eat that predates the invention of the fork (and so will work fine with hands or chopsticks, but not modern implements).
Consider seating logistics, and experiment with having different people decide who sits where (or next to whom). Dinner parties tend to turn out differently with different arrangements, but different subcultures will have different algorithms for establishing optimal seating, so the experimentation is usually necessary (and having different people decide serves both as a form of blinding and as a way to turn up evidence to isolate the algorithm faster).
Huh, I haven’t been assigning seats at all except for reserving the one with easiest kitchen access for myself. I’ve just been herding people towards the dining table.
Was Eliezer “lucky” to have cofounded the Singularity Institute and Overcoming Bias? “Lucky” to have written the Sequences? “Lucky” to have founded LessWrong? “Lucky” to have found kindred minds, both online and in meatspace? Does he just “happen” to be among them?
Or has he, rather, searched them out and created communities for them to come together?
The online community of LessWrong does not own LessWrong. EY owns LessWrong, or some combination of EY, the SI, and whatever small number of other people they choose to share the running of the place with. To a limited extent it is for us, but its governance is not at all by us, and it wouldn’t be LessWrong if it was. The system of government here is enlightened absolutism.
The causes of his being in such a happy situation (is that better?) were clearly not the point here, and, quite frankly, I think you knew that.
But if you insist on an answer to this irrelevant rhetorical question, the answer is yes. Eliezer_2012 is indeed quite fortunate to have been preceded by all those previous Eliezers who did those things.
Then, like I implied, he should just admit to making a decision on the basis of his own personal preference (if indeed that’s what’s going on), instead of constructing a rationalization about the opinions of offline folks being somehow more important or “appropriately” filtered.
I would replace “preference” with “hypothesis of what constitutes the optimal rationality-refining community”.
They are sensibly the same, but I find the latter to be a more useful reduction that is more open to being refined in turn.
Eliezer only got to be Eliezer_2012 by doing all those things. Now, maybe Eliezer_201209120 did wake up this morning, as every morning, and think, “how extraordinarily, astoundingly lucky I am to be me!”, and there is some point to that thought—but not one that is relevant to this conversation.
It is tautologically his preference. I see no reason to think he is being dishonest in his stated reasons for that preference.
I’m afraid the above comment does not contribute any additional information to this discussion, and so I have downvoted it accordingly. Any substantive reply would consist of the repetition of points already made.
You’re welcome.
This is a community blog. If your community has a dictator, you should overthrow him.
Is the overthrowing of dictators a terminal value to you, or is it that you associate it with good consequences?
A little of both. Freedom is a terminal value, and heuristically dictators cause bad consequences.
My own view: Dictators in countries tend to cause bad consequences. Dictators in forums tend to cause good consequences.
Do you have any evidence for that? In my experience, it all depends on the dictator, not on the venue.
It’s easier to leave a forum than a country. Forum-dictators who abuse their power end up with empty forums.
Real world dictators who abuse their power often end up dead. (But perhaps not as much as real world dictators who do not abuse their power enough to secure it.)
Not as often as you seem to think.
Perhaps I misunderstood what ArisKatsaris was saying. I thought he meant something like this:
If this is true, your objection is somewhat tangential to the topic (though an empty forum is less desirable than an active one). But perhaps he meant something else?
Since it’s easier to leave, a dictator in a forum has more motivation not to abuse his power.
Just my own personal experience of how moderated vs non-moderated forums tend to go, and as for countries, likewise my impression of what countries seem nice to live in.
You’re probably right about modern countries; however, as far as I understand, historically some countries did reasonably well under a dictatorship. Life under Hammurabi was far from being all peaches and cream, but it was still relatively prosperous, compared to the surrounding nations. A few Caesars did a pretty good job of administering Rome; of course, their successors royally screwed the whole thing up. Likewise, life in Tzarist Russia went through its ups and downs (mostly downs, to be fair).
Unfortunately, the kind of a person who seeks (and is able to achieve) absolute power is usually exactly the kind of person who should be kept away from power if at all possible. I’ve seen this happen in forums, where the unofficial grounds for banning a user inevitably devolve into “he doesn’t agree with me”, and “I don’t like his face, virtually speaking”.
“Dictators” in forums can’t kill people or hold them hostage.
Right, but that doesn’t mean they tend to be beneficial, either. We’re not arguing over which dictator is the worst, but whether dictators in forums are diametrically opposed to their real-world cousins.
I’d like to point out that Overcoming Bias, back in the day, was a dictatorship: Robin and Eliezer were explicitly in total control. Whereas Less Wrong was explicitly set up to be community-moderated, with voting taking the place of moderator censorship. And the general consensus has always been that LW was an improvement over OB.
Freedom is never a terminal value. If you dig a bit, you should be able to explain why freedom is important/essential in particular circumstances.
Ironically, the appearance of freedom can be a default terminal value for humans and some other animals, if you take evolutionary psychology seriously. Or, to be more accurate, the appearance of absence of imposed restrictions can be a default terminal value that receives positive reinforcement cookies in the brain of humans and some other animals. Claustrophobia seems to be a particular subset of this that automates the jump from certain types of restrictions through the whole mental process that leads to panic-mode.
The abstract concept of freedom and its reality referent pattern, however, would be extremely unlikely to end up as a terminal value, if only even for its sheer mathematical complexity.
I agree with this.
I’d be cautious about saying something’s never a terminal value. Given my model of the EEA, it wouldn’t be terribly surprising to me if some set of people did have poor reactions to certain types of external constraint independently of their physical consequences, though “freedom” and its various antonyms seem too broad to capture the way I’d expect this to work.
Someone’s probably studied this, although I can’t dig up anything offhand.
I take back the “never” part, it is way too strong. What I meant to say is that the probability of someone proclaiming that freedom is her terminal value not having dug deep enough to find her true terminal values is extremely high.
That seems reasonable. Especially given how often freedom gets used as an applause light.
Yes, I was commenting on this at the same time. The mental perception of restrictions, or the mental perception of absence of restrictions, can become a direct brainwired value through evolution, and is a simple enough step from other things already in there AFAICT. Freedom itself, however, independent of perception/observation and as a pattern of real interactions and decision choices and so on, seems far too complex to be something the brain would just randomly stumble upon in one go, especially only in some humans and not others.
I agree that freedom is an instrumental value. I disagree that it is never a terminal value. It is constitutive of the good life.
See if you can replace “freedom” with its substance, and then evaluate whether that substance is something the human brain would be likely to just happen to, once in a while, find as a terminal, worth-in-itself value for some humans but not others, considering the complexity of this substance.
Yes, the mental node/label “freedom” can become a terminal value (a single mental node is certainly simple enough for evolution to stumble upon once in a while), but that’s directly related to a perception of absence of constraints or restrictions within a situation or context.
I don’t see what you’re getting at here. All terminal values are agent-specific.
More complex values will not spontaneously form as terminal, built-in-brain values for animals that came into being through evolution. Evolution just doesn’t do that. Humans don’t rewire their brains and don’t reach into the Great Void of Light from the Beyond to randomly pick their terminal values.
Basically, the systematic absence of conceptual incentives and punishment-threats organized so as to funnel the possible decisions of a mind or set of minds towards a specific subset of possible actions (this is a simplified reduction of “freedom” which is still full of giant paintbrush handles) is not something a human mind would just accidentally happen to form a terminal value around (barring astronomical odds on the order of sun-explodes-next-second) without first developing terminal values around punishment-threats (which not all humans have, if any), decision tree sizes, and various other components of the very complex pattern we call “lack of freedom” (because lack of freedom is much easier to describe than freedom, and freedom is the absence or diminution of lack(s) of freedom).
I don’t see any evidence that a sufficient number of humans happen to have most of the prerequisite terminal values for there to be any specimen which has this complex construct as a terminal value.
As I said in a different comment, though, it’s very possible (and very likely) that the lighting-up of the mental node for freedom could be a terminal value, which feels from inside like freedom itself is a terminal value. However, the terminal value is really just the perception of things that light up the “freedom!” mental node, not the concept of freedom itself.
Once you try to describe “freedom” in terms that a program or algorithm could understand, you realize that it becomes extremely difficult for the program to even know whether there is freedom in something or not, and that it is an abstraction of multiple levels interacting at multiple scales in complex ways far, far above the building blocks of matter and reality, and which requires values and algorithms for a lot of other things. You can value the output of this computation as a terminal value, but not the whole “freedom” business.
A very clever person might be capable of tricking their own brain by abusing an already built-in terminal value on a freedom mental-node by hacking in safety-checks that will force them to shut up and multiply, using best possible algorithms to evaluate “real” freedom-or-no-freedom, and then light up the mental node based on that, but it would require lots of training and mind-hacking.
Hence, I maintain that it’s extremely unlikely that someone really has freedom itself as a terminal value, rather than feeling from inside like they value freedom. A bit of Bayes suggests I shouldn’t even pay attention to it in the space of possible hypotheses, because of the sheer number of values that get false positives as being terminal due to feeling as such from inside, versus the number of known terminal values that have such a high level of complexity and interconnections between many patterns, reality-referents, indirect valuations, etc.
“Lack of freedom” can’t be significantly easier to describe than freedom—they differ by at most one bit.
No opinion on whether the mental node representing “freedom” or actual freedom is valued—that seems to suffer/benefit from all of the same issues as any other terminal value representing reality.
If someone tries to manacle me in a dungeon, I will perform great violence upon that person. I will give up food, water, shelter, and sleep to avoid it. I will sell prized possessions or great works of art if necessary to buy weapons to attack that person. I can’t think of a better way to describe what a terminal value feels like.
Manacling you in a dungeon also triggers your mental node for freedom and also triggers the appearance of restrictions and constraints, all the more so since you are the direct subject yourself. It lacks a control group and feels like a confirmation-biased experiment.
If I simply told you (and you have easy means of confirming that I’m telling the truth) that I’m restricting the movements of a dozen people you’ve never heard of, and the restriction of freedom is done in such a way that the “victims” will never even be aware that their freedoms are being restricted (e.g. giving a mental imperative to spend eight hours a day in a certain room with a denial-of-denial clause for it), would you still have the same intense this-is-wrong terminal value for no other reason than that their freedom is taken from them in some manner?
If so, why are employment contracts not making you panic in a constant stream of negative utility? Or compulsory education? Or prison? Or any other form of freedom reduction which you might not consider to be about “freedom” but which certainly fits most reductions of it?
Yes, I meant “freedom for me”—I thought that was implied.
I would not want to be one of those people. If you convincingly told me that I was one of those people, I’d try to get out of it. If I was concerned about those people and thought they also valued freedom, I’d try to help them.
My employment can be terminated at will by either party. There are some oppressive labor laws that make this less the case, but they mostly favor me and neither myself nor my employer is going to call on them. What’s an “employment contract” and why would I want one?
Compulsory education is horrible. It’s profoundly illiberal and I believe it’s a violation of the constitutional amendment against slavery. I will not send my children to school and “over my dead body” is my response to anyone who intends to take them. I try to convince my friends not to send their children to school either.
I don’t intend to go to prison and would fight to avoid it. If my friends were in prison, I’d do what I could to get them out.
...therefore, if you are never aware of your own lack of freedom, you do not assign value to this. Which loops around back to the appearance of freedom being your true value. This would be the most uncharitable interpretation.
It seems, however, that in general you will be taking the course of action which maximizes the visible freedom that you can perceive, rather than a course of action you know to be optimized in general for widescale freedom. It seems more like a cognitive alert to certain triggers, and a high value being placed on not triggering this particular alert, than valuing the principles.
Edit: Also, thanks for indulging my curiosity and for all your replies on this topic.
Would you sell possessions to buy weapons to attack a person who runs an online voluntary community and changes the rules without consulting anyone?
If the two situations are comparable, I think it’s important to know exactly why.
Also note that manacling you to a dungeon isn’t just eliminating your ability freely choose things arbitrarily, it’s preventing you from having satisfying relationships, access to good food, meaningful life’s work and other pleasures. Would you mind being in a prison that enabled you to do those things?
Yes. If this were many years ago and I weren’t so conversant on the massive differences between the ways different humans see the world, I’d be very confused that you even had to ask that question.
No. There are other options. At the moment I’m still vainly hoping that Eliezer will see reason. I’m strongly considering just dropping out.
I feel like asking this question is wrong, but I want the information:
If I know that letting you have freedom will be hurtful (like, say, I tell you you’re going to get run over by a train, and you tell me you won’t, but I know that you’re in denial-of-denial and subconsciously seeking to walk on train tracks, and my only way to prevent your death is to manacle you in a dungeon for a few days), would you still consider the freedom terminally important? More important than the hurt? Which other values can be traded off? Would it be possible to figure out an exchange rate with enough analysis and experiment?
Regarding this, what if I told you “Earth was a giant prison all along. We just didn’t know. Also, no one built the prison, and no one is actively working to keep us in here—there never was a jailor in the first place, we were just born inside the prison cell. We’re just incapable of taking off the manacles on our own, since we’re already manacled.”? In fact, I do tell you this. It’s pretty much true that we’ve been prisoners of many, many things. Is your freedom node only triggered at the start of imprisonment, the taking away of a freedom once had? What if someone is born in the prison Raemon proposes? Is it still inherently wrong? Is it inherently wrong that we are stuck on Earth? If no, would it become inherently wrong if you knew that someone is deliberately keeping us here on Earth by actively preventing us from learning how to escape Earth?
The key point being: What is the key principle that triggers your “Freedom” light? The causal action that removes freedoms? The intentions behind the constraints?
It seems logical to me to assume that if you have freedom as a terminal value, then being able to do anything, anywhere, be anything, anyhow, anywhen, control time and space and the whole universe at will better than any god, without any possible restrictions or limitations of any kind, should be the Ultimately Most Supremely Good maximal possible utility optimization, and therefore reality and physics would be your worst possible Enemy, seeing as how it is currently the strongest Jailer that restricts and constrains you the most. I’m quite aware that this is hyperbole and most likely a strawman, but it is, to me, the only plausible prediction for a terminal value of yourself being free.
This should answer most of the questions above. Yes, the universe is terrible. It would be much better if the universe were optimized for my freedom.
All values are fungible. The exchange rate is not easily inspected, and thought experiments are probably no good for figuring them out.
You’re right, this does answer most of my questions. I had made incorrect assumptions about what you would consider optimal.
After updates based on this, it now appears much more likely to me that you use terminal valuation of your freedom node such that it gets triggered by more rational algorithms that really do attempt to detect restrictions and constraints, in more than a mere feeling-of-control manner. Is this closer to how you would describe your value?
I’m still having trouble with the idea of considering a universe optimized for one’s own personal freedom as the best thing (I tend by default to think about how to optimize for the collective summed utilities of sets of minds, rather than of one). It is not what I expected.
“freedom as a terminal value” != “freedom as the only terminal value”
True, and I don’t quite see where I implied this. If you’re referring to the optimal universe question, it seems quite trivial that if the universe literally acts according to your every will with no restrictions whatsoever, any other terminal values will instantly be fulfilled to their absolute maximal states (including unbounded values that can increase to infinity) along with adjustment of their referents (if that’s even relevant anymore).
No compromise is needed, since you’re free from the laws of logic and physics and whatever else might prevent you from tiling the entire universe with paperclips AND tiling the entire universe with giant copies of Eliezer’s mind.
So if that sort of freedom is a terminal value, this counterfactual universe trivially becomes the optimal target, since it’s basically whatever you would find to be your optimal universe regardless of any restrictions.
Sometimes freedom is a bother, and sometimes it’s a way to die quickly, and sometimes it’s essential for survival and that “good life” of yours (depending on what you mean by it). You can certainly come up with plenty of examples of each. I recommend you do before pronouncing that freedom is a terminal value for you.
With the caveats:
If the dictator isn’t particularly noticed to be behaving in that kind of way, it is probably not worth enforcing the principle. I.e. it is fine for people to have the absolute power to do whatever they want regardless of the will of the people, as long as they don’t actually use it. A similar principle would also apply if the President of the United States started issuing pardons for whatever he damn well pleased. If US television informs me correctly (and it may not), then he is technically allowed to do so, but I don’t imagine that power would remain if it were used frequently for his own ends. (And I doubt the reaction against excessive abuse of power would be limited to just not voting for him again.)
The ‘should’ is weak. I.e. it applies all else being equal, but with a huge “if it is convenient to do so and you haven’t got something else you’d rather do with your time” implied.
Agreed. With the caveat that I think all ’should’s are that weak.
“If you see someone about to die and can save them, you should.”
Now, you might agree or disagree with this. But “If you see someone about to die and can save them, you should, if it is convenient to do so and you haven’t got something else you’d rather do with your time” seems more like disagreement to me.
I don’t think so. I agree with that statement, with the same caveats. If there are also 100 people about to die and I can save them instead, I should probably do so. I suppose it depends how morally-informed you think “something else you’d rather do with your time” is supposed to be.
But Eliezer Yudkowsky, too, is subject to the loss aversion effect. Just as those dissatisfied with changes overweight change’s negative consequences, so does Eliezer Yudkowsky overweight his dissatisfaction with changes initiated by the “community.” (For example, increased tolerance of responding to “trolling.”)
Moreover, if you discount the result of votes on rules, why do you assume votes on other matters are more rational? The “community” uses votes on substantive postings to discern a group consensus. These votes are subject to the same misdirection through loss aversion as are procedural issues. If the community has taken a mistaken philosophical or scientific position, people who agree with that position will be biased to vote down postings that challenge that position, a change away from a favored position being a loss. (Those who agree with the newly espoused position will be less energized, since they weight their potential gain less than their opponents weigh their potential loss.)
If you think “voting” is so highly distorted that it fails to represent opinion, you should probably abolish it entirely.
True. For that to be an effective communication channel, there would need to be a control group. As for how to create that control group or run any sort of blind (let alone double-blind) testing… yeah, I have no idea. Definitely a problem.
ETA: By “I have no idea”, I mean “Let me find my five-minute clock and I’ll get back to you on this if anything comes up”.
So I thought for five minutes, then looked at what’s been done in other websites before.
The best I have is monthly surveys with randomized questions drawn from a pool of things that matter for LessWrong (according to the current or then-current staff, I would presume), plus a few community suggestions, and then possibly a later implementation of a weighting algorithm that applies diminishing returns when multiple users with similar thread participation (e.g. two people who always post in the same threads) give similar feedback.
The second part is full of holes and horribly prone to “Death by Poking With Stick”, but an ideal implementation of this seems like it would get a lot more quality feedback than what little gets through low-bandwidth in-person conversations.
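To make the diminishing-returns part concrete, here’s a minimal sketch of how such a weighting could work, assuming survey responses are tagged with the threads each respondent posted in; none of this corresponds to actual LessWrong code, and all names and the overlap penalty are made up for illustration:

```python
# Purely illustrative sketch of the "diminishing returns" weighting idea above.
# Nothing here corresponds to actual LessWrong code; the thread-overlap penalty
# and all names are assumptions made up for the example.

def jaccard(a, b):
    """Overlap between two users' sets of thread ids."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def response_weights(participation):
    """participation: dict mapping user -> set of thread ids they posted in.

    Returns a weight per user that shrinks as their participation overlaps
    with other respondents', so a tight clique of co-posters counts for less
    than the same number of independent users.
    """
    weights = {}
    for user, threads in participation.items():
        overlap = sum(
            jaccard(threads, other_threads)
            for other_user, other_threads in participation.items()
            if other_user != user
        )
        # 1.0 for a user with no overlap; approaches 0 as overlap grows.
        weights[user] = 1.0 / (1.0 + overlap)
    return weights

if __name__ == "__main__":
    sample = {
        "user_a": {"thread1", "thread2"},
        "user_b": {"thread1", "thread2"},  # same threads as user_a: both downweighted
        "user_c": {"thread3"},             # independent participation: full weight
    }
    print(response_weights(sample))
```

The exact penalty function doesn’t matter much; the point is just that a clique of users who always post together shouldn’t count as that many independent data points.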
There are other, less practical (but possibly more accurate) alternatives, of course. Like picking random LW users every so often, appearing at their front door, giving them a brain-scan headset (e.g. an Emotiv Epoc), and having them wear the headset while being on LW so you can collect tons of data.
I’d stick with live feedback and simple surveys to begin with.
I’ve moderated a few forums before, and with that experience in mind I’d have to agree that there’s a huge, and generally hugely negative, selection bias at play in online response to moderator decisions. It’d be foolish to take those responses as representative of the entire userbase, and I’ve seen more than one forum suffer as a result of such a misconception.
That being said, though, I think it’s risky to write off online user feedback in favor of physical. The people you encounter privately are just as much a filtered set as those who post feedback here, though the filters point in different directions: you’re selecting people involved in the LW interpersonal community, for one thing, which filters out new and casual users right off the bat, and since they’re probably more likely to be personally friendly to you we can also expect affect heuristics to come into play. Skepticism toward certain LW norms may also be selected against, which could lead people to favor new policies reinforcing those norms. Moreover, I’ve noticed a trend in the Bay Area group—not necessarily an irrational one, but a noticeable one—toward treating the online community as low-quality relative to local groups, which we might expect to translate into antipathy towards its status quo.
I don’t know what the weightings should be, but if you’re looking for a representative measure of user preferences I think it’d be wise to take both groups into account to some extent.
I will be starting another Less Wrong Census/Survey in about three weeks; in accordance with the tradition I will first start a thread asking for question ideas. If you can think of a good list of opinions you want polled in the next few weeks, consider posting them there and I’ll stick them in.
You… know I don’t optimize dinner parties as focus groups, right? The people who showed up that night were people who like chili (I had to swap in backup guests for some people who don’t) and who hadn’t been over too recently. A couple of the attendees from that party barely even post on LW.
Perhaps more importantly, dinner parties are optimised for status and social comfort. Actually giving honest feedback rather than guessing passwords would be a gross faux pas.
Getting feedback at dinner parties is a good way to optimise the social experience of getting feedback and translate one’s own status into the agreement of others.
FWIW, I eat chili but I don’t think the strongest of the proposed anti-troll measures are a good idea.
If I were to guess, I’d guess that the main filter criterion for your dinner parties is geographical; when you have a dinner party in the Bay Area, you invite people who can be reasonably expected to be in the Bay Area. This is not entirely independent of viewpoint—memes which are more common local to the Bay Area will be magnified in such a group—but the effect of that filter on moderation viewpoints is probably pretty random (similarly, the effect of the ‘people who like chili’ filter on moderation viewpoints is probably also pretty random).
So the dinner party filter exists, but it is less likely to pertain to the issue at hand than the online self-selection filter.
The problem with the dinner party filter is not that it is too strong, but that it is too weak: it will for example let through people who aren’t even regular users of the site.
That’s kinda the point.
That’s fair, and your strategy makes sense. I also agree with DaFranker, below, regarding meta-threads.
This said, however, at the time when I joined Less Wrong, my model of the site was something like, “a place where smart people hold well-reasoned discussions on a wide range of interesting topics” (*). TheOtherDave’s comment, in conjunction with yours, paints a different picture of what you’d like Less Wrong to be; let’s call it Less Wrong 2.0. It’s something akin to, “a place where Eliezer and a few of his real-life friends give lectures on topics they think are important, with Q&A afterwards”.
Both models have merit, IMO, but I probably wouldn’t have joined Less Wrong 2.0. I don’t mean that as any kind of an indictment; if I were in your shoes, I would definitely want to exclude people like this Bugmaster guy from Less Wrong 2.0, as well.
Still, hopefully this one data point was useful in some way; if not, please downvote me !
(*) It is possible this model was rather naive.
EY has always seemed to me to want LW to be a mechanism for “raising the sanity waterline”. To the extent that wide-ranging discussion leads to that, I’d expect him to endorse it; to the extent that wide-ranging discussion leads away from that, I’d expect him to reject it. This ought not be a surprise.
Nor ought it be surprising that much of the discussion here does not noticeably progress this goal.
That said, there does seem to be a certain amount of non-apple selling going on here; I don’t think there’s a cogent model of what activity on LW would raise the sanity waterline, so attention is focused instead on trying to eliminate the more blatant failures: troll-baiting, for example, or repetitive meta-threads.
Which is not a criticism; it is what it is. If I don’t know the cause, that’s no reason not to treat the symptoms.
No; you’re conflating “Eliezer considers he should have the last word on moderation policy” and “Eliezer considers LessWrong’s content should be mostly about what he has to say”.
The changes of policy Eliezer is pushing have no effect on the “main” content of the site, i.e. posts that are well-received, and upvoted. The only disagreement seems to be about sprawling threads and reactions to problem users. I don’t know where you’re getting “Eliezer and a few of his real-life friends give lectures on topics they think are important” out of that, it’s not as if Eliezer has been posting many “lectures” recently.
I was under the impression that Eliezer agreed with TheOtherDave’s comment upthread:
Combined with Eliezer’s rather aggressive approach to moderation (f.ex. deleting downvoted comments outright), this did create the impression that Eliezer wants to restrict LessWrong’s content to a narrow list of specific topics.
I very much appreciate the attempts at greater moderation, including the troll penalty. Thank you.
Me too. Troll posts and really wrong people are too distracting without some form of intervention. Not sure the current solution is optimal (but this point has been extensively argued elsewhere), but I applaud the effort to actually stick one’s neck out and try something.
Thank you both. Very much, and sincerely.
Accepting thanks with sincerity, while somewhat-flippantly mostly-disregarding complaints? …I must be missing some hidden justification?
People who agree are more likely to keep quiet than people who disagree. Rewarding them for speaking up reduces that effect, which means comments get closer to accurately representing consensus.
Can you summarize your reasons for believing that people who agree are more likely to keep quiet than people who disagree?
It’s the impression I’ve got from informal observation, and it’s true when talking about myself specifically. (If I disagree, I presumably have something to say that has not yet been said. If I agree, that’s less likely to be true. I don’t know if that’s the whole reason, but it feels like a substantial part of it.)
http://lesswrong.com/lw/3h/why_our_kind_cant_cooperate/ provides an anecdote, and suggests that Eliezer has also gotten the same impression.
I certainly agree with your last sentence.
My own experience is that while people are more likely to express immediate disagreement than agreement in contexts where disagreement is expressed at all, they are also more likely to express disagreement with expressed disagreement in such forums, from which agreement can be inferred (much as I can infer your agreement with EY’s behavior from your disagreement with Will_Newsome). The idea that they are more likely to keep quiet in general, or that people are more likely to anonymously downvote what they disagree with than upvote what they agree with, doesn’t jibe with my experience.
And in contexts where disagreement is not expressed, I find the Asch results align pretty well with my informal expectations of group behavior.
I admit that I hadn’t considered this mechanism. I have no gut feeling for whether it’s true or not, but it sounds plausible.
Do you doubt that content people whine less?
No, I don’t doubt that content people whine less.
Then I do not understand your request for further explanations.
I am confused by your confusion. The claim wasn’t that content people whine less, it was that they’re more likely to keep quiet. The only way I can make sense of your comments is if you’re equating the two—that is, if you assume that the only options are “keep quiet” or “whine”—but that seems an uncharitable reading. Still, if that is what you mean, I simply disagree.
Yeah, I phrased it quite poorly. Should have been “speak up less”. The point I was (unsuccessfully) making is that both groups have an option of acting (expensive) or not acting (cheap). Acting is what people generally do when they want to change the current state of the world, and non-acting when they are happy with it. Thus any expensive reaction is skewed toward negative. I should probably look up some sources on that, but I will just tap out instead, due to rapidly waning interest.
He is thanking them for their support, not their information.
Sometimes AKA the “Forum Whiners” effect, well known in the PC games domain:
When new PC games are released, almost inevitably the main forums for the game will become flooded with a large surge of complaints, negative reviews, rage, rants, and other negative stuff. This is fully expected and the absence of such is actually a bad sign. People that are happy with the product are playing the game, not wasting their time looking for forums and posting comments there—while people who have a problem or are really unhappy often look for an outlet or a solution to their issues (though the former in much greater numbers, usually). If no one is bothering to post on the forums, then that’s evidence that no one cares about the game in the first place.
I see a lot of similarities here, so perhaps that’s one thing worth looking into? I’d expect some people somewhere to have done the math already on this feedback (possibly by comparing to overall sales, survey results and propagation data), though I may be overestimating the mathematical propensity of the people involved.
Regarding the stop-watching-threads thing, I’ve noticed that I pretty much always stop paying attention to a thread once I’ve gotten the information I wanted out of it, and will only come back to it if someone directly replies to one of my comments (since it shows up in the inbox). This has probably been suggested before, but maybe a “watchlist” for marking threads whose new comments would show up visibly somewhere, and/or a way to be notified of grandchildren of your own comments, could help? I often miss it when someone replies to a reply to my comment.
Upvoted for the “watchlist” idea, I really wish Less Wrong had it.
Each individual post/comment has its own RSS feed (below your user name, karma scores etc. and above “Nearest meetups” in the right sidebar).
In case you need assurance from the online sector. I wholeheartedly welcome any increase in the prevalence of the banhammer, and the “pay 5 karma” thing seems good too.
During that Eridu fiasco, I kept hoping a moderator would do something like “this thread is locked until Eridu taboos all those nebulous affect-laden words.”
Benevolent dictators who aren’t afraid of dissent are a huge win, IMO.
At risk of failing to JFGI: can someone quickly summarize what remaining code work we’d like done? I’ve started wading into the LW code, and am not finding it quite as impenetrable as last time, so concrete goals would be good to have.
http://code.google.com/p/lesswrong/issues/list
Fair enough. All I see is the vote-counts and online comments, but the real-life commenters are of course also people, and I can understand deciding to attend more to them.
I think his point is that there is less selection bias IRL.
But that’s almost certainly false. IRL input has distinct selection bias from viewing meta threads, but not no selection bias.
Yeah, exactly. Which is why I took it to mean a simple preference for considering the community of IRL folks. Which is not meant as a criticism; after all, I also take more seriously input from folks in my real life than folks on the internet.
Even when the topic on which you are receiving input is how to run an internet forum (on which the real-life folks don’t post)?
Well, I don’t do that, clearly, since I don’t run such an Internet forum.
Less trivially, though… yeah, I suspect I would do so. The tendency to take more seriously people whose faces I can see is pretty strong. Especially if it were a case like this one, where what the RL people are telling me synchronizes better with what I want to do in the first place, and thus gives me a plausible-feeling justification for doing it.
I suspect you’re not really asking me what I do, though, so much as implicitly suggesting that what EY is doing is the wrong thing to do… that the admins ought to attend more to commenters and voters who are actually participating on the thread, rather than attending primarily to the folks who attend the minicamp or Alicorn’s dinner parties.
If so, I don’t think it’s that simple. Fundamentally it depends on whether LW’s sponsors want it to be a forum that demonstrates and teaches superior Internet discourse or whether it wants to be a forum for people interested in rational thinking to discuss stuff they like to discuss. If it’s the latter, then democracy is appropriate. If it’s the former, then purging stuff that fails to demonstrate superior Internet discourse is appropriate.
LW has seemed uncertain about which role it is playing for as long as I’ve been here.
Yes, that’s certainly the single largest problem. If the LW moderators decided on their goals for the site, and committed to a plan for achieving those goals, the meta-tedium would be significantly reduced. The way it’s currently being done, there’s too much risk of overlap between run of the mill moderation squabbles and the pernicious Eliezer Yudkowsky cult/anticult squabbles.
Then he is OK with this particular selection bias :)
Those who actually don’t care about such things as what people think don’t tend to convey this level of active provocation and defiance.
Sure. I can’t speak for EY, clearly, but there are many things (including what other people think) that I find myself caring about, often a lot, but I don’t think are important. This is inconsistent, I know, but I find it pretty common among humans.
I observe that wedrifid has taken advantage of this particular opportunity to remind everyone that he thinks I am belligerent, whiny, condescending, and cynical.
(So noted because I was a bit unhappy at how the conversation suddenly got steered there.)
I notice that my criticism was made specifically regarding the exhibition of those behaviors in the comments he has made about the subject he has brought up here. We can even see that I made specific links. Eliezer seems to be conflating this with a declaration that he has those features as part of his innate disposition.
By saying that wedrifid is reminding people that he (supposedly) believes Eliezer has those dispositions he also implies that wedrifid has said this previously. This is odd because I find myself to be fairly open with making criticisms of Eliezer whenever I feel them justified and from what I recall “belligerent, whiny, condescending, and cynical [about the lesswrong community]” isn’t remotely like a list of weaknesses that I actually have described Eliezer as having in general or at any particular time that I recall.
Usually when people make this kind of muddled accusation I attribute it to a failure of epistemic rationality and luminosity. Many people just aren’t able to separate, in their minds, a specific criticism of an action from a belief about innate traits. Dismissing Eliezer as merely being incompetent at the very skills he is renowned for would seem more insulting than simply concluding that he is being deliberately disingenuous.
My suggestion is that Eliezer would be best served by not bringing the conversation here repeatedly. It sends all sorts of signals of incompetence. That ‘unhappy’ feeling is there to help him learn from his mistakes.
If that bothers you, you may consider that whining that people find you whiny might not be the optimal strategy for making them change their mind.
I also observe that wedrifid’s opinion of you doesn’t appear to be steered with equal expected posterior probability in light of how you react versus his predictions of your reactions.
I’m curious as to whether I’m on to something there, or whether I just pulled something random and my intuitions are wrong.
I can’t even decipher what it is you are accusing wedrifid of here. Apart from being wrong and biased somehow.
I’m referring to a specific part of bayesian updating, conservation of expected evidence. Specifically:
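(Stated in its usual form rather than quoted from the post: the prior must equal the expectation of the posterior, so whatever one observation would do to your credence, the opposite observation must undo in proportion.)

$$P(H) \;=\; P(H \mid E)\,P(E) \;+\; P(H \mid \neg E)\,P(\neg E)$$

Equivalently, if seeing how someone reacts could only ever raise (or only ever lower) your opinion of them, regardless of which reaction you see, then you weren’t actually updating on evidence.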
This rule did not seem to be respected in what little I’ve seen of interactions between you and Eliezer, and I was looking for external feedback and evidence (one way or another) on this hypothesis, to see whether there is a valid body of evidence justifying its selection for consideration or whether that selection simply happened out of bias and inappropriate heuristics.
I suspect that, if the latter, then there was probably an erroneous pattern-matching to the examples given in the related blogpost on the subject (and other examples I have seen of this kind of erroneous thinking).
I don’t know how to submit this stuff for feedback and review without using a specific “accusation” or wasting a lot of time creating (and double-checking for consistency) elaborate counterfactual scenarios.
Is “ban” meaning “delete” a reddit-ism?
When I hear “ban” I think “author isn’t allowed to post for a while”.
“Ban” here means “make individual posts and comments invisible to everyone except moderators”. (I agree “ban” is confusing.)
Correct. Sorry, the button I use says “Ban”.
Bad button!
Sorry, it was very tempting. =P
I (and any other casual visitor) now have only indirect evidence regarding whether eridu’s comments were really bad or were well-meaning attempts to share feminist insights into the subject, followed by understandable frustration as everything she^Whe said was quoted out of context (if not misquoted outright) and interpreted in the worst possible way.
Agreed. I would prefer that a negative contributor be prospectively banned (that is, “prevented from posting further”) rather than retrospectively expunged (that is, “all their comments deleted from the record”), so as to avoid mutilating the record of past discussions.
For precedent, consider Wikipedia: if a contributor is found to be too much trouble (starting flamewars, edit-warring, etc.) they are banned, but their “talk page” discussion comments are not expunged. However, specific comments that are merely flaming, or which constitute harassment or the like, can be deleted.
Agreed. In this case, what I read of the discussion which included eridu indicated that they weren’t worth engaging with, but I’m actually rather impressed with what I saw of the community’s patience.
Once again, please don’t do that. (Hiding-from-Recent-Comments is totally okay, however.)
While the discussion arguably veered off-topic with respect to the original article, I don’t think we actually have a rule against that. And I don’t think eridu was actually trolling, though they do seem to have an overly-dismissive attitude towards the community. I do think there’s a place for social constructivist / radical feminist views to be aired where they apply on this site, and I don’t think eridu was doing a particularly bad job of it.
If we have a diversity of views, then people will disagree about fundamental sorts of things and we’ll end up with people thinking each other are “not even wrong” about some issues, which certainly seems downvote-worthy at the time. But we do want a diversity of views (it’s one of the primary benefits of having multiple people interacting in the first place), and so banning comments which are merely unpopular is not called-for, and will simply shunt out potential members of the community.
Of course, I’m basically guessing about your rationale in banning these comments, so if you’d like to provide some specific justification, that would be helpful.
Right now that sounds like one of the most brutal criticisms you could have made of radical feminism.
I should note that I’m not a fan, so that sort of thing should be expected.
I disagree. It was a perfect example of how the Worst Argument In The World (rather, an especially irritating subtype of the same) is often deployed in the field.
Minor point: Do we have evidence on eridu’s gender?
Yes, he described himself as male here. Not that it particularly matters, except insofar as it makes playing the pronoun game easier.
Thanks. I’m impressed with the story in the link, but also more convinced that he might as well be treated as a troll because he criticized someone for being a man explaining feminism to women.
Eh, that’s a relatively minor sin of argument, all things considered. It’s pretty easy to think that you’re excused from such a thing thanks to greater relative knowledge or better subcultural placement.
Or simply fundamental attribution error.