I find it unpleasant that you always bring your hobbyhorse in, but in an “abstract” way that doesn’t allow discussing the actual object-level question. It makes me feel attacked in a way that allows for no legal recourse to defend myself.
First and foremost, LW is a space for intellectual progress about rationality and related topics. Currently, we don’t ban people for being fixated on a topic, or ‘darkly hinting,’ or posts they make off-site, and I don’t think we should. We do keep a careful eye on such people, and interpret behavior in ‘grey areas’ accordingly, in a way that I think reflects both good Bayesianism and good moderation practice.
In my favorite world, people who disagree on object-level questions (both political and non-political) can nevertheless civilly discuss abstract issues. This favors asymmetric weapons and is a core component of truth-seeking. So, while hurt feelings and finding things unpleasant are legitimate and it’s worth spending effort optimizing to prevent them, we can’t give them that much weight unless they differentiate the true and the untrue.
That said, there are ways to bring up true things that as a whole move people away from the truth, and you might be worried about agreements on abstractions being twisted to force agreement on object-level issues. These are hard to fight, and frustrating if you see them and others don’t. The best response I know is to catalog the local truths and lay out how they add up to a lie, or establish the case that agreement on those abstractions doesn’t force agreement on the object-level issues, and bring up the catalog every time the local truth advances a global lie. This is a lot more work than flyswatting, but has a much stronger bent towards truth. If you believe this is what Zack is doing, I encourage you to write a compilation post and point people to it as needed; due to the nature of that post, and where it falls on the spectrum from naming abstract dynamics to call-out post, we might leave it on your personal blog or ask that you publish it outside of LW (and link to it as necessary).
That is very reasonable and fair. I think that in practice I won’t write such a compilation post any time soon, because (i) I already created too much drama, (ii) I don’t enjoy writing call-out posts and (iii) my time is much better spent working on AI alignment.
Upon reflection, my strong reaction was probably because my System 1 is designed to deal with Dunbar-number-size groups. In such a tribe, one voice with an agenda which, if implemented, would put me in physical danger, is already a notable risk. However, in a civilization of millions the significance of one such voice is microscopic (unless it’s very exceptional in its charisma or otherwise). On the other hand, AGI is a serious risk, and it’s one that I’m much better equipped to affect.
Sorry for causing all this trouble! Hopefully putting this analysis here in public will help me to stay focused in the future :)
one voice with an agenda which, if implemented, would put me in physical danger
Okay, I think I have a right to respond to this.
People being in physical danger is a bad thing. I don’t think of myself as having a lot of strong political beliefs, but I’m going to take a definite stand here: I am against people being in physical danger.
If someone were to present me with a persuasive argument that my writing elsewhere is increasing the number of physical-danger observer-moments in the multiverse on net, then I would seriously consider revising or retracting some of it! But I’m not aware of any such argument.
For what it’s worth, it seems to me that the argument “this writing puts me in physical danger” is absurd as applied to this particular case.
However, as far as I can tell (having re-read her comment several times), that’s not quite the argument Vanessa was making. What she seems to be saying is not “this writing puts me in physical danger” but “this writing expresses a viewpoint whose preferred agenda, if implemented, puts me in physical danger”.
Now, it’s a somewhat subtle distinction. Nevertheless, there does seem to be a difference; and insofar as there is a difference, the latter argument is (it seems to me) rather worse and more dangerous.
Why? Well, firstly, because upon a casual reading it can easily appear to be the first argument—as we just saw. This, of course, sets up a perfect motte-and-bailey situation: if people read the argument and come away convinced of the former point, but then someone challenges the argument’s author, the author can protest that what they actually wrote was the latter (and of course that will be true).
This in itself is not blameworthy per se (though it suggests that anyone making the latter point ought to be quite careful and explicit in distinguishing their position from the former point, to prevent such misuse—intentional or otherwise). However, there are more problems.
It is, perhaps, possible to show that a piece of writing, or a piece of rhetoric, would put certain people in danger. It’s difficult—such claims are often almost impossible to falsify—but possible. But what do you do with the claim “this is a voice with an agenda which, if implemented, would put [certain people] in danger”? Is there any way to defend against such an accusation, other than by turning discussion of any piece of writing or rhetoric even vaguely associated with a given “side” into a referendum on the entirety of that side’s worldview?
Finally, suppose the accusation is true. Suppose that I write a post, in which I argue for certain claims which, if true, (allegedly) support the view that Crimea is properly to be considered a part of Russia. (It is not difficult to see, I hope, how such a view could have an agenda associated with it which, if implemented, presents a physical danger to certain people.)
Now, what is to be done about this? Should such writing be banned? Discouraged? Penalized? (Let’s suppose I write this, not on Less Wrong, but on some forum which is devoted to geopolitics, while, however, also being dedicated to neutrality and non-partisan truthseeking; so my post is by no means off-topic.) Should I be censured? Suppose that other members of this forum take offense, and say that I am a voice whose agenda puts them in danger, and that perhaps I ought not to post such things, or enter into discussions of the topic at all. Are they right? Should their wishes be put into action? And what will be the consequences of a forum policy like this?
I think that what you’re saying here is mostly right, but I feel like it leaves out an important facet of the problem.
Some speech acts lower the message length of proposals to attack some groups, or raise the message length of attempts to prevent such attacks. This is a kind of meta-attack or threat, like concentrating troops on a country’s border.
The situation is often asymmetrical in particular contexts—given existing power structures & official narratives, some such meta-attacks are easier to perform than others—and in particular, proposals to alter the official narrative can look more “political” than moves in the opposite direction, even when the official narrative is obviously not a reasonable prior.
This problem is aggravated by a norm of avoiding “political” discourse—if one side of an argument is construed as political and the other isn’t, we get a biased result that favors & intensifies existing power arrangements. It’s also aggravated by norms of calm, impersonal discourse, since that’s easier to perform if you feel safe.
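(For concreteness, here is a minimal sketch of the information-theoretic sense of “message length” being used here; the probabilities are purely illustrative assumptions, not anyone’s actual estimates. Under an optimal code for a shared prior, a proposal’s message length is roughly −log₂ of its prior probability, so anything that shifts the shared prior makes some proposals cheaper to state and others more expensive.)

```python
import math

def message_length_bits(p: float) -> float:
    """Length in bits of an optimal (Shannon) code word for an event of probability p."""
    return -math.log2(p)

# Illustrative-only priors for some proposal, before and after a speech act
# shifts the shared background assumptions it is encoded against.
prior_before = 0.01  # proposal seems far-fetched under the old shared prior
prior_after = 0.20   # proposal now fits the ambient narrative

print(message_length_bits(prior_before))  # ~6.6 bits: long, costly to propose
print(message_length_bits(prior_after))   # ~2.3 bits: short, cheap to propose
```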
Some speech acts lower the message length of proposals to attack some groups, or raise the message length of attempts to prevent such attacks.
This is true; indeed, it’s difficult to see how it can fail to be true, even in the absence of any awareness or intention on anyone’s part. Yet it seems an exceedingly abstract basis on which to consider even censuring or discouraging certain sorts of speech, much less punishing or banning it.
I agree. I think this makes discouraging political or heated speech hard to do without introducing substantively harmful bias. That’s the context in which Zack’s speech can create a problem for Vanessa (and in which others’ speech created a structurally similar problem for Zack!).
Well, as for “heated” speech, I think discouraging that is easy enough. But where “political” is concerned, my point is exactly that the perspective you take makes it difficult to see where “political” ends, and “non-political” begins—indeed, it does not seem to me to be difficult to start from that view, and construct an argument that all speech is “political”! (And if I understand Zack’s point correctly, he seems to be saying that this has, in essence, already happened, on one particular topic.)
Let me try to be a bit clearer with an example. I’m saying that in, for instance, a discussion of human decisionmaking that uses utilitarian frameworks, posts like Totalitarian ethical systems and Should Effective Altruism be at war with North Korea? ought to be considered on-topic, since they discuss patterns of thinking that this framework is likely to push us towards, and point to competing considerations that are harder to express in that frame, which we might want to make sure we don’t lose sight of. Right now, on LessWrong, such posts are ambiguously permissible, in ways that cause Vanessa Kosoy to be legitimately uncertain about whether and to what extent—if she extends the interpretive labor of explaining what she thinks the problems are with Zack’s points—her work will be judged admissible.
IMO, being on-topic is necessary but not sufficient for being allowable on LW. There are plenty of topics that are about rationality (especially about group rationality and social/peer norms) but don’t work here because they’re related to topics that tend to trigger tribal or social status problems.
I’m starting to see that “on LW” is different for me than for at least some readers and moderators—it may be that I’m too restrictive in my opinion of non-promoted posts. I’m still going to downvote them.
(Only speaking as a participant, not as a moderator. The rules are currently very clear that you can downvote and upvote whatever you like.)
I do think I would prefer it if you would not downvote personal blogposts if they feel off-topic to you. You can always just uncheck the “show personal blogposts” checkbox on the frontpage. I care a lot about people being able to just explore ideas freely on the site, and you can always downvote them if we do move them to frontpage.
I think that’s fair—I don’t want to discourage exploration of ideas not yet ready for publication, but I AM concerned that people other than me may take the leniency as permission to discuss overtly political topics here. I think I’ll stop voting on non-promoted posts and comments for a bit and see if my worries get worse or better.
Is there a way to tell whether a post is promoted or not, on the page that contains the voting buttons?
We just added the ability to easily identify a post as frontpage or personal on our test-server today. Should be out by early next week.
You can currently tell by hovering over a post in a list of posts, or by looking at the moderation guidelines at the bottom of the post (which will always include “frontpage moderation guidelines” if it’s a frontpage post).
Those posts are definitely permissible on LessWrong from the site-rule perspective, though there is a sense in which they are off-topic in that we didn’t promote them to the frontpage.
I do think that imbalance of frontpage vs. personal already creates some problems, though I think the distinction is doing a bunch of important work that I don’t know how to achieve in other ways.
Some speech acts lower the message length of proposals to attack some groups, or raise the message length of attempts to prevent such attacks.
Rationality is the common interest of many causes; the whole point of it is to lower the message-description-length of proposals that will improve overall utility, while (conversely, and inevitably) raising the message-description-length of other possible proposals that can be expected to worsen it. To be against rationality on such a basis would seem to be quite incoherent. Yes, in rare cases, it might be that this also involves an “attack” on some identified groups, such as theists. But I don’t know of a plausible case that theists have been put in physical danger because rationality has now made their distinctive ideas harder to express! (In this as in many other cases, the more rationality, the less religious persecution/conflict we see out there in the real world!) And I have no reason to think that substituting “trans advocacy that makes plausibly-wrong claims about how the real-world (in this case: human psychology) factually works” for “theism” would lead to a different conclusion. Both instances of this claim seem just as unhinged and plausibly self-serving, in a way that’s hard not to describe as involving bad faith.
the whole point of it is to lower the message-description-length of proposals that will improve overall utility
I thought the point was to help us model the things we care about more accurately and efficiently, which doesn’t require utilitarianism in order to be an appealing proximate goal (it just requires caring about something which depends on objective reality).
Theists can have a hard time articulating the value they place on harmony and community building. Advancing “hard facts” can make them appear to be hateful ignoramuses, which can make it seem okay to be more socially confrontational with them, which might escalate to physical confrontation. The psychology behind how racism creates dangerous situations for black people is a good example of how you don’t need explicit representations of acknowledged dangers for something to in fact be dangerous.
I live in a culture that treats religiously oriented people as “whatever floats your boat, privately” kinds of people rather than “people zealously pushing for false beliefs”. I feel that the latter kind of rhetoric makes it easier to paint them as “the enemy” and can be a contributing factor in legitimizing violence against them. Some “rationalist-inspired” work pushes harder on “truth vs. falsity” than on “irrelevance of bullshit”, which has a negative impact on near-term safety, while its positive impact on safety is contingent on the strategy working out. Note that the danger rationalist-inspired work can create might end up materializing in the hands of people who are far from ideally rational. Yes, some fights are worth fighting, but people also usually agree that having to fight to accomplish something is worse than accomplishing it without fighting. And if you rally people to fight for truth, you are still rallying people to fight. Even if your explicit intention was to avoid rallying, you ended up doing it anyway.
The rationality community itself is far from static; it tends to steadily improve over time, even in the sorts of proposals that it tends to favor. If you go browse RationalWiki (a very early example indeed of something that’s at least comparable to the modern “rationalist” memeplex) you’ll in fact see plenty of content connoting a view of theists as “people who are zealously pushing for false beliefs (and this is bad, really really bad)”. Ask around now on LW itself, or even more clearly on SSC, and you’ll very likely see a far more nuanced view of theism, one that de-emphasizes the “pushing for false beliefs” side while pointing out the socially-beneficial orientation towards harmony and community building that might perhaps be inherent in theists’ way of life. But such change cannot and will not happen unless current standards are themselves up for debate! One simply cannot afford to reject debate merely on the view that this might make standards “hazy” or “fuzzy”, and thus less effective at promoting some desirable goals (including, perhaps, the goal of protecting vulnerable people from very real harm and from a low quality of life more generally). An ineffective standard, as the case of views-of-theism shows, is far more dangerous than one that’s temporarily “hazy” or “fuzzy”. Preventing all rational debate on the most “sensitive” issues is the very opposite of an effective, truth-promoting policy; it systematically pushes us towards having the wrong sorts of views, and away from having the right ones.
One should also note that it’s hard to predict how our current standards are going to change in the future. For instance, at least among rationalists, the more recent view “theism? meh, whatever floats your boat” tends to practically go hand-in-hand with a “post-rationalist” redefinition of “what exactly it is that theists mean by ‘God’ ”. You can see this very explicitly in the popularity of egregores like “Gnon”, “Moloch”, “Elua” or “Ra”, which are arguably indistinguishable, at least within a post-rationalist POV, from the “gods” of classical myths! But such a “twist” would be far beyond what the average RationalWiki contributor would have been able to predict as the consensus view about the issue back in that site’s heyday—even if he was unusually favorable to theists! Clearly, if we retroactively tried to apply the argument “we (RationalWiki/the rationalist community) should be a lot more pro-theist than we are, and we cannot allow this to be debated under any circumstances because that would clearly lead to very bad consequences”, we would’ve been selling the community short.
A case more troublesome than an ineffective standard is an actively harmful one. Part of the sphere of rationalist virtues is recognising your actual impact even when it goes wildly against your expectations. Political speech being known to be a clusterfuck should orient us towards “getting it right” and not so much towards “applying solutions”. People who grow up optimizing for harmony in speech, while using epistemology as a dump stat, are more effective at conversational safety. Even if rationalists have more useful beliefs about other belief-groups, the rationalist memeplex being more distant from other memeplexes means meaningful interaction is harder. We run the risk of having our models of groups such as theists advocate for those groups’ interests rather than the persons themselves doing so. Sure, we have distinct reasons why we can’t implement group interoperability the same way that they can and do. But if we emphasize how little we value safety versus accuracy, that doesn’t move us towards solving safety. And we are supposedly good at intentionally setting out to solve hard problems. It should be permissible to try to remove unnecessary obstacles to people joining the conversation. If the plan is to come up with an awesome way to conduct business/conversation and then let that discovery benefit others, then a move that makes discovery easier but sharing of the results harder might not bring us much closer to the goal than naively caring only about discovery.
I’m very sorry that we seem to be going around in circles on this one. In many ways, the whole point of that call to doing “post-rationality” was indeed an attempt to better engage with the sort of people who, as you say, “have epistemology as a dumpstat”. It was a call to understand that no, engaging in dark side epistemology does not necessarily make one a werewolf that’s just trying to muddy the surface-level issues, that indeed there is a there there. Absent a very carefully laid-out argument about what exactly it is that’s being expected of us I’m never going to accept the prospect that the rationalist community should be apologizing for our incredibly hard work in trying to salvage something workable out of the surface-level craziness that is the rhetoric and arguments that these people ordinarily make. Because, as a matter of fact, calling for that would be the quickest way by far of plunging the community back to the RationalWiki-level knee-jerk reflex of shouting “werewolf, werewolf! Out, out out, begone from this community!” whenever we see a “dark-side-epistemology” pattern being deployed.
(I also think that this whole concern with “safety” is something that I’ve addressed already. But of course, in principle, there’s no reason why we couldn’t simply encompass that into what we mean by a standard/norm being “ineffective”—and I think that I have been explicitly allowing for this with my previous comment.)
It does seem weird that so little communication is achieved with so many words.
I might be getting conflicted by interpreting messages in opposite directions on different layers.
> Clearly, if we retroactively tried to apply the argument “we (RationalWiki/the rationalist community) should be a lot more pro-theist than we are, and we cannot allow this to be debated under any circumstances because that would clearly lead to very bad consequences”, we would’ve been selling the community short.
This seems like a statement that the argument “we should be pro-theist & cannot allow debate because of bad consequences” would have been an error. If it had been presented as a proposal, it would indeed have been an argument. “Cannot allow debate” would seem like a stance against being able to start arguments. It seems self-refuting, and in general like wanting censorship of censorship, which I have a very tough time classifying as either for or against censorship. Now, the situation would be very different if there were a silent or assumed consensus that debate could not be had, but it’s rather different if a debate, and a decision not to have further debate, actually takes place.
I’ve lost track of how exactly it relates to this, but I realised that the “look at these guys spreading known falsehoods” kind of attitude made me not want to engage socially, probably by pattern-matching the speaker to a soul too lost to be reachable within the timeframe of a discussion. And I realised that the standard of sanity I was using for that comparison came from my local culture, and that the “sanity waterline” situation here might be good enough that I don’t understand other people’s need for order. The funny thing is that there is enough “sanity-seeking” within religious groups that I was used to veteran religious people guiding novices away from those pitfalls. If someone prayed for a miracle for themselves, that would be punished and intervened against, and I basically knew the guidance even though I didn’t really feel like part of “team religion”: asking a mysterious power for your personal benefit is magic; it’s audacious to ask for that, and it would not be good for the moral development of the one praying to grant that prayer. That is phrased in terms of virtue instead of epistemology, but it is nevertheless insanity under another conceptualization. The other avenue of argument, the one that focuses on prayer not working, seemed primitive by comparison. I was far too attuned to tracking how sane vs. insane religion works to really believe in a pitting of reason against religion. (I gather that kind of pitting is present in the parents of this comment; I assume other people might have essentially all of their local pool of religion be insane, so that their opinion of religion as insane is justified, and I have even come up with historical stories for why that could be so.)
I guess part of what initially sparked me to write was that “increasing the description length” of things that worsen overall utility seemed like making nonsense harder to understand, whereas my impression was that the goal is to make nonsense plainly and easily recognizable as nonsense. There is some allusion to a kind of zero-sum tradeoff between description lengths. But my impression was that people have a hard time processing any kind of option, and that shortening all the options is on the table.
I had some idea about how, if a decision procedure is too reflex-like, it never enters the conscious mind to be subject to critique. But simply negating an unreflective decision procedure is not intelligence. What you want is to have it enter conscious thought, where you can verify its appropriateness (and where it can be selectively allowed). If you are suffering from an optical illusion, you do not close your eye; you critically evaluate what you are seeing.
You do realize that viewpoints about the state-of-Nature don’t have preferred agendas? Hume teaches us that you can’t derive an ought from an is. By the same token, you can’t refute an is from an ought!
I didn’t say anything about “viewpoints about the state-of-Nature”. I’m not sure what you think I’m saying, but if you interpreted my comment on the basis of the assumption that I am unfamiliar with Hume, then you’ve probably misinterpreted it.
Okay, so where exactly do you see Zack M. Davis as having expressed claims/viewpoints of the “ought” sort? (i.e. viewpoints that might actually be said to involve a preferred agenda of some kind?) Or are you merely saying that this seems to be what Vanessa’s argument implies/relies on, without necessarily agreeing one way or the other?
Witch burnings are dangerous. Some people’s main defence against being burned is to obediently follow community norms. If some of those norms become hazy, then the strategy of adhering to them becomes harder, and it’s possible that factors other than compliance come to influence who gets attacked; thus the defence strategy revolving around compliance becomes less effective. Thus anyone who muddies the community norms is dangerous.
Now, if the community norms screw you over personally in a major way, that is very unfortunate, and it might make sense to make more complex norms to mitigate the inconvenience to some members. But this discussion really can’t take place without risking the stability of the solution for the “majority”, “usual”, or “founder” case. Determining what kind of risk is acceptable for the foreseeable improvement of the situation might be very controversial.
There are some physically very able humans who currently don’t have to think about their muscles being employed against each other, thanks to a very simplistic kind of box-thinking. If you take their boxes away, they might need to use more complex cognitive machinery, which would have a higher chance of malfunctioning, which could result in less safe situations.
In general, it might not be fair to require the people improving the situation for those who suffer from it the most to take into account the slightest safety worries of the least impacted. But the effect is there.
or establish the case that agreement on those abstractions doesn’t force agreement on the object-level issues
Why would anyone think that agreement on meta-level abstractions (like the thing I was trying to say in “Where to Draw the Boundaries?”) would force agreement on object-level issues? That would be crazy! Object-level issues in the real world are really complicated: you’d need to spend a lot more wordcount just to make a case for any particular object-level claim—let alone reach agreement!
Meta-level abstractions are by definition not real. So there’s no arguing whether one is right or wrong. Any “agreement” on an abstraction is about what parts of reality it helps to predict. Without object-level examples, what is there to discuss?
Maybe a better way to put it would be that agreement on meta-level principles more reliably forces agreement on simple object-level issues?
I think it’s important, valuable, and probably necessary to work out theoretical principles in an artificially simple and often “abstract” context, before you can understand how to correctly apply them to a more complicated situation—and the correct application to the more complicated situation is going to be a longer explanation than the simple case. The longer the explanation, the more chances someone has to get one of the burdensome details wrong, leading to more disagreements.
Students of physics first master problems about idealized point masses, frictionless planes, perfectly elastic collisions, &c. as a prerequisite for eventually being able to solve more real-world-relevant problems, like how to build a car or something—even if the ambition of real-world automotive engineering was one’s motivation for studying (or lecturing about) physics.
Similarly, I think students of epistemology need to first master problems about idealized bleggs and rubes with five binary attributes, before they can handle really complicated issues (e.g., the implications on social norms of humans’ ability to recognize each other’s sex)—even if the ambition of tackling hard sociology problems was one’s motivation for studying (or lecturing about) epistemology.
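(As an aside, for readers who haven’t seen the blegg/rube exercise: here is a minimal sketch of the kind of idealized five-binary-attribute problem I mean. The attribute rates of 0.95 and 0.05 are made-up numbers for illustration; the point is just that the category label summarizes a correlation structure, so observed attributes predict unobserved ones.)

```python
import math

# Idealized "bleggs" and "rubes": five binary attributes, each present with
# probability 0.95 for bleggs and 0.05 for rubes. (Illustrative numbers only.)
P_BLEGG, P_RUBE = 0.95, 0.05

def log_likelihood(observed, p):
    """Log-probability of the observed binary attributes under attribute rate p."""
    return sum(math.log(p if x else 1 - p) for x in observed)

def predict_fifth_attribute(observed_four):
    """Guess the unobserved fifth attribute from whichever cluster fits the first four better."""
    if log_likelihood(observed_four, P_BLEGG) > log_likelihood(observed_four, P_RUBE):
        return 0.95  # probably a blegg, so the fifth attribute is probably present
    return 0.05      # probably a rube, so the fifth attribute is probably absent

print(predict_fifth_attribute([1, 1, 1, 0]))  # three blegg-ish cues outweigh one rube-ish cue
```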
Imagine being at a physics lecture where one of the attendees kept raising their hand to complain that the speaker was using abstraction to “obfuscate” or “disguise” the real issue of how to build a car. That would be pretty weird, right??
Maybe a better way to put it would be that agreement on meta-level principles more reliably forces agreement on simple object-level issues?
In cases I can think of, this is the reverse of what happens. In fact, agreement on (simple, often too simple) object-level issues allows abstractions which can be agreed on.
Physics is a great example. Intro physics lectures include a massive number of demonstrations. These are simple (too simple to use to build a car), but are clearly and incontrovertibly real-world behaviors. There is object-level proof that the abstractions have at least some tie to reality.
That’s understandable, but I hope it’s also understandable that I find it unpleasant that our standard Bayesian philosophy-of-language somehow got politicized (!?), such that my attempts to do correct epistemology are perceived as attacking people?!
Like, imagine an alternate universe where posts about the minimum description length principle were perceived as an attack on Christians (because atheists often argue that Occam’s razor implies that theories about God are unnecessarily complex), and therefore somewhat unseemly (because politics is the mind-killer, and criticizing a popular religion has inextricable political consequences).
I can see how it would be really annoying if someone on your favorite rationality forum wrote a post about minimum description length, if you knew that their work about MDL was partially derived from other work (on a separate website, under a pseudonym) about atheism, and you happened to think that Occam’s razor actually doesn’t favor atheism.
Or maybe that analogy is going to be perceived as unfair because we live in a subculture that pattern-matches religion as “the bad guys” and atheism as the “good guys”? (I could try to protest, “But, but, you could imagine as part of the thought experiment that maybe Occam’s razor really doesn’t favor atheism”, but maybe that wouldn’t be perceived as credible.)
Fine. We can do better. Imagine instead some crank racist pseudoscientist who, in the process of pursuing their blatantly ideologically-motivated fake “science”, happens to get really interested in the statistics of the normal distribution, and writes a post on your favorite rationality forum about the ratio of areas in the right tails of normal distributions with different means.
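(For concreteness, a minimal sketch of the kind of calculation such a post might contain; the means, standard deviation, and cutoffs below are arbitrary illustrative choices, not anyone’s actual claim. With equal standard deviations, the ratio of right-tail areas between two normal distributions with different means grows as the cutoff moves further into the tails.)

```python
from scipy.stats import norm

# Two normal distributions with the same standard deviation but different means
# (arbitrary illustrative parameters).
mu_a, mu_b, sigma = 0.0, 0.5, 1.0

for cutoff in (1.0, 2.0, 3.0):
    tail_a = norm.sf(cutoff, loc=mu_a, scale=sigma)  # P(X > cutoff) under mean mu_a
    tail_b = norm.sf(cutoff, loc=mu_b, scale=sigma)  # P(X > cutoff) under mean mu_b
    print(f"cutoff={cutoff}: right-tail area ratio = {tail_b / tail_a:.2f}")  # grows with cutoff
```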
I can see how that would be really annoying—maybe even threatening! Which might make it all the more gratifying if you can find a mistake in the racist bastard’s math: then you could call out the mistake in the comments and bask in moral victory as the OP gets downvoted to oblivion for the sin of bad math.
But if you can’t find a mistake—if, in fact, the post is on-topic for the forum and correct in the literal things that it literally says, then complaining about the author’s motive for being interested in the normal distribution doesn’t seem like an obviously positive contribution to the discourse?—even if you’re correct about the author’s motive. (Although, you might not be correct.)
Like, maybe statistics is part of the common interest of many causes, such that, as a matter of local validity, you should assess arguments about statistics on their own merits in the context that those arguments are presented, without worrying about how those arguments might or might not be applied in other contexts?
What, realistically, do you expect the atheist—or the racist, or me—to do? Am I supposed to just passively accept that all of my thoughts about epistemology are tainted and unfit for this forum, because I happen to be interested in applying epistemology to other topics (on a separate website, under a pseudonym)?
I think the grandparent is an on-topic response to the OP, relating the theme of the OP (about how if you don’t have negative feedback or “No”s, then that makes positive feedback or “Yes”es less significant) to both a hypothetical example about social network voting mechanisms, and, separately, to another philosophy topic (about the cognitive function of categories) that I’ve been thinking a lot about lately! That’s generally what happens when people comment on posts: they think about the post in the context of their own knowledge and their own priorities, and then write a comment explaining their actual thoughts!
Leave a critical comment explaining what I got wrong (if you have time).
Those actions are unambiguously prosocial, because downvotes help other users decide what’s worth their time to read, and criticism of bad reasoning helps everyone reading get better at reasoning! But criticizing me because of what you know about my personal psychological motives for making otherwise-not-known-to-be-negative contributions seems … maybe less obviously prosocial?
Like, what happens if you apply this standard consistently? Did you know that Eliezer Yudkowsky’s writings that are ostensibly about human rationality, were actually mostly conceived in the context of his plans to build a superintelligence to literally take over the world?! (Although he denies it, of course.) That’s politics! Should we find it unpleasant that Yudkowsky always brings his hobbyhorse in, but in an “abstract” way that doesn’t allow discussing the actual object-level political question about whether he should rule the world?
Am I wrong here? Like, I see your concern! I really do! I’m sorry if we happen to be trapped in a zero-sum game whereby my attempts to think seriously in public about things I’m interested in ends up imposing negative externalities on you! But what, realistically, do you expect me to do? Happy to talk privately sometime if you’d like. (In a few weeks; I mostly want to focus on group theory and my dayjob for the rest of May.)
I’m not sure what your hobby horse is, but I do take objection to the assumption in this post that decoupling norms are the obvious and only correct way to deal with things. The problem with this is that if you actually care about the world, you can’t take arguments in isolation, but have to consider the context in which they are made.
1. It can be perfectly OK for the environment if someone brings up a topic once, but it can make people less likely to want to visit the forum if someone brings it up all the time and tries to twist other people’s posts towards a discussion of their thing. It would be perfectly alright for moderators who didn’t want to drive away their visitors to ask this person to stop.
2. It can be perfectly OK to kick out someone who has a bad reputation that makes important posters unable to post on your website because they don’t want to associate with that person, even IF that person has good behavior.
3. It can be perfectly OK to downvote posts that are well-reasoned, on topic, and not misleading, because you’re worried about the incentives of those posts being highly upvoted.
All of these things obviously involve tradeoffs against decoupled conversation, which has its own benefits. The website has to decide what values it stands for and will fight for, vs. what it will be flexible on depending on context. What I don’t think is OK is just to ignore context and assume that decoupling is always unambiguously the right call.
I do take objection to the assumption in this post that decoupling norms are the obvious and only correct way to deal with things.
Zack didn’t say this. What he said was:
Like, maybe statistics is part of the common interest of many causes, such that, as a matter of local validity, you should assess arguments about statistics on their own merits in the context that those arguments are presented, without worrying about how those arguments might or might not be applied in other contexts?
Which is compatible with thinking more details should be taken into account when the statistical arguments are applied in other contexts (in fact, I’m pretty sure this is what Zack thinks).
Discussion of abstract epistemology principles, which generalize across different contexts, is perhaps most of the point of this website...
Your points 1,2,3 have nothing to do with the epistemic problem of decoupling vs contextualizing, they have to do with political tradeoffs in moderating a forum; they apply to people doing contextualization in their analysis, too. I hate that the phrase “contextualizing norms” is being used to conflate between “all sufficiently relevant information should be used” and “everything should be about politics”.
Your points 1,2,3 have nothing to do with the epistemic problem of decoupling vs contextualizing,
This is probably because I don’t know what the epistemic problem is. I only know about the linked post, which defines things like this:
Decoupling norms: It is considered eminently reasonable to require your claims to be considered in isolation—free of any context or potential implications. Attempts to raise these issues are often seen as sloppy thinking or attempts to deflect.
Contextualising norms: It is considered eminently reasonable to expect certain contextual factors or implications to be addressed. Not addressing these factors is often seen as sloppy or even an intentional evasion.
… To a contextualiser, decouplers’ ability to fence off any threatening implications looks like a lack of empathy for those threatened, while to a decoupler the contextualiser’s insistence that this isn’t possible looks like naked bias and an inability to think straight.
I sometimes round this off in my head to something like “pure decouplers think arguments should be considered only on their epistemic merits, and pure contextualizers think arguments should be considered only on their instrumental merits”.
There might be another use of decoupling and contextualizing that applies to an epistemic problem, but if so it’s not defined in the canonical article on the site.
My basic read of Zack’s entire post was him saying over and over “Well there might be really bad instrumental effects of these arguments, but you have to ignore that if their epistemics are good.” And my immediate reaction to that was “No I don’t, and that’s a bad norm.”
I sometimes round this off in my head to something like “pure decouplers think arguments should be considered only on their epistemic merits, and pure contextualizers think arguments should be considered only on their instrumental merits”.
The proper words for that aren’t decoupling vs contextualizing, it’s denotative vs enactive language. An orthogonal axis to how many relevant contextual factors are supposed to be taken into account. You can require lots of contextual factors to be taken into account in epistemic analysis, or require certain enactments to be made independent of context.
Note, the original post makes the conflation I’m complaining about here too!
It might just make more sense to give this one up to word inflation and come up with new words. I’ll happily use the denotative vs. enactive language to point to this thing in the future, but I’ll probably have to put a footnote that says something like “(what most people in the community refer to as decoupling vs. contextualizing)”.
My basic read of Zack’s entire post was him saying over and over “Well there might be really bad instrumental effects of these arguments, but you have to ignore that if their epistemics are good.” And my immediate reaction to that was “No I don’t, and that’s a bad norm.”
It really looks like you’re defending the “appeal to consequences” as a reasonable way to think, and a respectable approach to public epistemology. But that seems so plainly absurd that I have to assume that I’ve misunderstood. What am I missing?
It really looks like you’re defending the “appeal to consequences” as a reasonable way to think, and a respectable approach to public epistemology. But that seems so plainly absurd that I have to assume that I’ve misunderstood. What am I missing?
It might be that we just have different definitions of absurd and you’re not missing anything, or it could be that you’re taking an extreme version of what I’m saying.
To wit, my stance is that to ignore the consequences of what you say is just obviously wrong. Even if you hold truth as a very high value, you would have to value it insanely more than any other value to never encounter a situation where you are compromising other things you value by ignoring the difference you could make by not saying something, lying, being careful about how to phrase things, etc.
Now obviously, you also have to consider the effect this type of thinking/communication has on discourse and the public ability to seek the truth—and once you’ve done that you’re ALREADY thinking about the consequences of what you say and what you allow others to say, and the task at that point is to simply weigh them against each other.
It’s important to distinguish the question of whether, in your own personal decisionmaking, you should ever do things that aren’t maximally epistemically good (obviously, yes); from the question of whether the discourse norms of this website should tolerate appeals to consequences (obviously, no).
It might be morally right, in some circumstances, to pass off a false mathematical proof as a true one (e.g. in a situation where it is useful to obscure some mathematical facts related to engineering weapons of mass destruction). It’s still a violation of the norms of mathematics, with good reason. And it would be very wrong to argue that the norms of mathematics should change to accommodate people making this (by assumption, morally right) choice.
To summarize: you’re destroying the substrate. Stop it.
It’s important to distinguish the question of whether, in your own personal decisionmaking, you should ever do things that aren’t maximally epistemically good (obviously, yes); from the question of whether the discourse norms of this website should tolerate appeals to consequences (obviously, no).
I agree it’s important to realize that these things are fundamentally different.
It might be morally right, in some circumstances, to pass off a false mathematical proof as a true one (e.g. in a situation where it is useful to obscure some mathematical facts related to engineering weapons of mass destruction). It’s still a violation of the norms of mathematics, with good reason. And it would be very wrong to argue that the norms of mathematics should change to accommodate people making this (by assumption, morally right) choice.
A better norm of mathematics might be to NOT publish proofs that have obvious negative consequences like enabling weapons of mass destruction, and have a norm that actively disincentivizes people who publish that sort of research.
In other words, a norm might be to basically be epistemically pure, UNLESS the local instrumental considerations outweigh the cost to the epistemic climate. This can be rounded down to “have norms about epistemics and break them sometimes,” but only if, when someone points at edge cases where the norms are actively harmful, they are met with the response that sometimes breaking those norms is perfectly OK.
IE, if someone is using the norms of the community as a weapon, it’s important to point out that the norms are a means to an end, and that the community won’t blindly allow itself to be taken advantage of.
I think my actual concern with this line of argumentation is: if you have a norm of “If ‘X’ and ‘X implies Y’ then ‘Y’, EXCEPT when it’s net bad to have concluded ‘Y’”, then the werewolves win.
The question of whether it’s net bad to have concluded ‘Y’, is much, much more complicated than the question of whether, logically, ‘Y’ is true under these assumptions (of course, it is). There are many, many more opportunities for werewolves to gum up the works of this process, making the calculation come out wrong.
If we’re having a discussion about X and Y, someone moves to propose ‘Y’ (because, as it has already been agreed, ‘X’ and ‘X implies Y’), and then someone else says “no, we can’t do that, that has negative consequences!”, that second person is probably playing a werewolf strategy, gumming up the works of the epistemic substrate.
If we are going to have the exception to the norm at all, then there has to be a pretty high standard of evidence to prove that adding ‘Y’ to the discourse, in fact, has bad consequences. And, to get the right answer, that discussion itself is going to have to be up to high epistemic standards. To be trustworthy, it’s going to have to make logical inferences much more complex than “if ‘X’ and ‘X implies Y’, then ‘Y’”. What if someone objects to those logical inference steps, on the basis that they would have negative consequences? Where does that discussion happen?
In practice, these questions aren’t actually answered. In practice, what happens is that social epistemology doesn’t happen, and instead everything becomes about coalitional politics. Saying ‘Y’ doesn’t mean ‘Y is literally true’, it means you’re part of the coalition of people who wants consequences related to (but not even necessarily directly implied by!) the statement ‘Y’ to be put into effect, and that makes you blameworthy if those consequences hurt someone sympathetic, or that coalition is bad. Under such conditions, it is a major challenge to re-establish epistemic discourse, because everything is about violence, including attempts to talk about the “we don’t have epistemology and everything is about violence” problem.
We have something approaching epistemic discourse here on LessWrong, but we have to defend it, or it, too, becomes all about coalitional politics.
If we are going to have the exception to the norm at all, then there has to be a pretty high standard of evidence to prove that adding ‘Y’ to the discourse, in fact, has bad consequences.
I want to note that LW definitely has exceptions to this norm, if only because of the boring, normal exceptions. (If we would get in trouble with law enforcement for hosting something you might put on LW, don’t put it on LW.) We’ve had in the works (for quite some time) a post explaining our position on less boring cases more clearly, but it runs into difficulty with the sort of issues that you discuss here; generally these questions are answered in private in a way that connects to the judgment calls being made and the particulars of the case, as opposed to through transparent principles that can be clearly understood and predicted in advance (in part because, to extend the analogy, this empowers the werewolves as well).
Another common werewolf move is to take advantage of strong norms like epistemic honesty, and use them to drive wedges in a community or push their agenda, while knowing they can’t be called out because doing so would be akin to attacking the community’s norms.
I’ve seen the meme elsewhere in the rationality community that strong and rigid epistemic norms are a good sociopath repellent, and it’s ALMOST right. The truth is that competent sociopaths (in the Venkat Rao sense) are actually great at using rigid norms for their own ends, and are great at using the truth for their own ends as well. The reason it might work well in the rationality community (besides the obvious fact that sociopaths are even better at using lies to their own ends than the truth) is that strong epistemics are very close to what we’re actually fighting for—and remembering and always orienting towards the mission is ACTUALLY an effective first line defense against sociopaths (necessary but not sufficient IMO).
99 times out of a 100, the correct way to remember what we’re fighting for is to push for stronger epistemics above other considerations. I knew that when I made the original post, and I made it knowing I would get pushback for attacking a core value of the community.
However, 1 time out of 100 the correct way to remember what you’re fighting for is to realize that you have to sacrifice a sacred value for the greater good. And when you see someone explicitly pushing the gray area by trying to get you to accept harmful situations by appealing to that sacred value, it’s important to make clear (mostly to other people in the community) that sacrificing that value is an option.
What specifically do you mean by “werewolf” here & how do you think it relates to the way Jessica was using it? I’m worried that we’re getting close to just redefining it as a generic term for “enemies of the community.”
By werewolf I meant something like “someone who is pretending to be working for the community as a member, but is actually working for their own selfish ends”. I thought Jessica was using it in the same way.
That’s not what I meant. I meant specifically someone who is trying to prevent common knowledge from being created (and more generally, to gum up the works of “social decisionmaking based on correct information”), as in the Werewolf party game.
Worth noting: “werewolf” as a jargon term strikes me as something that is inevitably going to get collapsed into “generic bad actor” over time, if it gets used a lot. I’m assuming that you’re thinking of it sort of as in the “preformal” stage, where it doesn’t make sense to over-optimize the terminology. But if you’re going to keep using it I think it’d make sense to come up with a term that’s somewhat more robust against getting interpreted that way.
(random default suggestion: “obfuscator”. Other options I came up with required multiple words to get the point across and ended up too convoluted. There might be a fun shorthand for a type of animal or mythological figure that is a) a predator or parasite, b) relies on making things cloudy. So far I could just come up with “squid” due to ink jets, but it didn’t really have the right connotations)
That is a bit more specific than what I meant. In this case though, the second more broad meaning of “someone who’s trying to gum up the works of social decisionmaking” still works in the context of the comment.
And when you see someone explicitly pushing the gray area by trying to get you to accept harmful situations by appealing to that sacred value
Um, in context, this sounds to me like you’re arguing that by writing “Where to Draw the Boundaries?” and my secret (“secret”) blog, I’m trying to get people to accept harmful situations? Am I interpreting you correctly? If so, can you explain in detail what specific harm you think is being done?
Sorry, I was trying to be really careful as I was writing not to accuse you specifically of bad intentions, but obviously that’s hard in a conversation like this where you’re jumping between the meta and the object level.
It’s important to distinguish a couple things.
1. Jessica and I were talking about people with negative intentions in the last two posts. I’m not claiming that you’re one of those people that is deliberately using this type of argument to cause harm.
2. I’m not claiming that it was the writing of those two posts that was harmful in the way we were talking about. I was claiming that, in the long post you wrote at the top of the thread, the several analogies you made about your response were exactly the sort of gray-area situations where, depending on context, the community might decide to sacrifice its sacred value. At the same time, you were banking on the fact that it was a sacred value to say “even in this case, we would uphold the sacred value.” This has the same structure as the werewolf move mentioned above, and it was important for me to speak up, even if you’re not a werewolf.
people with negative intentions [...] deliberately
So, it’s actually not clear to me that deliberate negative intentions are particularly important, here or elsewhere? Almost no one thinks of themselves as deliberately causing avoidable harm, and yet avoidable harm gets done, probably by people following incentive gradients that predictably lead towards harm, against truth, &c. all while maintaining a perfectly sincere subjective conscious narrative about how they’re doing God’s work, on the right side of history, toiling for the greater good, doing what needs to be done, maximizing global utility, acting in accordance with the moral law, practicing a virtue which is nameless, &c.
it was important for me to speak up, even if you’re not a werewolf.
Agreed. If I’m causing harm, and you acquire evidence that I’m causing harm, then you should present that evidence in an appropriate venue in order to either persuade me to stop causing harm, or persuade other people to coördinate to stop me from causing harm.
I was claiming that, in the long post you wrote at the top of the thread, the several analogies you made about your response were exactly the sort of gray-area situations where, depending on context, the community might decide to sacrifice its sacred value.
So, my current guess (which is only a guess and which I would have strongly disagreed with ten years ago) is that this is a suicidally terrible idea that will literally destroy the world. Sound like an unreflective appeal to sacred values? Well, maybe!—you shouldn’t take my word for this (or anything else) except to the exact extent that you think my word is Bayesian evidence. Unfortunately I’m going to need to defer supporting argumentation to future Less Wrong posts, because mental and financial health requirements force me to focus on my dayjob for at least the next few weeks. (Oh, and group theory.)
So, it’s actually not clear to me that deliberate negative intentions are particularly important, here or elsewhere?
(responding, and don’t expect another response back because you’re busy).
I used to think this, but I’ve since realized that intentions STRONGLY matter. It seems like systems are fractal: the goals of the subparts/subagents get reflected in the goals of the broader system. People with aligned intentions will tend to shift the incentive gradients, as will people with unaligned intentions (of course, this isn’t a one-way relationship; the incentive gradients will also shift the intentions).
I deny that your approach ever has an advantage over recognizing that definitions are tools which have no truth values, and then digging into goals or desires.
Thanks, these are some great points on some of the costs of decoupling norms! (As you’ve observed, I’m generally pretty strongly in favor of decoupling norms, but policy debates should not appear one-sided.)
someone brings it up all the time
I would want to distinguish “brings it up all the time” in the sense of “this user posts about this topic when it’s not relevant” (which I agree is bad and warrants moderator action) versus the sense of “this user posts about this topic a lot, and not on other topics” (which I think is generally OK).
If someone is obsessively focused on their narrow special interest—let’s say, algebraic topology—and occasionally comments specifically when they happen to think of an application of algebraic topology to the forum topic, I think that’s fine, because people reading that particular thread get the benefit of a relevant algebraic topology application—even if looking at that user’s posting history leaves one with an unsettling sense of, “Wow, this person is creepily obsessed with their hobbyhorse.”
tries to twist other people’s posts towards a discussion of their thing
I agree that this would be bad, but I think it’s usually possible to distinguish “twist[ing] other people’s posts towards a discussion of their thing” from a genuinely relevant mention of the thing that couldn’t (or shouldn’t) be reasonably expected to derail the discussion?
In the present case, my great-great-grandparent comment notes that the list-of-koans format lends itself to readers contributing their own examples in the comments, and I tried to give two such examples (trying to mimic the æsthetic of the OP by continuing the numbered list and Alice/Bob/Charlie/&c. character name sequence), one of which related the theme of the OP to the main point of one of my recent posts.
In retrospect, maybe I should’ve thought more carefully about how to phrase the proposed example in a way that makes the connection to the OP more explicit/obvious? (Probably-better version: “A meaningful ‘Yes’ answer to the question ‘Is G an H?’ requires a definition of H such that the answer could be ‘No’.”)
It’s true that, while composing the great-great-grandparent, I was kind of hoping that some readers would click through the link and read my earlier post, which I worked really hard on and which I think is filling in a gap in “A Human’s Guide to Words” that I’ve seen people be confused about. But I don’t see how this can reasonably be construed as an attempt to derail the discussion? Like, I ordinarily wouldn’t expect a brief comment of the form “Great post! Here’s a couple more examples that occurred to me, personally” to receive any replies in the median case.
(Although unfortunately, it empirically looks like the discussion did, in fact, get derailed. I feel bad for Scott G. that we’re cluttering up his comment section like this, but I can’t think of anything I wish I had done differently other than wording the great-great-grandparent more clearly, as mentioned in the paragraph-before-last. Given Vanessa’s reply, I felt justified in writing my counterreply … and here we are.)
It would be perfectly alright for moderators who didn’t want to drive away their visitors to ask this person to stop.
Agreed, the moderators are God and their will must be obeyed.
kick out someone who has a bad reputation that makes important posters unable to post on your website because they don’t want to associate with that person, even IF that person has good behavior
So, the dynamic you describe here definitely exists, but I actually think it’s a pretty serious problem for our collective sanity: if some truths happen to lie outside of Society’s Overton window, then systematic truthseekers (who want to collect all the truths, not just the majority of them that are safely within the Overton window) will find themselves on the wrong side of Respectability, and if people who care about being Respectable (and thereby having power in Society) can’t even talk to people outside the Overton window (not even agree with—just talk to, using, for example, a website), then that could have negative instrumental consequences in the form of people with power in Society making bad policy decisions on account of having inaccurate beliefs.
I want to write more about this in the future (albeit not on Less Wrong), but in the meantime, maybe see the immortal Scott Alexander’s “Kolmogorov Complicity And The Parable Of Lightning” for an expression of similar concerns:
Some other beliefs will be found to correlate heavily with lightning-heresy. Maybe atheists are more often lightning-heretics; maybe believers in global warming are too. The enemies of these groups will have a new cudgel to beat them with, “If you believers in global warming are so smart and scientific, how come so many of you believe in lightning, huh?” Even the savvy Kolmogorovs within the global warming community will be forced to admit that their theory just seems to attract uniquely crappy people. It won’t be very convincing. Any position correlated with being truth-seeking and intelligent will be always on the retreat, having to forever apologize that so many members of their movement screw up the lightning question so badly.
Regarding “Kolmogorov complicity”, I just want to make clear that I don’t want to censor your opinion on the political question. Such censorship would only serve to justify your notion that “we only refuse to believe X because it’s heresy, while any systematic truthseeker would believe X”, which is something I very much disagree with. I might be interested in discussing the political question if we were allowed to do it. What bugs me is the double bind of not being allowed to argue with you on the political question while having to listen to you constantly hinting at it. Then again, I don’t really have a good solution.
I’ve read Zack’s blog (the one that is not under the name Zack M. Davis), and his hobbyhorse has to do with transgender issues and gender categories. However, even when he is writing directly about the matter on his own blog, I am unclear what he is actually saying about these issues. There is still a certain abstractness and distance from the object level.
It is nearly impossible for a human being to write a correct program just by thinking really hard. And that is a situation where everything is cut and dried, mathematically exact. Mathematicians do fairly well at proving theorems rigorously, but they have an easier task than programmers, for they only have to convince people, not machines. Outside of those domains, abstract argument on its own is nothing more than abstract art, unless it is continually compared with the object level and exposed to modus delens.
And the object level is what we’re all doing this for, or what’s the point?
And the object level is what we’re all doing this for, or what’s the point?
What’s the point of concrete ideas, compared to more abstract ideas? The reasons seem similar, just with different levels of grounding in experience, like with a filter bubble that you can only peer beyond with great difficulty. This situation is an argument against emphasis on the concrete, not for it.
(I think there’s a mixup between “meta” and “abstract” in this subthread. It’s meta that exists for the object level, not abstractions. Abstractions are themselves on object level when you consider them in their own right.)
Abstractions are a central example of things considered on the object level, so I don’t understand them as being in opposition to the object level. They can be in opposition to more concrete ideas, those closer to experience, but not to being considered on object level.
The point is the relationship between the levels of the ladder of abstraction. Outside of mathematics and programming, long arguments at high levels go wrong without being checked against experience. If experience contradicts, so much the worse for the argument.
Unsure of mathematics, but software development goes wrong in exactly the same way—designs and ideas too far removed from the silicon go wildly wrong and don’t match at all what actually gets built. Eventually, the code wins and the arguments lose (or more often, the code fails and everybody loses).
That’s understandable, but I hope it’s also understandable that I find it unpleasant that our standard Bayesian philosophy-of-language somehow got politicized (!?), such that my attempts to do correct epistemology are perceived as attacking people?!
Our philosophy of language did not “somehow” get politicized. You personally (Zack M. Davis) politicized it by abusing it in the context of a political issue.
...Which might make it all the more gratifying if you can find a mistake in the racist bastard’s math: then you could call out the mistake in the comments and bask in moral victory as the OP gets downvoted to oblivion for the sin of bad math.
If you had interesting new math or non-trivial novel insights, I would not complain. Of course that’s somewhat subjective: someone else might consider your insights valuable.
But what, realistically, do you expect me to do?
You’re right, I don’t have a good meta-level solution. So, if you want to keep doing that thing you’re doing, knock yourself out.
I had a hard time tracking down the referent of the abuse mentioned in the parent post.
It does seem that the concept was employed in a political context. To my brain, politicizing is a particular kind of use. I get that if you effectively employ any kind of argument towards a political end, it becomes politically relevant. However, it would be weird if any tool so employed automatically became part of politics.
If beliefs are to pay rent, and this particular point is established / marketed in order to establish another specific point, I could get on board with an expectation to disclose such “financial ties”. Up to this point I know that this belief is sponsored by another belief, but I do not know which belief, and I don’t fully get why it would be troublesome to reveal it.
I don’t really have a dog in whatever fight this is, but looking at Zack’s posts and comments recently, I see nothing but interesting and correct insights and analysis, devoid of any explicit politics (but perhaps yielding insights about such?). How can you call this “abuse”? The overwhelming majority of the content that gets posted to Less Wrong these days should aspire to the level of quality of the stuff I just linked!
The abuse did not happen on LW. However, because I happen to be somewhat familiar with Davis’ political writing, I am aware of a sinister context to what ey write on LW of which you are not aware. Now, you may say that this is not a fair objection to Davis writing whatever ey write here, and you might well be right. However, I thought I at least had the right to express my feelings on this matter so that Davis and others can take them into account (or not). If we are supposed to be a community, then it should be normal for us to consider each other’s feelings, even when there was no norm violation per se involved, not so?
I am, frankly, appalled to read this sort of thing on Less Wrong. You are, in all seriousness, attacking someone’s writings about abstract epistemology and Bayesian inference, on Less Wrong, of all places (!!), not because there is anything at all mistaken about them, but because of some alleged “sinister context” that you are bringing in from somewhere else. To call this “not a fair objection” would be a gross understatement. It is shameful.
If we are supposed to be a community, then it should be normal for us to consider each other’s feelings, even when there was no norm violation per se involved, not so?
Absolutely not.
This sort of attitude is tremendously corrosive to productive discussion and genuine truth-seeking. We have discussed this before… and I am genuinely disappointed that this sort of thing is happening again.
Ugh, because productive discussion happens between perfectly dispassionate robots in a vacuum, and if I’m not one then it is my fault and I should be ashamed? Specifically, I should be ashamed just for saying that something made me uncomfortable rather than suffering in silence? I mean, if that’s your vision, it’s fine, I understand. But I wonder whether that’s really the predominant opinion around here? What about all the stuff about “community” and “Village” etc?
Ugh, because productive discussion happens between perfectly dispassionate robots in a vacuum, and if I’m not one then it is my fault and I should be ashamed?
As discussed in the linked thread—it is none of my business, nor the business of any of your interlocutors, whether you are, or are not, a “perfectly dispassionate robot in a vacuum”, when it comes to discussions on subjects like the OP. That is not something which should enter into the discussion at all; it is simply off-topic.
If we permit the introduction of such questions as whether you feel uncomfortable (about the topic, or any on-topic claims) into discussions of abstract epistemology, or Bayesian inference, or logic, etc., when that discomfort in no way bears on the truth or falsity of the claims under discussion, then we might as well close up shop, because at that point, we have bid good-bye even to the pretense of “rationality”, much less the fact of it.
And if the “predominant opinion” disagrees—so much the worse for predominant opinion; and so much the sadder for Less Wrong.
Edit: And all this is, of course, not even mentioning your conflation of “I am uncomfortable” with insinuating comments about “sinister context”, and implications of wrongdoing on Zack’s part!
Alright, let’s suppose it’s off-topic in this thread, or even on this forum. But is there another place within the community’s “discussion space” where it is on-topic? Or you don’t think such a place should exist at all?
I’ve found /r/TheMotte (recently forked from /r/slatestarcodex) to be a good place to discuss politically-charged topics? (Again, also happy to talk privately sometime.)
I wasn’t referring to “where to discuss politically charged topics”, I was referring to “where to discuss the fact that something that happens on LessWrong.com makes me uncomfortable because [reasons]”.
To be honest I prefer to avoid politically charged topics, as long as they avoid me (which they didn’t, in this case).
I just want to chime in quickly to say that I disagree with Said here pretty heavily, but also don’t know that I agree with any other single person in the conversation, and articulating what I actually believe would require more time than I have right now.
I love that you’re willing to say that, but I’m a bit confused as to what purpose that comment serves. Without some indication of which parts you disagree with, and what things you DO believe, all this is saying is “I take no responsibility for what everyone is saying here”, which I assume is true for all of us.
Personally, I agree with Said on a number of aspects—a reader’s reaction to a topic, or to a poster, is not sufficient reason to do anything. This is especially true when the reader’s reaction is primarily based on non-LW information. I DISAGREE that this makes all discussion fair game, as long as it’s got a robe of abstraction which allows deniability that it relates to the painful topic.
I don’t know that I’ve seen anyone besides me claim that the abstraction seems too thin. It would take a discussion of when it applies and when it does not to get me to ignore my (limited) understanding of the participants’ positions on the related-but-not-on-LW topic.
Generally, if you want to talk about how LW is moderated or unpleasant behavior happening here, you should talk to me. [If you think I’m making mistakes, the person to talk to is probably Habryka.] We don’t have an official ombudsman, and perhaps it’s worth putting some effort into finding one.
I mean, the sum total of spaces that the rationalist community uses to hold discussions, propagate information, do collective decision making, (presumably) provide mutual support et cetera, to the extent these spaces are effective in fulfilling their functions. Anywhere where I can say something and people in the community will listen to me, and take this new information into account if it’s worth taking into account, or at least provide me with compassionate feedback even if it’s not.
Firstly, I have always said (and this incident has once again reinforced my view of this) that “we”, which is to say “rationalists”, should not be a “community”.
But, of course, things are what they are. Still, it is hardly any of my business, as a participant of Less Wrong, what discussions you have elsewhere, on some other forum. Why should it be?
Of course, it would be quite beyond the pale if the outcomes of those discussions were used in deciding (by those who have the authority to decide these things—basically, I mean the admins of Less Wrong) how to treat someone here!
In short, I am saying: in other places, discuss whatever you want to discuss (assuming your discussions are appropriate thereto… but, in any case—not my business). None of that should affect any discussions here. “I propose to treat <Less Wrong participant X> in such-and-such a way—why? because he said or did so-and-so, in another place entirely”—this ought not be acceptable or tolerated.
Firstly, I have always said (and this incident has once again reinforced my view of this) that “we”, which is to say “rationalists”, should not be a “community”.
Well, that is a legitimate opinion. I just want to point out that it did not appear to be the consensus so far. If it is the consensus (or becomes such), then it seems fair to ask that it be made clear, in particular to inform people’s decisions about how and whether to interact with the forum.
I won’t go so far as to say there should be no community, but I do believe that it (or they; there are likely lots of involved communities of rationalists) is not synonymous with LessWrong. There is overlap in topics discussed, but there are good LW topics that are irrelevant to some or all communities, and there are LOTS of community topics that don’t do well on LW.
And that includes topics that, in a vacuum, would be appropriate to LW, but are deeply related to topics in a community which are NOT good for LW. Sorry, but that entanglement of ideas makes it impossible to discuss rationally in a large group.
The dispute in question isn’t about epistemology but ontology, and I think it’s worth keeping the two apart mentally; that said, I think your general point still stands.
I think it needs clarification. It’s clearly vague enough that it’s not a valid reason by itself. However, it is reasonable to think that part of the “bad vibe” is of the kind that makes political meshing bad, while part of it could be relevant.
For example, it could be that there is a worry that constantly mentioning a specific point relies on “mere exposure”, where just being exposed to a viewpoint increases one’s belief in it without actual argumentation for it. Zack_M_Davis could then argue that the posting doesn’t get more exposure than would have been gotten by legitimate means.
But we can’t go that far, because there is no clear picture of what the worry is, and unpacking the whole context would probably derail into the political point or otherwise be out of scope for epistemology.
For example, if some crazy scientist, like a Nazi scientist, were burning people (I am assuming that burning people is ethically very bad) to see what happens, I would probably want to make sure that the results he produces contain actual reusable information. Yet I would probably vote against burning people. If I just confined myself to the epistemological sphere, I might know to advise that larger sample sizes lead to more reliable results. However, being acutely aware that the trivial way to increase the sample size would lead to significant activity I oppose (i.e., my advice burns more people), I would probably think a little harder about whether there is a lives-spent-efficient way to get reliability. Sure, refusing any cooperation ensures that I don’t cause any burned people. But it is likely that, left to their own devices, they would end up burning more people than if they were supplied with basic statistics and ways to get maximum data from each trial. On one hand, value is fragile, and small epistemology improvements might correspond to big dips in average well-being. On the other hand, taking the ethical dimension effectively into account will seemingly “corrupt” the cold-hearted data processing. From a lives-saved-ambivalent viewpoint those nudges are needless inefficiencies, “errors”. Now, I don’t know whether the worry about this case is that big, but I would in general be interested in when small linkages are likely to have big impacts. I guess from a pure epistemological viewpoint it would be “value chaoticness”, where small formulation differences have big or unpredictable implications for values.
Imagine instead some crank racist pseudoscientist who, in the process of pursuing their blatantly ideologically-motivated fake “science”, happens to get really interested in the statistics of the normal distribution, and writes a post on your favorite rationality forum about the ratio of areas in the right tails of normal distributions with different means.
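(For concreteness, here is a minimal sketch of the kind of tail-area computation being gestured at, with invented numbers; the means, standard deviation, and threshold below are arbitrary illustrations, not taken from any linked post.)

```python
# Ratio of right-tail areas for two normal distributions with different means.
# All parameter values here are made up for illustration.
from scipy.stats import norm

def tail_ratio(threshold, mean_a, mean_b, sd=1.0):
    """Return P(X_a > threshold) / P(X_b > threshold) for two normals with shared sd."""
    return norm.sf(threshold, loc=mean_a, scale=sd) / norm.sf(threshold, loc=mean_b, scale=sd)

# Even a modest difference in means produces a large ratio far out in the tail.
print(tail_ratio(threshold=3.0, mean_a=0.5, mean_b=0.0))  # ≈ 4.6
```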
Can you say more about why you think La Griffe du Lion is a “crank racist pseudoscientist”? My impression (based on cursory familiarity with the HBD community) is that La Griffe du Lion seems to be respected/recommended by many.
Thanks for asking! So, a Straussian reading was actually intended there.
(Sorry, I know this is really obnoxious. My only defense is that, unlike some more cowardly authors, on the occasions when I stoop to esotericism, I actually explain the Straussian reading when questioned.)
In context, I’m trying to defend the principle that we shouldn’t derail discussions about philosophy on account of the author’s private reason for being interested in that particular area of philosophy having to do with a contentious object-level topic. I first illustrated my point with an Occam’s-razor/atheism example, but, as I said, I was worried that that might come off as self-serving: I want my point to be accepted because the principle I’m advancing is a good one, not due to the rhetorical trick of associating my interlocutor with something locally considered low-status, like religion. So I tried to think of another illustration where my stance (in favor of local validity, or “decoupling norms”) would be associated with something low-status, and what I came up with was statistics-of-the-normal-distribution/human-biodiversity. Having chosen the illustration on the basis of the object-level topic being disreputable, it felt like effective rhetoric to link to an example and performatively “lean in” to the disrepute with a denunciation (“crank racist psuedoscientist”).
In effect, the function of denouncing du Lion was not to denounce du Lion (!), but as a “showpiece” while protecting the principle that we need the unrestricted right to talk about math on this website. Explicitly Glomarizing my views on the merits of HBD rather than simply denouncing would have left an opening for further derailing the conversation on that. This was arguably intellectually dishonest of me, but I felt comfortable doing it because I expected many readers to “get the joke.”
Not every line in 37 Ways is my “standard Bayesian philosophy,” nor do I believe much of what you say follows from anything standard.
This probably isn’t our central disagreement, but humans are Adaptation-Executers, not Fitness-Maximizers. Expecting humans to always use words for Naive Bayes alone seems manifestly irrational. I would go so far as to say you shouldn’t expect people to use them for Naive Bayes in every case, full stop. (This seems to border on subconsciously believing that evolution has a mind.) If you believe someone is making improper inferences, stop trying to change the subject and name an inference you think they’d agree with (that you consider false).
I find it unpleasant that you always bring your hobbyhorse in, but in an “abstract” way that doesn’t allow discussing the actual object level question.
...
That’s understandable, but I hope it’s also understandable that I find it unpleasant that our standard Bayesian philosophy-of-language somehow got politicized (!?), such that my attempts to do correct epistemology are perceived as attacking people?!
I note that this isn’t a denial of the accusation that you’re bringing up a hobbyhorse, disguised by abstraction. It sounds more like a defense of discussing a political specific by means of abstraction. I’ve noted in at least some of your posts that I don’t find your abstractions very compelling without examples, and that I don’t much care for the examples I can think of to reify your abstractions.
It’s at times like this that I’m happy I’m not part of a “rationalist community” that includes repetitive indirection of political fights along with denial that that’s what they are. But I wish you’d keep it off less wrong.
On the next level down, your insistence that words have consistent meaning and categories are real and must be consistent across usages (including both context changes and internal reasoning vs external communication) seems a blind spot. I don’t know if it’s caused by the examples you’re choosing (and not sharing), or if the reverse is true.
It sounds more like a defense of discussing a political specific by means of abstraction.
Zack said:
Like, maybe statistics is part of the common interest of many causes, such that, as a matter of local validity, you should assess arguments about statistics on their own merits in the context that those arguments are presented, without worrying about how those arguments might or might not be applied in other contexts?
What, realistically, do you expect the atheist—or the racist, or me—to do? Am I supposed to just passively accept that all of my thoughts about epistemology are tainted and unfit for this forum, because I happen to be interested in applying epistemology to other topics (on a separate website, under a pseudonym)?
Which isn’t saying specifics should be discussed by discussing abstracts; it says abstracts should be discussed, even when part of the motivation for discussing the abstract is specific. Like, people should be able to collaborate on statistics textbooks even if they don’t agree with their co-authors’ specific applications of statistics to their non-statistical domains. (It would be pretty useless to discuss abstracts if there were no specific motivations, after all...)
Right. At least some abstract topics should be discussed, and part of the discussion is which, if any, specifics might be exemplary of such abstractions. Other abstract topics should be avoided, if the relevant examples are politically-charged and the abstraction doesn’t easily encompass other points of view.
Choosing to discuss primarily those abstracts which happen to support a specific position, without disclosing that tie, is not OK. It’s discussing the specific in the guise of the abstract. I can’t be sure that’s what Zack is doing, but that’s how it appears from my outsider viewpoint.
Other abstract topics should be avoided, if the relevant examples are politically-charged and the abstraction doesn’t easily encompass other points of view.
Why?
Choosing to discuss primarily those abstracts which happen to support a specific position, without disclosing that tie, is not OK.
How exactly does this differ from, “if the truth is on the wrong side politically, so much the worse for the truth”? Should we limit ourselves to abstract discussions that don’t constrain our anticipations on things we care about?
How exactly does this differ from, “if the truth is on the wrong side politically, so much the worse for the truth”?
It differs in that there is no truth involved. The entire conversation is about which models and ontologies are best, without specifying what purpose they’re serving. The abstraction is avoiding talking about any actual truth (what predictions will be made, and how the bets will be resolved), while asserting that it improves some abstract concept of truth.
I’ve noted in at least some of your posts that I don’t find your abstractions very compelling without examples, and that I don’t much care for the examples I can think of to reify your abstractions.
I agree that it’s reasonable for readers to expect authors to provide examples, which is why I do in fact provide examples. What do you want from me, exactly??
We now do have a “banned user” feature. If I understand it right, you should be able to put Zack_M_Davis’s name in it and afterwards you won’t see any more of his posts. If you want to avoid reading his posts because they make you feel bad, that seems to me like the ideal solution given the current way LW works.
Blocking Zack isn’t an appropriate response if, as Vanessa thinks, Zack is attacking her and others in a way that makes these attacks hard to challenge directly. Then he’d still be attacking people even after being blocked, by saying the things he says in a way that influences general opinion.
Feelings are information, not numbers to maximize.
It’s possible that your actual concern is with “I feel” language being used for communication.
You’re right that “feelings are information, not numbers to maximize” and that hiding a user’s posts is often not a good solution because of this.
I don’t think Christian is making this mistake though.
When someone is suffering from an injury they cannot heal, there are two problems, not one. The first is the injury itself — the broken leg, the loss of a relationship, whatever it may be. The second is that incessant alarm saying “THIS IS BAD THIS IS BAD THIS IS BAD” even when there’s nothing you can do.
If you want to help someone in this situation, it’s important to distinguish (and help them distinguish) between the two problems and come to agreement about which one it is that you should be trying to solve: are we trying to fix the injury here, or are we just trying to become more comfortable with the fact that we’re injured? Even asking this question can literally transform the sensation of pain, if the resulting reflection concludes “yeah, there’s nothing else to do about this injury” and “yeah, actually the sensation of pain itself isn’t a problem”.
Earlier in this discussion, Vanessa said “I feel X”, and the response she got was taking the problem to be about the “X” part, and arguing that X is not true. This is a great and satisfying response so long as the perceived problem is definitely “X” and not at all “I feel”. The response wasn’t satisfying though, and she responded by saying that she thought “I feel” was enough to be worth saying.
Since it has already been said that “if the problem is X, we can discuss whether X is actually true, and solve it if it is”, Christian’s contribution was to add “and if it’s not that you think X is actually true and just want help with your feelings, here’s a way that can help”. It’s helpful in the case where Vanessa decides “yes, the problem is primarily the feeling itself, which is maladaptive here”, and it’s also helpful in clarifying (to her and to others) that if she isn’t interested in taking the nerve block, her objection must be a factual claim about X itself, which can then be dealt with as we deal with factual claims (without special regards to feelings, which have been decided to be “not the problem”).
It’s not the most warm and welcoming way to deal with feelings (which may or may not reflect information that is accurate, or perceived as accurate upon reflection), but not every space has to be warm and welcoming. There is a risk of conflating “it helps build community to help people manage their feelings” with “catering to feelings takes precedence over recognizing fact”, and that’s a nasty failure mode to fall into. If we want to manage that risk with a hard and fast “no emotional labor will be supplied here, you must manage your feelings in your own time”, that is a valid approach. And if there is a real threat of that conflation taking over, it’s probably the right one. However, there are better (more pleasant, welcoming/community-building, and yes, truth-finding) methods that we can play with once we’re comfortable that we’re safe from feelings becoming a negative-utility-monster problem. It’s just that in order to play with them safely, we must be very clear about the distinction between “I feel X, and this is valid evidence which you need to deal with” and “I feel X, and this is my problem, which I would appreciate assistance with even though you’re obviously not obligated to fix it for me”.
I like “I feel” language. I think nonviolent communication is good. It is, however, a huge misunderstanding of nonviolent communication to treat it as simply exchanging a few words without exchanging the meaning. It cheapens the language. It has an aspect of dishonesty. People engaging in that kind of dishonesty is the reason that there are articles written about how nonviolent communication doesn’t work.
The institution of trigger warnings exists because some people don’t want to be exposed to certain information that makes them feel bad. Banning users on LW who write posts that make you feel bad is similar.
I think it’s important to give people ways to avoid situations that make them feel bad if that’s their desire.
Quick clarification: That is not what that feature does. It currently only prevents users from commenting on any of your blogposts. I feel quite hesitant to make content-blocking too easy on LessWrong for a variety of reasons, though I am not fundamentally opposed to it. Will see whether I can write my full thoughts up sometime soon.
Actually, I would like clarification from the LW admins on this. As I understood it, the “banned user” feature prevents the given user from commenting on your posts (and… responding to your comments, maybe? I’m not clear on this part either). I am not aware of it doing anything to prevent you from seeing the “banned” user’s posts/comments which they post elsewhere.
That having been said, GreaterWrong does have an “ignore user” feature (which automatically collapses comments from a given user). (Being GW-specific, of course, it does nothing for you if you prefer to use the official site to browse LW content.)
I see no reason why you are not allowed to discuss a specific example when someone talks abstractly, if you consider this example to be important.
If you think the principle advocated in a post like Where to Draw the Boundaries? gets case X very wrong, it would be illuminating to write out why you think the principle that’s advocated gets case X wrong and how that shows that the abstract principle is flawed.
It’s harder to defend, as that means you actually have to articulate a coherent concept of identity and argue why that concept is better than the one in Where to Draw the Boundaries?, but it’s very far removed from saying that you are not allowed to defend yourself.
As far as the norms of double crux goes, when there’s an object level disagreement and the crux seems to be on a higher abstract layer, the standard way to proceed would be to actually discuss the higher layer.
AFAIU discussing charged political issues is not allowed, or at least very frowned upon on LW, and for good reasons. So, I can’t discuss the object level. On the other hand, the meta level is too vague. That is, the error is in the way the abstract reasoning is applied to case X (it’s just not the right model), rather than in the abstract reasoning itself.
Note: you can generally talk about political stuff on your personal blog section. Part of the point of the frontpage/personal-blog distinction is so that there can be a bit of soft-pressure there without actually preventing people from talking about things.
There are certain areas that we might need to make individual judgement calls about (see Vaniver’s comment elsethread). And in general when discussing hot-button political issues I’d suggest you reflect on your goals and life choices (since I think it’s an easy domain to think you’re discussing something important when you’re mostly not). But that’s different from a ban.
Wait, what? If non-promoted posts (if that’s what you mean by “personal blogs”—I can’t find any other way to separate a post) have very different expectations and standards, why is the voting and karma identical and additive between personal blogs and the main site? Political hot-button topics are difficult or impossible to discuss on LW.
I totally support the personal blogs being more lightly moderated, so things that push boundaries or even cross a line may not be noticed as quickly or at all. I expect to be downvoted massively if I post an unpopular political topic, regardless of whether I check “may promote” or not. It used to be I’d expect the same even for a popular political position, but looking at some recent vote totals for posts, I may be wrong on that.
why is the voting and karma identical and additive between personal blogs and the main site?
This is mostly for technical reasons, which we haven’t invested in fixing because the discussion of hot-button political issues hasn’t really been a problem for the last year in a way that would make me highly concerned about the overlapping vote systems.
I don’t know precisely yet what the best voting system would look like to account for this, but the two obvious options would be to either completely deactivate karma accumulation for non-frontpage posts and comments (which feels a bit bad to me, but might be fine, and was our initial plan), or to add a special flag that we selectively add to posts that deactivates karma accumulation (which adds more judgement on our part).
the meta level is too vague. That is, the error is in the way the abstract reasoning is applied to case X (it’s just not the right model), rather than in the abstract reasoning itself
Why not write a meta-level post about the general class of problem for which the abstract reasoning doesn’t apply? That could be an interesting post!
I’m guessing you might be thinking something along the lines of, “The ‘draw category boundaries around clusters of high density in configuration space’ moral doesn’t apply straightforwardly to things that are socially constructed by collective agreement”? (Examples: money, or Christmas. These things exist, but only because everyone agrees that they exist.)
I personally want to do more thinking about how social construction works (I have some preliminary thoughts on the matter that I haven’t finished fleshing out yet), and might write such a post myself eventually!
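(As a toy illustration of the “clusters of high density” moral, my own construction rather than anything from the linked post: fit a two-component Gaussian mixture to one-dimensional data drawn from two bumps, and read off where the predicted category flips. The data and parameters are invented.)

```python
# Toy example: "drawing a category boundary" where one density cluster's
# responsibility overtakes the other's. All numbers are made up.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 500),
                       rng.normal(5.0, 1.0, 500)]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(data)
grid = np.linspace(-3.0, 8.0, 1101).reshape(-1, 1)
labels = gm.predict(grid)
boundary = grid[np.argmax(labels != labels[0])][0]
print(f"Category boundary falls near x = {boundary:.2f}")  # roughly midway between the bumps
```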
Given that Zack chose “Easy Going—I just delete obvious spam and trolling” as the commenting guidelines, I don’t see how it’s not allowed to write an object level comment about “this principle A is often wrongly applied in case X, but it shouldn’t be applied in case X as case X is different for reasons”.
If you actually add something interesting in for reasons*, I would be very surprised if your post gets downvoted. If you actually are able to articulate reasons that address blind spots that people have when thinking about principle A, such a comment would also likely be strongly upvoted, given how LW functions, even if it were partly political.
*and in this case it would be good to have object level reasons and not appeal to personal feelings.
Do you have a blog? If so, discussing the matter there may be sensible.
If you have no blog (or only have one that’s purposed for technical / professional / etc. topics), then it may be worthwhile to set up a third-party LessWrongMeta forum, for discussions of topics like this one. (There would, of course, be many challenges surrounding such a project, but it seems worth trying, and from a technical perspective the barriers to the attempt are low.)
I find it unpleasant that you always bring your hobbyhorse in, but in an “abstract” way that doesn’t allow discussing the actual object level question. It makes me feel attacked in a way that allows for no legal recourse to defend myself.
[Written as an admin]
First and foremost, LW is a space for intellectual progress about rationality and related topics. Currently, we don’t ban people for being fixated on a topic, or ‘darkly hinting,’ or posts they make off-site, and I don’t think we should. We do keep a careful eye on such people, and interpret behavior in ‘grey areas’ accordingly, in a way that I think reflects both good Bayesianism and good moderation practice.
In my favorite world, people who disagree on object-level questions (both political and non-political) can nevertheless civilly discuss abstract issues. This favors asymmetric weapons and is a core component of truth-seeking. So, while hurt feelings and finding things unpleasant are legitimate and it’s worth spending effort optimizing to prevent them, we can’t give them that much weight unless they differentiate the true and the untrue.
That said, there are ways to bring up true things that as a whole move people away from the truth, and you might be worried about agreements on abstractions being twisted to force agreement on object-level issues. These are hard to fight, and frustrating if you see them and others don’t. The best response I know is to catalog the local truths and lay out how they add up to a lie, or establish the case that agreement on those abstractions doesn’t force agreement on the object-level issues, and bring up the catalog every time the local truth advances a global lie. This is a lot more work than flyswatting, but has a much stronger bent towards truth. If you believe this is what Zack is doing, I encourage you to write a compilation post and point people to it as needed; due to the nature of that post, and where it falls on the spectrum from naming abstract dynamics to call-out post, we might leave it on your personal blog or ask that you publish it outside of LW (and link to it as necessary).
That is very reasonable and fair. I think that in practice I won’t write such a compilation post any time soon, because (i) I already created too much drama, (ii) I don’t enjoy writing call-out posts and (iii) my time is much better spent working on AI alignment.
Upon reflection, my strong reaction was probably because my System 1 is designed to deal with Dunbar-number-size groups. In such a tribe, one voice with an agenda which, if implemented, would put me in physical danger, is already notable risk. However, in a civilization of millions the significance of one such voice is microscopic (unless it’s very exceptional in its charisma or otherwise). On the other hand, AGI is a serious risk, and it’s one that I’m much better equipped to affect.
Sorry for causing all this trouble! Hopefully putting this analysis here in public will help me to stay focused in the future :)
Okay, I think I have a right to respond to this.
People being in physical danger is a bad thing. I don’t think of myself as having a lot of strong political beliefs, but I’m going to take a definite stand here: I am against people being in physical danger.
If someone were to present me with a persuasive argument that my writing elsewhere is increasing the number of physical-danger observer-moments in the multiverse on net, then I would seriously consider revising or retracting some of it! But I’m not aware of any such argument.
For what it’s worth, it seems to me that the argument “this writing puts me in physical danger” is absurd as applied to this particular case.
However, as far as I can tell (having re-read her comment several times), that’s not quite the argument Vanessa was making. What she seems to be saying is not “this writing puts me in physical danger” but “this writing expresses a viewpoint whose preferred agenda, if implemented, puts me in physical danger”.
Now, it’s a somewhat subtle distinction. Nevertheless, there does seem to be a difference; and insofar as there is a difference, the latter argument is (it seems to me) rather worse and more dangerous.
Why? Well, firstly, because upon a casual reading it can easily appear to be the first argument—as we just saw. This, of course, sets up a perfect motte-and-bailey situation: if people read the argument, come away being convinced of the former point, but then someone challenges the argument’s author, they can protest that what they actually wrote was the latter (and of course that’ll be true).
This in itself is not blameworthy per se (though it suggests that anyone making the latter point ought to be quite careful and explicit in distinguishing their position from the former point, to prevent such misuse—intentional or otherwise). However, there are more problems.
It is, perhaps, possible to show that a piece of writing, or a piece of rhetoric, would put certain people in danger. It’s difficult—such claims are often almost impossible to falsify—but possible. But what do you do with the claim “this is a voice with an agenda which, if implemented, would put [certain people] in danger”? Is there any way to defend against such an accusation, other than by turning discussion of any piece of writing or rhetoric even vaguely associated with a given “side” into a referendum on the entirety of that side’s worldview?
Finally, suppose the accusation is true. Suppose that I write a post, in which I argue for certain claims which, if true, (allegedly) support the view that Crimea is properly to be considered a part of Russia. (It is not difficult to see, I hope, how such a view could have an agenda associated with it which, if implemented, presents a physical danger to certain people.)
Now, what is to be done about this? Should such writing be banned? Discouraged? Penalized? (Let’s suppose I write this, not on Less Wrong, but on some forum which is devoted to geopolitics, while, however, also being dedicated to neutrality and non-partisan truthseeking; so my post is by no means off-topic.) Should I be censured? Suppose that other members of this forum take offense, and say that I am a voice whose agenda puts them in danger, and that perhaps I ought not to post such things, or enter into discussions of the topic at all. Are they right? Should their wishes be put into action? And what will be the consequences of a forum policy like this?
I think that what you’re saying here is mostly right, but I feel like it leaves out an important facet of the problem.
Some speech acts lower the message length of proposals to attack some groups, or raise the message length of attempts to prevent such attacks. This is a kind of meta-attack or threat, like concentrating troops on a country’s border.
The situation is often asymmetrical in particular contexts—given existing power structures & official narratives, some such meta-attacks are easier to perform than others—and in particular, proposals to alter the official narrative can look more “political” than moves in the opposite direction, even when the official narrative is obviously not a reasonable prior.
This problem is aggravated by a norm of avoiding “political” discourse—if one side of an argument is construed as political and the other isn’t, we get a biased result that favors & intensifies existing power arrangements. It’s also aggravated by norms of calm, impersonal discourse, since that’s easier to perform if you feel safe.
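(A worked gloss on the “message length” framing above, assuming the standard coding-theory identity that an event of probability p costs -log2(p) bits to state under an optimal code; the probabilities below are invented for illustration.)

```python
# Message length under a shared prior: more probable proposals are cheaper to state.
# The probabilities are arbitrary illustrative values.
import math

def description_length_bits(p):
    """Optimal code length, in bits, for an event assigned probability p."""
    return -math.log2(p)

for p in (0.5, 0.1, 0.01):
    print(f"P = {p:>4}: {description_length_bits(p):.2f} bits")
```

On this reading, a speech act that shifts the shared prior toward some proposal makes that proposal cheaper to state, and whatever the prior shifts away from becomes more expensive; that is one way to cash out “lowering” and “raising” message lengths.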
This is true; indeed, it’s difficult to see how it can fail to be true, even in the absence of any awareness or intention on anyone’s part. Yet it seems an exceedingly abstract basis on which to consider even censuring or discouraging certain sorts of speech, much less punishing or banning it.
I agree. I think this makes discouraging political or heated speech hard to do without introducing substantively harmful bias. That’s the context in which Zack’s speech can create a problem for Vanessa (and in which others’ speech created a structurally similar problem for Zack!).
Well, as for “heated” speech, I think discouraging that is easy enough. But where “political” is concerned, my point is exactly that the perspective you take makes it difficult to see where “political” ends, and “non-political” begins—indeed, it does not seem to me to be difficult to start from that view, and construct an argument that all speech is “political”! (And if I understand Zack’s point correctly, he seems to be saying that this has, in essence, already happened, on one particular topic.)
The problem is doing so without the specified harmful consequences. Obviously one can discourage heated speech.
This complexity is in the territory, not just in the map.
Let me try to be a bit clearer with an example. I’m saying that in, for instance, a discussion of human decisionmaking that uses utilitarian frameworks, posts like Totalitarian ethical systems and Should Effective Altruism be at war with North Korea? ought to be considered on-topic, since they discuss patterns of thinking that this framework is likely to push us towards, and point to competing considerations that are harder to express in that frame, which we might want to make sure we don’t lose sight of. Right now, on LessWrong, such posts are ambiguously permissible, in ways that cause Vanessa Kosoy to be legitimately uncertain about whether and to what extent—if she extends the interpretive labor of explaining what she thinks the problems are with Zack’s points—her work will be judged admissible.
IMO, on-topic is a strict subset of what is allowable on LW. There are plenty of topics that are about rationality (especially about group rationality and social/peer norms) but don’t work here because they’re related to topics that tend to trigger tribal or social status problems.
I’m starting to see that “on LW” is different for me than for at least some readers and moderators—it may be that I’m too restrictive in my opinion of non-promoted posts. I’m still going to downvote them.
(Only speaking as a participant, not as a moderator. The rules are currently very clear that you can downvote and upvote whatever you like.)
I do think I would prefer it if you would not downvote personal blogposts if they feel off-topic to you. You can always just uncheck the “show personal blogposts” checkbox on the frontpage. I care a lot about people being able to just explore ideas freely on the site, and you can always downvote them if we do move them to frontpage.
I think that’s fair—I don’t want to discourage exploration of ideas not yet ready for publication, but I _AM_ concerned that people other than me may take the leniency as permission to discuss overtly political topics here. I think I’ll stop voting on non-promoted posts and comments for a bit and see if my worries get worse or better.
Is there a way to tell whether a post is promoted or not, on the page that contains the voting buttons?
Note for any GreaterWrong users who might have a similar question:
When viewing a post, you’ll see an icon under the post name, at the left, indicating what kind of post it is. (In order, the icons are: personal, frontpage, curated, Meta, Alignment Forum.)
We just added the ability to easily identify a post as frontpage or personal on our test-server today. Should be out by early next week.
You can currently tell by hovering over a post in a list of posts, or by looking at the moderation guidelines at the bottom of the post (which will always include “frontpage moderation guidelines” if it’s a frontpage post).
Those posts are definitely permissible on LessWrong from the site-rule perspective, though there is a sense in which they are off-topic in that we didn’t promote them to the frontpage.
I do think that imbalance of frontpage vs. personal already creates some problems, though I think the distinction is doing a bunch of important work that I don’t know how to achieve in other ways.
Rationality is the common interest of many causes; the whole point of it is to lower the message-description-length of proposals that will improve overall utility, while (conversely, and inevitably) raising the message-description-length of other possible proposals that can be expected to worsen it. To be against rationality on such a basis would seem to be quite incoherent. Yes, in rare cases, it might be that this also involves an “attack” on some identified groups, such as theists. But I don’t know of a plausible case that theists have been put in physical danger because rationality has now made their distinctive ideas harder to express! (In this as in many other cases, the more rationality, the less religious persecution/conflict we see out there in the real world!) And I have no reason to think that substituting “trans advocacy that makes plausibly-wrong claims about how the real-world (in this case: human psychology) factually works” for “theism” would lead to a different conclusion. Both instances of this claim seem just as unhinged and plausibly self-serving, in a way that’s hard not to describe as involving bad faith.
I thought the point was to help us model the things we care about more accurately and efficiently, which doesn’t require utilitarianism to be an appealing proximate goal (it just has to require caring about something which depends on objective reality).
Theists can have a hard time formulating the value they place on harmony and community building. Advancing “hard facts” can make them appear to be hateful ignoramuses, which can make it seem okay to be more socially confrontational with them, which might involve more touching. The psychology behind how racism causes dangerous situations for black people might be a good example of how you don’t need explicit representations of acknowledged dangers for something to be in fact dangerous.
I live in a culture that treats “religion-oriented people” as more of a “whatever floats your boat privately” kind of people and not as “people zealously pushing for false beliefs”. I feel that the latter kind of rhetoric makes it easier to paint them as “the enemy” and can be a contributing factor in legitimizing violence against them. Some of the “rationalist-inspired” work pushes harder on “truth vs. falsity” than on “irrelevance of bullshit”, which has a negative impact on near-term security, and the positive impact on security is contingent on the strategy working out. Note that the danger that rationalist-inspired work can create might end up materializing in the hands of people who are far from ideally rational. Yes, some fights are worth fighting, but people also usually agree that having to fight to accomplish something is worse than accomplishing it without fighting. And if you rally people to fight for truth, you are still rallying people to fight. Even if your explicit intention was to avoid rallying, you ended up doing it anyway.
The rationality community itself is far from static; it tends to steadily improve over time, even in the sorts of proposals that it tends to favor. If you go browse RationalWiki (a very early example indeed of something that’s at least comparable to the modern “rationalist” memeplex) you’ll in fact see plenty of content connoting a view of theists as “people who are zealously pushing for false beliefs (and this is bad, really really bad)”. Ask around now on LW itself, or even more clearly on SSC, and you’ll very likely see a far more nuanced view of theism, that de-emphasizes the “pushing for false beliefs” side while pointing out the socially-beneficial orientation towards harmony and community building that might perhaps be inherent in theists’ way of life. But such change cannot and will not happen unless current standards are themselves up for debate! One simply cannot afford to reject debate simply on the view that this might make standards “hazy” or “fuzzy”, and thus less effective at promoting some desirable goals (including, perhaps, the goal of protecting vulnerable people from very real harm and from a low quality of life more generally). An ineffective standard, as the case of views-of-theism shows, is far more dangerous than one that’s temporarily “hazy” or “fuzzy”. Preventing all rational debate on the most “sensitive” issues is the very opposite of an effective, truth-promoting policy; it systematically pushes us towards having the wrong sorts of views, and away from having the right ones.
One should also note that it’s hard to predict how our current standards are going to change in the future. For instance, at least among rationalists, the more recent view “theism? meh, whatever floats your boat” tends to practically go hand-in-hand with a “post-rationalist” redefinition of “what exactly it is that theists mean by ‘God’ ”. You can see this very explicitly in the popularity of egregores like “Gnon”, “Moloch”, “Elua” or “Ra”, which are arguably indistinguishable, at least within a post-rationalist POV, from the “gods” of classical myths! But such a “twist” would be far beyond what the average RationalWiki contributor would have been able to predict as the consensus view about the issue back in that site’s heyday—even if he was unusually favorable to theists! Clearly, if we retroactively tried to apply the argument “we (RationalWiki/the rationalist community) should be a lot more pro-theist than we are, and we cannot allow this to be debated under any circumstances because that would clearly lead to very bad consequences”, we would’ve been selling the community short.
A case more troublesome than an ineffective standard is an actively harmful one. Part of the rationalist virtue sphere is recognising your actual impact even when it goes wildly against your expectations. Political speech being known to be a clusterfuck should orient us towards “getting it right” and not so much towards “applying solutions”. People who grow up into harmony (optimising for harmony in agent speech) while using epistemology as a dump stat are more effective at conversation safety. Even if rationalists have more useful beliefs about other belief-groups, the rationalist memeplex being more distant from other memeplexes means meaningful interaction is harder. We run the risk of having our models of groups such as theists advocate their interests rather than the persons themselves. Sure, we have distinct reasons why we can’t implement group interoperability the same way that they can and do implement it. But if we emphasize how little we value safety vs. accuracy, it doesn’t move us to solve safety. And we are supposedly good at intentionally setting out to solve hard problems. And it should be permissible to try to remove unnecessary obstacles to people joining the conversation. If the plan is to come up with an awesome way to conduct business/conversation and then let that discovery benefit others, a move that makes discovery easier but sharing of the results harder might not move us much closer to the goal than naively caring only about discovery.
I’m very sorry that we seem to be going around in circles on this one. In many ways, the whole point of that call to doing “post-rationality” was indeed an attempt to better engage with the sort of people who, as you say, “have epistemology as a dumpstat”. It was a call to understand that no, engaging in dark side epistemology does not necessarily make one a werewolf that’s just trying to muddy the surface-level issues, that indeed there is a there there. Absent a very carefully laid-out argument about what exactly it is that’s being expected of us I’m never going to accept the prospect that the rationalist community should be apologizing for our incredibly hard work in trying to salvage something workable out of the surface-level craziness that is the rhetoric and arguments that these people ordinarily make. Because, as a matter of fact, calling for that would be the quickest way by far of plunging the community back to the RationalWiki-level knee-jerk reflex of shouting “werewolf, werewolf! Out, out out, begone from this community!” whenever we see a “dark-side-epistemology” pattern being deployed.
(I also think that this whole concern with “safety” is something that I’ve addressed already. But of course, in principle, there’s no reason why we couldn’t simply encompass that into what we mean by a standard/norm being “ineffective”—and I think that I have been explicitly allowing for this with my previous comment.)
It does seem weird that so little communication is achieved with so many words.
I might be conflicted, interpreting messages in opposite directions on different layers.
> Clearly, if we retroactively tried to apply the argument “we (RationalWiki/the rationalist community) should be a lot more pro-theist than we are, and we cannot allow this to be debated under any circumstances because that would clearly lead to very bad consequences”, we would’ve been selling the community short.
This seems like a statement that the argument "we should be pro-theist & cannot allow debate because of bad consequences" would have been an error. If it had been presented as a proposal, it would indeed have been an argument. "Cannot allow debate" seems like a stance against being able to start arguments at all. It seems self-refuting, and in general it amounts to wanting censorship of censorship, which I have a very hard time classifying as either for or against censorship. Now, the situation would be very different if there were a silent or assumed consensus that debate could not be had; it's rather different when the debate, and the decision not to have the debate, actually take place.
I lost track of how exactly it relates to this, but I realised that the "look at these guys spreading known falsehoods" kind of attitude made me not want to engage socially, probably by pattern-matching the other party to a soul sufficiently lost that they cannot be reached within the timeframe of a discussion. And I realised that the standard for sanity I was using for that comparison came from my local culture, and that the "sanity waterline" here might be good enough that I don't understand other people's need for order. The funny thing is that there is enough "sanity seeking" within religious groups that I was used to veteran religious people guiding novices away from those pitfalls. If someone was praying for a miracle for themselves, that would be punished and intervened upon, and I more or less knew the guidance even if I didn't really feel like "team religion". Asking a mysterious power for your personal benefit is magic; it's audacious to ask for that, and it would not be good for the moral development of the person praying to grant that prayer. That is phrased in terms of virtue instead of epistemology, but it is nevertheless insanity under a different conceptualization. The other avenue of argumentation, which focuses on the fact that prayer doesn't work, seemed primitive by comparison. I was far too attuned to tracking how sane vs. insane religion works to really believe in a pitting of reason against religion. (I guess that kind of pitting is present in the parents of this comment; I am assuming that other people might have essentially all of their local pool of religion be insane, so that their opinion of religion as insane is justified, and I even came up with stories about why that could be so because of history.)
I guess part of what initially sparked me to write was that "increasing the description length" of things that worsen overall utility seemed like making nonsense harder to understand. My impression was that the goal is to make nonsense plainly and easily recognizable as such. There is some allusion to a kind of zero-sum tradeoff between description lengths, but my impression was that people have a hard time processing any of the options, and that shortening all of them is on the table.
I had some idea about how, if a decision procedure is too reflex-like, it never enters the conscious mind to be subject to critique. But merely negating an unreflective decision procedure is not intelligence. What you want is to have it enter conscious thought, where you can verify its appropriateness (and where it can be selectively allowed). If you are suffering from an optical illusion, you do not close your eye; you critically evaluate what you are seeing.
You do realize that viewpoints about the state-of-Nature don’t have preferred agendas? Hume teaches us that you can’t derive an ought from an is. By the same token, you can’t refute an is from an ought!
I didn’t say anything about “viewpoints about the state-of-Nature”. I’m not sure what you think I’m saying, but if you interpreted my comment on the basis of the assumption that I am unfamiliar with Hume, then you’ve probably misinterpreted it.
Okay, so where exactly do you see Zack M. Davis as having expressed claims/viewpoints of the “ought” sort? (i.e. viewpoints that might actually be said to involve a preferred agenda of some kind?) Or are you merely saying that this seems to be what Vanessa’s argument implies/relies on, without necessarily agreeing one way or the other?
The latter.
Witch burnings are dangerous. Some people's main defence against being burned is to obediently follow community norms. If some of those norms become hazy, then the strategy of adhering to them becomes harder, and it becomes possible for factors other than compliance to influence who gets attacked; thus the defence strategy built around compliance becomes less effective. Thus anyone who muddies the community norms is dangerous.
Now, if the community norms screw you over personally in a major way, that is very unfortunate, and it might make sense to make more complex norms to mitigate the inconvenience to some members. But this discussion really can't take place without risking the stability of the solution for the "majority", "usual" or "founder" case. Determining what kind of risk is acceptable for the foreseeable improvement of the situation might be very controversial.
There are some physically very able humans who currently don't have to think about their muscles being employed against each other, because of a very simplistic box-style way of thinking. If you take their boxes away, they might need to use more complex cognitive machinery, which would have a higher chance of malfunctioning, which could result in lower-security situations.
In general, it might not be fair to require the people improving the situation for those who suffer from it the most to take into account the slightest security worries of the least impacted. But the effect is there.
Why would anyone think that agreement on meta-level abstractions (like the thing I was trying to say in “Where to Draw the Boundaries?”) would force agreement on object-level issues? That would be crazy! Object-level issues in the real world are really complicated: you’d need to spend a lot more wordcount just to make a case for any particular object-level claim—let alone reach agreement!
Meta-level abstractions are by definition not real. So there’s no arguing whether one is right or wrong. Any “agreement” on an abstraction is about what parts of reality it helps to predict. Without object-level examples, what is there to discuss?
Maybe a better way to put it would be that agreement on meta-level principles more reliably forces agreement on simple object-level issues?
I think it’s important, valuable, and probably necessary to work out theoretical principles in an artificially simple and often “abstract” context, before you can understand how to correctly apply them to a more complicated situation—and the correct application to the more complicated situation is going to be a longer explanation than the simple case. The longer the explanation, the more chances someone has to get one of the burdensome details wrong, leading to more disagreements.
Students of physics first master problems about idealized point masses, frictionless planes, perfectly elastic collisions, &c. as a prerequisite for eventually being able to solve more real-world-relevant problems, like how to build a car or something—even if the ambition of real-world automotive engineering was one’s motivation for studying (or lecturing about) physics.
Similarly, I think students of epistemology need to first master problems about idealized bleggs and rubes with five binary attributes, before they can handle really complicated issues (e.g., the implications for social norms of humans' ability to recognize each other's sex)—even if the ambition of tackling hard sociology problems was one's motivation for studying (or lecturing about) epistemology.
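To make the idealized setup concrete, here's a minimal sketch of a five-binary-attribute blegg/rube classifier in Python; the attribute list and probabilities are invented for illustration and aren't taken from the original exercise:

```python
# Toy blegg/rube problem: five binary attributes, two latent categories.
# All probabilities below are made up for the sake of the example.
import numpy as np

ATTRS = ["blue", "egg_shaped", "furred", "glows", "contains_vanadium"]

# P(attribute = 1 | category), invented values
p_given_blegg = np.array([0.98, 0.95, 0.90, 0.85, 0.97])
p_given_rube  = np.array([0.02, 0.05, 0.10, 0.15, 0.03])
prior_blegg = 0.5

def posterior_blegg(observed):
    """Naive Bayes: treat the five attributes as independent given the category."""
    observed = np.asarray(observed)
    like_blegg = np.prod(np.where(observed, p_given_blegg, 1 - p_given_blegg))
    like_rube  = np.prod(np.where(observed, p_given_rube, 1 - p_given_rube))
    joint_blegg = prior_blegg * like_blegg
    joint_rube  = (1 - prior_blegg) * like_rube
    return joint_blegg / (joint_blegg + joint_rube)

# An object that is blue, egg-shaped, and furred, but doesn't glow or contain vanadium:
print(posterior_blegg([1, 1, 1, 0, 0]))  # still very probably a blegg (~0.98)
```

The point of mastering this artificial case first is that the inference structure is fully transparent, which is exactly what the messy real-world cases lack.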
Imagine being at a physics lecture where one of the attendees kept raising their hand to complain that the speaker was using abstraction to “obfuscate” or “disguise” the real issue of how to build a car. That would be pretty weird, right??
In cases I can think of, this is the reverse of what happens. In fact, agreement on (simple, often too simple) object-level issues allows abstractions which can be agreed on.
Physics is a great example. Intro physics lectures include a massive number of demonstrations. These are simple (too simple to use to build a car), but are clearly and incontrovertibly real-world behaviors. There is object-level proof that the abstractions have at least some tie to reality.
That’s understandable, but I hope it’s also understandable that I find it unpleasant that our standard Bayesian philosophy-of-language somehow got politicized (!?), such that my attempts to do correct epistemology are perceived as attacking people?!
Like, imagine an alternate universe where posts about the minimum description length principle were perceived as an attack on Christians (because atheists often argue that Occam’s razor implies that theories about God are unnecessarily complex), and therefore somewhat unseemly (because politics is the mind-killer, and criticizing a popular religion has inextricable political consequences).
I can see how it would be really annoying if someone on your favorite rationality forum wrote a post about minimum description length, if you knew that their work about MDL was partially derived from other work (on a separate website, under a pseudonym) about atheism, and you happened to think that Occam’s razor actually doesn’t favor atheism.
Or maybe that analogy is going to be perceived as unfair because we live in a subculture that pattern-matches religion as “the bad guys” and atheism as the “good guys”? (I could try to protest, “But, but, you could imagine as part of the thought experiment that maybe Occam’s razor really doesn’t favor atheism”, but maybe that wouldn’t be perceived as credible.)
Fine. We can do better. Imagine instead some crank racist pseudoscientist who, in the process of pursuing their blatantly ideologically-motivated fake "science", happens to get really interested in the statistics of the normal distribution, and writes a post on your favorite rationality forum about the ratio of areas in the right tails of normal distributions with different means.
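To be concrete about what such a post might look like (the means, standard deviation, and threshold below are made-up illustration values, not anyone's actual claims), here's a minimal sketch of that kind of tail-ratio calculation:

```python
# Hypothetical illustration: ratio of right-tail areas for two normal
# distributions with different means but the same standard deviation.
# All numbers are invented for the example.
from scipy.stats import norm

mean_a, mean_b = 100.0, 105.0   # two group means (arbitrary)
sigma = 15.0                    # common standard deviation (arbitrary)
threshold = 130.0               # right-tail cutoff (arbitrary)

# Survival function sf(x) = P(X > x) gives the right-tail area.
tail_a = norm.sf(threshold, loc=mean_a, scale=sigma)
tail_b = norm.sf(threshold, loc=mean_b, scale=sigma)

print(f"P(A > {threshold}) = {tail_a:.4f}")
print(f"P(B > {threshold}) = {tail_b:.4f}")
print(f"ratio B/A = {tail_b / tail_a:.2f}")  # small mean shifts produce large tail ratios
```

The arithmetic itself is checkable regardless of why anyone cares about it.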
I can see how that would be really annoying—maybe even threatening! Which might make it all the more gratifying if you can find a mistake in the racist bastard’s math: then you could call out the mistake in the comments and bask in moral victory as the OP gets downvoted to oblivion for the sin of bad math.
But if you can’t find a mistake—if, in fact, the post is on-topic for the forum and correct in the literal things that it literally says, then complaining about the author’s motive for being interested in the normal distribution doesn’t seem like an obviously positive contribution to the discourse?—even if you’re correct about the author’s motive. (Although, you might not be correct.)
Like, maybe statistics is part of the common interest of many causes, such that, as a matter of local validity, you should assess arguments about statistics on their own merits in the context that those arguments are presented, without worrying about how those arguments might or might not be applied in other contexts?
What, realistically, do you expect the atheist—or the racist, or me—to do? Am I supposed to just passively accept that all of my thoughts about epistemology are tainted and unfit for this forum, because I happen to be interested in applying epistemology to other topics (on a separate website, under a pseudonym)?
I think the grandparent is an on-topic response to the OP, relating the theme of the OP (about how if you don’t have negative feedback or “No”s, then that makes positive feedback or “Yes”es less significant) to both a hypothetical example about social network voting mechanisms, and, separately, to another philosophy topic (about the cognitive function of categories) that I’ve been thinking a lot about lately! That’s generally what happens when people comment on posts: they think about the post in the context of their own knowledge and their own priorities, and then write a comment explaining their actual thoughts!
Like, if you think the actual text of anything I write on this website is off-topic, or poorly-reasoned, or misleading on account of omitting relevant considerations, then please:
Downvote it, and
Leave a critical comment explaining what I got wrong (if you have time).
Those actions are unambiguously prosocial, because downvotes help other users decide what’s worth their time to read, and criticism of bad reasoning helps everyone reading get better at reasoning! But criticizing me because of what you know about my personal psychological motives for making otherwise-not-known-to-be-negative contributions seems … maybe less obviously prosocial?
Like, what happens if you apply this standard consistently? Did you know that Eliezer Yudkowsky’s writings that are ostensibly about human rationality, were actually mostly conceived in the context of his plans to build a superintelligence to literally take over the world?! (Although he denies it, of course.) That’s politics! Should we find it unpleasant that Yudkowsky always brings his hobbyhorse in, but in an “abstract” way that doesn’t allow discussing the actual object-level political question about whether he should rule the world?
Am I wrong here? Like, I see your concern! I really do! I’m sorry if we happen to be trapped in a zero-sum game whereby my attempts to think seriously in public about things I’m interested in ends up imposing negative externalities on you! But what, realistically, do you expect me to do? Happy to talk privately sometime if you’d like. (In a few weeks; I mostly want to focus on group theory and my dayjob for the rest of May.)
I’m not sure what your hobby horse is, but I do take objection to the assumption in this post that decoupling norms are the obvious and only correct way to deal with things. The problem with this is that if you actually care about the world, you can’t take arguments in isolation, but have to consider the context in which they are made.
1. It can be perfectly OK for the environment to bring up a topic once, but can make people less likely to want to visit the forum if someone brings it up all the time and tries to twist other people’s posts towards a discussion of their thing. It would be perfectly alright for moderators who didn’t want to drive away their visitors to ask this person to stop.
2. It can be perfectly OK to kick out someone who has a bad reputation that makes important posters unable to post on your website because they don’t want to associate with that person, even IF that person has good behavior.
3. It can be perfectly OK to downvote posts that are well-reasoned, on topic, and not misleading, because you’re worried about the incentives of those posts being highly upvoted.
All of these things are tradeoffs with decoupled conversation obviously, which has its own benefits. The website has to decide what values it stands for and will fight for, vs. what it will be flexible on depending on context. What I don't think is OK is just to ignore context and assume that decoupling is always unambiguously the right call.
Zack didn’t say this. What he said was:
Which is compatible with thinking more details should be taken into account when the statistical arguments are applied in other contexts (in fact, I’m pretty sure this is what Zack thinks).
Discussion of abstract epistemology principles, which generalize across different contexts, is perhaps most of the point of this website...
Your points 1,2,3 have nothing to do with the epistemic problem of decoupling vs contextualizing, they have to do with political tradeoffs in moderating a forum; they apply to people doing contextualization in their analysis, too. I hate that the phrase “contextualizing norms” is being used to conflate between “all sufficiently relevant information should be used” and “everything should be about politics”.
This is probably because I don’t know what the epistemic problem is. I only know about the linked post, which defines things like this:
I sometimes round this off in my head to something like “pure decouplers think arguments should be considered only on their epistemic merits, and pure contextualizers think arguments should be considered only on their instrumental merits”.
There might be another use of decoupling and contextualizing that applies to an epistemic problem, but if so it’s not defined in the canonical article on the site.
My basic read of Zack’s entire post was him saying over and over “Well there might be really bad instrumental effects of these arguments, but you have to ignore that if their epistemics are good.” And my immediate reaction to that was “No I don’t, and that’s a bad norm.”
The proper words for that aren’t decoupling vs contextualizing, it’s denotative vs enactive language. An orthogonal axis to how many relevant contextual factors are supposed to be taken into account. You can require lots of contextual factors to be taken into account in epistemic analysis, or require certain enactments to be made independent of context.
Note, the original post makes the conflation I’m complaining about here too!
It might just make more sense to give this one up to word inflation and come up with new words. I'll happily use the denotative vs. enactive language to point to this thing in the future, but I'll probably have to put a footnote that says something like "(what most people in the community refer to as decoupling vs. contextualizing)".
It really looks like you’re defending the “appeal to consequences” as a reasonable way to think, and a respectable approach to public epistemology. But that seems so plainly absurd that I have to assume that I’ve misunderstood. What am I missing?
It might be that we just have different definitions of absurd and you’re not missing anything, or it could be that you’re taking an extreme version of what I’m saying.
To wit, my stance is that to ignore the consequences of what you say is just obviously wrong. Even if you hold truth as a very high value, you have to value it insanely more than any other value to never encounter a situation where you’re not compromising other things you value by ignoring the difference you could make by not saying something/lying/being careful about how to phrase things, etc.
Now obviously, you also have to consider the effect this type of thinking/communication has on discourse and the public ability to seek the truth—and once you’ve done that you’re ALREADY thinking about the consequences of what you say and what you allow others to say, and the task at that point is to simply weigh them against each other.
It’s important to distinguish the question of whether, in your own personal decisionmaking, you should ever do things that aren’t maximally epistemically good (obviously, yes); from the question of whether the discourse norms of this website should tolerate appeals to consequences (obviously, no).
It might be morally right, in some circumstances, to pass off a false mathematical proof as a true one (e.g. in a situation where it is useful to obscure some mathematical facts related to engineering weapons of mass destruction). It’s still a violation of the norms of mathematics, with good reason. And it would be very wrong to argue that the norms of mathematics should change to accommodate people making this (by assumption, morally right) choice.
To summarize: you’re destroying the substrate. Stop it.
I agree it’s important to realize that these things are fundamentally different.
A better norm of mathematics might be to NOT publish proofs that have obvious negative consequences like enabling weapons of mass destruction, and have a norm that actively disincentivizes people who publish that sort of research.
In other words, a norm might be to basically be epistemically pure, UNLESS the local instrumental considerations outweigh the cost to epistemic climate. This can be rounded down to “have norms about epistemics and break them sometimes,” but only if when someone points at edge cases where the norms are actively harmful, they’re challenged that sometimes the breaking of those norms is perfectly OK.
IE, if someone is using the norms of the community as a weapon, it’s important to point at that the norms are a means to an end, and that the community won’t blindly allow itself to be taken advantage of.
I think my actual concern with this line of argumentation is: if you have a norm of “If ‘X’ and ‘X implies Y’ then ‘Y’, EXCEPT when it’s net bad to have concluded ‘Y’”, then the werewolves win.
The question of whether it’s net bad to have concluded ‘Y’, is much, much more complicated than the question of whether, logically, ‘Y’ is true under these assumptions (of course, it is). There are many, many more opportunities for werewolves to gum up the works of this process, making the calculation come out wrong.
If we’re having a discussion about X and Y, someone moves to propose ‘Y’ (because, as it has already been agreed, ‘X’ and ‘X implies Y’), and then someone else says “no, we can’t do that, that has negative consequences!”, that second person is probably playing a werewolf strategy, gumming up the works of the epistemic substrate.
If we are going to have the exception to the norm at all, then there has to be a pretty high standard of evidence to prove that adding ‘Y’ to the discourse, in fact, has bad consequences. And, to get the right answer, that discussion itself is going to have to be up to high epistemic standards. To be trustworthy, it’s going to have to make logical inferences much more complex than “if ‘X’ and ‘X implies Y’, then ‘Y’”. What if someone objects to those logical inference steps, on the basis that they would have negative consequences? Where does that discussion happen?
In practice, these questions aren’t actually answered. In practice, what happens is that social epistemology doesn’t happen, and instead everything becomes about coalitional politics. Saying ‘Y’ doesn’t mean ‘Y is literally true’, it means you’re part of the coalition of people who wants consequences related to (but not even necessarily directly implied by!) the statement ‘Y’ to be put into effect, and that makes you blameworthy if those consequences hurt someone sympathetic, or that coalition is bad. Under such conditions, it is a major challenge to re-establish epistemic discourse, because everything is about violence, including attempts to talk about the “we don’t have epistemology and everything is about violence” problem.
We have something approaching epistemic discourse here on LessWrong, but we have to defend it, or it, too, becomes all about coalitional politics.
I want to note that LW definitely has exceptions to this norm, if only because of the boring, normal exceptions. (If we would get in trouble with law enforcement for hosting something you might put on LW, don’t put it on LW.) We’ve had in the works (for quite some time) a post explaining our position on less boring cases more clearly, but it runs into difficulty with the sort of issues that you discuss here; generally these questions are answered in private in a way that connects to the judgment calls being made and the particulars of the case, as opposed to through transparent principles that can be clearly understood and predicted in advance (in part because, to extend the analogy, this empowers the werewolves as well).
Another common werewolf move is to take advantage of strong norms like epistemic honesty, and use them to drive wedges in a community or push their agenda, while knowing they can’t be called out because doing so would be akin to attacking the community’s norms.
I’ve seen the meme elsewhere in the rationality community that strong and rigid epistemic norms are a good sociopath repellent, and it’s ALMOST right. The truth is that competent sociopaths (in the Venkat Rao sense) are actually great at using rigid norms for their own ends, and are great at using the truth for their own ends as well. The reason it might work well in the rationality community (besides the obvious fact that sociopaths are even better at using lies to their own ends than the truth) is that strong epistemics are very close to what we’re actually fighting for—and remembering and always orienting towards the mission is ACTUALLY an effective first line defense against sociopaths (necessary but not sufficient IMO).
99 times out of a 100, the correct way to remember what we’re fighting for is to push for stronger epistemics above other considerations. I knew that when I made the original post, and I made it knowing I would get pushback for attacking a core value of the community.
However, 1 time out of 100 the correct way to remember what you’re fighting for is to realize that you have to sacrifice a sacred value for the greater good. And when you see someone explicitly pushing the gray area by trying to get you to accept harmful situations by appealing to that sacred value, it’s important to make clear (mostly to other people in the community) that sacrificing that value is an option.
What specifically do you mean by “werewolf” here & how do you think it relates to the way Jessica was using it? I’m worried that we’re getting close to just redefining it as a generic term for “enemies of the community.”
By werewolf I meant something like “someone who is pretending be working for the community as a member, but is actually working for their own selfish ends”. I thought Jessica was using it in the same way.
That’s not what I meant. I meant specifically someone who is trying to prevent common knowledge from being created (and more generally, to gum up the works of “social decisionmaking based on correct information”), as in the Werewolf party game.
Worth noting: “werewolf” as a jargon term strikes me as something that is inevitably going to get collapsed into “generic bad actor” over time, if it gets used a lot. I’m assuming that you’re thinking of it sort of as in the “preformal” stage, where it doesn’t make sense to over-optimize the terminology. But if you’re going to keep using it I think it’d make sense to come up with a term that’s somewhat more robust against getting interpreted that way.
(random default suggestion: “obfuscator”. Other options I came up with required multiple words to get the point across and ended up too convoluted. There might be a fun shorthand for a type of animal or mythological figure that is a) a predator or parasite, b) relies on making things cloudy. So far I could just come up with “squid” due to ink jets, but it didn’t really have the right connotations)
That is a bit more specific than what I meant. In this case though, the second more broad meaning of “someone who’s trying to gum up the works of social decisionmaking” still works in the context of the comment.
Um, in context, this sounds to me like you’re arguing that by writing “Where to Draw the Boundaries?” and my secret (“secret”) blog, I’m trying to get people to accept harmful situations? Am I interpreting you correctly? If so, can you explain in detail what specific harm you think is being done?
Sorry, I was trying to be really careful as I was writing of not accusing you specifically of bad intentions, but obviously it’s hard in a conversation like this where you’re jumping between the meta and the object-level.
It’s important to distinguish a couple things.
1. Jessica and I were talking about people with negative intentions in the last two posts. I’m not claiming that you’re one of those people that is deliberately using this type of argument to cause harm.
2. I'm not claiming that it was the writing of those two posts that was harmful in the way we were talking about. I was claiming that the long post you wrote at the top of the thread, where you made several analogies about your response, was exactly the sort of gray-area situation where, depending on context, the community might decide to sacrifice its sacred value. At the same time, you were banking on the fact that it was a sacred value to say "even in this case, we would uphold the sacred value." This has the same structure as the werewolf move mentioned above, and it was important for me to speak up, even if you're not a werewolf.
Thanks for clarifying!
So, it’s actually not clear to me that deliberate negative intentions are particularly important, here or elsewhere? Almost no one thinks of themselves as deliberately causing avoidable harm, and yet avoidable harm gets done, probably by people following incentive gradients that predictably lead towards harm, against truth, &c. all while maintaining a perfectly sincere subjective conscious narrative about how they’re doing God’s work, on the right side of history, toiling for the greater good, doing what needs to be done, maximizing global utility, acting in accordance with the moral law, practicing a virtue which is nameless, &c.
Agreed. If I’m causing harm, and you acquire evidence that I’m causing harm, then you should present that evidence in an appropriate venue in order to either persuade me to stop causing harm, or persuade other people to coördinate to stop me from causing harm.
So, my current guess (which is only a guess and which I would have strongly disagreed with ten years ago) is that this is a suicidally terrible idea that will literally destroy the world. Sound like an unreflective appeal to sacred values? Well, maybe!—you shouldn’t take my word for this (or anything else) except to the exact extent that you think my word is Bayesian evidence. Unfortunately I’m going to need to defer supporting argumentation to future Less Wrong posts, because mental and financial health requirements force me to focus on my dayjob for at least the next few weeks. (Oh, and group theory.)
(End of thread for me.)
(responding, and don’t expect another response back because you’re busy).
I used to think this, but I've since realized that intentions STRONGLY matter. Such a system seems to be fractal: the goals of the subparts/subagents get reflected in the goals of the broader system. People with aligned intentions will tend to shift the incentive gradients, as will people with unaligned intentions (of course, this isn't a one-way relationship; the incentive gradients will also shift the intentions).
I deny that your approach ever has an advantage over recognizing that definitions are tools which have no truth values, and then digging into goals or desires.
Thanks, these are some great points on some of the costs of decoupling norms! (As you’ve observed, I’m generally pretty strongly in favor of decoupling norms, but policy debates should not appear one-sided.)
I would want to distinguish “brings it up all the time” in the sense of “this user posts about this topic when it’s not relevant” (which I agree is bad and warrants moderator action) versus the sense of “this user posts about this topic a lot, and not on other topics” (which I think is generally OK).
If someone is obsessively focused on their narrow special interest—let’s say, algebraic topology—and occasionally comments specifically when they happen to think of an application of algebraic topology to the forum topic, I think that’s fine, because people reading that particular thread get the benefit of a relevant algebraic topology application—even if looking at that user’s posting history leaves one with an unsettling sense of, “Wow, this person is creepily obsessed with their hobbyhorse.”
I agree that this would be bad, but I think it’s usually possible to distinguish “twist[ing] other people’s posts towards a discussion of their thing” from a genuinely relevant mention of the thing that couldn’t (or shouldn’t) be reasonably expected to derail the discussion?
In the present case, my great-great-grandparent comment notes that the list-of-koans format lends itself to readers contributing their own examples in the comments, and I tried to give two such examples (trying to mimic the æsthetic of the OP by continuing the numbered list and Alice/Bob/Charlie/&c. character name sequence), one of which related the theme of the OP to the main point of one of my recent posts.
In retrospect, maybe I should’ve thought more carefully about how to phrase the proposed example in a way that makes the connection to the OP more explicit/obvious? (Probably-better version: “A meaningful ‘Yes’ answer to the question ‘Is G an H?’ requires a definition of H such that the answer could be ‘No’.”)
It’s true that, while composing the great-great-grandparent, I was kind of hoping that some readers would click through the link and read my earlier post, which I worked really hard on and which I think is filling in a gap in “A Human’s Guide to Words” that I’ve seen people be confused about. But I don’t see how this can reasonably be construed as an attempt to derail the discussion? Like, I ordinarily wouldn’t expect a brief comment of the form “Great post! Here’s a couple more examples that occurred to me, personally” to receive any replies in the median case.
(Although unfortunately, it empirically looks like the discussion did, in fact, get derailed. I feel bad for Scott G. that we’re cluttering up his comment section like this, but I can’t think of anything I wish I had done differently other than wording the great-great grandparent more clearly, as mentioned in the paragraph-before-last. Given Vanessa’s reply, I felt justified in writing my counterreply … and here we are.)
Agreed, the moderators are God and their will must be obeyed.
So, the dynamic you describe here definitely exists, but I actually think it’s a pretty serious problem for our collective sanity: if some truths happen to lie outside of Society’s Overton window, then systematic truthseekers (who want to collect all the truths, not just the majority of them that are safely within the Overton window) will find themselves on the wrong side of Respectability, and if people who care about being Respectable (and thereby having power in Society) can’t even talk to people outside the Overton window (not even agree with—just talk to, using, for example, a website), then that could have negative instrumental consequences in the form of people with power in Society making bad policy decisions on account of having inaccurate beliefs.
I want to write more about this in the future (albeit not on Less Wrong), but in the meantime, maybe see the immortal Scott Alexander’s “Kolmogorov Complicity And The Parable Of Lightning” for an expression of similar concerns:
Regarding "Kolmogorov complicity", I just want to make clear that I don't want to censor your opinion on the political question. Such censorship would only serve to justify your notion that "we only refuse to believe X because it's heresy, while any systematic truthseeker would believe X", which is something I very much disagree with. I might be interested in discussing the political question if we were allowed to do it. It's the double bind of not being allowed to argue with you on the political question while having to listen to you constantly hinting at it that is bugging me. Then again, I don't really have a good solution.
I’ve read Zack’s blog (the one that is not under the name Zack M. Davis), and his hobbyhorse has to do with transgender issues and gender categories. However, even when he is writing directly about the matter on his own blog, I am unclear what he is actually saying about these issues. There is still a certain abstractness and distance from the object level.
Just FYI.
(I had originally strong-downvoted the parent because I don’t think it’s relevant, but alas, it looks like the voting population disagreed.)
Wait, really? Am I that bad of a writer??
Well, yes. I’m a rationalist. What do you expect?
Engagement with the object level.
It is nearly impossible for a human being to write a correct program just by thinking really hard. And that is a situation where everything is cut and dried, mathematically exact. Mathematicians do fairly well at proving theorems rigorously, but they have an easier task than programmers, for they only have to convince people, not machines. Outside of those domains, abstract argument on its own is nothing more than abstract art, unless it is continually compared with the object level and exposed to modus tollens.
And the object level is what we’re all doing this for, or what’s the point?
What’s the point of concrete ideas, compared to more abstract ideas? The reasons seem similar, just with different levels of grounding in experience, like with a filter bubble that you can only peer beyond with great difficulty. This situation is an argument against emphasis on the concrete, not for it.
(I think there’s a mixup between “meta” and “abstract” in this subthread. It’s meta that exists for the object level, not abstractions. Abstractions are themselves on object level when you consider them in their own right.)
Everything is on the object level when considered in its own right.
Abstractions are a central example of things considered on the object level, so I don’t understand them as being in opposition to the object level. They can be in opposition to more concrete ideas, those closer to experience, but not to being considered on object level.
The point is the relationship between the levels of the ladder of abstraction. Outside of mathematics and programming, long arguments at high levels go wrong without being checked against experience. If experience contradicts, so much the worse for the argument.
Unsure of mathematics, but software development goes wrong in exactly the same way—designs and ideas too far removed from the silicon go wildly wrong and don’t match at all what actually gets built. Eventually, the code wins and the arguments lose (or more often, the code fails and everybody loses).
Our philosophy of language did not "somehow" get politicized. You personally (Zack M. Davis) politicized it by abusing it in the context of a political issue.
If you had interesting new math or non-trivial novel insights, I would not complain. Of course that’s somewhat subjective: someone else might consider your insights valuable.
You’re right, I don’t have a good meta-level solution. So, if you want to keep doing that thing you’re doing, knock yourself out.
I had a hard time tracking down the referent of the abuse mentioned in the parent post.
It does seem that the concept was employed in a political context. To my brain, politicizing is a particular kind of use. I get that if you effectively employ any kind of argument towards a political end, it becomes politically relevant. However, it would be weird if any tool so employed automatically became part of politics.
If beliefs are to pay rent, and this particular point is established / marketed in order to establish another specific point, I could get on board with an expectation to disclose such "financial ties". Up to this point I know that this belief is sponsored by another belief, but I do not know which belief, and I don't fully get why it would be troublesome to reveal that belief.
See my reply to Said Achmiz.
I don't really have a dog in whatever fight this is, but looking at Zack's posts and comments recently, I see nothing but interesting and correct insights and analysis, devoid of any explicit politics (but perhaps yielding insights about such?). How can you call this "abuse"? The overwhelming majority of the content that gets posted to Less Wrong these days should aspire to the level of quality of the stuff I just linked!
The abuse did not happen on LW. However, because I happen to be somewhat familiar with Davis’ political writing, I am aware of a sinister context to what ey write in LW of which you are not aware. Now, you may say that this is not a fair objection to Davis writing whatever ey write here, and you might well be right. However, I thought I at least have the right to express my feelings on this matter so that Davis and others can take them into account (or not). If we are supposed to be a community, then it should be normal for us to consider each other’s feelings, even when there was no norm violation per se involved, not so?
… a “sinister context”?!
I am, frankly, appalled to read this sort of thing on Less Wrong. You are, in all seriousness, attacking someone’s writings about abstract epistemology and Bayesian inference, on Less Wrong, of all places (!!), not because there is anything at all mistaken about them, but because of some alleged “sinister context” that you are bringing in from somewhere else. To call this “not a fair objection” would be a gross understatement. It is shameful.
Absolutely not.
This sort of attitude is tremendously corrosive to productive discussion and genuine truth-seeking. We have discussed this before… and am I genuinely disappointed that this sort of thing is happening again.
Ugh, because productive discussion happens between perfectly dispassionate robots in a vacuum, and if I’m not one then it is my fault and I should be ashamed? Specifically, I should be ashamed just for saying that something made me uncomfortable rather than suffering in silence? I mean, if that’s your vision, it’s fine, I understand. But I wonder whether that’s really the predominant opinion around here? What about all the stuff about “community” and “Village” etc?
As discussed in the linked thread—it is none of my business, nor the business of any of your interlocutors, whether you are, or are not, a “perfectly dispassionate robot in a vacuum”, when it comes to discussions on subjects like the OP. That is not something which should enter into the discussion at all; it is simply off-topic.
If we permit the introduction of such questions as whether you feel uncomfortable (about the topic, or any on-topic claims) into discussions of abstract epistemology, or Bayesian inference, or logic, etc., when that discomfort in no way bears on the truth or falsity of the claims under discussion, then we might as well close up shop, because at that point, we have bid good-bye even to the pretense of “rationality”, much less the fact of it.
And if the “predominant opinion” disagrees—so much the worse for predominant opinion; and so much the sadder for Less Wrong.
Edit: And all this is, of course, not even mentioning your conflation of “I am uncomfortable” with insinuating comments about “sinister context”, and implications of wrongdoing on Zack’s part!
Alright, let’s suppose it’s off-topic in this thread, or even on this forum. But is there another place within the community’s “discussion space” where it is on-topic? Or you don’t think such a place should exist at all?
I’ve found /r/TheMotte (recently forked from /r/slatestarcodex) to be a good place to discuss politically-charged topics? (Again, also happy to talk privately sometime.)
I wasn’t referring to “where to discuss politically charged topics”, I was referring to “where to discuss the fact that something that happens on LessWrong.com makes me uncomfortable because [reasons]”.
To be honest I prefer to avoid politically charged topics, as long as they avoid me (which they didn’t, in this case).
I just want to chime in quickly to say that I disagree with Said here pretty heavily, but also don’t know that I agree with any other single person in the conversation, and articulating what I actually believe would require more time than I have right now.
I love that you’re willing to say that, but I’m a bit confused as to what purpose that comment serves. Without some indication of which parts you disagree with, and what things you DO believe, all this is saying is “I take no responsibility for what everyone is saying here”, which I assume is true for all of us.
Personally, I agree with Said on a number of aspects—a reader’s reaction to a topic, or to a poster, is not sufficient reason to do anything. This is especially true when the reader’s reaction is primarily based on non-LW information. I DISAGREE that this makes all discussion fair game, as long as it’s got a robe of abstraction which allows deniability that it relates to the painful topic.
I don’t know that I’ve seen anyone besides me claim that the abstraction seems too thin. It would take a discussion of when it applies and when it does not to get me to ignore my (limited) understanding of the participants’ positions on the related-but-not-on-LW topic.
Generally, if you want to talk about how LW is moderated or unpleasant behavior happening here, you should talk to me. [If you think I’m making mistakes, the person to talk to is probably Habryka.] We don’t have an official ombudsman, and perhaps it’s worth putting some effort into finding one.
This information should be publicly findable. And ideally anonymous information about reports received should also be published.
Alright, thank you!
What do you mean by ‘the community’s “discussion space”’? Are you referring to Less Wrong? Or something else?
I mean, the sum total of spaces that the rationalist community uses to hold discussions, propagate information, do collective decision making, (presumably) provide mutual support et cetera, to the extent these spaces are effective in fulfilling their functions. Anywhere where I can say something and people in the community will listen to me, and take this new information into account if it’s worth taking into account, or at least provide me with compassionate feedback even if it’s not.
Firstly, I have always said (and this incident has once again reinforced my view of this) that “we”, which is to say “rationalists”, should not be a “community”.
But, of course, things are what they are. Still, it is hardly any of my business, as a participant of Less Wrong, what discussions you have elsewhere, on some other forum. Why should it be?
Of course, it would be quite beyond the pale if the outcomes of those discussions were used in deciding (by those who have the authority to decide these things—basically, I mean the admins of Less Wrong) how to treat someone here!
In short, I am saying: in other places, discuss whatever you want to discuss (assuming your discussions are appropriate thereto… but, in any case—not my business). None of that should affect any discussions here. “I propose to treat <Less Wrong participant X> in such-and-such a way—why? because he said or did so-and-so, in another place entirely”—this ought not be acceptable or tolerated.
Well, that is a legitimate opinion. I just want to point out that it did not appear to be the consensus so far. If it is the consensus (or becomes such), then it seems fair to ask that this be made clear, in particular to inform people's decisions about how and whether to interact with the forum.
I think it is fairly clear that it’s not the consensus; I alluded to this in my comment (perhaps too obliquely?).
The rest of my comment should be read with the understanding that I’m aware of the above fact.
I won’t go so far as to say there should be no community, but I do believe that it (or they; there are likely lots of involved communities of rationalists) is not synonymous with LessWrong. There is overlap in topics discussed, but there are good LW topics that are irrelevant to some or all communities, and there are LOTS of community topics that don’t do well on LW.
And that includes topics that, in a vacuum, would be appropriate to LW, but are deeply related to topics in a community which are NOT good for LW. Sorry, but that entanglement of ideas makes it impossible to discuss rationally in a large group.
The dispute in question isn't about epistemology but ontology, and I think it's worth keeping the two apart mentally, but your general point still stands.
I think it needs clarification. It's clearly vague enough that it's not a valid reason by itself. However, it is reasonable to think that part of the "bad vibe" is of the kind that makes political meshing bad, while part of it could be relevant.
For example, the worry could be that constantly mentioning a specific point exploits "mere exposure", where just being exposed to a viewpoint increases one's belief in it without any actual argumentation for it. Zack_M_Davis could then argue that the posting doesn't get more exposure than it would have gotten by legitimate means.
But we can't go that far, because there is no clear picture of what the worry is, and unpacking the whole context would probably derail into the political point or otherwise be out of scope for epistemology.
For example, if some crazy scientist, like a Nazi scientist, were burning people (I am assuming that burning people is ethically very bad) to see what happens, I would probably want to make sure that the results he produces contain actual reusable information. Yet I would probably vote against burning people. If I confine myself to the epistemological sphere, I might advise that larger sample sizes lead to more reliable results. However, being acutely aware that the trivial way to increase the sample size would lead to significant activity I oppose (i.e., my advice burns more people), I would probably think a little harder about whether there is a lives-spent-efficient way to get reliability. Sure, refusing any cooperation ensures that I don't cause any burned people. But it is likely that, left to their own devices, they would end up burning more people than if they were supplied with basic statistics and told how to get maximum data from each trial. On one hand, value is fragile, and small epistemology improvements might correspond to big dips in average well-being. On the other hand, taking the ethical dimension into account will seemingly "corrupt" the cold-hearted data processing; from a lives-saved-ambivalent viewpoint, those nudges are needless inefficiencies, "errors". Now, I don't know whether the worry in this case is that big, but I would in general be interested in when small linkages are likely to have big impacts. I guess from a pure epistemological viewpoint it would be "value chaoticness", where small differences in formulation have big or unpredictable implications for values.
Can you say more about why you think La Griffe du Lion is a "crank racist pseudoscientist"? My impression (based on cursory familiarity with the HBD community) is that La Griffe du Lion seems to be respected/recommended by many.
The entire HBD community is seen as racist pseudoscientists by many.
Is the HBD community itself respected?
Thanks for asking! So, a Straussian reading was actually intended there.
(Sorry, I know this is really obnoxious. My only defense is that, unlike some more cowardly authors, on the occasions when I stoop to esotericism, I actually explain the Straussian reading when questioned.)
In context, I'm trying to defend the principle that we shouldn't derail discussions about philosophy on account of the author's private reason for being interested in that particular area of philosophy having to do with a contentious object-level topic. I first illustrated my point with an Occam's-razor/atheism example, but, as I said, I was worried that that might come off as self-serving: I want my point to be accepted because the principle I'm advancing is a good one, not due to the rhetorical trick of associating my interlocutor with something locally considered low-status, like religion. So I tried to think of another illustration where my stance (in favor of local validity, or "decoupling norms") would be associated with something low-status, and what I came up with was statistics-of-the-normal-distribution/human-biodiversity. Having chosen the illustration on the basis of the object-level topic being disreputable, it felt like effective rhetoric to link to an example and performatively "lean in" to the disrepute with a denunciation ("crank racist pseudoscientist").
In effect, the function of denouncing du Lion was not to denounce du Lion (!), but as a “showpiece” while protecting the principle that we need the unrestricted right to talk about math on this website. Explicitly Glomarizing my views on the merits of HBD rather than simply denouncing would have left an opening for further derailing the conversation on that. This was arguably intellectually dishonest of me, but I felt comfortable doing it because I expected many readers to “get the joke.”
Not every line in 37 Ways is my “standard Bayesian philosophy,” nor do I believe much of what you say follows from anything standard.
This probably isn’t our central disagreement, but humans are Adaptation-Executers, not Fitness-Maximizers. Expecting humans to always use words for Naive Bayes alone seems manifestly irrational. I would go so far as to say you shouldn’t expect people to use them for Naive Bayes in every case, full stop. (This seems to border on subconsciously believing that evolution has a mind.) If you believe someone is making improper inferences, stop trying to change the subject and name an inference you think they’d agree with (that you consider false).
...
I note that this isn’t a denial of the accusation that you’re bringing up a hobbyhorse, disguised by abstraction. It sounds more like a defense of discussing a political specific by means of abstraction. I’ve noted in at least some of your posts that I don’t find your abstractions very compelling without examples, and I that I don’t much care for the examples I can think of to reify your abstractions.
It’s at times like this that I’m happy I’m not part of a “rationalist community” that includes repetitive indirection of political fights along with denial that that’s what they are. But I wish you’d keep it off less wrong.
On the next level down, your insistence that words have consistent meaning and categories are real and must be consistent across usages (including both context changes and internal reasoning vs external communication) seems a blind spot. I don’t know if it’s caused by the examples you’re choosing (and not sharing), or if the reverse is true.
Zack said:
Which isn’t saying specifics should be discussed by discussing abstracts, it says abstracts should be discussed, even when part of the motivation for discussing the abstract is specific. Like, people should be able to collaborate on statistics textbooks even if they don’t agree with their co-authors’ specific applications of statistics to their non-statistical domains. (It would be pretty useless to discuss abstracts if there we no specific motivations, after all...)
Right. At least some abstract topics should be discussed, and part of the discussion is which, if any, specifics might be exemplary of such abstractions. Other abstract topics should be avoided, if the relevant examples are politically-charged and the abstraction doesn’t easily encompass other points of view.
Choosing to discuss abstracts primarily which happen to support a specific position, without disclosing that tie, is not OK. It’s discussing the specific in the guise of the abstract. I can’t be sure that’s what Zack is doing, but that’s how it appears from my outsider viewpoint.
Why?
How exactly does this differ from, “if the truth is on the wrong side politically, so much the worse for the truth”? Should we limit ourselves to abstract discussions that don’t constrain our anticipations on things we care about?
It differs in that there is no truth involved. The entire conversation is about which models and ontologies are best, without specifying what purpose they’re serving. The abstraction avoids talking about any actual truth (what predictions will be made, and how the bets will be resolved), while asserting that it improves some abstract concept of truth.
“Where to Draw the Boundaries?” includes examples about dolphins, geographic and political maps, poison, heaps of sand, and job titles. In the comment section, I gave more examples about Scott Alexander’s critique of neoreactionary authors, Müllerian mimicry in snakes, chronic fatigue syndrome, and accent recognition.
I agree that it’s reasonable for readers to expect authors to provide examples, which is why I do in fact provide examples. What do you want from me, exactly??
I have no idea what this is about, but it clearly doesn’t belong here. Can you have this discussion elsewhere?
We now do have a “banned user” feature. If I understand it right, you should be able to put Zack_M_Davis’s name in it, and afterwards you won’t see any more of his posts. If you want to avoid reading his posts because they make you feel bad, that seems to me like the ideal solution given the current way LW works.
Blocking Zack isn’t an appropriate response if, as Vanessa thinks, Zack is attacking her and others in a way that makes these attacks hard to challenge directly. Then he’d still be attacking people even after being blocked, by saying the things he says in a way that influences general opinion.
Feelings are information, not numbers to maximize.
It’s possible that your actual concern is with “I feel” language being used for communication.
You’re right that “feelings are information, not numbers to maximize” and that hiding a user’s posts is often not a good solution because of this.
I don’t think Christian is making this mistake though.
When someone is suffering from an injury they cannot heal, there are two problems, not one. The first is the injury itself — the broken leg, the loss of a relationship, whatever it may be. The second is that incessant alarm saying “THIS IS BAD THIS IS BAD THIS IS BAD” even when there’s nothing you can do.
If you want to help someone in this situation, it’s important to distinguish (and help them distinguish) between the two problems and come to agreement about which one it is that you should be trying to solve: are we trying to fix the injury here, or are we just trying to become more comfortable with the fact that we’re injured? Even asking this question can literally transform the sensation of pain, if the resulting reflection concludes “yeah, there’s nothing else to do about this injury” and “yeah, actually the sensation of pain itself isn’t a problem”.
Earlier in this discussion, Vanessa said “I feel X”, and the response she got took the problem to be about the “X” part, arguing that X is not true. This is a great and satisfying response so long as the perceived problem is definitely “X” and not at all “I feel”. The response wasn’t satisfying, though, and she replied by saying that she thought “I feel” was by itself enough to be worth saying.
Since it has already been said that “if the problem is X, we can discuss whether X is actually true, and solve it if it is”, Christian’s contribution was to add “and if the problem isn’t that you think X is actually true, and you just want help with your feelings, here’s a way that can help”. It’s helpful in the case where Vanessa decides “yes, the problem is primarily the feeling itself, which is maladaptive here”, and it’s also helpful in clarifying (to her and to others) that if she isn’t interested in taking the nerve block, her objection must be a factual claim about X itself, which can then be dealt with as we deal with factual claims (without special regard to feelings, which have been decided to be “not the problem”).
It’s not the most warm and welcoming way to deal with feelings (which may or may not reflect information that is accurate, or perceived as accurate upon reflection), but not every space has to be warm and welcoming. There is a risk of conflating “it helps build community to help people manage their feelings” with “catering to feelings takes precedence over recognizing fact”, and that’s a nasty failure mode to fall into. If we want to manage that risk with a hard and fast rule of “no emotional labor will be supplied here, you must manage your feelings in your own time”, that is a valid approach. And if there is a real threat of that conflation taking over, it’s probably the right one.

However, there are better (more pleasant, more welcoming and community-building, and yes, more truth-finding) methods that we can play with once we’re comfortable that we’re safe from feelings becoming a negative-utility-monster problem. It’s just that in order to play with them safely, we must be very clear about the distinction between “I feel X, and this is valid evidence which you need to deal with” and “I feel X, and this is my problem, which I would appreciate assistance with even though you’re obviously not obligated to fix it for me”.
I like “I feel” language. I think nonviolent communication is good. It’s however a huge misunderstanding of nonviolent communication to treat it as simply swapping out a few words without changing the underlying meaning. That cheapens the language. It has an aspect of dishonesty. People engaging in that kind of dishonesty are the reason there are articles written about how nonviolent communication doesn’t work.
The institution of trigger warnings exists because some people don’t want to be exposed to certain information that makes them feel bad. Banning users on LW who write posts that make you feel bad is similar.
I think it’s important to give people ways to avoid situations that make them feel bad if that’s their desire.
Quick clarification: That is not what that feature does. It currently only prevents users from commenting on any of your blogposts. I feel quite hesitant to make content-blocking too easy on LessWrong for a variety of reasons, though I am not fundamentally opposed to it. Will see whether I can write my full thoughts up sometime soon.
I thought that was what the personal feature is about. It feels to me like “banned users all” also needs a tooltip.
Actually, I would like clarification from the LW admins on this. As I understood it, the “banned user” feature prevents the given user from commenting on your posts (and… responding to your comments, maybe? I’m not clear on this part either). I am not aware of it doing anything to prevent you from seeing the “banned” user’s posts/comments which they post elsewhere.
That having been said, GreaterWrong does have an “ignore user” feature (which automatically collapses comments from a given user). (Being GW-specific, of course, it does nothing for you if you prefer to use the official site to browse LW content.)
Ah, apparently I wrote exactly this at the same time you made this comment. See this comment of mine.
I didn’t know about this feature. It has advantages and disadvantages, but I will at least consider it. Thank you!
My sibling comment may be relevant to your interests.
I see no reason why you shouldn’t be allowed to discuss a specific example, when someone talks abstractly, if you consider this example to be important.
If you think the principle advocated in a post like Where to Draw the Boundaries? gets case X very wrong, it would be illuminating to write out why you think it gets case X wrong and how that shows that the abstract principle is flawed.
It’s harder, as that means you actually have to articulate a coherent account of the concept of identity and argue why that concept is better than the one in Where to Draw the Boundaries?, but that’s very far removed from saying that you are not allowed to defend yourself.
As far as the norms of double crux goes, when there’s an object level disagreement and the crux seems to be on a higher abstract layer, the standard way to proceed would be to actually discuss the higher layer.
AFAIU discussing charged political issues is not allowed, or at least very frowned upon on LW, and for good reasons. So, I can’t discuss the object level. On the other hand, the meta level is too vague. That is, the error is in the way the abstract reasoning is applied to case X (it’s just not the right model), rather than in the abstract reasoning itself.
Note: you can generally talk about political stuff on your personal blog section. Part of the point of the frontpage/personal-blog distinction is so that there can be a bit of soft-pressure there without actually preventing people from talking about things.
There are certain areas that we might need to make individual judgement calls about (see Vaniver’s comment elsethread). And in general when discussing hot-button political issues I’d suggest you reflect on your goals and life choices (since I think it’s an easy domain to think you’re discussing something important when you’re mostly not). But that’s different from a ban.
Wait, what? If non-promoted posts (if that’s what you mean by “personal blogs”; I can’t find any other way to distinguish a post) have very different expectations and standards, why are voting and karma identical and additive between personal blogs and the main site? Political hot-button topics are difficult or impossible to discuss on LW.
I totally support the personal blogs being more lightly moderated, so things that push boundaries or even cross a line may not be noticed as quickly or at all. I expect to be downvoted massively if I post an unpopular political topic, regardless of whether I check “may promote” or not. It used to be I’d expect the same even for a popular political position, but looking at some recent vote totals for posts, I may be wrong on that.
This is mostly for technical reasons, which we haven’t invested in fixing because the discussion of hot-button political issues hasn’t really been a problem for the last year in a way that would make me highly concerned about the overlapping vote systems.
I don’t know precisely yet what the best voting system would look like to account for this, but the two obvious options would be to either completely deactivate karma accumulation for non-frontpage posts and comments (which feels a bit bad to me, but might be fine, and was our initial plan), or to add a special flag that we selectively add to posts that deactivates karma accumulation (which adds more judgement on our part).
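To make the second option a bit more concrete, here is a minimal sketch of what a per-post exemption check might look like. The field names (“frontpage”, “karma_exempt”) and data shapes are invented for illustration; they are not LessWrong’s actual schema or code.

```python
# Illustrative sketch only: the field names below are assumptions made up for
# this example, not LessWrong's actual data model.

def karma_delta_option_1(post: dict, vote_power: int) -> int:
    """Option 1: karma simply doesn't accumulate on non-frontpage posts."""
    return vote_power if post.get("frontpage", False) else 0


def karma_delta_option_2(post: dict, vote_power: int) -> int:
    """Option 2: a moderator-set flag selectively disables accumulation."""
    return 0 if post.get("karma_exempt", False) else vote_power


# A personal-blog post under option 1, or a flagged post under option 2,
# contributes nothing to the author's karma total.
assert karma_delta_option_1({"frontpage": False}, vote_power=2) == 0
assert karma_delta_option_2({"karma_exempt": True}, vote_power=2) == 0
```

The trade-off between the two is exactly the one described above: option 1 is a blanket rule with no per-post judgement, while option 2 concentrates discretion in whoever sets the flag.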
Why not write a meta-level post about the general class of problem for which the abstract reasoning doesn’t apply? That could be an interesting post!
I’m guessing you might be thinking something along the lines of, “The ‘draw category boundaries around clusters of high density in configuration space’ moral doesn’t apply straightforwardly to things that are socially constructed by collective agreement”? (Examples: money, or Christmas. These things exist, but only because everyone agrees that they exist.)
I personally want to do more thinking about how social construction works (I have some preliminary thoughts on the matter that I haven’t finished fleshing out yet), and might write such a post myself eventually!
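For concreteness, here is a toy sketch (with data invented purely for the example) of the case where the “clusters of high density” moral does apply cleanly: category membership just tracks the density structure of the points. The open question is whether anything analogous exists for socially constructed things like money or Christmas.

```python
# Toy illustration of "draw category boundaries around clusters of high
# density in configuration space"; the clusters and centers are invented.
import numpy as np

rng = np.random.default_rng(0)

# Two clusters of high density in a 2-D "configuration space".
cluster_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
cluster_b = rng.normal(loc=[4.0, 4.0], scale=0.5, size=(100, 2))
points = np.vstack([cluster_a, cluster_b])

# Category membership that tracks the density structure: assign each point
# to the nearest cluster center.
centers = np.array([[0.0, 0.0], [4.0, 4.0]])
distances = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
labels = distances.argmin(axis=1)

# Points drawn around [0, 0] land in category 0; points around [4, 4] in 1.
assert labels[:100].mean() < 0.5 and labels[100:].mean() > 0.5
```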
Given that Zack chose “Easy Going—I just delete obvious spam and trolling” as the commenting guidelines, I don’t see why it wouldn’t be allowed to write an object-level comment saying “principle A is often applied to case X, but it shouldn’t be, because case X is different for reasons”.
If you actually fill in something interesting for reasons*, I would be very surprised if your post gets downvoted. If you actually are able to articulate reasons that address blind spots people have when thinking about principle A, such a comment would also likely be strongly upvoted, given how LW functions, even if it were partly political.
*and in this case it would be good to give object-level reasons rather than appeal to personal feelings.
Do you have a blog? If so, discussing the matter there may be sensible.
If you have no blog (or only have one that’s purposed for technical / professional / etc. topics), then it may be worthwhile to set up a third-party LessWrongMeta forum, for discussions of topics like this one. (There would, of course, be many challenges surrounding such a project, but it seems worth trying, and from a technical perspective the barriers to the attempt are low.)