Sure, this is short form. I’m not trying very hard to make a complete argument to defend my thoughts, just putting them out there. There is no norm that I need always abide everywhere to present the best (for some notion of best) version of my reasons for things I claim, least of all, I think, in this space as opposed to, say, in a frontpage post. Thus it feels to me a bit out of place to object in this way here, sort of like objecting that my fridge poetry is not very good or my shower singing is off key.
Now, your point is well taken, but I also generally choose not to cross more than a small amount of inferential distance in my writing (mostly because I think slowly and it requires significant time and effort for me to chain back far enough to be clear to successively wider audiences), since I often think of it as leaving breadcrumbs for those who might be nearby rather than leading people a long way towards a conclusion. I trust people to think things through for themselves and agree with me or not as their reason dictates.
Yes, this means I am often quite distanced from easily verifying the most complex models I have, but such seems to be the nature of complex models that I don’t even have complete in my own mind yet, much less complete in a way that I would lay them out precisely such that they could be precisely verified point by point. This perhaps makes me frustratingly inscrutable about my most exciting claims to those with the least similar priors, but I view it as a tradeoff for aiming to better explain more of the world to myself and those much like me at the expense of failing to make those models legible enough for those insufficiently similar to me to verify them.
Maybe my circumstances will change enough that one day I’ll make a much different tradeoff?
What I’m objecting to isn’t the shortform, but the fundamental presumptuousness inherent in declaring that you know better than everyone else what they’re experiencing, *particularly* in the context of spirituality, where you self-describe as more advanced than most people.
To take a group of people (LWers) who largely say “nah, that stuff you’re on is sketchy and fake” and say “aha, actually, I secretly know that you’re in my domain of expertise and don’t even know it!” is a recipe for all sorts of bad stuff. Like, “not only am I *not* on some sketchy fake stuff, I’m actually superior to my naysayers by very virtue of the fact that they don’t recognize what I’m pointing at! Their very objection is evidence that I see more clearly than they do!”
I’m pouring a lot into your words, but the point isn’t that your words carried all that so much as that they COULD carry all that, in a motte-and-bailey sort of way. The way you’re saying stuff opens the door to abuse, both social and epistemic. My objection wasn’t actually a call for you to give more explanation. It was me saying “cut it out,” while at the same time acknowledging that one COULD, in principle, make the same claim in a justified fashion, if they cared to.
Note: what follows responds literally to what you said. I’m suspicious enough that my interpretation is correct that I’ll respond based on it, but I’m open to the possibility this was meant more metaphorically and I’ve misunderstood your intention.
It was me saying “cut it out,”
Ah, but that’s not up to you, at least not here. You are welcome to dislike what I say, claim or argue that I am dangerous in some way, downvote me, flag my posts, etc. BUT it’s not up to you to enforce a norm here to the best of my knowledge, even if it’s what you would like to do.
Sorry if that is uncharacteristically harsh and direct of me, but if that was your motivation, I think it important to say I don’t recognize you as having the authority to do that in this space, consider it a violation of my commenting guidelines, and will delete future comments that attempt to do the same.
Hey Gordon, let me see if I understand your model of this thread. I’ll write mine and can you tell me if it matches your understanding?
You write a post giving your rough understanding of a commonly discussed topic that many are confused by
Duncan objects to a framing sentence that he claims means “I know better than other people what’s going on in those other people’s heads; I am smarter/wiser/more observant/more honest.” because it seems inappropriate and dangerous in this domain (spirituality)
You say “Dude, I’m just getting some quick thoughts off my chest, and it’s hard to explain everything”
Duncan says you aren’t responding to him properly—he does not believe this is a disagreement but a norm-violation
You say that Duncan is not welcome to prosecute norm violations on your wall unless they are norms that you support
*nods* Then I suppose I feel confused by your final response.
If I imagine writing a shortform post and someone said it was:
Very rude to another member of the community
Endorsing a study that failed to replicate
Lying about an experience of mine
Trying to unfairly change a narrative so that I was given more status
I would often be like “No, you’re wrong” or maybe “I actually stand by it and intended to be rude” or “Thanks, that’s fair, I’ll edit”. I can also imagine times where the commenter is needlessly aggressive and uncooperative where I’d just strong downvote and ignore.
But I’m confused by saying “you’re not allowed to tell me off for norm-violations on my shortform”. To apply that principle more concretely, it could say “you’re not allowed to tell me off for lying on my shortform”.
My actual model of you feels a bit confused by Duncan’s claim or something, and wants to fight back against being attacked for something you don’t see as problematic. Like, it feels presumptuous of Duncan to walk into your post and hold you to what feels mostly like high standards of explanation, and you want to (rightly) say that he’s not allowed to do that.
Yes. To add to this what I’m most strongly reacting to is not what he says he’s doing explicitly, which I’m fine with, but what further conversation suggests he is trying to do: to act as norm enforcer rather than as norm enforcement recommender.
I cannot adequately do that here because it relies on information you conveyed to me in a non-public conversation.
I accept that you say that’s not what you’re doing, and I am happy to concede that your internal experience of yourself as you experience it tells you that you are doing what you are doing, but I now believe that my explanation better describes why you are doing what you are doing than the explanation you are able to generate to explain your own actions.
The best I can maybe offer is that I believe you have said things that are better explained by an intent to enforce norms rather than argue for norms and imply that the general case should be applied in this specific case. I would say the main lines of evidence revolve around how I interpret your turns of phrase, how I read your tone (confrontational and defensive), what aspects of things I have said you have chosen to respond to, how you have directed the conversation, and my general model of human psychology with the specifics you are giving me filled in.
Certainly I may be mistaken in this case and I am reasoning off circumstantial evidence which is not a great situation to be in, but you have pushed me hard enough here and elsewhere that it has made me feel it is necessary to act to serve the purpose of supporting the conversation norms I prefer in the places you have engaged me. I would actually really like this conversation to end because it is not serving anything I value, other than that I believe not responding would simply allow what I dislike to continue and be subtly accepted, and I am somewhat enjoying the opportunity to engage in ways I don’t normally so I can benefit from the new experience.
I note for the record that the above is strong evidence that Gordon was not just throwing an offhand turn of phrase in his original post; he does and will regularly decide that he knows better than other people what’s going on in those other people’s heads. The thing I was worried about, and attempting to shine a light on, was not in my imagination; it’s a move that Gordon endorses, on reflection, and it’s the sort of thing that, historically, made the broader culture take forever to recognize e.g. the existence of people without visual imagery, or the existence of episodics, or the existence of bisexuals, or any number of other human experiences that are marginalized by confident projection.
I’m comfortable with just leaving the conversation at “he, I, and LessWrong as a community are all on the same page about the fact that Gordon endorses making this mental move.” Personally, I find it unjustifiable and morally abhorrent. Gordon clearly does not. Maybe that’s the crux.
[He] does and will regularly decide that he knows better than other people what’s going on in those other people’s heads. [...] Personally, I find it unjustifiable and morally abhorrent.
How can it be morally abhorrent? It’s an epistemic issue. Factual errors often lead to bad consequences, but that doesn’t make those errors moral errors. A moral error is an error about a moral fact, that is, about the assignment of value to situations, as opposed to a prediction of what’s going on. And what someone thinks is a factual question, not a question of assigning value to an event.
Things that are morally abhorrent are not necessarily moral errors. For example, I can find wildlife suffering morally abhorrent, but there are obviously no moral errors or any kind of errors being committed there. Given that the dictionary defines abhorrent as “inspiring disgust and loathing; repugnant,” I think “I find X morally abhorrent” just means “my moral system considers X to be very wrong or to have very low value.”
That’s one way for my comment to be wrong, as in “Systematic recurrence of preventable epistemic errors is morally abhorrent.”
When I was writing the comment, I was thinking of another way it’s wrong: given morality vs. axiology distinction, and distinction between belief and disclosure of that belief, it might well be the case that it’s a useful moral principle to avoid declaring beliefs about what others think, especially when those others disagree with the declarations. In that case it’s a violation of this principle, a moral wrong, to declare such beliefs. (A principle like this gets in the way of honesty, so promoting it is contentious and shouldn’t be an implicit background assumption. And the distinction between belief and its declaration was not clearly made in the above discussion.)
I find it morally abhorrent because, when not justified and made-cruxy (i.e. when done the only way I’ve ever seen Gordon do it), it’s tantamount to trying to erase another person/another person’s experience, and (as noted in my first objection) it often leads, in practice, to socially manipulative dismissiveness and marginalization that’s not backed by reality.
So it’s a moral principle under the belief vs. declaration distinction (as in this comment). In that case I mostly object to not making that distinction (a norm to avoid beliefs of that form is on entirely different level than a norm to avoid their declarations).
Personally I don’t think the norm about declarations is on the net a good thing, especially on LW, as it inhibits talking about models of thought. The examples you mentioned are important but should be covered by a more specialized norm that doesn’t cause as much collateral damage.
I’m not sure I’m exactly responding to what you want me to respond to, but:
It seems to me that a declaration like “I think this is true of other people in spite of their claims to the contrary; I’m not even sure if I could justify why? But for right now, that’s just the state of what’s in my head”
is not objectionable/doesn’t trigger the alarm I was trying to raise. Because even though it fails to offer cruxes or detail, it at least signals that it’s not A STATEMENT ABOUT THE TRUE STATE OF THE UNIVERSE, or something? Like, it’s self-aware about being a belief that may or may not match reality?
Which makes me re-evaluate my response to Gordon’s OP and admit that I could have probably offered the word “think” something like 20% more charity, on the same grounds, though on net I still am glad that I spelled out the objection in public (like, the objection now seems to me to apply a little less, but not all the way down to “oops, the objection was fundamentally inappropriate”).
(By “belief” I meant a belief that takes place in someone’s head, and its existence is not necessarily communicated to anyone else. So an uttered statement “I think X” is a declaration of belief in X, not just a belief in X. A belief in X is just a fact about that person’s mind, without an accompanying declaration. In this framing, the version of the norm about beliefs (as opposed to declarations) is the norm not to think certain thoughts, not a norm to avoid sharing the observations about the fact that you are thinking them.)
I think a salient distinction between declarations of “I think X” and “it’s true that X” is a bad thing, as described in this comment. The distinction is that in the former case you might lack arguments for the belief. But if you don’t endorse the belief, it’s no longer a belief, and “I think X” is a bug in the mind that shouldn’t be called “belief”. If you do endorse it, then “I think X” does mean “X”. It is plausibly a true statement about the state of the universe, you just don’t know why; your mind inscrutably says that it is and you are inclined to believe it, pending further investigation.
So the statement “I think this is true of other people in spite of their claims to the contrary” should mean approximately the same as “This is true of other people in spite of their claims to the contrary”, and a meaningful distinction only appears with actual arguments about those statements, not with different placement of “I think”.
I forget if we’ve talked about this specifically before, but I rarely couch things in ways that make clear I’m talking about what I think rather than what is “true” unless I am pretty uncertain and want to make that really clear or expect my audience to be hostile or primarily made up of essentialists. This is the result of having an epistemology where there is no direct access to reality so I literally cannot say anything that is not a statement about my beliefs about reality, so saying “I think” or “I believe” all the time is redundant because I don’t consider eternal notions of truth meaningful (even mathematical truth, because that truth is contingent on something like the meta-meta-physics of the world and my knowledge of it is still mediated by perception, cf. certain aspects of Tegmark).
I think of “truth” as more like “correct subjective predictions, as measured against (again, subjective) observation”, so when I make claims about reality I’m always making what I think of as claims about my perception of reality since I can say nothing else and don’t worry about appearing to make claims to eternal, essential truth since I so strongly believe such a thing doesn’t exist that I need to be actively reminded that most of humanity thinks otherwise to some extent. Sort of like going so hard in one direction that it looks like I’ve gone in the other because I’ve carved out everything that would have allowed someone to observe me having to navigate between what appear to others to be two different epistemic states where I only have one of them.
This is perhaps a failure of communication, and I think I speak in ways in person that make this much clearer and then I neglect the aspects of tone not adequately carried in text alone (though others can be the judge of that, but I basically never get into discussions about this concern in person, even if I do get into meta discussions about other aspects of epistemology). FWIW, I think Eliezer has (or at least had) a similar norm, though to be fair it got him into a lot of hot water too, so maybe I shouldn’t follow his example here!
leaving the conversation at “he, I, and LessWrong as a community are all on the same page about the fact that Gordon endorses making this mental move.”
Nesov scooped me on the obvious objection, but as long as we’re creating common knowledge, can I get in on this? I would like you and Less Wrong as a community to be on the same page about the fact that I, Zack M. Davis, endorse making the mental move of deciding that I know better than other people what’s going on in those other people’s heads when and only when it is in fact the case that I know better than those other people what’s going on in their heads (in accordance with the Litany of Tarski).
the existence of bisexuals
As it happens, bisexual arousal patterns in men are surprisingly hard to reproduce in the lab![1] This is a (small, highly inconclusive) example of the kind of observation that one might use to decide whether or not we live in a world in which the cognitive algorithm of “Don’t decide that you know other people’s minds better than they do” performs better or worse than other inference procedures.
when and only when it is in fact the case that I know better than those other people what’s going on in their heads (in accordance with the Litany of Tarski).
Yes, as clearly noted in my original objection, there is absolutely a time and a place for this, and a way to do it right; I too share this tool when able and willing to justify it. It’s only suspicious when people throw it out solely on the strength of their own dubious authority. My whole objection is that Gordon wasn’t bothering to (I believe as a cover for not being able to).
Acknowledged. (It felt important to react to the great-grandparent as a show of moral resistance to appeal-to-inner-privacy conversation halters, and it was only after posting the comment that I remembered that you had acknowledged the point earlier in the thread, which, in retrospect, I should have at least acknowledged even if the great-grandparent still seemed worth criticizing.)
there is absolutely a time and a place for this
Exactly—and lesswrong.com is the place for people to report on their models of reality, which includes their models of other people’s minds as a special case.
Other places in Society are right to worry about erasure, marginalization, and socially manipulative dismissiveness! But in my rationalist culture, while standing in the Citadel of Truth, we’re not allowed to care whether a map is marginalizing or dismissive; we’re only allowed to care about whether the map reflects the territory. (And if there are other cultures competing for control of the “rationalist” brand name, then my culture is at war with them.)
My whole objection is that Gordon wasn’t bothering to
Great! Thank you for criticizing people who don’t justify their beliefs with adequate evidence and arguments. That’s really useful for everyone reading!
(I believe as a cover for not being able to).
In context, it seems worth noting that this is a claim about Gordon’s mind, and your only evidence for it is absence-of-evidence (you think that if he had more justification, he would be better at showing it). I have no problem with this (as we know, absence of evidence is evidence of absence), but it seems in tension with some of your other claims?
criticizing people who don’t justify their beliefs with adequate evidence and arguments
I think justification is in the nature of arguments, but not necessary for beliefs or declarations of beliefs. A belief offered without justification is a hypothesis called to attention. It’s concise, and if handled carefully, it can be sufficient for communication. As evidence, it’s a claim about your own state of mind, which holds a lot of inscrutable territory that nonetheless can channel understanding that doesn’t yet lend itself to arguments. Seeking arguments is certainly a good thing, to refactor and convey beliefs, but that’s only a small part of how human intelligence builds its map.
Yeah, if I had the comment to rewrite (I prefer not to edit it at this point) I would say “My whole objection is that Gordon wasn’t bothering to (and at this point in the exchange I have a hypothesis that it’s reflective of not being able to, though that hypothesis comes from gut-level systems and is wrong-until-proven-right as opposed to, like, a confident prior).”
So, having a little more space from all this now, I’ll say that I’m hesitant to try to provide justifications because certain parts of the argument require explaining complex internal models of human minds that are a level more complex than I can explain even though I’m using them (I only seem to be able to interpret myself coherently one level of organization less than the maximum level of organization present in my mind) and because other parts of the argument require gnosis of certain insights that I (and to the best of my knowledge, no one) knows how to readily convey without hundreds to thousands of hours of meditation and one-on-one interactions (though I do know a few people who continue to hope that they may yet discover a way to make that kind of thing scalable even though we haven’t figured it out in 2500 years, maybe because we were missing something important to let us do it).
So it is true that I can’t provide adequate episteme of my claim, and maybe that’s what you’re reacting to. I don’t consider this a problem, but I also recognize that within some parts of the rationalist community that is considered a problem (I model you as being one such person, Duncan). So given that, I can see why from your point of view it looks like I’m just making stuff up or worse since I can’t offer “justified belief” that you’d accept as “justified”, and I’m not really much interested in this particular case in changing your mind as I don’t yet completely know myself how to generate that change in stance towards epistemology in others even though I encountered evidence that lead me to that conclusion myself.
There’s a dynamic here that I think is somewhat important: socially recognized gnosis.
That is, contemporary American society views doctors as knowing things that laypeople don’t know, and views physicists as knowing things that laypeople don’t know, and so on. Suppose a doctor examines a person and says “ah, they have condition X,” and Amy responds with “why do you say that?”, and the doctor responds with “sorry, I don’t think I can generate a short enough explanation that is understandable to you.” It seems like the doctor’s response to Amy is ‘socially justified’, in that the doctor won’t really lose points for referring to a pre-existing distinction between those-in-the-know and laypeople (except maybe for doing it rudely or gracelessly). There’s an important sense in which society understands that it in fact takes many years of focused study to become a physicist, and physicists should not be constrained by ‘immediate public justification’ or something similar.
But then there’s a social question, of how to grant that status. One might imagine that we want astronomers to be able to do their astronomy and have their unintelligibility be respected, while we don’t want to respect the unintelligibility of astrologers.
So far I’ve been talking ‘nationally’ or ‘globally’ but I think a similar question holds locally. Do we want it to be the case that ‘rationalists as a whole’ think that meditators have gnosis and that this is respectable, or do we want ‘rationalists as a whole’ to think that any such respect is provisional or ‘at individual discretion’ or a mistake?
That is, when you say:
I don’t consider this a problem, but I also recognize that within some parts of the rationalist community that is considered a problem (I model you as being one such person, Duncan).
I feel hopeful that we can settle whether or not this is a problem (or at least achieve much more mutual understanding and clarity).
So it is true that I can’t provide adequate episteme of my claim, and maybe that’s what you’re reacting to.
This feels like the more important part (“if you don’t have episteme, why do you believe it?”) but I think there’s a nearly-as-important other half, which is something like “presenting as having respected gnosis” vs. “presenting as having unrespected gnosis.” If you’re like “as a doctor, it is my considered medical opinion that everyone has spirituality”, that’s very different from “look, I can’t justify this and so you should take it with a grain of salt, but I think everyone secretly has spirituality”. I don’t think you’re at the first extreme, but I think Duncan is reacting to signals along that dimension.
That’s not the point! Zack is talking about beliefs, not their declaration, so it’s (hopefully) not the case that there is “a time and a place” for certain beliefs (even when they are not announced), or that beliefs require ability and willingness to justify them (at least for some senses of “justify” and “belief”).
Oh, one last footnote: at no point did I consider the other conversation private, at no point did I request that it be kept private, and at no point did Gordon ask if he could reference it (to which I would have said “of course you can”). i.e. it’s not out of respect for my preferences that that information is not being brought in this thread.
Correct, it was made in a nonpublic but not private conversation, so you are not the only agent to consider, though admittedly the primary one other than myself in this context. I’m not opposed to discussing disclosure, but I’m also happy to let the matter drop at this point since I feel I have adequately pushed back against the behavior I did not want to implicitly endorse via silence since that was my primary purpose in continuing these threads past the initial reply to your comment.
There’s a world of difference between someone saying “[I think it would be better if you] cut it out because I said so” and someone saying “[I think it would be better if you] cut it out because what you’re doing is bad for reasons X, Y, and Z.” I didn’t bother to spell out that context because it was plainly evident in the posts prior. Clearly I don’t have any authority beyond the ability to speak.
I mostly disagree that better reasons matter in a relevant way here, especially since I am currently reading your intent not as one of informing me that you think there is a norm that should be enforced but instead as a bid to enforce that norm. To me what’s relevant is intended effect.
Suppose I’m talking with a group of loose acquaintances, and one of them says (in full seriousness), “I’m not homophobic. It’s not that I’m afraid of gays, I just think that they shouldn’t exist.”
It seems to me that it is appropriate for me to say, “Hey man, that’s not ok to say.” It might be that a number of other people in the conversation would back me up (or it might be that they defend the first guy), but there wasn’t common knowledge of that fact beforehand.
In some sense, this is a bid to establish a new norm, by pushing the private opinions of a number of people into common knowledge. It also seems to me to be a virtuous thing to do in many situations.
(Noting that my response to the guy is not: “Hey, you can’t do that, because I get to decide what people do around here.” It’s “You can’t do that, because it’s bad” and depending on the group to respond to that claim in one way or another.)
“Here are some things you’re welcome to do, except if you do them I will label them as something else and disagree with them.”
Your claim that you had tentative conclusions that you were willing to update away from is starting to seem like lip service.
I am currently reading your intent not as one of informing me that you think there is a norm that should be enforced
Literally my first response to you centers around the phrase “I think it’s a good and common standard to be skeptical of (and even hostile toward) such claims.” That’s me saying “I think there’s a norm here that it’s good to follow,” along with detail and nuance à la here’s when it’s good not to follow it.
This is a question of inferred intent, not what you literally said. I am generally hesitant to take much moderation action based on what I infer, but you have given me additional reason to believe my interpretation is correct in a nonpublic thread on Facebook.
(If admins feel this means I should use a Reign of Terror moderation policy I can switch to that.)
Regardless, I consider this a warning of my local moderation policy only and don’t plan to take action on this particular thread.
Er, I generally have FB blocked, but I have now just seen the thread on FB that Duncan made about you, and that does change how I read the dialogue (it makes Duncan’s comments feel more like they’re motivated by social coordination around you rather than around meditation/spirituality, which I’d previously assumed).
(Just as an aside, I think it would’ve been clearer to me if you’d said “I feel like you’re trying to attack me personally for some reason and so it feels especially difficult to engage in good faith with this particular public accusation of norm-violation” or something like that.)
I may make some small edit to my last comment up-thread a little after taking this into account, though I am still curious about your answer to the question as I initially stated it.
I can have different agendas and follow different norms on different platforms. Just saying. If I were trying to do the exact same thing in this thread as I am in the FB thread, they would have the same words, instead of different words.
(The original objection *does* contain the same words, but Gordon took the conversation in meaningfully different directions on the two different platforms.)
I note that above, Gordon is engaging in *exactly* the same behavior that I was trying to shine a spotlight on (claiming to understand my intent better than I do myself/holding to his model that I intend X despite my direct claims to the contrary).
Sure, this is short form. I’m not trying very hard to make a complete argument to defend my thoughts, just putting them out there. There is no norm that I need always abide everywhere to present the best (for some notion of best) version of my reasons for things I claim, least of all, I think, in this space as opposed to, say, in a frontpage post. Thus it feels to me a bit out of place to object in this way here, sort of like objecting that my fridge poetry is not very good or my shower singing is off key.
Now, your point is well taken, but I also generally choose to simply not be willing to cross more than a small amount of inferential distance in my writing (mostly because I think slowly and it requires significant time and effort for me to chain back far enough to be clear to successively wider audiences), since I often think of it as leaving breadcrumbs for those who might be nearby rather than leading people a long way towards a conclusion. I trust people to think things through for themselves and agree with me or not as their reason dictates.
Yes, this means I am often quite distanced from easily verifying the most complex models I have, but such seems to be the nature of complex models that I don’t even have complete in my own mind yet, much less complete in a way that I would lay them out precisely such that they could be precisely verified point by point. This perhaps makes me frustratingly inscrutable about my most exciting claims to those with the least similar priors, but I view it as a tradeoff for aiming to better explain more of the world to myself and those much like me at the expense of failing to make those models legible enough for those insufficiently similar to me to verify them.
Maybe my circumstances will change enough that one day I’ll make a much different tradeoff?
This response missed my crux.
What I’m objecting to isn’t the shortform, but the fundamental presumptuousness inherent in declaring that you know better than everyone else what they’re experiencing, *particularly* in the context of spirituality, where you self-describe as more advanced than most people.
To take a group of people (LWers) who largely say “nah, that stuff you’re on is sketchy and fake” and say “aha, actually, I secretly know that you’re in my domain of expertise and don’t even know it!” is a recipe for all sorts of bad stuff. Like, “not only am I *not* on some sketchy fake stuff, I’m actually superior to my naysayers by very virtue of the fact that they don’t recognize what I’m pointing at! Their very objection is evidence that I see more clearly than they do!”
I’m pouring a lot into your words, but the point isn’t that your words carried all that so much as that they COULD carry all that, in a motte-and-bailey sort of way. The way you’re saying stuff opens the door to abuse, both social and epistemic. My objection wasn’t actually a call for you to give more explanation. It was me saying “cut it out,” while at the same time acknowledging that one COULD, in principle, make the same claim in a justified fashion, if they cared to.
Note: what follows responds literally to what you said. I’m suspicious enough that my interpretation is correct that I’ll respond based on it, but I’m open to the possibility this was meant more metaphorically and I’ve misunderstood your intention.
Ah, but that’s not up to you, at least not here. You are welcome to dislike what I say, claim or argue that I am dangerous in some way, downvote me, flag my posts, etc. BUT it’s not up to you to enforce a norm here to the best of my knowledge, even if it’s what you would like to do.
Sorry if that is uncharacteristically harsh and direct of me, but if that was your motivation, I think it important to say I don’t recognize you as having the authority to do that in this space, consider it a violation of my commenting guidelines, and will delete future comments that attempt to do the same.
Hey Gordon, let me see if I understand your model of this thread. I’ll write mine and can you tell me if it matches your understanding?
- You write a post giving your rough understanding of a commonly discussed topic that many are confused by
- Duncan objects to a framing sentence that he claims means “I know better than other people what’s going on in those other people’s heads; I am smarter/wiser/more observant/more honest,” because it seems inappropriate and dangerous in this domain (spirituality)
- You say “Dude, I’m just getting some quick thoughts off my chest, and it’s hard to explain everything”
- Duncan says you aren’t responding to him properly: he does not believe this is a disagreement but a norm violation
- You say that Duncan is not welcome to prosecute norm violations on your wall unless they are norms that you support
Yes, that matches my own reading of how the interaction progressed, caveat any misunderstanding I have of Duncan’s intent.
*nods* Then I suppose I feel confused by your final response.
If I imagine writing a shortform post and someone said it was:
- Very rude to another member of the community
- Endorsing a study that failed to replicate
- Lying about an experience of mine
- Unfairly trying to change a narrative so that I was given more status
I would often be like “No, you’re wrong” or maybe “I actually stand by it and intended to be rude” or “Thanks, that’s fair, I’ll edit”. I can also imagine times where the commenter is needlessly aggressive and uncooperative where I’d just strong downvote and ignore.
But I’m confused by saying “you’re not allowed to tell me off for norm-violations on my shortform”. To apply that principle more concretely, it could say “you’re not allowed to tell me off for lying on my shortform”.
My actual model of you feels a bit confused by Duncan’s claim or something, and wants to fight back against being attacked for something you don’t see as problematic. Like, it feels presumptuous of Duncan to walk into your post and hold you to what feels mostly like high standards of explanation, and you want to (rightly) say that he’s not allowed to do that.
Does that all seem right?
Yes. To add to this what I’m most strongly reacting to is not what he says he’s doing explicitly, which I’m fine with, but what further conversation suggests he is trying to do: to act as norm enforcer rather than as norm enforcement recommender.
I explicitly reject Gordon’s assertions about my intentions as false, and ask (ASK, not demand) that he justify (i.e. offer cruxes) or withdraw them.
I cannot adequately do that here because it relies on information you conveyed to me in a non-public conversation.
I accept that you say that’s not what you’re doing, and I am happy to concede that your internal experience of yourself as you experience it tells you that you are doing what you are doing, but I now believe that my explanation better describes why you are doing what you are doing than the explanation you are able to generate to explain your own actions.
The best I can maybe offer is that I believe you have said things that are better explained by an intent to enforce norms rather than argue for norms, and to imply that the general case should be applied in this specific case. I would say the main lines of evidence revolve around how I interpret your turns of phrase, how I read your tone (confrontational and defensive), which aspects of what I have said you have chosen to respond to, how you have directed the conversation, and my general model of human psychology with the specifics you are giving me filled in.
Certainly I may be mistaken in this case and I am reasoning off circumstantial evidence which is not a great situation to be in, but you have pushed me hard enough here and elsewhere that it has made me feel it is necessary to act to serve the purpose of supporting the conversation norms I prefer in the places you have engaged me. I would actually really like this conversation to end because it is not serving anything I value, other than that I believe not responding would simply allow what I dislike to continue and be subtly accepted, and I am somewhat enjoying the opportunity to engage in ways I don’t normally so I can benefit from the new experience.
I note for the record that the above is strong evidence that Gordon was not just throwing an offhand turn of phrase in his original post; he does and will regularly decide that he knows better than other people what’s going on in those other people’s heads. The thing I was worried about, and attempting to shine a light on, was not in my imagination; it’s a move that Gordon endorses, on reflection, and it’s the sort of thing that, historically, made the broader culture take forever to recognize e.g. the existence of people without visual imagery, or the existence of episodics, or the existence of bisexuals, or any number of other human experiences that are marginalized by confident projection.
I’m comfortable with just leaving the conversation at “he, I, and LessWrong as a community are all on the same page about the fact that Gordon endorses making this mental move.” Personally, I find it unjustifiable and morally abhorrent. Gordon clearly does not. Maybe that’s the crux.
How can it be morally abhorrent? It’s an epistemic issue. Factual errors often lead to bad consequences, but that doesn’t make those errors moral errors. A moral error is an error about a moral fact, an assignment of value to situations, as opposed to a prediction of what’s going on. And what someone thinks is a factual question, not a question of assigning value to an event.
Things that are morally abhorrent are not necessarily moral errors. For example, I can find wildlife suffering morally abhorrent, but there are obviously no moral errors, or errors of any kind, being committed there. Given that the dictionary defines abhorrent as “inspiring disgust and loathing; repugnant,” I think “I find X morally abhorrent” just means “my moral system considers X to be very wrong or to have very low value.”
That’s one way for my comment to be wrong, as in “Systematic recurrence of preventable epistemic errors is morally abhorrent.”
When I was writing the comment, I was thinking of another way it’s wrong: given the morality vs. axiology distinction, and the distinction between belief and disclosure of that belief, it might well be the case that it’s a useful moral principle to avoid declaring beliefs about what others think, especially when those others disagree with the declarations. In that case it’s a violation of this principle, a moral wrong, to declare such beliefs. (A principle like this gets in the way of honesty, so promoting it is contentious and shouldn’t be an implicit background assumption. And the distinction between a belief and its declaration was not clearly made in the above discussion.)
I find it morally abhorrent because, when not justified and made-cruxy (i.e. when done the only way I’ve ever seen Gordon do it), it’s tantamount to trying to erase another person/another person’s experience, and (as noted in my first objection) it often leads, in practice, to socially manipulative dismissiveness and marginalization that’s not backed by reality.
So it’s a moral principle under the belief vs. declaration distinction (as in this comment). In that case I mostly object to not making that distinction (a norm to avoid beliefs of that form is on an entirely different level than a norm to avoid their declarations).
Personally I don’t think the norm about declarations is, on net, a good thing, especially on LW, as it inhibits talking about models of thought. The examples you mentioned are important but should be covered by a more specialized norm that doesn’t cause as much collateral damage.
I’m not sure I’m exactly responding to what you want me to respond to, but:
It seems to me that a declaration like “I think this is true of other people in spite of their claims to the contrary; I’m not even sure if I could justify why? But for right now, that’s just the state of what’s in my head”
is not objectionable/doesn’t trigger the alarm I was trying to raise. Because even though it fails to offer cruxes or detail, it at least signals that it’s not A STATEMENT ABOUT THE TRUE STATE OF THE UNIVERSE, or something? Like, it’s self-aware about being a belief that may or may not match reality?
Which makes me re-evaluate my response to Gordon’s OP and admit that I could have probably offered the word “think” something like 20% more charity, on the same grounds, though on net I still am glad that I spelled out the objection in public (like, the objection now seems to me to apply a little less, but not all the way down to “oops, the objection was fundamentally inappropriate”).
(By “belief” I meant a belief that takes place in someone’s head, whose existence is not necessarily communicated to anyone else. So an uttered statement “I think X” is a declaration of belief in X, not just a belief in X. A belief in X is just a fact about that person’s mind, without an accompanying declaration. In this framing, the version of the norm about beliefs (as opposed to declarations) is a norm not to think certain thoughts, not a norm against sharing the observation that you are thinking them.)
I think a salient distinction between declarations of “I think X” and “it’s true that X” is a bad thing, as described in this comment. The distinction is that in the former case you might lack arguments for the belief. But if you don’t endorse the belief, it’s no longer a belief, and “I think X” is a bug in the mind that shouldn’t be called a “belief.” If you do endorse it, then “I think X” does mean “X.” It is plausibly a true statement about the state of the universe; you just don’t know why: your mind inscrutably says that it is, and you are inclined to believe it, pending further investigation.
So the statement “I think this is true of other people in spite of their claims to the contrary” should mean approximately the same as “This is true of other people in spite of their claims to the contrary”, and a meaningful distinction only appears with actual arguments about those statements, not with different placement of “I think”.
I forget if we’ve talked about this specifically before, but I rarely couch things in ways that make clear I’m talking about what I think rather than what is “true” unless I am pretty uncertain and want to make that really clear or expect my audience to be hostile or primarily made up of essentialists. This is the result of having an epistemology where there is no direct access to reality so I literally cannot say anything that is not a statement about my beliefs about reality, so saying “I think” or “I believe” all the time is redundant because I don’t consider eternal notions of truth meaningful (even mathematical truth, because that truth is contingent on something like the meta-meta-physics of the world and my knowledge of it is still mediated by perception, cf. certain aspects of Tegmark).
I think of “truth” as more like “correct subjective predictions, as measured against (again, subjective) observation”, so when I make claims about reality I’m always making what I think of as claims about my perception of reality since I can say nothing else and don’t worry about appearing to make claims to eternal, essential truth since I so strongly believe such a thing doesn’t exist that I need to be actively reminded that most of humanity thinks otherwise to some extent. Sort of like going so hard in one direction that it looks like I’ve gone in the other because I’ve carved out everything that would have allowed someone to observe me having to navigate between what appear to others to be two different epistemic states where I only have one of them.
This is perhaps a failure of communication, and I think I speak in ways in person that make this much clearer and then I neglect the aspects of tone not adequately carried in text alone (though others can be the judge of that, but I basically never get into discussions about this concern in person, even if I do get into meta discussions about other aspects of epistemology). FWIW, I think Eliezer has (or at least had) a similar norm, though to be fair it got him into a lot of hot water too, so maybe I shouldn’t follow his example here!
Nesov scooped me on the obvious objection, but as long as we’re creating common knowledge, can I get in on this? I would like you and Less Wrong as a community to be on the same page about the fact that I, Zack M. Davis, endorse making the mental move of deciding that I know better than other people what’s going on in those other people’s heads when and only when it is in fact the case that I know better than those other people what’s going on in their heads (in accordance with the Litany of Tarski).
As it happens, bisexual arousal patterns in men are surprisingly hard to reproduce in the lab![1] This is a (small, highly inconclusive) example of the kind of observation that one might use to decide whether or not we live in a world in which the cognitive algorithm of “Don’t decide that you know other people’s minds better than they do” performs better or worse than other inference procedures.
[1] J. Michael Bailey, “What Is Sexual Orientation and Do Women Have One?”, section titled “Sexual Arousal Patterns vs. the Kinsey Scale: The Case of Male Bisexuality”
Yes, as clearly noted in my original objection, there is absolutely a time and a place for this, and a way to do it right; I too share this tool when able and willing to justify it. It’s only suspicious when people throw it out solely on the strength of their own dubious authority. My whole objection is that Gordon wasn’t bothering to (I believe as a cover for not being able to).
Acknowledged. (It felt important to react to the great-grandparent as a show of moral resistance to appeal-to-inner-privacy conversation halters, and it was only after posting the comment that I remembered that you had acknowledged the point earlier in the thread, which, in retrospect, I should have at least acknowledged even if the great-grandparent still seemed worth criticizing.)
Exactly—and lesswrong.com is the place for people to report on their models of reality, which includes their models of other people’s minds as a special case.
Other places in Society are right to worry about erasure, marginalization, and socially manipulative dismissiveness! But in my rationalist culture, while standing in the Citadel of Truth, we’re not allowed to care whether a map is marginalizing or dismissive; we’re only allowed to care about whether the map reflects the territory. (And if there are other cultures competing for control of the “rationalist” brand name, then my culture is at war with them.)
Great! Thank you for criticizing people who don’t justify their beliefs with adequate evidence and arguments. That’s really useful for everyone reading!
In context, it seems worth noting that this is a claim about Gordon’s mind, and your only evidence for it is absence-of-evidence (you think that if he had more justification, he would be better at showing it). I have no problem with this (as we know, absence of evidence is evidence of absence), but it seems in tension with some of your other claims?
I think justification is in the nature of arguments, but not necessary for beliefs or declarations of beliefs. A belief offered without justification is a hypothesis called to attention. It’s concise, and if handled carefully, it can be sufficient for communication. As evidence, it’s a claim about your own state of mind, which holds a lot of inscrutable territory that nonetheless can channel understanding that doesn’t yet lend itself to arguments. Seeking arguments is certainly a good thing, to refactor and convey beliefs, but that’s only a small part of how human intelligence builds its map.
Yeah, if I had the comment to rewrite (I prefer not to edit it at this point) I would say “My whole objection is that Gordon wasn’t bothering to (and at this point in the exchange I have a hypothesis that it’s reflective of not being able to, though that hypothesis comes from gut-level systems and is wrong-until-proven-right as opposed to, like, a confident prior).”
So, having a little more space from all this now, I’ll say that I’m hesitant to try to provide justifications for two reasons. First, certain parts of the argument require explaining complex internal models of human minds that are a level more complex than I can explain even though I’m using them (I only seem to be able to interpret myself coherently one level of organization below the maximum level of organization present in my mind). Second, other parts of the argument require gnosis of certain insights that I (and, to the best of my knowledge, anyone else) don’t know how to readily convey without hundreds to thousands of hours of meditation and one-on-one interaction (though I do know a few people who continue to hope they may yet discover a way to make that kind of thing scalable, even though we haven’t figured it out in 2500 years, maybe because we were missing something important).
So it is true that I can’t provide adequate episteme of my claim, and maybe that’s what you’re reacting to. I don’t consider this a problem, but I also recognize that within some parts of the rationalist community it is considered a problem (I model you as one such person, Duncan). So given that, I can see why from your point of view it looks like I’m just making stuff up, or worse, since I can’t offer “justified belief” that you’d accept as “justified.” And I’m not really much interested, in this particular case, in changing your mind, as I don’t yet completely know how to generate that change in stance towards epistemology in others, even though I encountered evidence that led me to that conclusion myself.
There’s a dynamic here that I think is somewhat important: socially recognized gnosis.
That is, contemporary American society views doctors as knowing things that laypeople don’t know, and views physicists as knowing things that laypeople don’t know, and so on. Suppose a doctor examines a person and says “ah, they have condition X,” and Amy responds with “why do you say that?”, and the doctor responds with “sorry, I don’t think I can generate a short enough explanation that is understandable to you.” It seems like the doctor’s response to Amy is ‘socially justified’, in that the doctor won’t really lose points for referring to a pre-existing distinction between those-in-the-know and laypeople (except maybe for doing it rudely or gracelessly). There’s an important sense in which society understands that it in fact takes many years of focused study to become a physicist, and physicists should not be constrained by ‘immediate public justification’ or something similar.
But then there’s a social question, of how to grant that status. One might imagine that we want astronomers to be able to do their astronomy and have their unintelligibility be respected, while we don’t want to respect the unintelligibility of astrologers.
So far I’ve been talking ‘nationally’ or ‘globally’ but I think a similar question holds locally. Do we want it to be the case that ‘rationalists as a whole’ think that meditators have gnosis and that this is respectable, or do we want ‘rationalists as a whole’ to think that any such respect is provisional or ‘at individual discretion’ or a mistake?
That is, when you say:
I feel hopeful that we can settle whether or not this is a problem (or at least achieve much more mutual understanding and clarity).
This feels like the more important part (“if you don’t have episteme, why do you believe it?”) but I think there’s a nearly-as-important other half, which is something like “presenting as having respected gnosis” vs. “presenting as having unrespected gnosis.” If you’re like “as a doctor, it is my considered medical opinion that everyone has spirituality”, that’s very different from “look, I can’t justify this and so you should take it with a grain of salt, but I think everyone secretly has spirituality”. I don’t think you’re at the first extreme, but I think Duncan is reacting to signals along that dimension.
That’s not the point! Zack is talking about beliefs, not their declaration, so it’s (hopefully) not the case that there is “a time and a place” for certain beliefs (even when they are not announced), or that beliefs require ability and willingness to justify them (at least for some senses of “justify” and “belief”).
Oh, one last footnote: at no point did I consider the other conversation private, at no point did I request that it be kept private, and at no point did Gordon ask if he could reference it (to which I would have said “of course you can”). I.e., it’s not out of respect for my preferences that that information is not being brought into this thread.
Correct, it was made in a nonpublic but not private conversation, so you are not the only agent to consider, though admittedly the primary one other than myself in this context. I’m not opposed to discussing disclosure, but I’m also happy to let the matter drop at this point since I feel I have adequately pushed back against the behavior I did not want to implicitly endorse via silence since that was my primary purpose in continuing these threads past the initial reply to your comment.
There’s a world of difference between someone saying “[I think it would be better if you] cut it out because I said so” and someone saying “[I think it would be better if you] cut it out because what you’re doing is bad for reasons X, Y, and Z.” I didn’t bother to spell out that context because it was plainly evident in the posts prior. Clearly I don’t have any authority beyond the ability to speak; to recommend, rather than enforce, IS what I was doing, and all I was doing.
I mostly disagree that better reasons matter in a relevant way here, especially since I am currently reading your intent not as informing me that you think there is a norm that should be enforced, but as a bid to enforce that norm. To me what’s relevant is the intended effect.
What’s the difference?
Suppose I’m talking with a group of loose acquaintances, and one of them says (in full seriousness), “I’m not homophobic. It’s not that I’m afraid of gays, I just think that they shouldn’t exist.”
It seems to me that it is appropriate for me to say, “Hey man, that’s not ok to say.” It might be that a number of other people in the conversation would back me up (or it might be that they defend the first guy), but there wasn’t common knowledge of that fact beforehand.
In some sense, this is a bid to establish a new norm, by pushing the private opinions of a number of people into common knowledge. It also seems to me to be a virtuous thing to do in many situations.
(Noting that my response to the guy is not: “Hey, you can’t do that, because I get to decide what people do around here.” It’s “You can’t do that, because it’s bad” and depending on the group to respond to that claim in one way or another.)
“Here are some things you’re welcome to do, except if you do them I will label them as something else and disagree with them.”
Your claim that you had tentative conclusions that you were willing to update away from is starting to seem like lip service.
Literally my first response to you centers around the phrase “I think it’s a good and common standard to be skeptical of (and even hostile toward) such claims.” That’s me saying “I think there’s a norm here that it’s good to follow,” along with detail and nuance à la here’s when it’s good not to follow it.
This is a question of inferred intent, not what you literally said. I am generally hesitant to take much moderation action based on what I infer, but you have given me additional reason to believe my interpretation is correct in a nonpublic thread on Facebook.
(If admins feel this means I should use a reign of terror moderation policy I can switch to that.)
Regardless, I consider this a warning of my local moderation policy only and don’t plan to take action on this particular thread.
Er, I generally have FB blocked, but I have now just seen the thread on FB that Duncan made about you, and that does change how I read the dialogue (it makes Duncan’s comments feel more like they’re motivated by social coordination around you rather than around meditation/spirituality, which I’d previously assumed).
(Just as an aside, I think it would’ve been clearer to me if you’d said “I feel like you’re trying to attack me personally for some reason and so it feels especially difficult to engage in good faith with this particular public accusation of norm-violation” or something like that.)
I may make some small edit to my last comment up-thread a little after taking this into account, though I am still curious about your answer to the question as I initially stated it.
I can have different agendas and follow different norms on different platforms. Just saying. If I were trying to do the exact same thing in this thread as I am in the FB thread, they would have the same words, instead of different words.
(The original objection *does* contain the same words, but Gordon took the conversation in meaningfully different directions on the two different platforms.)
I note that above, Gordon is engaging in *exactly* the same behavior that I was trying to shine a spotlight on (claiming to understand my intent better than I do myself/holding to his model that I intend X despite my direct claims to the contrary).