Why aren’t people clamoring in the streets for the end of sickness and death?
What? Why? How would clamoring in the streets causally contribute to the end of sickness and death? Even if we interpret “clamoring in the streets” as a metonym for other forms of mass political action—presumably with the aim of increasing government funding for medical research?—it still just doesn’t seem like a very effective strategy compared to more narrowly-targeted interventions that can make direct incremental progress on the problem.
Concrete example: I have a friend who just founded a company to use video of D. magna to more efficiently screen for potential anti-aging drugs. The causal pathway between my friend’s work and defeating aging is clear: if the company succeeds at building their water-flea camera rig drug-discovery process, then they might discover promising chemical compounds, some of which (after further research and development) will successfully treat some of the diseases of aging.
Of course, not everyone has the skillset to do biotechnology work! For example, I don’t. That means my causal contributions to ending sickness and death will be much more indirect. For example, my work on improving the error messages in the Rust compiler has the causal effect of making it ever-so-slightly easier to write software in Rust, some of which software might be used by, e.g., companies working on drug-discovery processes for finding promising chemical compounds, some of which (after further research and development) will successfully treat some of the diseases of aging.
That’s a pretty small and indirect effect, though! To do better, I might try to harness the power of comparative advantage and earn-to-give: instead of unpaid open-source work on Rust, maybe I should work harder at my paid software dayjob, negotiate for a raise, and use that money to fund someone else to do direct work on ending sickness and death. On the other hand, that’s assuming we know how to turn marginal money into marginal (good) research, which might not actually be true, either because a specific area is more talent-constrained than funding-constrained, or more generally because most donations in our inadequate civilization end up getting dissipated into bullshit jobs …
But while we’re thinking about how to contribute to ending sickness and death, it’s also important to track how actions might accidentally contribute to more sickness and/or death. For example, improving the error messages in the Rust compiler, in addition to having the causal effect of making it ever-so-slightly easier to write software in Rust, might have the causal effect of making it ever-so-slightly easier to write an unaligned recursively self-improving artificial intelligence that will destroy all value in our future light cone. Whoops! If it turns out that we live in that possible world, maybe I should do something less destructive with my time, like clamoring in the streets.
You care about comfort, but you also care about what your friends think. You might decide that Vibrams are just so damn comfortable they’re worth a bit of teasing.
I think the exposition here would be more compelling if you explicitly mention the social pressures in both the pro-Vibrams and anti-Vibrams directions: some people will tease you for having “weird” toe-shoes, but some people will think better of you.
Soylent probably markets to a similar demographic niche as Vibrams. I’m sure some people drink Soylent for the “causal” reason of it being an efficient, practical alternative to cooking, rather than the “social” reason that they were suckered by its brilliant marketing to contrarian nerds as an efficient, practical alternative to cooking. But the ten unopened cases of Soylent sitting behind me as I type this represent an uncomfortable weight of evidence that I, personally, am not in the “causal” group, and I suspect some Vibrams-wearers might be in a similar position.
Yes, there are some people who talk about life extension, but they’re just playing at some group game the way goths are. It’s just a club, a rallying point. It’s not about something. It’s just part of the social reality like everything else, and I see no reason to participate in that. I’ve got my own game which doesn’t involve being so weird, a much better strategy.
The phrase “doesn’t involve being so weird” makes me wonder if this is meant as deliberate irony? (“Being weird” is a social-reality concept!) You might want to rewrite this paragraph to clarify your intent.
What evidence do you use to distinguish between people who are playing the “talk about life extension” group game, and people who are actually making progress on making life extension happen in the real, physical universe? (I think this is a very hard problem!)
If you primarily inhabit causal reality (like most people on LessWrong)
The Less Wrong website certainly hosts a lot of insightful blog posts about how to inhabit causal reality. How reliable is the causal pathway between “people read the blog posts” and “those people primarily inhabit causal reality”? That’s an empirical question!
Meta-note: while your comment adds very reasonable questions and objections which you went to the trouble of writing up at length (thanks!), its tone is slightly more combative than I’d like discussion of my posts to be. I don’t think the conditions that would make that the ideal style pertain here. I should perhaps put something like this in my moderation guidelines (update: now added).
I’d be grateful if you write future comments with a little more . . . not sure how to articulate it . . . something like charity, and less expression of incomprehension, more collaborative truth-seeking. Comment as though someone might have a reasonable point even if you can’t see it yet.
If you don’t understand the other person’s point (even after thinking a bit), what’s the collaborative move, other than expressing incomprehension? It seems that anything else would be pretending you understand when you actually don’t, which is adversarial to the collaborative truth-seeking process.
Connotation, denotation, implication, and subtext all come into play here, as does the underlying intent one can infer from them. If you don’t understand someone’s point, it’s entirely right to state that, but there are diverse ways of expressing incomprehension. Contrast:
Expressing incomprehension + a request for further clarification, e.g. “I don’t understand why you think X, especially in light of Y, what am I missing?”, as opposed to
Expressing incomprehension + judgment, opposition, e.g. “I don’t understand, how could anyone think X given that Y!?”
Though inferences about underlying intent and mindstates are still only inferences, I’d say the first version is a lot more expected from a stance of “I assign some credence to your having a point that I missed (or at least act as though I do for the sake of productive discussion) and I’m willing to listen so that we can talk and figure out which of us is really correct here.” When I imagine the second one, it feels like it comes from a place of “You are obviously wrong. Your reasoning is obviously wrong. I want you and everyone else to know that you’re wrong and your beliefs should be dismissed.” (It doesn’t have to mean that—and among people where there is common knowledge that everyone respects everyone else’s reasoning it could even be good—but that’s not the situation in the public comments here.)
The first version of expressing incomprehension, I’d read as coming from a desire to figure out who is right here (hence the collaboration). The second feels more like someone is already sure they are right and wishes to demolish what they see as wrong (more adversarial).
it feels like it comes from a place of “You are obviously wrong. Your reasoning is obviously wrong. I want you and everyone else to know that you’re wrong and your beliefs should be dismissed.”
I would think that if someone’s reasoning is obviously wrong, then that person and everyone else should be informed that they are wrong (and that the particular beliefs that are wrong should be dismissed), because then everyone involved will be less wrong, which is what this website is all about!
Certainly, one would be advised to be very careful before asserting that someone’s reasoning is obviously wrong. (Obvious mistakes are more likely to be caught before publication than subtle ones, so if you think you’ve found an obvious mistake in someone’s post, you should strongly consider the alternative hypotheses that either you’re the one who is wrong, or that you’re, e.g., erroneously expecting short inferential distances.)
More generally, I’m in favor of politeness norms where politeness doesn’t sacrifice expressive power, but I’m wary of excessive emphasis on collaborative norms (what some authors would call “tone-policing”) being used to obfuscate information exchange or even shut it down (via what Yudkowsky characterized as appeal-to-egalitarianism conversation-halters).
If someone is wrong, this should definitely be made legible, so that no one leaves believing the wrong thing. The problem is with the “obviously” part. Once the truth of the object-level question is settled, there is the secondary question of how much we should update our estimate of the competence of whoever made a mistake. I think we should by default try to be clear about the object-level question and object-level mistake, and by default glomarize about the secondary question.
I read Ruby as saying that we should by default glomarize about the secondary question, and also that we should be much more hesitant about assuming an object-level error we spot is real. I think this makes sense as a conversation norm, where clarification is fast, but is bad in a forum, where asking someone to clarify their bad argument frequently leads to a dropped thread and a confusing mess for anyone who comes across the conversation later.
There’s an implication in your comment I don’t necessarily agree with, now that you point it out: “we should be much more hesitant about assuming an object-level error we spot is real” → “we should ask for clarification when we notice something.”
Person A argues X, Person B thinks X is wrong and wants to respond with argument Y. I don’t think they have to ask for clarification, I think it’s enough that they speak in a way that grants that maybe they’re missing something, in a way that’s consistent with having some non-negligible prior that the other person is correct. More about changing how you say things than what you say. So if asking for clarification isn’t helpful, don’t do it.
I would think that if someone’s reasoning is obviously wrong, then that person and everyone else should be informed that they are wrong
As you say in your next paragraph, one should be careful before asserting someone is obviously wrong. But sometimes they are. Still, if the goal is everyone being less wrong, I think some means of communicating are going to be more effective than others. I, at least, am a social monkey. If I am bluntly told I am wrong (even if I agree, even in private—but especially in public), I will feel attacked (if only at the S1 level), threatened (socially), and become defensive. It makes it hard to update and it makes it easy to dislike the one who called me out. The harsh calling out might be effective for onlookers, I suppose. But the strength of the “wrongness assertion” really should come from the arguments behind it, not the rhetorical force of the speaker. If the arguments are solid, they should be damning even with a gentle tone. If people ought to update that my reasoning is poor, they can do so even if the speaker was being polite and according respect.
Even if you wish to express that someone is wrong, I think this is done more effectively if one simultaneously continues to implicitly express “I think there is still some prior that you are correct and I’m curious to hear your thoughts”, or failing that “You are very clearly wrong here yet I still respect you as a thinker who is worth my time to discourse with.” If neither of those is true, you’re in a tough position. Maybe you want them to go away, or you just want other people not to believe false things. There’s an icky thing here: I feel like for there to be productive and healthy discussion, you have to act as though at least one of the above statements is true, even if it isn’t. No one is going to respond well to discussion with someone who they think doesn’t respect them and is happy to broadcast that judgment to everyone else (doing so is legitimately quite a hostile social move).
The hard thing here is that this is about perceptions more than intentions. People interpret things differently, people have different fears and anxieties, and that means things can come across as more hostile than they’re intended. Or the receiver is more afraid of what others will think than the speaker is (reasonably—the receiver has more at stake).
Around here, though, people are pretty good at admitting they’re wrong. But I think certain factors about how they’re communicated with can determine whether it feels like a helpful correction vs. a personal attack.
More generally, I’m in favor of politeness norms where politeness doesn’t sacrifice expressive power,
This might be because I’m overly confident in my writing ability, but I don’t think maintaining politeness would ever curtail my expressive power, although admittedly it can take a lot more time. Do you have any examples, real or fictional, where you feel expressiveness was sacrificed to politeness?
but I’m wary of excessive emphasis on collaborative norms (what some authors would call “tone-policing”) being used to obfuscate information exchange or even shut it down
At the risk of sparking controversy, can you link to any examples of this on LessWrong from the past few years? I want to know if we’re actually in danger of this at all.
(what some authors would call “tone-policing”) being used to obfuscate information exchange or even shut it down (via what Yudkowsky characterized as appeal-to-egalitarianism conversation-halters).
I think the tough thing is that all norms can be weaponized and abused. Basically, if you have a goal which isn’t truth-seeking (which we all do), then there is no norm I can imagine which on its own will stop you. The absence of tone-policing permits heated angry exchanges, attacks, and bullying—but so does a tone policing which is selectively enforced.
On LessWrong, I think we need to strike a balance. We should never say “you used a mean tone, therefore you are wrong and must immediately leave the discussion” or “all opinions must be given equal respect” (cue your link); but we should still say “no, you can’t call people idiots here” and “if you’re going to argue with someone, this can go a lot better if you’re open to the fact that you could be the wrong one.”
Naturally, there’s a lot of grey area in the middle. I like the idea of us being a community where we discuss what ideal discussion looks like, continually refining our norms to something that works really well.
(Hence my writing all this at length—trying to get my own thoughts in order and have something to refer back to/later compile into a post.)
I basically don’t find this compelling, for reasons analogous to No, It’s not The Incentives, it’s you. Yes, there are ways to establish emotional safety between people so that I can point out errors in your reasoning in a way that reduces the degree of threat you feel. But there are also ways for you to reduce the number of bucket errors in your mind, so that I can point out errors in your reasoning without it seeming like an attack on “am I ok?” or something similar.
Versions of this sort of thing that look more like “here is how I would gracefully make that same objection” (which has the side benefit of testing for illusion of transparency) seem to me more likely to be helpful, whereas versions that look closer to “we need to settle this meta issue before we can touch the object level” seem to me like they’re less likely to be helpful, and more likely to be the sort of defensive dodge that should be taxed instead of subsidized.
Strongly agreed. To expand on this—when I see a comment like this:
If I am bluntly told I am wrong (even if I agree, even in private—but especially in public), I will feel attacked (if only at the S1 level), threatened (socially), and become defensive.
The question I have for anyone who says this sort of thing is… do you endorse this reaction? If you do, then don’t hide behind the “social monkey” excuse; honestly declare your endorsement of this reaction, and defend it, on its own merits. Don’t say “I got defensive, as is only natural, what with your tone and all”; say “you attacked me”, and stand behind your words.
But if you don’t endorse this reaction—then deal with it yourself. Clearly, you are aware that you have it; you are aware of the source and nature of your defensiveness. Well, all the better; you should be able, then, to attend to your own involuntary responses. And if you fail to do so—as, being only human, you sometimes (though rarely, one hopes!) will—then the right thing to do is to apologize to your interlocutor: “I know that my defensiveness was irrational, and I regret that it got the better of me, this time; I will endeavor to exercise more self-control, in the future.”
But if you don’t endorse this reaction—then deal with it yourself.
I agree with the above two comments (Vaniver’s and yours) except for a certain connotation of this point. Rejection of one’s own defensiveness does not imply endorsement of insensitivity to tone. I was making this error in modeling others until recently, and I currently cringe at many of my “combative” comments and forum policy suggestions from before 2014 or so. In most cases defensiveness is flat wrong, but so is not optimizing towards keeping the conversation comfortable. It’s tempting to shirk that responsibility in the name of avoiding the danger of compromising the signal with polite distortions. But there is a lot of room for safe optimization in that direction, and making sure people are aware of this is important. “Deal with it yourself” suggests excluding this pressure. Ten years ago, I would have benefitted from it.
To be clear I agree with the benefits of politeness, and also think people probably *underweight* the benefits of politeness because they’re less easy to see. (And, further, there’s a selection effect that people who are ‘rude’ are disproportionately likely to be ones who find politeness unusually costly or difficult to understand, and have less experience with its benefits.)
This is one of the reasons I like an injunction that’s closer to “show the other person how to be polite to you” than “deal with it yourself”; often the person who ‘didn’t see how to word it any other way’ will look at your script and go “oh, I could have written that,” and sometimes you’ll notice that you’re asking them to thread a very narrow needle or are objecting to the core of their message instead of their tone.
I think that’s a good complaint and I’m glad Vaniver pointed it out.
The question I have for anyone who says this sort of thing is… do you endorse this reaction? If you do, then don’t hide behind the “social monkey” excuse; honestly declare your endorsement of this reaction, and defend it, on its own merits. Don’t say “I got defensive, as is only natural, what with your tone and all”; say “you attacked me”, and stand behind your words.
I think this is a very good question. Upon reflection, my answer is that I do endorse it on many occasions (I can’t say that I endorse it on all occasions, especially in the abstract, but many). I think that I and others find ourselves feeling defensive not merely because of uncleared bucket errors, but because we have been “attacked” to some greater or lesser extent.
You are right, the “social monkey” thing is something of an excuse, arguably born out of perhaps excessive politeness. You offer such an excuse when requesting that someone else change in order to be polite, to accept some of the blame for the situation yourself rather than be confrontational and say it’s all them. Trying to paint a way out of conflict where they can save face. (If someone’s behavior already feels uncomfortably confrontational to you and you want to de-escalate, the polite behavior is what comes to mind.)
In truth though, I think that my “monkey brain” (and those of others) pick up on real things: real slights, real hostility, real attempts to do harm. Some are minor, but they’re still real, and it’s fair to push back on them. Some defensiveness is both justified and adaptive.
Upon reflection, my answer is that I do endorse it on many occasions
The salient question is whether it’s a good idea to respond to possible attacks in a direct fashion. Situations that can be classified as attacks (especially in a sense that allows the attacker to remain unaware of this fact) are much more common.
I agree with that. Granting to yourself that you feel legitimately defensive because of a true external attack does not equate to necessarily responding directly (or in any other way). You might say “I am legitimately defensive and it is good my mind caused me to notice the threat”, and then still decide to “suck it up.”
Some defensiveness is both justified and adaptive.
This seems right but tricky. That is, it seems important to distinguish ‘adaptive for my situation’ and ‘adaptive for truth-seeking’ (either as an individual or as a community), and it seems right that hostility or counterattack or so on are sometimes the right tool for individual and community truth-seeking. (Sometimes you are better off if you gag Loki: even though gagging in general is a ‘symmetric weapon,’ gagging of trolls is as asymmetric as your troll-identification system.) Further, there’s this way in which ‘social monkey’-style defenses seem like they make it harder to know (yourself, or have it known in the community) that you have validly identified the person you’re gagging as Loki (because you’ve eroded the asymmetry of your identification system).
It seems like the hoped-for behavior is something like the following: Alice gets a vibe that Bob is being non-cooperative, Alice points out an observation that is relevant to Alice’s vibe (“Bob’s tone”) that also could generate the same vibe in others, and then Bob either acts in a reassuring manner (“oh, I didn’t mean to offend you, let me retract the point or state it more carefully”) or in a confronting manner (“I don’t think you should have been offended by that, and your false accusation / tone policing puts you in the wrong”), and then there are three points to track: object-level correctness, whether Bob is being cooperative once Bob’s cooperation has been raised to salience, and whether Alice’s vibe of Bob’s intent was a valid inference.
It seems to me like we can still go through a similar script without making excuses or obfuscating, but it requires some creativity and this might not be the best path to go down.
I think some means of communicating are going to be more effective than others
Yes, marketing is important.
“I think there is still some prior that you are correct and I’m curious to hear your thoughts”, or failing that “You are very clearly wrong here yet I still respect you as a thinker who is worth my time to discourse with.” [...] I feel like for there to be productive and healthy discussion you have to act as though at least one of the above statements is true, even if it isn’t.
You can just directly respond to your interlocutor’s arguments. Whether or not you respect them as a thinker is off-topic. “You said X, but this is wrong because of Y” isn’t a personal attack!
this can go a lot better if you’re open to the fact that you could be the wrong one
Your degree of openness to the hypothesis that you could be the wrong one should be proportional to the actual probability that you are, in fact, the wrong one. Rules that require people to pretend to be more uncertain than they actually are (because disagreement is disrespect) run a serious risk of degenerating into “I accept a belief from you if you accept a belief from me” social exchange.
can you link to any examples of this on LessWrong from the past few years?
For example, I’m not sure how I’m supposed to rewrite my initial comment on this post to be more collaborative without making it worse writing.
Whether or not you respect them as a thinker is off-topic.
Unless I evaluate someone else to be far above my level or I have a strong credence that there’s definitely something I have to learn from them, then my interest in conversing heavily depends on whether I think they will act as though they respect me. It’s not just on-topic, it’s the very default fundamental premise on which I decide to converse with people or not—and a very good predictor of whether the conversation will be at all productive. I have greatly reduced motivation to talk to people who have decided that they have no respect for my reasoning, are only there to “enlighten” me, and are going to transparently act that way.
“You said X, but this is wrong because of Y” isn’t a personal attack!
Not inherently. But “tone” is a big deal and yours is consistently one of attack around statements which needn’t be so.
For example, I’m not sure how I’m supposed to rewrite my initial comment on this post to be more collaborative without making it worse writing.
Some examples of unnecessary aspects of your writing which make it hostile and worse:
As you said yourself, this was rhetorical, even feigned, surprise, and you were attempting “naive Socratic ducking.”
Of course, not everyone has the skillset to do biotechnology work!
That’s a pretty small and indirect effect, though!
Whoops!
The tone of these sentences, appending an exclamation mark to trivial statements, registers to me as actually condescending and rude. It’s as though you’re trying to educate me as though I were a child, adding energy and surprise to your lessons.
Content-wise, it’s a bit similar. You’re using a lot of examples to back up a very simple point (that clamoring in the streets isn’t an effective strategy). In fact I think you misunderstood my point (which is not unfair, the motivating example was only a paragraph), and if you’d simply said “I don’t see how this is surprising, that wouldn’t be very strategic”, I could have clarified that, since my predictions of people’s behavior do not assume people are strategic, the fact that something isn’t strategic doesn’t reduce my surprise. Or maybe that wasn’t quite the misunderstanding or complaint—but you could have expected it was something.
In practice, you spent 400 words elaborating on points which are kinda basic and with which I don’t disagree. Decomposing my impression that your comment was hostile, I think a bunch of it stemmed from the fact that you thought you needed to explain those points to me—that you thought my statement which seemed wrong to you was based on my failure to comprehend a very simple thing rather than perhaps me failing to have communicated a more complicated thing to you which made more sense.
Thinking about it, I think your comment is worse writing for how you’ve gone about it. A more effective version might go like this:
“I find your statement surprising. Of course people aren’t clamoring in the streets—that’d hardly be effective. Effective things might be something like my friend’s company . . .
You needn’t educate me about comparative advantage and earning to give. I won’t assume you’re paying attention, but you might have noticed that I post a moderate amount on LessWrong and am in fact a member of the LessWrong 2.0 team. I’m not new to this community. Probably, I’ve heard of comparative advantage and earning to give.
It also feels to me (though I grant this could entirely be in my head) like you’re maybe leaning a bit too much on your sources/references/links for credibility in a way that also registers as condescending. As though you’re 1) trying to make your position seem like the obviously correct one because of the scholarship backing it up despite those links going to elementary resources and concepts, 2) trying to make yourself seem like the teacher who is educating me.
Rules that require people to pretend to be more uncertain than they actually are (because disagreement is disrespect) run a serious risk of degenerating into “I accept a belief from you if you accept a belief from me” social exchange.
I disagree. I haven’t seen that happen in any rationalist conversation I’ve been a part of. What I have seen (and made the mistake myself too many times) is people being overconfident that they’re correct. A norm, aka cultural wisdom, that says maybe you’re not so obviously right as you think helps correct for this in addition to the fact that conversations go better when people don’t feel they’re being judged and talked down to.
Yes, marketing is important.
1) As another example, this is dismissive and rude too. 2) I don’t think anything I described fits a reasonable definition of marketing. I want to guess that marketing here is being used somewhat pejoratively, and at best as a noncentral fallacy.
At this point, I think we’d better wrap up this discussion. I doubt either of us is going to start feeling more warmly towards the other with further comments, nor do I expect us to communicate much more information than we already have. I’m happy to read another reply, but I probably won’t respond further.
I disagree. I haven’t seen that happen in any rationalist conversation I’ve been a part of.
Just noting that I have seen this a large number of times.
A norm, aka cultural wisdom, that says maybe you’re not so obviously right as you think helps correct for this in addition to the fact that conversations go better when people don’t feel they’re being judged and talked down to.
I also disagree with some aspects of this, though in a more complicated way. Probably won’t participate in this whole discussion but wanted to highlight my disagreement (which feels particularly relevant given that the above might be taken as consensus of the LW team)
I think the occasional rhetorical question is a pretty ordinary part of the way people naturally talk and discuss ideas? I can avoid it if the discourse norms in a particular space demand it, but I tend to feel like this is excessive optimization for politeness at the cost of expressivity. Perhaps different writers place different weights on the relative value of politeness, but I should hope to at least be consistent in what behavior I display and what behavior I expect from others: if you see me tone-policing others over statements whose tone is as harsh as statements I’ve made in comparable situations, then I would be being hypocritical and you should criticize me for it!
The tone of these sentences, appending an exclamation mark to trivial statements [...] adding energy and surprise to your lessons
I often use a “high-energy” writing style with lots of italics and exclamation points! I think it textually mimics the way I talk when I’m excited! (I think if you scan over my Less Wrong contributions, my personal blog, or my secret (“secret”) blog, you’ll see this a lot.) I can see how some readers might find this obnoxious, but I don’t think it’s accurate to read it as an indicator of contempt for my present interlocutor. (It probably correlates somewhat with contempt, but not nearly as much as you seem to be assuming?)
you’re maybe leaning a bit too much on your sources/references/links for credibility in a way that also registers as condescending [...] despite those links going to elementary resources and concepts
Likewise, I think lots of hyperlinks to jargon and concepts are a pretty persistent feature of my writing style? (To a greater extent in public forum posts like this rather than private emails.) In-body hyperlinks are pretty unobtrusive—readers who are interested in the link can click it, and readers who aren’t can not-click it.
I wouldn’t denigrate the value of having “elementary” resources easily at hand! I often find myself, e.g., looking up the definition of words I ostensibly “already know,” not because I can’t successfully use the word in a sentence, but to “sync up” my learned understanding of what the word means with what the dictionary says. (For example, I looked up brusque while composing this comment.)
You’re using a lot of examples to back up a very simple point (that clamoring in the streets isn’t an effective strategy).
The intent wasn’t just to back up the point that clamoring in the streets is ineffective, but to illustrate what I thought cause-and-effect (causal reality) reasoning would look like in contrast to social (social reality) reasoning—I took “clamoring in the streets” to be an example of the kind of action that social-reality reasoning would recommend. I thought such illustration could provide value to the comment thread, even though you’ve doubtlessly already heard of earning to give. (I didn’t mean to falsely imply you hadn’t.)
In practice, you spent 400 words
Yes, it was a bit of a tangent. (Once I start excitedly explaining something, it can be hard to know exactly when to stop! The 29 karma (in 13 votes) suggests that the voters seemed to like it, at least?)
I won’t assume you’re paying attention, but you might have noticed that I post a moderate amount on LessWrong and am in fact a member of the LessWrong 2.0 team.
I noticed, yes. I don’t think this should affect my writing that much? Certainly, how I write should depend on my model of who I’m talking to, but my model of you is mostly informed by the text you’ve written. (I think we also met at a party once? Aren’t you Miranda’s husband?) The fact that you work for Less Wrong doesn’t alter my perception much.
As another example, this is dismissive and rude too
I wouldn’t say “dismissive”, exactly, but it’s definitely brusque, which, in the context of the surrounding thread, was an awful writing choice on my part. I’m sorry about that! Now that you’ve correctly pointed out that I made a terrible writing decision, let me try to make partial amends for it by exerting some more interpretive labor to unpack what I meant—
I suspect we have a pretty large disagreement on the degree to which respect is a necessary prerequisite for whether a conversation with someone will be productive? I think if someone is making good arguments, then I consider it my responsibility to update on the information content of what they’re saying. Because I’m a social monkey, I certainly find it harder to update (especially publicly) if someone’s good arguments are phrased in a way that doesn’t seem to respect me. Correspondingly, for my own emotional well-being, I prefer discussion spaces with strong politeness norms. But from the standpoint of minds as inference engines, I consider this a bug in my cognition: I expect to perform better if I can somehow muster the mental toughness to learn from people who hate my guts. (As it is written of the fifth virtue: “Do not believe you do others a favor if you accept their arguments; the favor is to you.”)
From that perspective (which you might disagree with!), can you see why it might be tempting to metaphorically characterize the respectful-behavior-is-necessary mindset as “expecting to be marketed to”?
I doubt either of us is going to start feeling more warmly towards the other with further comments, nor do I expect us to communicate much more information than we already have.
I take that as a challenge! I hope this comment has succeeded at making you feel more warmly towards me and communicating much more information than we already have! But, I’m also assigning a substantial probability that I failed in this ambition. I’m sorry if I failed.
I thought of a way to provide evidence that I respect you as a thinker! I liked your “planning is recursive” post back in March, to the extent that I made two flashcards about it for my Mnemosyne spaced-repetition deck, so that I wouldn’t forget. Here are some screenshots—
Edit 19-07-02: I think I went too far with this post and I wish I’d said different things (both in content and manner, some of the positions and judgments I made here I think were wrong). With more thought, this was not the correct response in multiple ways. I’m still processing and will eventually say more somewhere.
. . .
That is persuasive that you respect my ability to think and even flattering. I would have also taken it as strong evidence if you’d simply said “I respect your thinking” at some earlier point. Yet, 1) when I said that someone (at least) acting as though they respected my thinking was pivotal in whether I wanted to talk to them and expected the conversation to be productive, you forcefully argued that respect wasn’t important. 2) You emphasized that it was important that when someone is wrong, everyone is made aware of it. In combination, this led me to think you weren’t here to have a productive conversation with someone you thought was a competent thinker; instead you’d come in order to do me the favor of informing me I was flat-out, no questions about it, wrong.
I want to emphasize again that the key thing here is that someone acts in a way I interpret as showing some level of respect and consideration. It matters more to me that they be willing to act that way than that they actually feel it. Barring the last two comments (kind of), your writing here has not (as I try to explain) registered that way to me.
I am sympathetic to positions that fear certain norms prioritize politeness over truth-seeking and information exchange. I wrote Conversational Cultures: Combat vs Nurture, in which I expressed that the combative style was natural to me, but I also wrote a follow-up saying that each culture depends on an appropriate context. I am not combative (I am not sure if I would describe your style this way, but maybe approximately) in online comments—certainly not with someone I don’t know and don’t feel safe with. The conditions for it do not pertain here.
I am at my limit of explaining my position regarding respect and politeness, etc., and why I think those are necessary. I grant that there are legitimate fears and also that I haven’t fully expressed my comprehension of them and countered them fully and rigorously. But I’m at my limit. I’m inclined to think that your behavior is in part the result of considered principles which aren’t crazy, though naive and maybe willfully dismissive of counter-considerations.
I can see that at the core you are a person with ideas who is theoretically worth talking to and with whom I could have a valuable discussion. But also this entire exchange has been stressful and aggravating. Your initial comments were already unpleasant, and this further exchange about conversational norms has been the absolute lowlight of my weekend (indeed, receiving your comments has made my whole week feel worse). I am not sure if your excitedness as expressed by your bangs (!) indicates that you’re having fun, but I’m not. I’ve persisted because it seemed like the right thing to do. I’m at my limit of explaining why my position is reasonable. I’m at my limit of willingness to talk to you.
I am strongly tempted to ban you from commenting on any of my posts to save myself further aggravation (as any user above the 50-karma [on Personal blogposts] or 2000-karma [Frontpage posts] threshold can do). I generally want people to know they don’t have to put up with stuff like this. I hesitate because some of my posts are posted somewhat in a non-personal capacity, such as the welcome page, FAQ, and even my personal thoughts about LessWrong strategy; I feel less authorized to unilaterally ban you from those. Though were it up to me, I think I would probably ban you from the entire site. I think you are making conversation worse, and I fear that for everyone who talks in your style, people experience lots of really unpleasant feelings and we lose a dozen potential commenters who don’t want to be in a place where people discourse like this. (My low-fidelity shoulder habryka is suspicious of this kind of reasoning, but we can clarify that later.) Given that I think those abstract people are being quite reasonable, I am sad to lose them and I feel like I want to protect the garden:
But I have seen it happen—over and over, with myself urging the moderators on and supporting them whether they were people I liked or not, and the moderators still not doing enough to prevent the slow decay. Being too humble, doubting themselves an order of magnitude more than I would have doubted them. It was a rationalist hangout, and the third besetting sin of rationalists is underconfidence.
This about the Internet: Anyone can walk in. And anyone can walk out. And so an online community must stay fun to stay alive. Waiting until the last resort of absolute, blatant, undeniable egregiousness—waiting as long as a police officer would wait to open fire—indulging your conscience and the virtues you learned in walled fortresses, waiting until you can be certain you are in the right, and fear no questioning looks—is waiting far too late. [emphasis added]
I see the principles behind your writing style and why these seem reasonable to you. I am telling you how I perceive them and the reaction they provoke in me (stress, aggravation—not fun). I am writing this to say that if you make no alterations to how you write (which, generally, you are not forced to do), then I do not want to talk to you and personally would advocate for your removal from our communal places of discussion.
This is not because I think politeness is more important than truth. Emphatically not. It is because I think your naive (and perhaps willfully oblivious) stances emphatically get in the way of productive, valuable truth-seeking discussion between humans as they exist (and I don’t think those humans are being unreasonable).
I place few limits on what people can say to each other content-wise and would fight against any norms that get in the way of that. I don’t think anyone should ever have to hide what they think, why, or that they think something is really dumb. I do think people ought to invest some effort in communicating in a way that indicates some respect and consideration for their interlocutors (for their feelings even if not their thinking). I grant that that can be somewhat costly and effortful—but I think it’s a necessary cost, and I’m unwilling to converse with people unwilling to go to that effort, barring exceptional exceptions. Unwillingness to do so (to me) reads as someone prioritizing their own effort and experience as completely outweighing my own.
(A nice signal that you cared about how I felt would have been if, after I’d said your bangs (!) and rhetorical question marks (?) felt condescending to me, you’d made an effort to reduce your usage rather than ramping them up to 11. At least for this conversation, to show some good will. I’m actually quite annoyed about this. You said “I don’t know how I could have written my post more politely without making it worse”; I pointed out a few things. You responded by doing more of those things. Way more. Literally 11 bangs and 9 question marks.)
It’s not just about respecting my thinking, it’s about someone showing that they care at all about how I feel and how their words impact me. A perhaps controversial opinion is that I think that claims of “if you cared about truth, you’d be happy to learn from my ideas no matter how I speak” are used to excuse an emotional selfishness (“I want to write like this, it’d be less fun for me if I had to do otherwise—can’t you see how much I’m doing you a favor by telling you you’re wrong?”), and that if we accept such arguments, we give people basically a free license to be unpleasant jerks who can get away with rudeness, bullying, belittling, attacks, etc., all under the guise of “information exchange”.
Just a note to make salient the opposite perspective—as far as I am concerned, a Less Wrong that banned Zack (and/or others like him) would be much, much less fun to participate in.
In contrast, this sort of … hectoring about punctuation, and other such minutiae of alleged ‘tone’ … I find extremely tedious, and having to attend to such things makes Less Wrong quite a bit less fun.
I just don’t comment in these sorts of threads because I figure the site is a lost cause and the mods will ban all the interesting people regardless of what words I type into the box.
Like, feel free to call the site a lost cause, but I am highly surprised that you expect us to ban all the interesting people. We have basically never banned anyone from LW2 except weird crackpots and some people who violated norms really hard, but no one who I expect you would ever classify as being part of the “interesting people”.
On the other hand, suppose you said to me: “Said, you can of course continue posting here, we’re not banning you, but you must not ever mention World of Warcraft again; and if you do, then we will ban you.”
Or: “Said, post as much as you like, but none of your posts must contain em-dashes—on pain of banning.”
… or something else along these lines. Well, that’s not a ban. It’s not even a temporary ban! It’s nothing at all. Right?
Would you be surprised if I stopped participating, after an injunction like that? Surely, you would not be.
Would you call what had happened a ‘ban’, then?
Now, to be clear, I do not consider Less Wrong a lost cause; as you see, I continue to participate, both on the object and the meta levels. (I understand namespace’s sentiment, of course, even if I disagree.)
That said, while the distinction between literal administrative actions, and the threat thereof, is not entirely unimportant… it is not, perhaps, the most important question, when it comes to discussions of the site’s health, and what participants we may expect to retain or lose, etc.
I think that in this context it might be helpful for me to mention that I’ve recently seriously considered giving up on LessWrong, not because of overt bans or censorship, but because of my impression that the nudges I do see reflect some badly misplaced priorities.
These kinds of nudges both reflect the sort of judgment that might be tested later in higher-stakes situations (say, something actually controversial enough for the right call to require a lot of social courage on the mods’ part), and serve as a coordination mechanism by which people illegibly negotiate norms for later use.
I ended up deciding to contact the mods privately to see if we could double-crux on this, since “try at all” is an important thing to do before “give up” for a forum with as much talent and potential as this one. I’m only mentioning this here because I think these kinds of things tend to be handled illegibly, in ways that make them easy to miss when modeling things like chilling effects.
I agree. Though I would also be surprised if the people that namespace finds most interesting are worried about being banned based on that threat. If they are, then I think I would really like to change that (obviously depending on what the exact behavior is that they feel worried about being punished for, but my model is that we mostly agree on what would be ban-worthy).
I am interested in hearing from people who are worried about being banned for doing X (for almost any X), and will try my best to give clear answers of whether I think something like X would result in a ban, since I think being clear about rules like that is quite valuable.
This is of course admirable, but also not quite the point; the question isn’t whether the policies are clear (although that’s a question, and certainly an important one also); the question is, whether the policies—whatever they are—are good.
Or, to put it another way… you said:
… I would also be surprised if the people that namespace finds most interesting are worried about being banned based on that threat. If they are, then I think I would really like to change that (obviously depending on what the exact behavior is that they feel worried about being punished for, but my model is that we mostly agree on what would be ban-worthy).
[emphasis mine]
The problem with this is, essentially, the same as the problem with CEV: it’s all very well and good if everyone does, indeed, agree on what is ban-worthy (and in this case clarity of policy just is the solution to all problems)… but what if, actually, people—including “interesting” people!—disagree on this?
Consider this scenario:
Alice, a commenter: Gosh, I’m really hesitant to post on Less Wrong. I’m worried that they might ban me!
Bob, a moderator: Oh? Why do you think that, Alice? What would we ban you for, do you think? I’d like you to be totally clear on what our policies are!
Alice: Well… I kinda think you might ban me for using em-dashes in my comments??
Bob: Ah! I understand. Please allow me to shed some light on that: yes, we will definitely ban you if you use em-dashes in your comments.
Alice: … oh. Ok.
Bob: I hope that cleared up any concerns you had?
Alice: … um. Well. I … am not worried anymore. So there’s that.
Bob: Great!
… not exactly “problem solved and all’s well”, yes?
Anyway, I think I’ve beaten this horse to death sufficiently for now.
1. I do not expect people namespace considers interesting to be afraid of making their interesting contributions due to fear of being banned, and if they are, I would like to fix that (I am only about 75% confident in this, but do expect this to be the case).
2. I separately want to ensure that our rules are clear, to ensure that people are only afraid of consequences that are actually likely to take place and am happy to invest resources into making that the case.
Agree that leaving this discussion as is seems fine for now.
I do not expect people namespace considers interesting to be afraid of making their interesting contributions due to fear of being banned
It’s important to think on the margin—not only do actions short of banning (e.g., “mere” threats of banning) have an impact on users’ behavior (as Said pointed out), they can also have different effects on users with different opportunity costs. I expect the people Namespace is thinking of face different opportunity costs than me: their voice/exit trade-off between writing for Less Wrong and their second-best choice of forum looks different from mine.
A 34-comments-and-counting meta trainwreck that started because a Less Wrong moderator found my use of a rhetorical question, exclamation marks, and reference hyperlinks to be insufficiently “collaborative.”
Neither of these discussions left me with a fear of being banned—insofar as both conversations had an unfortunately inextricable political component, I count them both as decisive “victories” for me (judging by the karma scores and what was said)—but they did suck up an enormous amount of my time and emotional energy that I could have spent doing other things. Someone otherwise like me but with lower opportunity costs would probably be smarter to just leave and try to have intellectual discussions in some other venue where it wasn’t necessary to decisively win a political slapfight on whether philosophers should consider each other’s feelings while discussing philosophy. Arguably I would be smarter to leave, too, but I’m stuck, because I joined a cult ten years ago when I was twenty-one years old, and now the cult owns my soul and I don’t have anywhere else to go.
I was at the first Overcoming Bias meetup in Millbrae in February 2008. I did the visual design for the 2009 and 2010 Singularity Summit program booklets. The first time I was paid money for programming work was when I wrote some Python scripts to help organize the Singularity Institute’s donor database in 2011. In 2012, I designed PowerPoint slides for the preliminary “kata” (about the sunk cost fallacy) for what would soon be dubbed the Center for Applied Rationality, to which I would later donate $16,500 between 2013 and 2016 after I got a real programming job. Today, I live in Berkeley and all of my friends are “rationalists.”
I mention all this (mostly, hopefully) not to try to pull rank—you really shouldn’t be making moderation decisions based on seniority!—but to illustrate exactly how serious a threat “removal from our communal places of discussion” is to me. My entire adult life only makes sense in the context of this website. If the forces of blandness want me gone because I use too many exclamation points (or perhaps some other reason), I in particular have an unusually strong incentive to either stand my ground or die trying.
(Um, as long as you’re initiating an interaction, maybe I should mention that I have been planning to very belatedly address your concern about premature abstraction potentially functioning as a covert meta-attack by putting up a non-Frontpagable “Motivation and Political Context for My Philosophy of Language Agenda” post in conjunction with my next philosophy-of-language post? I’m hoping that will make things better rather than worse from your perspective? But if not, um, sorry.)
I guess that putting up such a post would make things much more fair, at least. But, I’m not sure I will be willing to comment on it publicly, given the risk of another drain of time and energy.
So, I’m against the forces of blandness too, but, is “I’m trapped in this cult” really an argument for not banning you rather than an argument for banning you? (I mean, banning you for saying that would create bad incentives, of course, but still)
Cults take weak people and make them weaker. Maybe try taking a break and getting some perspective? I doubt you’re so stuck you can’t leave. (There’s lots of standard advice for leaving cults)
Sorry if I’m being mean here, I’m trying to make sense of the actual considerations at play.
I thought it made sense to use the word “cult” pejoratively in the specific context of what the grandparent was trying to say, but it was a pretty noncentral usage (as the hyperlink to “Every Cause Wants To Be …” was meant to indicate); I don’t think the standard advice is going to directly apply well to the case of my disappointment with what the rationalist community is in 2019—although the standard advice might be a fertile source of ideas for how to diversify my “portfolio” of social ties, which is definitely worth doing independently of the Sorites problem about where to draw the category boundary around “cults”. (I was wondering if anyone was going to notice the irony of the grandparent mentioning the sunk cost fallacy!)
I have at least two more posts to finish about the cognitive function of categories (working titles: “Schelling Categories, and Simple Membership Tests” and “Instrumental Categories, and War”) that need to go on this website because they’re part of a Sequence and don’t make sense anywhere else. After that, I might reallocate attention back to my other avocations.
Quick note that I roughly endorse the set of frames here. (I have a post brewing about how people tend to see banning someone from a community as a “light” sentence, when actually it’s one of the worst things you can do to a person, at least in some cases)
(This may be another case where it would make sense to detach this derailed thread into its own post in order to avoid polluting the comments on “Causal Reality vs. Social Reality”, if that’s cheap to do.)
Very quick note that I’m not sure whether I endorse habryka’s phrasing here (don’t have time to fully articulate the disagreement, just wanted to flag it)
To be fair, in this context, I did say upthread that I wanted to ban Zack from my posts and possibly the entire site. As someone with moderator status (though I haven’t been moderating very much to date) I should have been much more cautious about mentioning banning people, even if that’s just me, no matter my level of aggravation and frustration.
I’m not sure what the criteria for “interesting” is, but my current personal leaning would be to exert more pressure than banning just crackpots and people who “violated norms really hard”, though I haven’t thought about this or discussed it all that much. I would do so before advocating hard for a particular standard to be adopted widely.
But these are my personal feelings, not ones I’ve really discussed with the team and definitely not any team consensus about norms or policies.
(Possibly relevant or irrelevant: I wrote this before habryka’s most recent comment below.)
*nods* To give outsiders a bit of a perspective on this: Ruby has joined the team relatively recently and so I expect to have a pretty significant number of disagreements with him on broader moderation and site culture. I also think it’s really important for all members of the LW team to be able to freely express their opinions in public and participate in public conversations with their own models and opinions.
In practice, I expect Ruby’s opinions to obviously factor into where we will go in terms of site moderation, but, based on how we have made decisions in the past, I expect that we would try really hard to come to agreement first and then explain our new positions publicly and get more feedback before we make any large changes to the way we enforce site norms.
I personally think that banning people for things in the category of “tone” or “adversarialness” should be done only with very large hesitation and after many iterations of conversations, and I expect this to stay our site policy for the foreseeable future.
should be done only with very large hesitation and after many iterations of conversations, and I expect this to stay our site policy for the foreseeable future.
For a long-standing community member, this does seem correct to me.
I appreciate you noting that. I’m hoping to wrap up my involvement on this thread soon, but maybe we will find future opportunities to discuss further.
This comment contains no italics and no exclamation points. (I didn’t realize that was the implied request—as Wei intuited, I was trying to show that that’s just how I talk sometimes for complicated psychological reasons, and that I didn’t think it should be taken personally. Now that you’ve explicitly told me to not do that, I will. As you’ve noticed, I’m not always very good at subtext, but I should hope to be capable of complying with explicit requests.)
That is persuasive evidence that you respect my ability to think, and even flattering. I would have also taken it as strong evidence if you’d simply said “I respect your thinking” at some earlier point.
I don’t think that would be strong evidence. Anyone could have said “I respect your thinking” in order to be nice (or to deescalate the conflict), even if they didn’t, in fact, respect you. The Mnemosyne cards are stronger evidence because they already existed.
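To make the evidential asymmetry concrete, here’s a toy Bayes calculation in odds form, with entirely made-up numbers (the structure, not the values, is the point):

```python
# Toy calculation, invented numbers: how much should each observation
# shift your credence in the hypothesis "he respects my thinking"?

def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

prior_odds = 1.0  # 50/50 before seeing any evidence

# Saying "I respect your thinking" mid-conflict is cheap whether or not
# it's true, so its likelihood ratio is close to 1.
verbal_lr = 0.9 / 0.7  # P(says it | respect) / P(says it | no respect)

# Spaced-repetition cards quoting you, created long before the conflict,
# are costly to fake retroactively, so the likelihood ratio is much larger.
cards_lr = 0.3 / 0.01

print(posterior_odds(prior_odds, verbal_lr))  # ~1.3 : 1 -- weak evidence
print(posterior_odds(prior_odds, cards_lr))   # 30 : 1 -- strong evidence
```

The cards win not because they’re more flattering, but because the world where they already exist and the world where the respect is absent are much harder to reconcile.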
you’d come in order to do me the favor of informing me I was flat-out, no questions about it, wrong
I came to offer relevant arguments and commentary in response to the OP. Whether or not my arguments and commentary were persuasive (or show that you were “wrong”) is up for each individual reader to decide for themselves.
I am strongly tempted to ban you from commenting on any of my posts to save myself further aggravation
That’s fine with me. (I’ve done this once with one user whose comments I didn’t like; it would be hypocritical for me to object if someone else did it to me because they didn’t like my comments.)
this further exchange about conversational norms has been the absolute lowlight of my weekend (indeed, receiving your comments has made my whole week feel worse) [...] I’m at my limit of willingness to talk to you.
Yes, this meta exchange about discourse norms has been quite stressful for me, too. (The conversation about the post itself was fine for me.) I hope you feel better soon.
I’ve been thinking about this thread as well as discourse norms generally. After additional thought, I’ve updated that I responded poorly throughout this thread and misjudged quite a few things. I think I felt disproportionately attacked by Zack’s initial comment (perhaps because I haven’t been active enough online to ever receive a direct combative comment like that one), and after that I was biased to view subsequent comments as more antagonistic than they probably were.
Zack’s comments contain some reasonable and valuable points. I think they could be written better to let the good points be readily seen (content, structure, and tone), but notwithstanding that, it’s probably on the whole good that Zack contributed them, including the first one as written.
The above also makes me update towards more caution around norms which dictate how one communicates. I think it probably would have been bad if there’d been norms I could have invoked to punish or silence Zack when I felt upset with him and his comments. (This isn’t a final statement of my thoughts, just an interim update, as I continue to think more carefully about this topic.)
So lastly, I’m sorry, @Zack. I shouldn’t have responded quite as I did, and I regret that I did. I apologize for the stress and aggravation that I am responsible for causing you. Thank you for your contributions and persistence. Maybe we’ll have some better exchanges in the future!?
I feel sympathy for both sides here. I think I personally am fine with both kinds of cultures, but sometimes kind of miss the more combative style of LW1, which I think can be fun and productive for a certain type of people (as evidenced by the fact that many people did enjoy participating on LW1 and it produced a lot of progress during its peak). I think in an ideal world there would be two vibrant LW2s, one for each conversational culture, because right now it’s not clear where people who strongly prefer combat culture are supposed to go.
A nice signal that you cared about how I felt would have been that if after I’d said your bangs (!) felt condescending to me, you’d made an effort to reduce your usage rather than ramping them up to 11.
I think he might have been trying to signal that using lots of bangs is just his natural writing style, and therefore you needn’t feel condescension as a result of them.
The debate here feels like something more than combat vs. other cultures of discussion. There are versions of combative cultures which are fine and healthy and which I like a lot, but also versions which are much less so. I would be upset if anyone thought I was opposed to combative discussion altogether, though I do think it needs to be done right and with sensitivity to the significance of the speech acts involved.
Addressing what you said:
I think in an ideal world there would be two vibrant LW2s, one for each conversational culture, because right now it’s not clear where people who strongly prefer combat culture are supposed to go.
I think there’s some room on LessWrong for that. Certainly under the Archipelago model, authors can set the norms they prefer for discussions on their posts. Outside of that, it seems fine, even good, if users who’ve established trust with each other and have both been seen to opt in to a combative culture choose to have exchanges which go like that.
I realize this isn’t quite the same as a website where you universally know, without checking, that in any place on the site one can abide by their preferred norms. So you might be right—the ideal world might require more than one LessWrong, and anything else is going to fall short. Possibly we build “subreddits” and those could have an established universal culture where you just know “this is how people talk here”.
I can imagine a world where eventually it was somehow decided by all (or enough of the relevant) parties that the default on LessWrong was an unfiltered, unrestrained combative culture. I could imagine being convinced that actually that was best . . . though it’d be surprising. If it was known as the price of admission, then maybe that would work okay.
In this case, though, the “What? Why?” actually was rhetorical on my part. (Note the link to “Fake Optimization Criteria”, which was intended to suggest that I don’t think the optimization criterion of defeating death recommends the policy of clamoring in the streets.) It’s not that I didn’t understand the “cishumanists accept Death because they believe that the customs of their tribe are the laws of nature” point, it was that I disagreed with its attempted use as an illustration of the concept of social reality (because I think transhumanists similarly fail to understand that the customary optimism of their tribe is no substitute for engineering know-how), and was trying to use “naïve” Socratic questioning/inquiry to illustrate what I thought means-end reasoning about causal reality actually looks like. I can see how this could be construed as a violation of some possible discourse norms (like the Recurse Center’s “No feigned surprise” rule), but sometimes I find some such norms unduly constraining on the way I naturally talk and express ideas!
I endeavor to obey the moderation guidelines of any posts I comment on.
collaborative truth-seeking
I’m happy at the coincidence that you happened to use this phrase, because it reminded me of an old (May 2017) Facebook post of mine that I had totally forgotten about, but which might be worth re-sharing as a Question here. (And if it’s not, then downvote it.) It’s written in the same kind of “aggressively Socratic” style that you disliked in the grandparent, but I think that style is serving a specific and important purpose, even if it wouldn’t be appropriate in the comments of a post with contrary norm-enforcing moderation guidelines.
Even if we interpret “clamoring in the streets” as a metonym for other forms of mass political action—presumably with the aim of increasing government funding for medical research?
Yes, “clamoring in the streets” is not to be taken too literally here. I mean that it is something people have strong feelings about, something that they push for in whatever way. They see grandma getting sicker and sicker, suffering more and more, and they feel outrage: why have we not solved this yet?
I don’t think the question of strategicness is relevant here. For one thing, humans are not automatically strategic. But beyond that, I believe my point stands because most people are not taking any actions based on a belief that aging and death are solvable and it’s terrible that we’re not going as fast as we could be. I maintain this is evidence they are not living in a world (in their minds) where this is a real option. Your friend is an extreme outlier, and you too if your Rust example holds up.
I think the exposition here would be more compelling if you explicitly mention the social pressures in both the pro-Vibrams and anti-Vibrams directions: some people will tease you having “weird” toe-shoes, but some people will think better of you.
It’s true that the social pressures exist in both directions. The point of that statement is merely that social considerations can be weighed within a causal frame, and that they can be traded off against other things which are not social. I don’t think an exhaustive enumeration of the different social pressures helps make that point further.
The phrase “doesn’t involve being so weird” makes me wonder if this is meant as deliberate irony? (“Being weird” is a social-reality concept!) You might want to rewrite this paragraph to clarify your intent.
Yes, that paragraph was written from the mock-perspective of someone inhabiting a social reality frame, not my personal outside-analyzing frame as the OP. I apologize if that wasn’t adequately clear from context.
What evidence do you use to distinguish between people who are playing the “talk about life extension” group game, and people who are actually making progress on making life extension happen in the real, physical universe? (I think this is a very hard problem!)
I agree this is a very hard problem and I have no easy answer. My point here was to say that a person in the social reality frame might not even be able to recognize the existence of people who are working on life extension simply because they actually really care about life extension. Their whole assessment remains in the social frame (particularly at the S1 level).
(Meta: is this still too combative, or am I OK? Unfortunately, I fear there is only so much I know how to hold back on my natural writing style without at least one of either compromising the information content of what I’m trying to say, or destroying my motivation to write anything at all.)
Perhaps the crux is this: the example (of attitudes towards death) that you seem to be presenting as a contrast between a causal-reality worldview vs. a social-reality worldview, I’m instead interpreting as a contrast between transhumanist social reality vs. “normie” social reality.
(This is probably also why I thought it would be helpful to mention pro-Vibrams social pressure: not to exhaustively enumerate all possible social pressures, but to credibly signal that you’re trying to make an intellectually substantive point, rather than just cheering for the smart/nonconformist/anti-death ingroup at the expense of the dumb/conformist/death-accommodationist outgroup.)
a belief that aging and death are solvable
But whether aging and death are solvable is an empirical question, right? What if they’re not solvable? Then the belief that aging and death are solvable would be incorrect.
I can pretty easily imagine there being an upper bound on humanly-achievable medical technology. Suppose defeating aging would require advanced molecular nanotechnology, but all human civilizations inevitably destroy themselves shortly after reaching that point. (Say, because that same level of nanotech gives you super-fast computers that make it easy to brute-force unaligned AGI, and AI alignment is just too hard.)
and it’s terrible that we’re not going as fast as we could be.
I mean that it is something people have strong feelings about, something that they push for in whatever way. They see grandma getting sicker and sicker, suffering more and more, and they feel outrage
I think people do this. In the OP, you linked to the immortal Scott Alexander’s “Who By Very Slow Decay”, which contains this passage—
In the cafeteria at lunch, [doctors] will—despite medical confidentiality laws that totally prohibit this—compare stories of the most ridiculous families. “I have a blind 90 year old patient with stage 4 lung cancer with brain mets and no kidney function, and the family is demanding I enroll her in a clinical trial from Sri Lanka.” “Oh, that’s nothing. I have a patient who can’t walk or speak who’s breathing from a ventilator and has anoxic brain injury, and the family is insisting I try to get him a liver transplant.”
What is harassing doctors to demand a liver transplant, if it’s not feeling outrage and taking action?
why have we not solved this yet?
In social reality, this is a rhetorical question used to coordinate punishment of those who can be blamed for not solving it yet.
In causal reality, it’s a question with a very straightforward literal answer: the human organism is, in fact, subject to the biological process of senescence, and human civilization has not, in fact, developed the incredibly advanced technology that would be needed to circumvent this.
The cases Scott talks about are individuals clamoring for symbolic action in social reality in the aid of individuals that they want to signal they care about. It’s quite Hansonian, because the whole point is that these people are already dead and none of these interventions do anything but take away resources from other patients. They don’t ask ‘what would cause people I love to die less often’ at all, which my model says is because that question doesn’t even parse to them.
Perhaps the crux is this: the example (of attitudes towards death) that you seem to be presenting as a contrast between a causal-reality worldview vs. a social-reality worldview, I’m instead interpreting as a contrast between transhumanist social reality vs. “normie” social reality.
Fwiw, I found this paragraph quite helpful. I initially bounced off your original comment because I couldn’t tell what the point was, and would have had an easier time following it if it had opened with something more like this paragraph.
(Meta: Yup, that’s much better. I appreciate the effort. To share some perspective from my end, I think this has been my most controversial post to date. I think I understand now why many people say posting can be very stressful. I know of one author who removed all their content from LW after finding the comments on their posts too stressful. So there’s probably a trade-off [I also empathize with the desire to express emphatic opinions as you feel them], where writing more directly can end up dissuading many people from posting or commenting at all.)
Perhaps the crux is this: the example (of attitudes towards death) that you seem to be presenting as a contrast between a causal-reality worldview vs. a social-reality worldview, I’m instead interpreting as a contrast between transhumanist social reality vs. “normie” social reality.
I think that’s a reasonable point. My counter is that I’d argue that “transhumanist social reality” is more connected to the causal world than mainstream social reality. Transhumanists, even if they are biased and over-optimistic, etc., at least invoke arguments and evidence from the general physical world: telomeres, nanotechnology, the fact that turtles live a really long time, experiments on worms, etc. Maybe they repeat each other’s socially sanctioned arguments, but those arguments invoke causal reality.
In contrast, the mainstream social reality appears to be very anchored on the status quo and history to date. You might be able to easily imagine that there’s an upper bound on humanly-achievable medical technology, but I’d wager that’s not the thought process most people go through when (assuming they ever even consider the possibility) they judge whether they think life-extension is possible or not. To quote the Chivers passage again:
“The first thing that pops up, obviously, is I vaguely assume my children will die the way we all do. My grandfather died recently; my parents are in their sixties; I’m almost 37 now. You see the paths of a human’s life each time; all lives follow roughly the same path. They have different toys—iPhones instead of colour TVs instead of whatever—but the fundamental shape of a human’s life is roughly the same.”
Note that he’s not making an argument from physics or biology or technology at all. This argument is from comparison to other people. “My children will die the way we all do,” “all lives follow roughly the same path.” One might claim that isn’t unreasonable evidence. The past is a good prior, it’s a good outside view. But the past also shows tremendous advances in technology and medical science—including dramatic increases in lifespan. My claim is that these things aren’t considered in the ontology most people think within, one where how other people do things is dominant.
If I ask my parents, if I stop and ask people on the street, I don’t expect them to say they thought about radical life extension and dismissed it because of arguments about what is technologically realistic. I don’t expect them to say they’re not doing anything towards it (despite it seeming possible) because they see no realistic path for them to help. I expect them to not have thought about it, I expect them to have anchored on what human life has been like to date, or I expect them to have thought about it just long enough to note that it isn’t a commonly-held belief and conclude therefore it’s just a thing another group believes.
Even if the contrast is “transhumanist social reality”, I ask how did that social reality come to be and how did people join it? I’m pretty sure most transhumanists weren’t born to transhumanist families, educated in transhumanist schools, or surrounded by transhumanist friends. Something at some point prompted them to join this new social group—and I’d wager that in many cases it’s because, on their own, they reasoned that how humans are now isn’t how they have to be. Rightly or wrongly, they invoke a belief about what broader reality allows beyond what is commonly held opinion or practice to date. Maybe that’s a social reality too, but it’s a really different one.
The reason why the disease and death example is confusing to me is partly because I expect people to be highly emotional and unstrategic—willing to invest a great deal for only a small chance. People agonize over “maybe I could have done something” often enough. They demand doctors do things “so long as there’s a chance.” One can doubt that radical life extension is possible, but I don’t think one can be reasonably certain that it isn’t. I expect that if people thought there was any non-trivial chance that we didn’t need millions of people to decay and die each year, they would be upset about it (especially given first-hand experience), and do something. As it is, I think most people take death and decay for granted. That’s just how it is. That’s what people do. That’s my confusion. How can you so blithely ignore the progress of the last few hundred years? Or the technological feats we continue to pull off? You think it’s reasonable for there to be giant flying metal cans? For us to split the atom and go to the moon? To edit genes and have artificial hearts? To have double historical lifespans already? Yet to never wonder whether life could be better still? To never be upset that maybe the universe doesn’t require it to be this way, that instead we (humanity) just haven’t got our shit together, and that’s a terrible tragedy?
This perspective is natural to me. Obvious. The question I am trying to explain is why am I different? I think I am the weird one (i.e., the unusual one). But what am I doing differently? How is my reality (social or otherwise) different? And one of the reasonable answers is that I invoke a different type of reasoning to infer what is possible. My evidence is that I don’t encounter people responding with like-kind arguments (or even having considered the question) to questions of eliminating decay and death.
“Terrible” is a moral judgment. The anticipated experience is that when I point my “moral evaluator unit” at a morally terrible thing, it outputs “terrible.”
Even if the contrast is “transhumanist social reality”, I ask how did that social reality come to be and how did people join it? I’m pretty sure most transhumanists weren’t born to transhumanist families, educated in transhumanist schools, or surrounded by transhumanist friends. Something at some point prompted them to join this new social group
This isn’t necessarily a point in transhumanism’s favor! At least vertically-transmitted memeplexes (spread from parents to children, like established religions) face selective pressures tying the fitness of the meme to the fitness of the host. (Where evolutionary fitness isn’t necessarily good from a humane perspective, but there are at least bounds on how bad it can be.) Horizontally-transmitted memeplexes (like cults or mass political movements) don’t face this constraint and can optimize for raw marketing appeal independent of long-term consequences.
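Here’s a deliberately crude toy model (every parameter invented) of that selection-pressure asymmetry: a meme that imposes a reproductive cost on its hosts dies out with its carriers if it can only spread parent-to-child, but can persist indefinitely via contact-spread, even while the host population collapses around it:

```python
# Crude toy model, all numbers invented: a meme imposing a 20% reproductive
# cost on its carriers, under vertical-only vs. vertical-plus-horizontal
# transmission.

GROWTH = 1.05    # baseline offspring per host per generation
COST = 0.20      # meme's fitness cost to its carrier
CONTACT = 0.15   # horizontal mode: fraction of non-carriers infected per generation

def simulate(horizontal, generations=50):
    carriers, non_carriers = 100.0, 900.0
    for _ in range(generations):
        # Reproduction; offspring inherit the parent's meme (the vertical path).
        carriers *= GROWTH * (1 - COST)
        non_carriers *= GROWTH
        if horizontal:
            # The horizontal path: the meme jumps to contacts, independently
            # of whether it helps or hurts its hosts.
            converts = CONTACT * non_carriers
            carriers += converts
            non_carriers -= converts
    total = carriers + non_carriers
    return carriers / total, total

for horizontal in (False, True):
    share, total = simulate(horizontal)
    mode = "horizontal" if horizontal else "vertical"
    print(f"{mode}: meme in {share:.0%} of hosts; population {total:,.0f}")
# vertical:   meme in 0% of hosts;   population ~10,000 (meme dies, hosts fine)
# horizontal: meme in ~75% of hosts; population ~12     (meme fine, hosts dying)
```

The vertical meme is tied to its hosts’ reproduction and gets selected out; the horizontal meme keeps spreading on “marketing appeal” alone, no matter what it does to the people carrying it.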
“Terrible” is a moral judgment. The anticipated experience is that when I point my “moral evaluator unit” at a morally terrible thing, it outputs “terrible.”
I think moral judgements are usually understood to have a social function—if I see someone stealing forty cakes and say that that’s terrible, there’s an implied call-to-action to punish the thief in accordance with the laws of our tribe. It seems weird to expect this as an alternative to social reality.
They see grandma getting sicker and sicker, suffering more and more, and they feel outrage: why have we not solved this yet?
You expect them to get angry (at whom in particular?) because grandma keeps getting older? For tens of thousands of years of human history, the only alternative to this has been substantially worse for grandma. Unless she wants to die and you’re talking about euthanasia, but no additional medical research is needed for that. There is no precedent or direct empirical evidence that anything else is possible.
Maybe people are wrong for ignoring speculative arguments that anti-aging research is possible, but that’s a terrible example of people being bound by social reality.
1. True, for tens of thousands of years of human history, it has been that way. But “there is no precedent or direct empirical evidence that anything else is possible” emphatically does not cut it. Within only a few hundred years the world has been transformed: we have magical god-devices that connect us across the world, we have artificial hearts, we can clean someone’s blood by pumping it out and then back in, we operate on brains, we put a man on the moon. In recent years you’ve got the rise of AI and gene editing. Lifespans are already double most of what they’ve been for most of history. What has held for tens of thousands of years is no longer so. It is not that hard to see that humankind’s mastery over reality is only continuing to grow. Precedent? Maybe not. But reason for hope? Yes. Actually a pretty reasonable expectation that our medical science is not maxed out? Definitely.
This isn’t speculative. The scientific and technological progress should be apparent to anyone who’s lived more than a few decades of recent history.
2. Anger doesn’t always have to have a target. But if you need one then pick society, pick science, pick research, pick doctors, pick your neighbours.
3. Watching your loved ones decay and die is anguish. If people are going to yell at the doctors that they should do something, that something must be possible (though some would argue this is fake/performance), then let them also yell at the state of the world. That this unnecessary circumstance has come to be. Yell at the universe.
4. The alternative explanation to saying that people see the world overwhelmingly via social reality is that people simply have terrible causal models. Perhaps to me the scientific/technological progress of the last few hundred years is obviously, obviously reason to believe far more is possible (and better today than in fifty years), but not to others. Perhaps I’m wrong about it, though I don’t think I am.
And you needn’t be absolutely certain that curing death and aging is possible to demand we try. A chance should be enough. If you demand that doctors do things which only might prolong grandma’s life, then why not also demand that we have better science, because there’s a chance of that working too.
Perhaps people really didn’t get enough of an education to appreciate science and technology (that we manipulate light itself to communicate near instantaneously sparks no wonder and awe, for example). So then I’d say they are overly anchored on the status quo. It is not so much being bound by social reality, but by how things are now, without extrapolation even fifty years forward or back—even when they themselves have lived through so much change.
5. I pick the example of disease and death because it is so personal, so immediate, so painful for many. It doesn’t require that we posit any altruistic motivation, and it’s a situation where I expect to see a lot of powerful emotion revealing how people relate to reality (rather than them just taking the options they think are immediately available and strategic).
I don’t think the disagreement here is about the feasibility of life extension. (I agree that it looks feasible.) I think the point that Benquo and I have been separately trying to make is that admonishing people to be angry independently of their anger having some specific causal effect on a specific target, doesn’t make sense in the context of trying to explain the “causal reality vs. social reality” frame. “People should be angrier about aging” might be a good thesis for a blog post, but I think it would work better as a different post.
And you needn’t be absolutely certain that curing death and aging is possible to demand we try. A chance should be enough.
I think the point that Benquo and I have been separately trying to make is that admonishing people to be angry independently of their anger having some specific causal effect on a specific target, doesn’t make sense in the context of trying to explain the “causal reality vs. social reality” frame.
I wonder if this is a point where I am being misunderstood. Based on this and a few in-person conversations, people think I’m taking a normative stance here. I’m not. Not primarily. I am trying to understand a thing I am confused about and to explain my observations. I observe that my models lead me to expect that people would be doing X, but I do not observe that—so what am I missing?
For the record, for all those reading:
This post isn’t trying to tell anyone to do anything, and I’m not actively stating a judgment. I haven’t thought about what people should be doing. I’m not saying they should be clamoring in the streets. There is no active admonishing directed at anyone here. There is no thesis. I haven’t thought about what people should be doing enough—I haven’t thought through what would actually be strategic for them. So I don’t know. Not with any confidence, not enough to tell them what to do.
Given this is about my confusion about what I expect people to do, and that I don’t expect people to be strategic, the question of whether or not doing X would be strategic isn’t really relevant. My model doesn’t predict people to be strategic, so the fact that the strategic action might not be to do X doesn’t make me less confused.
(A valid counter to my confusion is saying that people are in fact strategic, but I’m rather incredulous. I’m not sure if you or Benquo were saying that?)
I am a bit confused, I might not be reading you carefully enough, but it feels here like you’re trying to explain people’s behavior with reference to normative behavior rather than descriptive (in this comment and earlier ones).
It’s precisely because I expect most people to think “but there’s still a chance, right?” that I would expect the possibility of life extension to motivate them to action—more so than if they cared about the magnitude. (Also, caring about magnitude is a causal-reality thing, I would say, as is the notion of probabilities, seemingly.)
Your argument doesn’t make sense unless whatever “clamoring in the streets” stands in for metaphorically is an available action to the people you’re referring to. It seems to me like the vast majority of people are neither in an epistemic position where they can reasonably think that they know that there’s a good chance of curing aging, nor do they have any idea how to go about causing the relevant research to happen.
They do know how to increase the salience of “boo death,” but so far in the best case that seems to result in pyramids, which don’t work and never could, and even then only for the richest.
Note that even for those of us who strive for legibility of action (“live in the causal world”), it’s not clear that aging and death CAN be solved in humans at all, and seems downright unlikely that any strategy or action can solve it fast enough to avoid the pain and fear of the death of my loved ones and myself.
Whether a loved one dies at 65, 85, 105, 205, or 1005, it’s going to suck when it happens. No amount of clamoring in the streets (or directed research into biology) is going to avoid that pain. Some amount of effort and sacrifice toward life extension _CAN_ improve average and top-percentile lifespans, and that’s great if it applies to the people I care most about. And much research and behavior change is useful in improving the quality of the limited years of many people. Note that “quality” includes other people’s acceptance and support, so mixes social reality in with the definition.
It remains really unclear to me whether I should prefer that strangers live longer or that there are more strangers born to replace the dead ones. My intuition and initial preference is that fewer/longer is better than more/shorter lives, but I don’t have much rational justification for that, and with my current evidence for stagnation of beliefs and reduction in interest as people age, I suspect I may actually prefer more/shorter. I’m not sure how much of more/longer is possible as long as we’re limited to the current earth ecosystem.
Oops, went too far on the object level, sorry—my point is that there are many reasons someone might not spend much effort on eradicating aging, and “they live in social reality and don’t consider causal reality” is a very weak strawman for their choices.
They get progressively more theoretical as distance increases. It seems I care about my n-degrees-removed cousin (in the present or future) who I haven’t met and know no specifics about, about as much as any n-degrees-connected stranger. Note that I have no theory or considered belief that I _SHOULD_ care about some strangers or distant relatives more than others; this is pure introspection on what I seem to actually feel.
The Less Wrong website certainly hosts a lot of insightful blog posts about how to inhabit causal reality. How reliable is the causal pathway between “people read the blog posts” and “those people primarily inhabit causal reality”? That’s an empirical question!
Meta-note: while your comment adds very reasonable questions and objections which you went to the trouble of writing up at length (thanks!), its tone is slightly more combative than I’d like discussion of my posts to be. I don’t think the conditions pertain here that would make that the ideal style. I should perhaps put something like this in my moderation guidelines (update: now added).
I’d be grateful if you write future comments with a little more . . . not sure how to articulate . . . something like charity and less expression of incomprehension, more collaborative truth-seeking. Comment as though someone might have a reasonable point even if you can’t see it yet.
If you don’t understand the other person’s point (even after thinking a bit), what’s the collaborative move, other than expressing incomprehension? It seems that anything else would be pretending you understand when you actually don’t, which is adversarial to the collaborative truth-seeking process.
Connotation, denotation, implication, and subtext all come into play here, as do the underlying intent one can infer from them. If you don’t understand someone’s point, it’s entirely right to state that, but there are diverse ways of expressing incomprehension. Contrast:
Expressing incomprehension + a request for further clarification, e.g. “I don’t understand why you think X, especially in light of Y, what am I missing?”, as opposed to
Expressing incomprehension + judgment, opposition, e.g. “I don’t understand, how could anyone think X given that Y!?”
Though inferences about underlying intent and mindstates are still only inferences, I’d say the first version is a lot more expected from a stance of “I assign some credence to your having a point that I missed (or at least act as though I do for the sake of productive discussion) and I’m willing to listen so that we can talk and figure out which of us is really correct here.” When I imagine the second one, it feels like it comes from a place of “You are obviously wrong. Your reasoning is obviously wrong. I want you and everyone else to know that you’re wrong and your beliefs should be dismissed.” (It doesn’t have to mean that—and among people where there is common knowledge that everyone respects everyone else’s reasoning it could even be good—but that’s not the situation in the public comments here.)
The first version of expressing incomprehension, I’d read as coming from a desire to figure out who is right here (hence the collaboration). The second feels more like someone is already sure they are right and wishes to demolish what they see as wrong (more adversarial).
I would think that if someone’s reasoning is obviously wrong, then that person and everyone else should be informed that they are wrong (and that the particular beliefs that are wrong should be dismissed), because then everyone involved will be less wrong, which is what this website is all about!
Certainly, one would be advised to be very careful before asserting that someone’s reasoning is obviously wrong. (Obvious mistakes are more likely to be caught before publication than subtle ones, so if you think you’ve found an obvious mistake in someone’s post, you should strongly consider the alternative hypotheses that either you’re the one who is wrong, or that you’re, e.g., erroneously expecting short inferential distances.)
More generally, I’m in favor of politeness norms where politeness doesn’t sacrifice expressive power, but I’m wary of excessive emphasis on collaborative norms (what some authors would call “tone-policing”) being used to obfuscate information exchange or even shut it down (via what Yudkowsky characterized as appeal-to-egalitarianism conversation-halters).
If someone is wrong, this should definitely be made legible, so that no one leaves believing the wrong thing. The problem is with the “obviously” part. Once the truth of the object-level question is settled, there is the secondary question of how much we should update our estimate of the competence of whoever made a mistake. I think we should by default try to be clear about the object-level question and object-level mistake, and by default glomarize about the secondary question.
I read Ruby as saying that we should by default glomarize about the secondary question, and also that we should be much more hesitant about assuming an object-level error we spot is real. I think this makes sense as a conversation norm, where clarification is fast, but is bad in a forum, where asking someone to clarify their bad argument frequently leads to a dropped thread and a confusing mess for anyone who comes across the conversation later.
There’s an implication in your comment I don’t necessarily agree with, now that you point it out: “we should be much more hesitant about assuming an object-level error we spot is real” → “we should ask for clarification when we notice something.”
Person A argues X, Person B thinks X is wrong and wants to respond with argument Y. I don’t think they have to ask for clarification, I think it’s enough that they speak in a way that grants that maybe they’re missing something, in a way that’s consistent with having some non-negligible prior that the other person is correct. More about changing how you say things than what you say. So if asking for clarification isn’t helpful, don’t do it.
See this old comment thread (especially my response to the response) for some related points.
As you say in your next paragraph, one should be careful before asserting someone is obviously wrong. But sometimes they are. But if the goal is everyone being less wrong, I think some means of communicating are going to be more effective than others. I, at least, am a social monkey. If I am bluntly told I am wrong (even if I agree, even in private—but especially in public), I will feel attacked (if only at the S1 level), threatened (socially), and become defensive. It makes it hard to update and it makes it easy to dislike the one who called me out. The harsh calling out might be effective for onlookers, I suppose. But the strength of the “wrongness assertion” really should come from the arguments behind it, not the rhetorical force of the speaker. If the arguments are solid, it should be damning even with a gentle tone. If people ought to update that my reasoning is poor, they can do so even if the speaker was being polite and according respect.
Even if you wish to express that someone is wrong, I think this is done more effectively if one simultaneously continues to implicitly express “I think there is still some prior that you are correct and I am curious to hear your thoughts”, or failing that, “You are very clearly wrong here, yet I still respect you as a thinker who is worth my time to discourse with.” If neither of those is true, you’re in a tough position. Maybe you want them to go away, or you just want other people not to believe false things. There’s an icky thing here: I feel like for there to be productive and healthy discussion, you have to act as though at least one of the above statements is true, even if it isn’t. No one is going to respond well to discussion with someone who they think doesn’t respect them and is happy to broadcast that judgment to everyone else (doing so is legitimately quite a hostile social move).
The hard thing here is that it’s about perceptions more than intentions. People interpret things differently, people have different fears and anxieties, and that means things can come across as more hostile than they’re intended. Or the receiver is more afraid of what others will think than the speaker is (reasonably—the receiver has more at stake).
Though around here, people are pretty good at admitting they’re wrong. But I think certain factors about how they’re communicated with can determine whether it feels like a helpful correction vs a personal attack.
This might be because I’m overly confident in my writing ability, but I don’t think maintaining politeness would ever curtail my expressive power, although admittedly it can take a lot more time. Do you have any examples, real or fictional, where you feel expressiveness was sacrificed to politeness?
At the risk of sparking controversy, can you link to any examples of this on LessWrong from the past few years? I want to know if we’re actually at in danger of this at all.
I think the tough thing is that all norms can be weaponized and abused. Basically, if you have a goal which isn’t truth-seeking (which we all do), then there is no norm I can imagine which on its own will stop you. The absence of tone-policing permits heated angry exchanges, attacks, and bullying—but so does a tone policing which is selectively enforced.
On LessWrong, I think we need to strike a balance. We should never say “you used a mean tone, therefore you are wrong and must immediately leave the discussion” or “all opinions must be given equal respect” (cue your link); but we should still say “no, you can’t call people idiots here” and “if you’re going to argue with someone, this can go a lot better if you’re open to the fact that you could be the wrong one.”
Naturally, there’s a lot of grey area in the middle. I like the idea of us being a community where we discuss what ideal discussion looks like, continually refining our norms to something that works really well.
(Hence my writing all this at length—trying to get my own thoughts in order and have something to refer back to/later compile into a post.)
I basically don’t find this compelling, for reasons analogous to No, It’s not The Incentives, it’s you. Yes, there are ways to establish emotional safety between people so that I can point out errors in your reasoning in a way that reduces the degree of threat you feel. But there are also ways for you to reduce the number of bucket errors in your mind, so that I can point out errors in your reasoning without it seeming like an attack on “am I ok?” or something similar.
Versions of this sort of thing that look more like “here is how I would gracefully make that same objection” (which has the side benefit of testing for illusion of transparency) seem to me more likely to be helpful, whereas versions that look closer to “we need to settle this meta issue before we can touch the object level” seem to me like they’re less likely to be helpful, and more likely to be the sort of defensive dodge that should be taxed instead of subsidized.
Strongly agreed. To expand on this—when I see a comment like this:
I, at least, am a social monkey. If I am bluntly told I am wrong (even if I agree, even in private—but especially in public), I will feel attacked (if only at the S1 level), threatened (socially), and become defensive.
The question I have for anyone who says this sort of thing is… do you endorse this reaction? If you do, then don’t hide behind the “social monkey” excuse; honestly declare your endorsement of this reaction, and defend it, on its own merits. Don’t say “I got defensive, as is only natural, what with your tone and all”; say “you attacked me”, and stand behind your words.
But if you don’t endorse this reaction—then deal with it yourself. Clearly, you are aware that you have it; you are aware of the source and nature of your defensiveness. Well, all the better; you should be able, then, to attend to your own involuntary responses. And if you fail to do so—as, being only human, you sometimes (though rarely, one hopes!) will—then the right thing to do is to apologize to your interlocutor: “I know that my defensiveness was irrational, and I regret that it got the better of me, this time; I will endeavor to exercise more self-control, in the future.”
I agree with the above two comments (Vaniver’s and yours) except for a certain connotation of this point. Rejection of own defensiveness does not imply endorsement of insensitivity to tone. I’ve been making this error in modeling others until recently, and I currently cringe at many of my “combative” comments and forum policy suggestions from before 2014 or so. In most cases defensiveness is flat wrong, but so is not optimizing towards keeping the conversation comfortable. It’s tempting to shirk that responsibility in the name of avoiding the danger of compromising the signal with polite distortions. But there is a lot of room for safe optimization in that direction, and making sure people are aware of this is important. “Deal with it yourself” suggests excluding this pressure. Ten years ago, I would have benefitted from it.
To be clear I agree with the benefits of politeness, and also think people probably *underweight* the benefits of politeness because they’re less easy to see. (And, further, there’s a selection effect that people who are ‘rude’ are disproportionately likely to be ones who find politeness unusually costly or difficult to understand, and have less experience with its benefits.)
This is one of the reasons I like an injunction that’s closer to “show the other person how to be polite to you” than “deal with it yourself”; often the person who ‘didn’t see how to word it any other way’ will look at your script and go “oh, I could have written that,” and sometimes you’ll notice that you’re asking them to thread a very narrow needle or are objecting to the core of their message instead of their tone.
I think that’s a good complaint and I’m glad Vaniver pointed it out.
I think this is a very good question. Upon reflection, my answer is that I do endorse it on many occasions (I can’t say that I endorse it on all occasions, especially in the abstract, but many). I think that myself and others find ourselves feeling defensive not merely because of uncleared bucket errors, but because we have been “attacked” to some lesser or greater extent.
You are right, the “social monkey” thing is something of an excuse, arguably born out of perhaps excessive politeness. You offer such an excuse when requesting that someone else change in order to be polite, to accept some of the blame for the situation yourself rather than be confrontational and say it’s all them. Trying to paint a way out of conflict where they can save face. (If someone’s behavior already feels uncomfortably confrontational to you and you want to de-escalate, the polite behavior is what comes to mind.)
In truth though, I think that my “monkey brain” (and those of others) pick up on real things: real slights, real hostility, real attempts to do harm. Some are minor, but they’re still real, and it’s fair to push back on them. Some defensiveness is both justified and adaptive.
The salient question is whether it’s a good idea to respond to possible attacks in a direct fashion. Situations that can be classified as attacks (especially in a sense that allows the attacker to remain unaware of this fact) are much more common.
I agree with that. Granting to yourself that you feel legitimately defensive because of a true external attack does not equate to necessarily responding directly (or in any other way). You might say “I am legitimately defensive and it is good my mind caused me to notice the threat”, and then still decide to “suck it up.”
This seems right but tricky. That is, it seems important to distinguish ‘adaptive for my situation’ and ‘adaptive for truth-seeking’ (either as an individual or as a community), and it seems right that hostility or counterattack or so on are sometimes the right tool for individual and community truth-seeking. (Sometimes you are better off if you gag Loki: even though gagging in general is a ‘symmetric weapon,’ gagging of trolls is as asymmetric as your troll-identification system.) Further, there’s this way in which ‘social monkey’-style defenses seem like they made it harder to know (yourself, or have it known in the community) that you have validly identified the person you’re gagging as Loki (because you’ve eroded the asymmetry of your identification system).
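(To put invented numbers on the asymmetry point: a toy base-rate calculation showing that when trolls are rare, even a modestly noisy identification system means a large share of the people you end up gagging are honest critics.)

```python
# Toy base-rate arithmetic, invented numbers: how asymmetric is "gagging
# trolls" when the troll-identification system is imperfect?

def wrongly_gagged_fraction(base_rate, tpr, fpr):
    """Of everyone gagged, what fraction were not actually trolls?"""
    gagged_trolls = base_rate * tpr
    gagged_honest = (1 - base_rate) * fpr
    return gagged_honest / (gagged_trolls + gagged_honest)

# Suppose 5% of commenters are trolls and the identifier catches 90% of them.
for fpr in (0.01, 0.05, 0.20):
    frac = wrongly_gagged_fraction(base_rate=0.05, tpr=0.9, fpr=fpr)
    print(f"false-positive rate {fpr:.0%}: {frac:.0%} of the gagged are honest")
# 1% -> 17%, 5% -> 51%, 20% -> 81%: the gag is only as asymmetric as the identifier.
```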
It seems like the hoped behavior is something like the follows: Alice gets a vibe that Bob is being non-cooperative, Alice points out an observation that is relevant to Alice’s vibe (“Bob’s tone”) that also could generate the same vibe in others, and then Bob either acts in a reassuring manner (“oh, I didn’t mean to offend you, let me retract the point or state it more carefully”) or in a confronting manner (“I don’t think you should have been offended by that, and your false accusation / tone policing puts you in the wrong”), and then there are three points to track: object-level correctness, whether Bob is being cooperative once Bob’s cooperation has been raised to salience, and whether Alice’s vibe of Bob’s intent was a valid inference.
It seems to me like we can still go through a similar script without making excuses or obfuscating, but it requires some creativity and this might not be the best path to go down.
That is pretty much my picture. I agree completely about the trickiness of it all.
At some point I’d be curious to know your thoughts on the other potential paths.
Yes, marketing is important.
You can just directly respond to your interlocutor’s arguments. Whether or not you respect them as a thinker is off-topic. “You said X, but this is wrong because of Y” isn’t a personal attack!
Your degree of openness to the hypothesis that you could be the wrong one should be proportional to the actual probability that you are, in fact, the wrong one. Rules that require people to pretend to be more uncertain than they actually are (because disagreement is disrespect) run a serious risk of degenerating into “I accept a belief from you if you accept a belief from me” social exchange.
For example, I’m not sure how I’m supposed to rewrite my initial comment on this post to be more collaborative without making it worse writing.
Unless I evaluate someone else to be far above my level or I have a strong credence that there’s definitely something I have to learn from them, then my interest in conversing heavily depends on whether I think they will act as though they respect me. It’s not just on-topic, it’s the very default fundamental premise on which I decide to converse with people or not—and a very good predictor of whether the conversation will be at all productive. I have greatly reduced motivation to talk to people who have decided that they have no respect for my reasoning, are only there to “enlighten” me, and are going to transparently act that way.
Not inherently. But “tone” is a big deal and yours is consistently one of attack around statements which needn’t be so.
Some examples of unnecessary aspects of your writing which make it hostile and worse:
As you said yourself, this was rhetorical, even feigned surprise; you were attempting “naïve Socratic questioning.”
The tone of these sentences, appending an exclamation mark to trivial statements, registers to me as actually condescending and rude. It’s as though you’re trying to educate me as though I were a child, adding energy and surprise to your lessons.
Content-wise, it’s a bit similar. You’re using a lot of examples to back up a very simple point (that clamoring in streets isn’t an effective strategy). In fact I think you misunderstood my point (which is not unfair, the motivating example was only a paragraph), and if you’d simply said “I don’t see how this is surprising, that wouldn’t be very strategic”, I could have clarified that since my predictions of people’s behavior do not assume people are strategic, that something isn’t strategic doesn’t reduce my surprise. Or maybe that wasn’t quite the misunderstanding or complaint—but you could expect it was something.
In practice, you spent 400 words elaborating on points which are kinda basic and with which I don’t disagree. Decomposing my impression that your comment was hostile, I think a bunch of it stemmed from the fact that you thought you needed to explain those points to me—that you thought my statement which seemed wrong to you was based on my failure to comprehend a very simple thing rather than perhaps me failing to have communicated a more complicated thing to you which made more sense.
Thinking about it, I think your comment is worse writing for how you’ve gone about it. A more effective version might go like this:
You needn’t educate me about comparative advantage and earning to give. I won’t assume you’re paying attention, but you might have noticed that I post a moderate amount on LessWrong and am in fact a member of the LessWrong 2.0 team. I’m not new to this community. Probably, I’ve heard of comparative advantage and earning to give.
It also feels to me (though I grant this could entirely be in my head) like you’re maybe leaning a bit too much on your sources/references/links for credibility in a way that also registers as condescending. As though you’re 1) trying to make your position seem like the obviously correct one because of the scholarship backing it up despite those links going to elementary resources and concepts, 2) trying to make yourself seem like the teacher who is educating me.
I disagree. I haven’t seen that happen in any rationalist conversation I’ve been a part of. What I have seen (and made the mistake myself too many times) is people being overconfident that they’re correct. A norm, aka cultural wisdom, that says maybe you’re not so obviously right as you think helps correct for this in addition to the fact that conversations go better when people don’t feel they’re being judged and talked down to.
1) As another example, this is dismissive and rude too. 2) I don’t think anything I described fits a reasonable definition of marketing. I want to guess that marketing here is being used somewhat pejoratively, and at best as an instance of the noncentral fallacy.
At this point, I think we’d better wrap up this discussion. I doubt either of us is going to start feeling more warmly towards the other with further comments, nor do I expect us to communicate much more information than we already have. I’m happy to read another reply, but I probably won’t respond further.
Just noting that I have seen this a large number of times.
I also disagree with some aspects of this, though in a more complicated way. I probably won’t participate in this whole discussion, but I wanted to highlight my disagreement (which feels particularly relevant given that the above might be taken as consensus of the LW team).
Thanks for the informative writing feedback!
I think the occasional rhetorical question is a pretty ordinary part of the way people naturally talk and discuss ideas? I can avoid it if the discourse norms in a particular space demand it, but I tend to feel like this is excessive optimization for politeness at the cost of expressivity. Perhaps different writers place different weights on the relative value of politeness, but I should hope to at least be consistent in what behavior I display and what behavior I expect from others: if you see me tone-policing others over statements whose tone is as harsh as statements I’ve made in comparable situations, then I would be being hypocritical and you should criticize me for it!
I often use a “high-energy” writing style with lots of italics and exclamation points! I think it textually mimics the way I talk when I’m excited! (I think if you scan over my Less Wrong contributions, my personal blog, or my secret (“secret”) blog, you’ll see this a lot.) I can see how some readers might find this obnoxious, but I don’t think it’s accurate to read it as an indicator of contempt for my present interlocutor. (It probably correlates somewhat with contempt, but not nearly as much as you seem to be assuming?)
Likewise, I think lots of hyperlinks to jargon and concepts are a pretty persistent feature of my writing style? (To a greater extent in public forum posts like this rather than private emails.) In-body hyperlinks are pretty unobtrusive—readers who are interested in the link can click it, and readers who aren’t can not-click it.
I wouldn’t denigrate the value of having “elementary” resources easily at hand! I often find myself, e.g., looking up the definition of words I ostensibly “already know,” not because I can’t successfully use the word in a sentence, but to “sync up” my learned understanding of what the word means with what the dictionary says. (For example, I looked up brusque while composing this comment.)
The intent wasn’t just to back up the point that clamoring in the streets is ineffective, but to illustrate what I thought cause-and-effect (causal reality) reasoning would look like in contrast to social (social reality) reasoning—I took “clamoring in the streets” to be an example of the kind of action that social-reality reasoning would recommend. I thought such illustration could provide value to the comment thread, even though you’ve doubtlessly already heard of earning to give. (I didn’t mean to falsely imply you hadn’t.)
Yes, it was a bit of a tangent. (Once I start excitedly explaining something, it can be hard to know exactly when to stop! The 29 karma (in 13 votes) suggests that the voters seemed to like it, at least?)
I noticed, yes. I don’t think this should affect my writing that much? Certainly, how I write should depend on my model of who I’m talking to, but my model of you is mostly informed by the text you’ve written. (I think we also met at a party once? Aren’t you Miranda’s husband?) The fact that you work for Less Wrong doesn’t alter my perception much.
I wouldn’t say “dismissive”, exactly, but it’s definitely brusque, which, in the context of the surrounding thread, was an awful writing choice on my part. I’m sorry about that! Now that you’ve correctly pointed out that I made a terrible writing decision, let me try to make partial amends for it by exerting some more interpretive labor to unpack what I meant—
I suspect we have a pretty large disagreement on the degree to which respect is a necessary prerequisite for whether a conversation with someone will be productive? I think if someone is making good arguments, then I consider it my responsibility to update on the information content of what they’re saying. Because I’m a social monkey, I certainly find it harder to update (especially publicly) if someone’s good arguments are phrased in a way that doesn’t seem to respect me. Correspondingly, for my own emotional well-being, I prefer discussion spaces with strong politeness norms. But from the standpoint of minds as inference engines, I consider this a bug in my cognition: I expect to perform better if I can somehow muster the mental toughness to learn from people who hate my guts. (As it is written of the fifth virtue: “Do not believe you do others a favor if you accept their arguments; the favor is to you.”)
From that perspective (which you might disagree with!), can you see why it might be tempting to metaphorically characterize the respectful-behavior-is-necessary mindset as “expecting to be marketed to”?
I take that as a challenge! I hope this comment has succeeded at making you feel more warmly towards me and communicating much more information than we already have! But I’m also assigning a substantial probability that I failed in this ambition. I’m sorry if I failed.
I thought of a way to provide evidence that I respect you as a thinker! I liked your “planning is recursive” post back in March, to the extent that I made two flashcards about it for my Mnemosyne spaced-repetition deck, so that I wouldn’t forget. Here are some screenshots—
Edit 19-07-02: I think I went too far with this post and I wish I’d said different things (both in content and manner, some of the positions and judgments I made here I think were wrong). With more thought, this was not the correct response in multiple ways. I’m still processing and will eventually say more somewhere.
. . .
That is persuasive evidence that you respect my ability to think, and even flattering. I would have also taken it as strong evidence if you’d simply said “I respect your thinking” at some earlier point. Yet, 1) when I said that someone at least acting as though they respected my thinking was pivotal in whether I wanted to talk to them and expected the conversation to be productive, you forcefully argued that respect wasn’t important. 2) You emphasized that it was important that when someone is wrong, everyone is made aware of it. In combination, this led me to think you weren’t here to have a productive conversation with someone you thought was a competent thinker; instead, you’d come in order to do me the favor of informing me I was flat-out, no questions about it, wrong.
I want to emphasize again that the key thing here is that someone acts in a way I interpret as showing some level of respect and consideration. It matters more to me that they be willing to act that way than that they actually feel it. Barring the last two comments (kind of), your writing here has not (as I’ve tried to explain) registered that way to me.
I am sympathetic to positions that fear certain norms prioritize politeness over truth-seeking and information exchange. I wrote Conversational Cultures: Combat vs Nurture, in which I expressed that the combative style was natural to me, but I also wrote a follow-up arguing that each culture depends on an appropriate context. I am not combative in online comments (I am not sure if I would describe your style this way, but maybe approximately), and certainly not with someone I don’t know and don’t feel safe with. The conditions for it do not obtain here.
I am at my limit of explaining my position regarding respect and politeness, etc., and why I think they are necessary. I grant that there are legitimate fears here, and that I haven’t fully expressed my understanding of them or countered them rigorously. But I’m at my limit. I’m inclined to think that your behavior is in part the result of considered principles which aren’t crazy, though naive and maybe willfully dismissive of counter-considerations.
I can see that at the core you are a person with ideas who is theoretically worth talking to and with whom I could have a valuable discussion. But also this entire exchange has been stressful and aggravating. Your initial comments were already unpleasant, and this further exchange about conversational norms has been the absolute lowlight of my weekend (indeed, receiving your comments has made my whole week feel worse). I am not sure if your excitedness as expressed by your bangs (!) indicates that you’re having fun, but I’m not. I’ve persisted because it seemed like the right thing to do. I’m at my limit of explaining why my position is reasonable. I’m at my limit of willingness to talk to you.
I am strongly tempted to ban you from commenting on any of my posts to save myself further aggravation (as any user above the 50-karma [Personal blogposts] or 2000-karma [Frontpage posts] threshold can do). I generally want people to know they don’t have to put up with stuff like this. I hesitate because some of my posts are posted somewhat in a non-personal capacity, such as the welcome page, FAQ, and even my personal thoughts about LessWrong strategy; I feel less authorized to unilaterally ban you from those. Though were it up to me, I think I would probably ban you from the entire site. I think you are making conversation worse, and I fear that for everyone who talks in your style, people experience lots of really unpleasant feelings and we lose a dozen potential commenters who don’t want to be in a place where people discourse like this. (My low-fidelity shoulder habryka is suspicious of this kind of reasoning, but we can clarify that later.) Given that I think those abstract people are being quite reasonable, I am sad to lose them and I feel like I want to protect the garden:
I see the principles behind your writing style and why they seem reasonable to you. I am telling you how I perceive them and the reaction they provoke in me (stress, aggravation—not fun). I am writing this to say that if you make no alterations to how you write (which, generally, you are not forced to), then I do not want to talk to you, and I would personally advocate for your removal from our communal places of discussion.
This is not because I think politeness is more important than truth. Emphatically not. It is because I think your naive (and perhaps willfully oblivious) stances emphatically get in the way of productive, valuable truth-seeking discussion between humans as they exist (and I don’t think those humans are being unreasonable).
I place few limits on what people can say to each other content-wise and would fight against any norms that get in the way of that. I don’t think anyone should ever have to hide what they think, why they think it, or that they think something is really dumb. I do think people ought to invest some effort in communicating in a way that indicates some respect and consideration for their interlocutors (for their feelings, even if not their thinking). I grant that that can be somewhat costly and effortful—but I think it’s a necessary cost, and I’m unwilling to converse with people unwilling to go to that effort, barring exceptional exceptions. Unwillingness to do so reads (to me) as someone prioritizing their own effort and experience as completely outweighing my own.
(A nice signal that you cared about how I felt would have been if, after I’d said your bangs (!) and rhetorical question marks (?) felt condescending to me, you’d made an effort to reduce your usage rather than ramping it up to 11, at least for this conversation, to show some good will. I’m actually quite annoyed about this. You said “I don’t know how I could have written my post more politely without making it worse”; I pointed out a few things. You responded by doing more of those things. Way more. Literally 11 bangs and 9 question marks.)
It’s not just about respecting my thinking; it’s about someone showing that they care at all about how I feel and how their words impact me. A perhaps controversial opinion of mine is that claims of “if you cared about truth, you’d be happy to learn from my ideas no matter how I speak” are used to excuse an emotional selfishness (“I want to write like this, it’d be less fun for me otherwise—can’t you see how much of a favor I’m doing you by telling you you’re wrong?”), and that if we accept such arguments, we give people basically a free license to be unpleasant jerks who can get away with rudeness, bullying, belittling, attacks, etc., all under the guise of “information exchange”.
Separately from my other comment…
You say this, but… everything else I see in this thread (and some others like it) signals otherwise.
Just a note to make salient the opposite perspective—as far as I am concerned, a Less Wrong that banned Zack (and/or others like him) would be much, much less fun to participate in.
In contrast, this sort of … hectoring about punctuation, and other such minutiae of alleged ‘tone’ … I find extremely tedious, and having to attend to such things makes Less Wrong quite a bit less fun.
I just don’t comment in these sorts of threads because I figure the site is a lost cause and the mods will ban all the interesting people regardless of what words I type into the box.
Like, feel free to call the site a lost cause, but I am highly surprised that you expect us to ban all the interesting people. We have basically never banned anyone from LW2 except weird crackpots and some people who violated norms really hard, but no one who I expect you would ever classify as being part of the “interesting people”.
So, on the one hand, that is entirely true.
On the other hand, suppose you said to me: “Said, you can of course continue posting here, we’re not banning you, but you must not ever mention World of Warcraft again; and if you do, then we will ban you.”
Or: “Said, post as much as you like, but none of your posts must contain em-dashes—on pain of banning.”
… or something else along these lines. Well, that’s not a ban. It’s not even a temporary ban! It’s nothing at all. Right?
Would you be surprised if I stopped participating, after an injunction like that? Surely, you would not be.
Would you call what had happened a ‘ban’, then?
Now, to be clear, I do not consider Less Wrong a lost cause; as you see, I continue to participate, both on the object and the meta levels. (I understand namespace’s sentiment, of course, even if I disagree.)
That said, while the distinction between literal administrative actions, and the threat thereof, is not entirely unimportant… it is not, perhaps, the most important question, when it comes to discussions of the site’s health, and what participants we may expect to retain or lose, etc.
I think that in this context it might be helpful for me to mention that I’ve recently seriously considered giving up on LessWrong, not because of overt bans or censorship, but because of my impression that the nudges I do see reflect some badly misplaced priorities.
These kinds of nudges both reflect the sort of judgment that might be tested later in higher-stakes situations (say, something actually controversial enough for the right call to require a lot of social courage on the mods’ part), and serve as a coordination mechanism by which people illegibly negotiate norms for later use.
I ended up deciding to contact the mods privately to see if we could double-crux on this, since “try at all” is an important thing to do before “give up” for a forum with as much talent and potential as this one. I’m only mentioning this here because I think these kinds of things tend to be handled illegibly, in ways that make them easy to miss when modeling things like chilling effects.
I agree. Though I would also be surprised if the people that namespace finds most interesting are worried about being banned based on that threat. If they are, then I think I would really like to change that (obviously depending on what the exact behavior is that they feel worried about being punished for, but my model is that we mostly agree on what would be ban-worthy).
I am interested in hearing from people who are worried about being banned for doing X (for almost any X), and will try my best to give clear answers of whether I think something like X would result in a ban, since I think being clear about rules like that is quite valuable.
This is of course admirable, but also not quite the point; the question isn’t whether the policies are clear (although that’s a question, and certainly an important one also); the question is, whether the policies—whatever they are—are good.
Or, to put it another way… you said:
[emphasis mine]
The problem with this is, essentially, the same as the problem with CEV: it’s all very well and good if everyone does, indeed, agree on what is ban-worthy (and in this case clarity of policy just is the solution to all problems)… but what if, actually, people—including “interesting” people!—disagree on this?
Consider this scenario:
Alice, a commenter: Gosh, I’m really hesitant to post on Less Wrong. I’m worried that they might ban me!
Bob, a moderator: Oh? Why do you think that, Alice? What would we ban you for, do you think? I’d like you to be totally clear on what our policies are!
Alice: Well… I kinda think you might ban me for using em-dashes in my comments??
Bob: Ah! I understand. Please allow me to shed some light on that: yes, we will definitely ban you if you use em-dashes in your comments.
Alice: … oh. Ok.
Bob: I hope that cleared up any concerns you had?
Alice: … um. Well. I … am not worried anymore. So there’s that.
Bob: Great!
… not exactly “problem solved and all’s well”, yes?
Anyway, I think I’ve beaten this horse to death sufficiently for now.
Yes, to be clear. My comment had two points:
1. I do not expect people namespace considers interesting to be afraid of making their interesting contributions due to fear of being banned, and if they are, I would like to fix that (I am only about 75% confident in this, but do expect it to be the case).
2. I separately want to ensure that our rules are clear, so that people are only afraid of consequences that are actually likely to take place, and I am happy to invest resources into making that the case.
Agree that leaving this discussion as is seems fine for now.
It’s important to think on the margin—not only do actions short of banning (e.g., “mere” threats of banning) have an impact on users’ behavior (as Said pointed out), they can also have different effects on users with different opportunity costs. I expect the people Namespace is thinking of face different opportunity costs than me: their voice/exit trade-off between writing for Less Wrong and their second-best choice of forum looks different from mine.
In the past month-and-a-half, we’ve had:
A 135-comment meta trainwreck that started because a MIRI Research Associate found a discussion-relevant reference to my work on the philosophy of language “unpleasant” (because my interest in that area of philosophy was motivated by my need to think about something else); and,
A 34-comments-and-counting meta trainwreck that started because a Less Wrong moderator found my use of a rhetorical question, exclamation marks, and reference hyperlinks to be insufficiently “collaborative.”
Neither of these discussions left me with a fear of being banned—insofar as both conversations had an unfortunately inextricable political component, I count them both as decisive “victories” for me (judging by the karma scores and what was said)—but they did suck up an enormous amount of my time and emotional energy that I could have spent doing other things. Someone otherwise like me but with lower opportunity costs would probably be smarter to just leave and try to have intellectual discussions in some other venue where it wasn’t necessary to decisively win a political slapfight on whether philosophers should consider each other’s feelings while discussing philosophy. Arguably I would be smarter to leave, too, but I’m stuck, because I joined a cult ten years ago when I was twenty-one years old, and now the cult owns my soul and I don’t have anywhere else to go.
I was at the first Overcoming Bias meetup in Millbrae in February 2008. I did the visual design for the 2009 and 2010 Singularity Summit program booklets. The first time I was paid money for programming work was when I wrote some Python scripts to help organize the Singularity Institute’s donor database in 2011. In 2012, I designed PowerPoint slides for the preliminary “kata” (about the sunk cost fallacy) for what would soon be dubbed the Center for Applied Rationality, to which I would later donate $16,500 between 2013 and 2016 after I got a real programming job. Today, I live in Berkeley and all of my friends are “rationalists.”
I mention all this (mostly, hopefully) not to try to pull rank—you really shouldn’t be making moderation decisions based on seniority!—but to illustrate exactly how serious a threat “removal from our communal places of discussion” is to me. My entire adult life only makes sense in the context of this website. If the forces of blandness want me gone because I use too many exclamation points (or perhaps some other reason), I in particular have an unusually strong incentive to either stand my ground or die trying.
Ugh. I’m sorry about that. It was exactly the same for me (re time and emotional energy).
No problem. Hope your research is going well!
(Um, as long as you’re initiating an interaction, maybe I should mention that I have been planning to very belatedly address your concern about premature abstraction potentially functioning as a covert meta-attack by putting up a non-Frontpagable “Motivation and Political Context for My Philosophy of Language Agenda” post in conjunction with my next philosophy-of-language post? I’m hoping that will make things better rather than worse from your perspective? But if not, um, sorry.)
My research is going very well, thank you :)
I guess that putting up such a post would make things much more fair, at least. But, I’m not sure I will be willing to comment on it publicly, given the risk of another drain of time and energy.
So, I’m against the forces of blandness too, but, is “I’m trapped in this cult” really an argument for not banning you rather than an argument for banning you? (I mean, banning you for saying that would create bad incentives, of course, but still)
Cults take weak people and make them weaker. Maybe try taking a break and getting some perspective? I doubt you’re so stuck you can’t leave. (There’s lots of standard advice for leaving cults)
Sorry if I’m being mean here, I’m trying to make sense of the actual considerations at play.
I thought it made sense to use the word “cult” pejoratively in the specific context of what the grandparent was trying to say, but it was a pretty noncentral usage (as the hyperlink to “Every Cause Wants To Be …” was meant to indicate); I don’t think the standard advice is going to directly apply well to the case of my disappointment with what the rationalist community is in 2019—although the standard advice might be a fertile source of ideas for how to diversify my “portfolio” of social ties, which is definitely worth doing independently of the Sorites problem about where to draw the category boundary around “cults”. (I was wondering if anyone was going to notice the irony of the grandparent mentioning the sunk cost fallacy!)
I have at least two more posts to finish about the cognitive function of categories (working titles: “Schelling Categories, and Simple Membership Tests” and “Instrumental Categories, and War”) that need to go on this website because they’re part of a Sequence and don’t make sense anywhere else. After that, I might reallocate attention back to my other avocations.
Quick note that I roughly endorse the set of frames here. (I have a post brewing about how people tend to see banning someone from a community as a “light” sentence, when actually it’s one of the worst things you can do to a person, at least in some cases)
(This may be another case where it would make sense to detach this derailed thread into its own post in order to avoid polluting the comments on “Causal Reality vs. Social Reality”, if that’s cheap to do.)
I agree. Was planning to request this.
Very quick note that I’m not sure whether I endorse habryka’s phrasing here (don’t have time to fully articulate the disagreement, just wanted to flag it)
To be fair, in this context, I did say upthread that I wanted to ban Zack from my posts and possibly the entire site. As someone with moderator status (though I haven’t been moderating very much to date) I should have been much more cautious about mentioning banning people, even if that’s just me, no matter my level of aggravation and frustration.
I’m not sure what the criteria for “interesting” are, but my current personal leaning would be to exert more pressure than just banning crackpots and people who “violated norms really hard”. I haven’t thought about this or discussed it all that much, though; I would do so before advocating hard for a particular standard to be adopted widely.
But these are my personal feelings, not ones I’ve really discussed with the team and definitely not any team consensus about norms or policies.
(Possibly relevant or irrelevant: I wrote this before habryka’s most recent comment below.)
*nods* To give outsiders a bit of a perspective on this: Ruby has joined the team relatively recently and so I expect to have a pretty significant number of disagreements with him on broader moderation and site culture. I also think it’s really important for all members of the LW team to be able to freely express their opinions in public and participate in public conversations with their own models and opinions.
In practice, I expect Ruby’s opinions to obviously factor into where we will go in terms of site moderation, but, based on how we made decisions in the past, I expect that we would try really hard to come to agreement first, and then try to explain our new positions publicly and get more feedback, before we make any large changes to the way we enforce site norms.
I personally think that banning people for things in the category of “tone” or “adversarialness” should be done only with very large hesitation and after many iterations of conversations, and I expect this to stay our site policy for the foreseeable future.
For a long-standing community member, this does seem correct to me.
I appreciate you noting that. I’m hoping to wrap up my involvement on this thread soon, but maybe we will find future opportunities to discuss further.
This comment contains no italics and no exclamation points. (I didn’t realize that was the implied request—as Wei intuited, I was trying to show that that’s just how I talk sometimes for complicated psychological reasons, and that I didn’t think it should be taken personally. Now that you’ve explicitly told me to not do that, I will. As you’ve noticed, I’m not always very good at subtext, but I should hope to be capable of complying with explicit requests.)
I don’t think that would be strong evidence. Anyone could have said “I respect your thinking” in order to be nice (or to deescalate the conflict), even if they didn’t, in fact, respect you. The Mnemosyne cards are stronger evidence because they already existed.
I came to offer relevant arguments and commentary in response to the OP. Whether or not my arguments and commentary were persuasive (or show that you were “wrong”) is up for each individual reader to decide for themselves.
That’s fine with me. (I’ve done this once with one user whose comments I didn’t like; it would be hypocritical for me to object if someone else did it to me because they didn’t like my comments.)
Yes, this meta exchange about discourse norms has been quite stressful for me, too. (The conversation about the post itself was fine for me.) I hope you feel better soon.
Some Updates and an Apology:
I’ve been thinking about this thread as well as discourse norms generally. After additional thought, I’ve updated that I responded poorly throughout this thread and misjudged quite a few things. I think I felt disproportionately attacked by Zack’s initial comment (perhaps because I haven’t been active enough online to ever receive a direct combative comment like that one), and after that I was biased to view subsequent comments as more antagonistic than they probably were.
Zack’s comments contain some reasonable and valuable points. I think they could be written better to let the good points be readily seen (content, structure, and tone), but notwithstanding that, it’s probably on the whole good that Zack contributed them, including the first one as written.
The above update also makes me update towards more caution around norms which dictate how one communicates. I think it probably would have been bad if there’d been norms I could have invoked to punish or silence Zack when I felt upset with him and his comments. (This isn’t a final statement of my thoughts, just an interim update, as I continue to think more carefully about this topic.)
So lastly, I’m sorry @Zack. I shouldn’t have responded quite as I did, and I regret that I did. I apologize for the stress and aggravation that I am responsible for causing you. Thank you for your contributions and persistence. Maybe we’ll have some better exchanges in the future!?
I accept your apology.
Thank you.
I feel sympathy for both sides here. I think I personally am fine with both kinds of cultures, but sometimes kind of miss the more combative style of LW1, which I think can be fun and productive for a certain type of people (as evidenced by the fact that many people did enjoy participating on LW1 and it produced a lot of progress during its peak). I think in an ideal world there would be two vibrant LW2s, one for each conversational culture, because right now it’s not clear where people who strongly prefer combat culture are supposed to go.
I think he might have been trying to signal that using lots of bangs is just his natural writing style, and that therefore you needn’t read condescension into them.
The debate here feels like something more than combat vs other cultures of discussion. There are versions of combative cultures which are fine and healthy and which I like a lot, but also versions which are much less so. I would be upset if anyone thought I was opposed to combative discussion altogether, though I do think it needs to be done right, with sensitivity to the significance of the speech acts involved.
Addressing what you said:
I think there’s some room on LessWrong for that. Certainly under the Archipelago model, authors can set the norms they prefer for discussions on their posts. Outside of that, it seems fine, even good, if users who’ve established trust with each other and have both been seen to opt in to a combative culture choose to have exchanges which go like that.
I realize this isn’t quite the same as a website where you universally know, without checking, that anywhere on the site you can abide by your preferred norms. So you might be right—the ideal world might require more than one LessWrong, and anything else is going to fall short. Possibly we build “subreddits”, and those could have an established universal culture where you just know “this is how people talk here”.
I can imagine a world where eventually it was somehow decided by all (or enough of the relevant) parties that the default on LessWrong was an unfiltered, unrestrained combative culture. I could imagine being convinced that actually that was best . . . though it’d be surprising. If it was known as the price of admission, then maybe that would work okay.
In this case, though, the “What? Why?” actually was rhetorical on my part. (Note the link to “Fake Optimization Criteria”, which was intended to suggest that I don’t think the optimization criterion of defeating death recommends the policy of clamoring in the streets.) It’s not that I didn’t understand the “cishumanists accept Death because they believe that the customs of their tribe are the laws of nature” point; it was that I disagreed with its attempted use as an illustration of the concept of social reality (because I think transhumanists similarly fail to understand that the customary optimism of their tribe is no substitute for engineering know-how), and was trying to use “naïve” Socratic questioning/inquiry to illustrate what I thought means-end reasoning about causal reality actually looks like. I can see how this could be construed as a violation of some possible discourse norms (like the Recurse Center’s “No feigned surprise” rule), but sometimes I find some such norms unduly constraining on the way I naturally talk and express ideas!
I endeavor to obey the moderation guidelines of any posts I comment on.
I’m happy at the coincidence that you happened to use this phrase, because it reminded me of an old (May 2017) Facebook post of mine that I had totally forgotten about, but which might be worth re-sharing as a Question here. (And if it’s not, then downvote it.) It’s written in the same kind of “aggressively Socratic” style that you disliked in the grandparent, but I think that style serves a specific and important purpose, even if it wouldn’t be appropriate in the comments of a post with contrary norm-enforcing moderation guidelines.
Yes, “clamoring in the streets” is not to be taken too literally here. I mean that it is something people have strong feelings about, something that they push for in whatever way. They see grandma getting sicker and sicker, suffering more and more, and they feel outrage: why have we not solved this yet?
I don’t think the question of strategicness is relevant here. For one thing, humans are not automatically strategic. But beyond that, I believe my point stands, because most people are not taking any actions based on a belief that aging and death are solvable and that it’s terrible we’re not going as fast as we could be. I maintain this is evidence that they are not living in a world (in their minds) where this is a real option. Your friend is an extreme outlier, and you too, if your Rust example holds up.
It’s true that the social pressures exist in both directions. The point of that statement is merely that social considerations can be weighed within a causal frame, where they can be traded off against other things which are not social. I don’t think an exhaustive enumeration of the different social pressures helps make that point further.
Yes, that paragraph was written from the mock-perspective of someone inhabiting a social reality frame, not my personal outside-analyzing frame as the OP. I apologize if that wasn’t adequately clear from context.
I agree this is a very hard problem and I have no easy answer. My point here was that a person in the social reality frame might not even be able to recognize the existence of people who are working on life extension simply because they actually, really care about life extension. Their whole assessment remains in the social frame (particularly at the S1 level).
(Meta: is this still too combative, or am I OK? Unfortunately, I fear there is only so much I know how to hold back on my natural writing style without at least one of either compromising the information content of what I’m trying to say, or destroying my motivation to write anything at all.)
Perhaps the crux is this: the example (of attitudes towards death) that you seem to be presenting as a contrast between a causal-reality worldview vs. a social-reality worldview, I’m instead interpreting as a contrast between transhumanist social reality vs. “normie” social reality.
(This is probably also why I thought it would be helpful to mention pro-Vibrams social pressure: not to exhaustively enumerate all possible social pressures, but to credibly signal that you’re trying to make an intellectually substantive point, rather than just cheering for the smart/nonconformist/anti-death ingroup at the expense of the dumb/conformist/death-accommodationist outgroup.)
But whether aging and death are solvable is an empirical question, right? What if they’re not solvable? Then the belief that aging and death are solvable would be incorrect.
I can pretty easily imagine there being an upper bound on humanly-achievable medical technology. Suppose defeating aging would require advanced molecular nanotechnology, but all human civilizations inevitably destroy themselves shortly after reaching that point. (Say, because that same level of nanotech gives you super-fast computers that make it easy to brute-force unaligned AGI, and AI alignment is just too hard.)
The concept of “terrible” doesn’t exist in causal reality. (How does something being “terrible” pay rent in anticipated experiences?)
I think people do this. In the OP, you linked to the immortal Scott Alexander’s “Who By Very Slow Decay”, which contains this passage—
What is harassing doctors to demand a liver transplant, if it’s not feeling outrage and taking action?
In social reality, this is a rhetorical question used to coordinate punishment of those who can be blamed for not solving it yet.
In causal reality, it’s a question with a very straightforward literal answer: the human organism is, in fact, subject to the biological process of senescence, and human civilization has not, in fact, developed the incredibly advanced technology that would be needed to circumvent this.
The cases Scott talks about are individuals clamoring for symbolic action in social reality in the aid of individuals that they want to signal they care about. It’s quite Hansonian, because the whole point is that these people are already dead and none of these interventions do anything but take away resources from other patients. They don’t ask ‘what would cause people I love to die less often’ at all, which my model says is because that question doesn’t even parse to them.
Fwiw, I found this paragraph quite helpful. I initially bounced off your original comment because I couldn’t tell what the point was, and would have had an easier time following it if it had opened with something more like this paragraph.
(Meta: Yup, that’s much better. I appreciate the effort. To share some perspective from my end, I think this has been my most controversial post to date. I think I understand now why many people say posting can be very stressful. I know of one author who removed all their content from LW after finding the comments on their posts too stressful. So there’s probably a trade-off [I also empathize with the desire to express emphatic opinions as you feel them], where writing more directly can end up dissuading many people from posting or commenting at all.)
I think that’s a reasonable point. My counter is that “transhumanist social reality” is more connected to the causal world than mainstream social reality. Transhumanists, even if they are biased and over-optimistic, etc., at least invoke arguments and evidence from the general physical world: telomeres, nanotechnology, the fact that turtles live a really long time, experiments on worms, etc. Maybe they repeat each other’s socially sanctioned arguments, but those arguments invoke causal reality.
In contrast, the mainstream social reality appears to be very anchored on the status quo and history to date. You might be able to easily imagine that there’s an upper bound on humanly-achievable medical technology, but I’d wager that’s not the thought process most people go through when (assuming they ever even consider the possibility) they judge whether they think life-extension is possible or not. To quote the Chivers passage again:
Note that he’s not making an argument from physics or biology or technology at all. This argument is from comparison to other people. “My children will die the way we all do,” “all lives follow roughly the same path.” One might claim that isn’t unreasonable evidence. The past is a good prior, it’s a good outside view. But the past also shows tremendous advances in technology and medical science—including dramatic increases in lifespan. My claim is that these things aren’t considered in the ontology most people think within, one where how other people do things is dominant.
If I ask my parents, if I stop and ask people on the street, I don’t expect them to say they thought about radical life extension and dismissed it because of arguments about what is technologically realistic. I don’t expect them to say they’re not doing anything towards it (despite it seeming possible) because they see no realistic path for them to help. I expect them to not have thought about it, I expect them to have anchored on what human life has been like to date, or I expect them to have thought about it just long enough to note that it isn’t a commonly-held belief and conclude therefore it’s just a thing another group believes.
Even if the contrast is “transhumanist social reality”, I ask how that social reality came to be and how people joined it. I’m pretty sure most transhumanists weren’t born to transhumanist families, educated in transhumanist schools, or surrounded by transhumanist friends. Something at some point prompted them to join this new social group—and I’d wager that in many cases it’s because, on their own, they reasoned that how humans are now isn’t how they have to be. Rightly or wrongly, they invoked a belief about what broader reality allows beyond commonly held opinion or practice to date. Maybe that’s a social reality too, but it’s a really different one.
The reason why the disease and death example is confusing to me is partly because I expect people to be highly emotional and unstrategic—willing to invest a great deal for only a small chance. People agonize over “maybe I could have done something” often enough. They demand doctors do things “so long as there’s a chance.” One can doubt that radical life extension is possible, but I don’t think one can be reasonably certain that it isn’t. I expect that if people thought there was any non-trivial chance that we didn’t need to let millions of people decay and die each year, they would be upset about it (especially given first-hand experience), and do something. As it is, I think most people take death and decay for granted. That’s just how it is. That’s what people do. That’s my confusion. How can you so blithely ignore the progress of the last few hundred years? Or the technological feats we continue to pull off. You think it’s reasonable for there to be giant flying metal cans? For us to split the atom and go to the moon? To edit genes and have artificial hearts? To have double historical lifespans already? Yet to never wonder whether life could be better still? To never be upset that maybe the universe doesn’t require it to be this way, that instead we (humanity) just haven’t got our shit together, and that’s a terrible tragedy?
This perspective is natural to me. Obvious. The question I am trying to answer is why I am different. I think I am the weird one (i.e., the unusual one). But what am I doing differently? How is my reality (social or otherwise) different? One of the reasonable answers is that I invoke a different type of reasoning to infer what is possible. My evidence is that I don’t encounter people responding with like-kind arguments (or even having considered the question) to questions of eliminating decay and death.
“Terrible” is a moral judgment. The anticipated experience is that when I point my “moral evaluator unit” at a morally terrible thing, it outputs “terrible.”
This isn’t necessarily a point in transhumanism’s favor! At least vertically-transmitted memeplexes (spread from parents to children, like established religions) face selective pressures tying the fitness of the meme to the fitness of the host. (Where evolutionary fitness isn’t necessarily good from a humane perspective, but there are at least bounds on how bad it can be.) Horizontally-transmitted memeplexes (like cults or mass political movements) don’t face this constraint and can optimize for raw marketing appeal independent of long-term consequences.
Isn’t this kind of circular? Compare: “A Vice President is anyone whose job title is vice-president. That’s a falsifiable prediction, because it constrains your anticipations of what you’ll see on their business card.” It’s true, but one is left with the sense that some important part of the explanation is being left out. What is the moral evaluator unit for?
I think moral judgements are usually understood to have a social function—if I see someone stealing forty cakes and say that that’s terrible, there’s an implied call-to-action to punish the thief in accordance with the laws of our tribe. It seems weird to expect this as an alternative to social reality.
You expect them to get angry (at whom, in particular?) because grandma keeps getting older? For tens of thousands of years of human history, the only alternative to this has been substantially worse for grandma. Unless she wants to die and you’re talking about euthanasia, but no additional medical research is needed for that. There is no precedent or direct empirical evidence that anything else is possible.
Maybe people are wrong for ignoring speculative arguments that anti-aging research is possible, but that’s a terrible example of people being bound by social reality.
1. True, for tens of thousands of years of human history, it has been that way. But “there is no precedent or direct empirical evidence that anything else is possible” emphatically does not cut it. Within only a few hundred years, the world has been transformed: we have magical god-devices that connect us across the world, we have artificial hearts, we can clean someone’s blood by pumping it out and then back in, we operate on brains, we put a man on the moon. In recent years you’ve got the rise of AI and gene editing. Lifespans are already double what they were for most of history. What has held for tens of thousands of years is no longer so. It is not that hard to see that humankind’s mastery over reality is only continuing to grow. Precedent? Maybe not. But reason for hope? Yes. An actually pretty reasonable expectation that our medical science is not maxed out? Definitely.
This isn’t speculative. The scientific and technological progress should be apparent to anyone who has lived through more than a few decades of recent history.
2. Anger doesn’t always have to have a target. But if you need one then pick society, pick science, pick research, pick doctors, pick your neighbours.
3. Watching your loved ones decay and die is anguish. If people are going to yell at doctors to do something, implying that something must be possible (though some would argue this is fake/performative), then let them also yell at the state of the world, at the fact that this unnecessary circumstance has come to be. Yell at the universe.
4. The alternative explanation to saying that people see the world overwhelmingly via social reality is that people simply have terrible causal models. Perhaps to me the scientific/technological progress of the last few hundred years is obviously, obviously reason to believe far more is possible (and better today than in fifty years), but not to others. Perhaps I’m wrong about it, though I don’t think I am.
And you needn’t be absolutely certain that curing death and aging is possible to demand we try. A chance should be enough. If you demand that doctors do things which only might prolong grandma’s life, then why not demand that we have better science, since there’s a chance of that working too?
Perhaps people really didn’t get enough of an education to appreciate science and technology (that we manipulate light itself to communicate near instantaneously sparks no wonder and awe, for example). So then I’d say they are overly anchored on the status quo. It is not so much being bound by social reality, but by how things are now, without extrapolation even fifty years forward or back—even when they themselves have lived through so much change.
5. I pick the example of disease and death because it is so personal, so immediate, so painful for many. It doesn’t require that we posit any altruistic motivation, and it’s a situation where I expect to see a lot of powerful emotion revealing how people relate to reality (rather than them strategically taking the options they think are immediately available to them).
I don’t think the disagreement here is about the feasibility of life extension. (I agree that it looks feasible.) I think the point that Benquo and I have been separately trying to make is that admonishing people to be angry independently of their anger having some specific causal effect on a specific target, doesn’t make sense in the context of trying to explain the “causal reality vs. social reality” frame. “People should be angrier about aging” might be a good thesis for a blog post, but I think it would work better as a different post.
The magnitude of the chance matters! Have you read the Overly Convenient Excuses Sequence? I think Yudkowsky explained this well in the post “But There’s Still a Chance, Right?”.
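(A quick toy calculation, purely illustrative on my part rather than anything from the Sequence, to unpack why magnitude matters: suppose acting costs \(C\) and success is worth \(V\) with probability \(p\). Then acting is worthwhile only when

\[ p \cdot V > C, \quad \text{i.e., } p > C/V. \]

With \(V = 10^6\) and \(C = 10^3\) in the same units, \(p = 10^{-2}\) gives \(pV = 10^4 > C\), so act; \(p = 10^{-6}\) gives \(pV = 1 < C\), so a bare “there’s still a chance” doesn’t cut it.)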
I wonder if this is a point where I am being misunderstood. Based on this and a few in-person conversations, people think I’m taking a normative stance here. I’m not. Not primarily. I am trying to understand a thing I am confused about and to explain my observations. I observe that my models lead me to expect that people would be doing X, but I do not observe that—so what am I missing?
For the record, for all those reading:
This post isn’t trying to tell anyone to do anything, and I’m not actively stating a judgment. I’m not saying they should be clamoring in the streets. There is no active admonishment directed at anyone here. There is no thesis. I haven’t thought enough about what people should be doing—I haven’t thought through what would actually be strategic for them. So I don’t know. Not with any confidence, not enough to tell them what to do.
Given that this is about my confusion about what I expect people to do, and that I don’t expect people to be strategic, the question of whether or not doing X would be strategic isn’t really relevant. My model doesn’t predict people to be strategic, so the fact that doing X might not be strategic doesn’t make me less confused.
(A valid counter to my confusion is saying that people are in fact strategic, but I’m rather incredulous. I’m not sure if you or Benquo were saying that?)
I am a bit confused, I might not be reading you carefully enough, but it feels here like you’re trying to explain people’s behavior with reference to normative behavior rather than descriptive (in this comment and earlier ones).
It’s precisely because I expect most people to think “but there’s still a chance, right?” that I would expect the possibility of life extension to motivate them to action—more so than if they cared about the magnitude. (Also, caring about magnitude is a causal-reality thing, I would say, as the notion of probabilities seemingly is.)
Your argument doesn’t make sense unless whatever “clamoring in the streets” stands in for metaphorically is an available action to the people you’re referring to. It seems to me like the vast majority of people are neither in an epistemic position where they can reasonably think that they know that there’s a good chance of curing aging, nor do they have any idea how to go about causing the relevant research to happen.
They do know how to increase the salience of “boo death,” but so far in the best case that seems to result in pyramids, which don’t work and never could, and even then only for the richest.
Note that even for those of us who strive for legibility of action (“live in the causal world”), it’s not clear that aging and death CAN be solved in humans at all, and seems downright unlikely that any strategy or action can solve it fast enough to avoid the pain and fear of the death of my loved ones and myself.
Whether a loved one dies at 65, 85, 105, 205, or 1005, it’s going to suck when it happens. No amount of clamoring in the streets (or directed research into biology) is going to avoid that pain. Some amount of effort and sacrifice toward life extension _CAN_ improve average and top-percentile lifespans, and that’s great if it applies to the people I care most about. And much research and behavior change is useful in improving the quality of the limited years of many people. Note that “quality” includes other people’s acceptance and support, so it mixes social reality into the definition.
It remains really unclear to me whether I should prefer that strangers live longer or that there are more strangers born to replace the dead ones. My intuition and initial preference is that fewer/longer is better than more/shorter lives, but I don’t have much rational justification for that, and with my current evidence for stagnation of beliefs and reduction in interest as people age, I suspect I may actually prefer more/shorter. I’m not sure how much of more/longer is possible as long as we’re limited to the current earth ecosystem.
Oops, went too far on the object level, sorry—my point is that there are many reasons someone might not spend much effort on eradicating aging, and “they live in social reality and don’t consider causal reality” is a very weak strawman for their choices.
What about descendants of you/your loved ones?
They get progressively more theoretical as distance increases. It seems I care about my n-degrees-removed cousin (in the present or future) whom I haven’t met and know no specifics about, about as much as any n-degrees-connected stranger. Note that I have no theory or considered belief that I _SHOULD_ care about some strangers or distant relatives more than others; this is pure introspection on what I seem to actually feel.