“When someone is arguing that a belief of yours is wrong, listen to the argument at the object level, instead of jumping to the question of whether saying the relevant arguments out loud is a good idea, which is a much harder question.”
It seems to me that the key issue here is the need for both public and private conversational spaces.
In public spaces, arguments are soldiers. They have to be, because others treat them that way, and because there are actual policies that we’re all fighting / negotiating over. In those contexts it is reasonable (I don’t know if it is correct, or not), to constrain what things you say, even if they’re true, because of their consequences. It is often the case that one piece of information, though true, taken out of context, does more harm than good, and often conveying the whole informational context to a large group of people is all but impossible.
But we need to be able to figure out which policies to support, somehow, separately from supporting them on this political battlefield. We also need private spaces, where we can think and our initial thoughts can be isolated from their possible consequences, or we won’t be able to think freely.
It seems like Carter thinks they are having a private conversation, in a private space, and Quinn thinks they’re having a public conversation in a public space.
(Strong-upvoted for making something explicit that is more often tacitly assumed. Seriously, this is an incredibly useful comment; thanks!!)
In public spaces, arguments are soldiers. They have to be, because others treat them that way, and because there are actual policies that we’re all fighting / negotiating over
Can you unpack what you mean by “have to be” in more detail? What happens if you just report your actual reasoning (even if your voice trembles)? (I mean that as a literal what-if question, not a rhetorical one. If you want, I can talk about how I would answer this in a future comment.)
I can imagine creatures living in a hyper-Malthusian Nash equilibrium where the slightest deviation from the optimal negotiating stance dictated by the incentives just gets you instantly killed and replaced with someone else who will follow the incentives. In this world, if being honest isn’t the optimal negotiating stance, then honesty is just suicide. Do you think this is a realistic description of life for present-day humans? Why or why not? (This is kind of a leading question on my part. Sorry.)
But we need to be able to figure out which policies to support, somehow, separately from supporting them on this political battlefield.
The problem with this is that private deliberation is extremely dependent on public information; misinformation has potentially drastic ripple effects. You might think you can sit in your room with an encyclopedia, figure out the optimal cause area, and compute the optimal propaganda for that cause … but if the encyclopedia authors are following the same strategy, then your encyclopedia is already full of propaganda.
Huh. Can you say why?
You’re clearly and explicitly advocating for a policy I think is abhorrent. This is really valuable, because it gives me a chance to argue that the policy is abhorrent, and potentially change your mind (or those of others in the audience who agree with the policy).
I want to make sure you get socially-rewarded for clearly and explicitly advocating for the abhorrent policy (thus the strong-upvote, “thanks!!”, &c.), because if you were to get punished instead, you might think, “Whoops, better not say that in public so clearly”, and then secretly keep on using the abhorrent policy.
Obviously—and this really should just go without saying—just because I think you’re advocating something abhorrent doesn’t mean I think you’re abhorrent. People make mistakes! Making mistakes is OK as long as there exists enough optimization pressure to eventually correct mistakes. If we’re honest with each other about our reasoning, then we can help correct each other’s mistakes! If we’re honest with each other about our reasoning in public, then even people who aren’t already our closest trusted friends can help us correct our mistakes!
Well, I think the main thing is that this depends on onlookers having the ability, attention, and motivation to follow the actual complexity of your reasoning, which is often a quite unreasonable assumption.
Usually, onlookers are going to round off what you’re saying to something simpler. Sometimes your audience has the resources to actually get on the same page with you, but that is not the default. If you’re not taking that dynamic into account, then you’re just shooting yourself in the foot.
Many of the things that I believe are nuanced, and nuance doesn’t travel well in the public sphere, where people will overhear one sentence out of context (for instance), and then tell their friends what “I believe.” So tact requires that I don’t say those things, in most contexts.
To be clear, I make a point to be honest, and I am not suggesting that you should ever outright lie.
You might think you can sit in your room with an encyclopedia, figure out the optimal cause area, and compute the optimal propaganda for that cause … but if the encyclopedia authors are following the same strategy, then your encyclopedia is already full of propaganda.
This does not seem right to me, so it seems like one of us is missing the other somehow.
Okay, I was getting too metaphorical with the encyclopedia; sorry about that. The proposition I actually want to defend is, “Private deliberation is extremely dependent on public information.” This seems obviously true to me. When you get together with your trusted friends in private to decide which policies to support, that discussion is mostly going to draw on evidence and arguments that you’ve heard in public discourse, rather than things you’ve directly seen and verified for yourself. But if everyone in Society is, like you, simplifying their public arguments in order to minimize their social “attack surface”, then the information you bring to your private discussion is based on fear-based simplifications, rather than the best reasoning humanity has to offer.
In the grandparent comment, the text “report your actual reasoning” is a link to the Sequences post “A Rational Argument”, which you’ve probably read. I recommend re-reading it.
If you omit evidence against your preferred conclusion, people can’t take your reasoning at face value anymore: if you first write at the bottom of a piece of paper, “… and therefore, Policy P is the best,” it doesn’t matter what you write on the lines above.
A similarly catastrophic, but not identical, distortion occurs when you omit evidence that “someone might take the wrong way.” If your actual bottom line is, “And therefore I’m a Good Person who definitely doesn’t believe anything that could look Bad if taken out of context,” well, that might be a safe life decision for you, but then it’s not clear why I should pay attention to anything else you say.
If you’re not taking that dynamic into account, then you’re just shooting yourself in the foot. [...] people will overhear one sentence out of context (for instance), and then tell their friends what “I believe.”
Alternative metaphor: the people punishing you for misinterpretations of what you actually said are the ones shooting you in the foot. Those bastards! Maybe if we strategize about it together, there’s some way to defy them, rather than accepting their tyrannical rule as inevitable?
To be clear, I make a point to be honest, and I am not suggesting that you should ever outright lie.
It depends on what “honest” means in this context. If “honest” just means “not telling conscious explicit unambiguous outright lies” then, sure, whatever. I think intellectual honesty is a much higher standard than that.
(I’m not sure this comment is precisely a reply to the previous one, or more of a general reply to “things Zack has been saying for the past 6 months”)
I notice that by this point I basically agree with some kind of “something about the Overton window of norms should change in the direction Zack is pushing in”, but it seems… like you’re pushing more for an abstract principle than a concrete change, and I’m not sure how to evaluate it. I’d find it helpful if you got more specific about what you’re pushing for.
I’d summarize my high-level understanding of the push you’re making as:
1. “Geez, the appropriate mood for ‘hmm, communicating openly and honestly in public seems hard’ is not ‘whelp, I guess we can’t do that then’. Especially if we’re going to call ourselves rationalists”
2. Any time that mood seems to be cropping up or underlying someone’s decision procedure, it should be pushed back against.
[is that a fair high level summary?]
I think I have basically come to agree (or at least take quite seriously), point #1 (this is a change from 6 months ago). There are some fine details about where I still disagree with something about your approach, and what exactly my previous and new positions are/were. But I think those are (for now) more distracting than helpful.
My question is, what precise things do you want changed from the status quo? (I think it’s important to point at missing moods, but implementing a missing mood requires actually operationalizing it into actions of some sort.) I think I’d have an easier time interacting with this if I understood better what exact actions and policies you’re pushing for.
I see roughly two levels of things one might operationalize:
Individual Action – Things that individuals should be trying to do (and, if you’re a participant on LessWrong or similar spaces, the “price for entry” should be something like “you agree that you are supposed to be trying to do this thing”)
Norm Enforcement – Things that people should be commenting on, or otherwise acting upon, when they see other people doing them
(you might split #2 into “things everyone should do” vs “things site moderators should do”, or you might treat those as mostly synonymous)
Some examples of things you might mean by Individual Action are things like:
“You[everyone] should be attempting to gain thicker skin” (or, different take: “you should try to cultivate an attitude wherein people criticizing your post doesn’t feel like an attack”)
“You should notice when you have avoided speaking up about something because it was inconvenient.” (Additional/alternate variants include: “when you notice that, speak up anyway”, or “when you notice that, speak up, if the current rate at which you mention the inconvenient things is proportionately lower than the rate at which you mention convenient things”)
Some examples of norm enforcement might be:
“When you observe someone saying something false, or sliding goalposts around in a way that seems dishonest, say so” (with sub-options for how to go about saying so: maybe you say they are lying, or motivated, or maybe you just focus on the falseness).
“When you observe someone systematically saying true-things that seem biased, say so”
Some major concerns/uncertainties of mine are:
1. How do you make sure that you don’t accidentally create a new norm which is “don’t speak up at all” (because it’s much easier to notice and respond to things that are happening, vs things that are not happening)
2. Which proposed changes are local strict improvements, that you can just start doing and have purely good effects, and which require multiple changes happening at once in order to have good effects. Or, which changes require some number of people to just be willing to eat some social cost until a new equilibrium is reached. (This might be fine, but I think it’s easier to respond concretely to a proposal with a clearer sense of what that social cost is. If people aren’t willing to pay the cost, you might need a kickstarter for Inadequate Equilibria.)
Both concerns seem quite addressable; they just require some operationalization.
For me to implement changes in myself (either as a person aspiring to be a competent truthseeking community member, or as a person helping to maintain a competent truthseeking culture), they ideally need to be specified in some kind of Trigger-Action form. (This may not be universally true; some people get more mileage out of internal-alignment shifts rather than habit changes, but I personally find the latter much more helpful.)
you’re pushing more for an abstract principle than a concrete change
I mean, the abstract principle that matters is of the kind that can be proved as a theorem rather than merely “pushed for.” If a lawful physical process results in the states of physical system A becoming correlated with the states of system B, and likewise system B and system C, then observations of the state of system C are evidence about the state of system A. I’m claiming this as technical knowledge, not a handwaved philosophical intuition; I can write literal computer programs that exhibit this kind of evidential-entanglement relationship.
Notably, the process whereby you can use your observations about C to help make better predictions about A doesn’t work if system B is lying to make itself look good. I again claim this as technical knowledge, and not a political position.
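A minimal sketch of such a program (the chain A → B → C below uses an invented 0.1 noise rate per link and a uniform prior on A; the exact numbers are arbitrary and chosen only for illustration):

```python
# Toy chain A -> B -> C with an assumed 10% chance of a flip at each link.
from itertools import product

NOISE = 0.1  # invented noise rate per link


def joint_a_c(b_honest):
    """Return the joint distribution P(A, C) by exact enumeration.

    If b_honest, B's visible state copies A (up to noise); otherwise B
    always presents itself as 1 ("looking good"), regardless of A.
    """
    dist = {}
    for a, flip_ab, flip_bc in product([0, 1], repeat=3):
        p = 0.5  # uniform prior on A
        p *= NOISE if flip_ab else 1 - NOISE
        p *= NOISE if flip_bc else 1 - NOISE
        b = (a ^ flip_ab) if b_honest else 1  # dishonest B ignores A
        c = b ^ flip_bc                       # C tracks B's presentation
        dist[(a, c)] = dist.get((a, c), 0.0) + p
    return dist


for honest in (True, False):
    d = joint_a_c(honest)
    p_a1_given_c1 = d[(1, 1)] / (d[(0, 1)] + d[(1, 1)])
    print(f"B honest={honest}:  P(A=1) = 0.5,  P(A=1 | C=1) = {p_a1_given_c1:.2f}")

# Honest B:    P(A=1 | C=1) = 0.82, so observing C is evidence about A.
# Dishonest B: P(A=1 | C=1) = 0.50, so observing C tells you nothing about A.
```

The second case is the point of the previous paragraph: once B’s visible state is chosen to look good rather than to track A, C is still perfectly observable, but it stops being evidence about A.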
Any time that mood seems to be cropping up or underlying someone’s decision procedure, it should be pushed back against.
The word “should” definitely doesn’t belong here. Like, that’s definitely a fair description of the push I’m making. Because I actually feel that way. But obviously, other people shouldn’t passionately advocate for open and honest discourse if they’re not actually passionate about open and honest discourse: that would be dishonest!
I think I’d have an easier time interacting with this if I understood better what exact actions and policies you’re pushing for.
I mean, you don’t have to interact with it if you don’t feel like it! I’m not the boss of anyone!
But obviously, other people shouldn’t passionately advocate for open and honest discourse if they’re not actually passionate about open and honest discourse: that would be dishonest!
The unpacked “should” I imagined you implying was more like “If you do not feel it is important to have open/honest discourse, you are probably making a mistake. i.e. it’s likely that you’re not noticing the damage you’re doing and if you really reflected on it honestly you’d probably ”
Notably, the process whereby you can use your observations about C to help make better predictions about A doesn’t work if system B is lying to make itself look good. I again claim this as technical knowledge, and not a political position.
That part is technical knowledge (and so is the related “the observation process doesn’t work [well] if system B is systematically distorting things in some way, whether intentional or not.”). And I definitely agree with that part, and expect Eli does too, and generally don’t think it’s where the disagreement lives.
But, you seem to have strongly implied, if not outright stated, that this isn’t just an interesting technical fact that exists in isolation: it implies an optimal (or at least improved) policy that individuals and groups can adopt to improve their truthseeking capability. This implies we (at least, rationalists with roughly similar background assumptions as you) should be doing something differently than we currently are. And, like, it actually matters what that thing is.
There is some fact of the matter about what sorts of interacting systems can make the best predictions and models.
There is a (I suspect different) fact of the matter of what the optimal systems you can implement on humans look like, and yet another quite different fact of the matter of what improvements are possible on LessWrong-in-particular given our starting conditions, and what is the best way to coordinate on them. They certainly don’t seem like they’re going to come about by accident.
There is a fact of the matter of what happens if you push for “thick skin” and saying what you mean without regard for politeness – maybe it results in a community that converges on truth faster (by some combination of distorting less when you speak, or by spending less effort on communication or listening). Or maybe it results in a community that converges on truth slower because it selected more for people who are conflict-prone than people who are smart. I don’t actually know the answer here, and the answer seems quite important.
Early LessWrong had a flaw (IMO) regarding instrumental rationality – there is also a fact of the matter of what an optimal AI decisionmaker would do if they were running on a human brain’s worth of compute. But, this is quite different from what kind of decisionmaking works best implemented on typical human wetware, and failure to understand this resulted in a lot of people making bad plans and getting depressed because the plans they made were actually impossible to run.
I mean, you don’t have to interact with it if you don’t feel like it! I’m not the boss of anyone!
Sure, but, like, I want to interact with it (both individually and as a site moderator) because I think it’s pointing in an important direction. You’ve noted this as something I should probably pay special attention to. And, like, I think you’re right, so I’m trying to pay special attention to it.
The word “should” definitely doesn’t belong here. Like, that’s definitely a fair description of the push I’m making. Because I actually feel that way. But obviously, other people shouldn’t passionately advocate for open and honest discourse if they’re not actually passionate about open and honest discourse: that would be dishonest!
This seems to me like you’re saying “people shouldn’t have to advocate for being open and honest because people should be open and honest”
And then the question becomes… If you think it’s true that people should be open and honest, do you have policy proposals that help that become true?
Not really? The concept of a “policy proposal” seems to presuppose control over some powerful central decision node, which I don’t think is true of me. This is a forum website. I write things. Maybe someone reads them. Maybe they learn something. Maybe me and the people who are better at open and honest discourse preferentially collaborate with each other (and ignore people who we can detect are playing a different game), have systematically better ideas, and newcomers tend to imitate our ways in a process of cultural evolution.
I separated out the question of “stuff individuals should do unilaterally” from “norm enforcement” because it seems like at least some stuff doesn’t require any central decision nodes.
In particular, while “don’t lie” is an easy injunction to follow, “account for systematic distortions in what you say” is actually quite computationally hard, because there are a lot of distortions with different mechanisms and different places one might intervene on their thought process and/or communication process. “Publicly say literally every inconvenient thing you think of” probably isn’t what you meant (or maybe it was?), and it might cause you to end up having a harder time thinking inconvenient thoughts.
I’m asking because I’m actually interested in improving on this dimension.
Some current best guesses of mine, at least for my own values, are:
“Practice noticing heretical thoughts I think, and actually notice what things I can’t say, without obligating myself to say them, so that I don’t accidentally train myself not to think them”
“Practice noticing opportunities to exhibit social courage, either in low-stakes situations or important situations. Allocate some additional attention towards practicing social courage as a skill/muscle” (it’s unclear to me how much to prioritize this, because there are two separate potential models, ‘social/epistemic courage is a muscle’ and ‘social/epistemic courage is a resource you can spend, but you risk using up people’s willingness to listen to you’, as well as the worry that “most things one might be courageous about actually aren’t important and you’ll end up spending a lot of effort on things that don’t matter”)
But, I am interested in what you actually do within your own frame/value setup.
I’m more interested, as the person who has been the powerful central decision node at multiple times in my life, and will likely be in the future (and as someone who is interested in institution design in general), in whether you have suggestions for how to make this work in new or existing institutions. For instance, some of the ideas I’ve shared elsewhere on radical transparency norms seem like one way to go about this.
I think cultural evolution and the marketplace of ideas seem like a good idea, but memetics unfortunately selects for things other than just truth, and relying on memetics to propagate truth norms (if indeed the propagation of truth norms is good) feels insufficient.
The proposition I actually want to defend is, “Private deliberation is extremely dependent on public information.” This seems obviously true to me. When you get together with your trusted friends in private to decide which policies to support, that discussion is mostly going to draw on evidence and arguments that you’ve heard in public discourse, rather than things you’ve directly seen and verified for yourself.
Most of the harm here comes not from public discourse being filtered in itself, but from people updating on filtered public discourse as if it were unfiltered. This makes me think it’s better to get people to realize that public discourse isn’t going to contain all the arguments than to get them to include all the arguments in public discourse.
I agree that that’s much less bad—but “better”? “Better”!? By what standard? What assumptions are you invoking without stating them?
I should clarify: I’m not saying submitting to censorship is never the right thing to do. If we live in Amazontopia, and there’s a man with a gun on the streetcorner who shoots anyone who says anything bad about Jeff Bezos, then indeed, I would not say anything bad about Jeff Bezos—in this specific (silly) hypothetical scenario with that specific threat model.
But ordinarily, when we try to figure out which cognitive algorithms are “better” (efficiently produce accurate maps, or successful plans), we tend to assume a “fair” problem class unless otherwise specified. The theory of “rational thought, except you get punished if you think about elephants” is strictly more complicated than the theory of “rational thought.” Even if we lived in a world where robots with MRI machines who punish elephant-thoughts were not unheard of and needed to be planned for, it would be pedagogically weird to treat that as the central case.
I hold “discourse algorithms” to the same standard: we need to figure out how to think together in the simple, unconstrained case before we have any hope of successfully dealing with the more complicated problem of thinking together under some specific censorship threat.
I am not able to rightly apprehend what kind of brain damage has turned almost everyone I used to trust into worthless cowards who just assume as if it were a law of nature that discourse is impossible—that rank and popularity are more powerful than intelligence. Is the man on the streetcorner actually holding a gun, or does he just flash his badge and glare at people? Have you even looked?
Most of the harm
Depends on the problem you’re facing. If you just want accurate individual maps, sufficiently smart Bayesians can algorithmically “back out” the effects of censorship. But what if you actually need common knowledge for something?
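A minimal sketch of the “backing out” step, under an invented filtering model (prior 0.5 on a hypothesis H; a reporter who sees a noisy signal and publishes it only when it favors H, staying silent otherwise; all numbers are made up for illustration):

```python
# Known filtering policy: the reporter publishes a signal only when it favors
# H, and stays silent otherwise. (Prior and likelihoods are invented numbers.)
P_H = 0.5
P_GOOD_GIVEN_H, P_GOOD_GIVEN_NOT_H = 0.8, 0.3


def update(p_e_given_h, p_e_given_not_h, prior=P_H):
    """One Bayes update for the binary hypothesis H."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))


# A published "good for H" report: both readers update the same way.
after_report = update(P_GOOD_GIVEN_H, P_GOOD_GIVEN_NOT_H)

# Silence: the naive reader leaves the prior alone; the savvy reader knows
# silence means the signal was unfavorable and got filtered out.
naive_after_silence = P_H
savvy_after_silence = update(1 - P_GOOD_GIVEN_H, 1 - P_GOOD_GIVEN_NOT_H)

print(f"published report:      P(H) = {after_report:.2f}")         # 0.73
print(f"silence, naive reader: P(H) = {naive_after_silence:.2f}")  # 0.50 (miscalibrated)
print(f"silence, savvy reader: P(H) = {savvy_after_silence:.2f}")  # 0.22 (filter backed out)
```

A reader who knows the filtering policy recovers a calibrated individual map, but nothing in this calculation gives the audience common knowledge that everyone else is correcting in the same way.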
This is probably not the crux of our disagreement, but I think we already understand perfectly well how to think together and we’re limited by temperament rather than understanding. I agree that if we’re trying to think about how to think together we can treat no censorship as the default case.
worthless cowards
If cowardice means fear of personal consequences, this doesn’t ring true as an ad hominem. Speaking without any filter is fun and satisfying and consistent with a rationalist pro-truth self-image and other-image. The reason why I mostly don’t do it is because I’d feel guilt about harming the discourse. This motivation doesn’t disappear in cases where I feel safe from personal consequences, e.g. because of anonymity.
who just assume as if it were a law of nature that discourse is impossible
I don’t know how you want me to respond to this. Obviously I think my sense that real discourse on fraught topics is impossible is based on extensively observing attempts at real discourse on fraught topics being fake. I suspect your sense that real discourse is possible is caused by you underestimating how far real discourse would diverge from fake discourse because you assume real discourse is possible and interpret too much existing discourse as real discourse.
The reason why I mostly don’t do it is because I’d feel guilt about harming the discourse
Woah, can you explain this part in more detail?! Harming the discourse how, specifically? If you have thoughts, and your thoughts are correct, how does explaining your correct thoughts make things worse?
Consider the idea that the prospect of advanced AI implies the returns from stopping global warming are much smaller than you might otherwise think. I think this is a perfectly correct point, but I’m also willing to never make it, because a lot of people will respond by updating against the prospect of advanced AI, and I care a lot more about people having correct opinions on advanced AI than on the returns from stopping global warming.
I want to distinguish between “harming the discourse” and “harming my faction in a marketing war.”
When I say that public discourse is really important, what I mean is that if you tell the truth in public about what you believe and why (possibly investing a lot of effort and using a lot of hyperlinks to bridge the inferential distance), then other people who aren’t already your closest trusted friends have the opportunity to learn from the arguments and evidence that actually convinced you, combine it with their own knowledge, and potentially make better decisions. (“Discourse” might not be the right word here—the concept I want to point to includes unilateral truthtelling, as on a blog with no comment section, or where your immediate interlocutor doesn’t “reciprocate” in good faith, but someone in the audience might learn something.)
If you think other people can’t process arguments at all, but that you can, how do you account for your own existence? For myself: I’m smart, but I’m not that smart (IQ ~130). The Sequences were life-changingly great, but I was still interested in philosophy and argument before that. Our little robot cult does not have a monopoly on reasoning itself.
a lot of people will respond by updating against the prospect of advanced AI
Sure. Those are the people who don’t matter. Even if you could psychologically manipulate [revised: persuade] them into having the correct bottom-line “opinion”, what would you do with them? Were you planning to solve the alignment problem by lobbying Congress to pass appropriate legislation?
When I say that public discourse is really important, what I mean is that if you tell the truth in public about what you believe and why (possibly investing a lot of effort and using a lot of hyperlinks to bridge the inferential distance)
I want to agree with the general point here, but I find it breaking down in some of the cases I’m considering. I think the underlying generator is something like “communication is a two-way street”, and it makes sense to not just emit sentences that compile and evaluate to ‘true’ in my ontology, but that I expect to compile and evaluate to approximately what I wanted to convey in their ontology.
Does that fall into ‘harming my faction in a marketing war’ according to you?
No, I agree that authors should write in language that their audience will understand. I’m trying to make a distinction between having intent to inform (giving the audience information that they can use to think with) vs. persuasion (trying to exert control over the audience’s conclusion). Consider this generalization of a comment upthread—
Consider the idea that X implies Y. I think this is a perfectly correct point, but I’m also willing to never make it, because a lot of people will respond by concluding that not-X, because they’re emotionally attached to not-Y, and I care a lot more about people having correct beliefs about the truth value of X than Y.
This makes perfect sense as part of a consequentialist algorithm for maximizing the number of people who believe X. The algorithm works just as well, and for the same reasons whether X = “superintelligence is an existential risk” and Y = “returns from stopping global warming are smaller than you might otherwise think” (when many audience members have global warming “cause-area loyalty”), or whether X = “you should drink Coke” and Y = “returns from drinking Pepsi are smaller than you might otherwise think” (when many audience members have Pepsi brand loyalty). That’s why I want to call it a marketing algorithm—the function is to strategically route around the audience’s psychological defenses, rather than just tell them stuff as an epistemic peer.
To be clear, if you don’t think you’re talking to an epistemic peer, strategically routing around the audience’s psychological defenses might be the right thing to do! For an example that I thought was OK because I didn’t think it significantly distorted the discourse, see my recent comment explaining an editorial choice I made in a linkpost description. But I think that when one does this, it’s important to notice the nature of what one is doing (there’s a reason my linked comment uses the phrase “marketing keyword”!), and track how much of a distortion it is relative to how you would talk to an epistemic peer. As you know, quality of discourse is about the conversation executing an algorithm that reaches truth, not just convincing people of the conclusion that (you think) is correct. That’s why I’m alarmed at the prospect of someone feeling guilty (!?) that honestly reporting their actual reasoning might be “harming the discourse” (!?!?).
To be clear, if you don’t think you’re talking to an epistemic peer, strategically routing around the audience’s psychological defenses might be the right thing to do!
I’m confused reading this.
It seems to me that you think routing around psychological defenses is sometimes a reasonable thing to do with people who aren’t your epistemic peers.
But you said above that you thought the overall position of having private discourse spaces and public discourse spaces is abhorrent?
How do these fit together? The vast majority of people are not your (or my) epistemic peers; even the robot cult doesn’t have a monopoly on truth or truth seeking. And so you would behave differently in private spaces with your peers than in public spaces that include the whole world.
It’s a fuzzy Sorites-like distinction, but I think I’m more sympathetic to trying to route around a particular interlocutor’s biases in the context of a direct conversation with a particular person (like a comment or Tweet thread) than I am in writing directed “at the world” (like top-level posts), because the more something is directed “at the world”, the more you should expect that many of your readers know things that you don’t, such that the humility argument for honesty applies forcefully.
FWIW, I have the opposite inclination. If I’m talking with a person one-on-one, we have high bandwidth. I will try to be skillful and compassionate in avoiding triggering them, while still saying what’s true, and depending on who I’m talking to, I may elect to remain silent about some of the things that I think are true.
But I overall am much more uncomfortable with anything less than straightforward statements of what I believe and why in smaller contexts with fewer people, where there is the communication capacity to clarify misunderstandings, and where my declining to offer an objection to something that someone says more strongly implies agreement.
the more you should expect that many of your readers know things that you don’t
This seems right to me.
But it also seem right to me that the broader your audience the lower their average level of epistemics and commitment to epistemic discourse norms. And your communication bandwidth is lower.
Which means there is proportionally more risk of 1) people mishearing you and that damaging the prospects of the policies you want to advocate for (eg “marketing”), 2) people mishearing you, and that causing you personal problems of various stripes, and 3) people understanding you correctly, and causing you personal problems of various stripes. [1]
So the larger my audience the more reticent I might be about what I’m willing to say.
There’s obviously a fourth quadrant of that 2-by-2, “people hearing you correctly and that damaging the prospects of the policies you want to advocate for.”
Acting to avoid that seems commons-destroying, and personally out of integrity. If my policy proposals have true drawbacks, I want to clearly acknowledge them and state why I think they’re worth it, not dissemble about them.
“Intent to inform” jives with my sense of it much more than “tell the truth.”
On reflection, I think the ‘epistemic peer’ thing is close but not entirely right. Definitely if I think Bob “can’t handle the truth” about climate change, and so I only talk about AI with Bob, then I’m deciding that Bob isn’t an epistemic peer. But if I have only a short conversation with Bob, then there’s a Gricean implication point that saying X implicitly means I thought it was more relevant to say than Y, or is complete, or so on, and so there are whole topics that might be undiscussed because I don’t want to send the implicit message that my short thoughts on the matter are complete enough to reconstruct my position or that this topic is more relevant than other topics.
---
More broadly, I note that I often see “the discourse” used as a term of derision, I think because it is (currently) something more like a marketing war than an open exchange of information. Or, like a market left to its own devices, it has Goodharted on marketing. It is unclear to me whether it’s better to abandon it (like, for example, not caring about what people think on Twitter) or attempt to recapture it (by pushing for the sorts of ‘public goods’ and savvy customers that cause markets to Goodhart less on marketing).
Sharing reasoning is obviously normally good, but we obviously live in a world with lots of causally important actors who don’t always respond rationally to arguments, and there are cases like the grandparent comment when one is justified in worrying that an argument would make people stupid in a particular way, and one can avoid this problem by not making the argument, and doing so is importantly different from filtering out arguments for causing a justified update against one’s side, and is even more importantly different from anything similar to what pops into people’s minds when they hear “psychological manipulation”. If I’m worried that someone with a finger on some sort of hypertech button may avoid learning about some crucial set of thoughts about what circumstances it’s good to press hypertech buttons under because they’ve always vaguely heard that set of thoughts is disreputable and so never looked into it, I don’t think your last paragraph is a fair response to that. I think I should tap out of this discussion because I feel like the more-than-one-sentence-at-a-time medium is nudging it more toward rhetoric than debugging, but let’s still talk some time.
even more importantly different from anything similar to what pops into people’s minds when they hear “psychological manipulation”
That’s fair. Let me scratch “psychologically manipulate”, edit to “persuade”, and refer to my reply to Vaniver and Ben Hoffman’s “The Humility Argument for Honesty” (also the first link in the grandparent) for the case that generic persuasion techniques are (counterintuitively!) Actually Bad.
I feel like the more-than-one-sentence-at-a-time medium is nudging it more toward rhetoric than debugging
I don’t think it’s the long-form medium so much as it is the fact that I am on a personal vindictive rampage against appeals-to-consequences lately. You should take my vindictiveness into account if you think it’s biasing me!
In those contexts it is reasonable (I don’t know if it is correct, or not), to constrain what things you say, even if they’re true, because of their consequences.
This agrees with Carter:
So, of course you can evaluate consequences in your head before deciding to say something.
Carter is arguing that appeals to consequences should be disallowed at the level of discourse norms, including public discourse norms. That is, in public, “but saying that has bad consequences!” is considered invalid.
It’s better to fight on a battlefield with good rules than one with bad rules.
Hmm...something about that seems not quite right to me. I’m going to see if I can draw out why.
Carter is arguing that appeals to consequences should be disallowed at the level of discourse norms, including public discourse norms. That is, in public, “but saying that has bad consequences!” is considered invalid.
The thing at stake for Quinn_Eli is not whether or not this kind of argument is “invalid”. It’s whether or not she has the affordance to make a friendly, if sometimes forceful, bid to bring this conversation into a private space, to avoid collateral damage.
(Sometimes of course, the damage won’t be collateral. If in private discussion, Quinn concludes, to the best of her ability to reason, that, in fact, it would be good if fewer people donated to PADP, she might then give that argument in public. And if others make bids to, say, explore that privately, at that stage, she might respond, “No. I am specifically arguing that onlookers should donate less to PADP (or think that decreasing their donations is a reasonable outcome of this argument). That isn’t accidental collateral damage. It’s the thing that’s at stake for me right now.”)
I don’t know if you already agree with what I’m saying here.
. . .
It’s better to fight on a battlefield with good rules than one with bad rules.
I don’t think we get to pick the rules of the battlefield. The rules of the battlefield are defined only by what causes one to win. Nature alone chooses the rules.
Bidding to move to a private space isn’t necessarily bad but at the same time it’s not an argument. “I want to take this private” doesn’t argue for any object-level position.
It seems that the text of what you’re saying implies you think humans have no agency over discourse norms, regulations, rules of games, etc, but that seems absurd so I don’t think you actually believe that. Perhaps you’ve given up on affecting them, though.
(“What wins” is underdetermined given that choice is involved in what wins; you can’t extrapolate from two-player zero-sum games (where there’s basically one best strategy) to multi-player zero-sum games (where there isn’t, at least due to coalitional dynamics implying a “weaker” player can win by getting more supporters))
It seems that the text of what you’re saying implies you think humans have no agency over discourse norms, regulations, rules of games, etc, but that seems absurd so I don’t think you actually believe that.
How much agency we have is proportional to how many other actors are in a space. I think it’s quite achievable (though requires a bit of coordination) to establish good norms for a space with 100 people. It’s still achievable, but… probably at least (10x?) as hard to establish good norms for 1000 people.
But “public searchable internet” is immediately putting things in a context with at least millions if not billions of potentially relevant actors, many of whom don’t know anything about your norms. I’m still actually fairly optimistic about making important improvements to this space, but those improvements will have a lot of constraints for anyone with major goals that affect the world-stage.
It seems that the text of what you’re saying implies you think humans have no agency over discourse norms, regulations, rules of games, etc, but that seems absurd so I don’t think you actually believe that. Perhaps you’ve given up on affecting them, though.
I do think that is possible and often correct to push for some discourse norms over others. I will often reward moves that I think are good, and will sometimes challenge moves that I think are harmful to our collective epistemology.
But I don’t think that I have much ability to “choose” how other people will respond to my speech acts. The world is a lot bigger than me, and it would be imprudent to mis-model the fact that, for instance, many people will not or cannot follow some forms of argument, but will just round what you’re saying to the closest thing that they can understand. And that this can sometimes cause damage.
(I think that you must agree with this? Or maybe you think that you should refuse to engage in groups where the collective epistemology can’t track nuanced argument? I don’t think I’m getting you yet.)
Bidding to move to a private space isn’t necessarily bad but at the same time it’s not an argument. “I want to take this private” doesn’t argue for any object-level position
I absolutely agree.
I think the main thing I want to stand for here is both that obviously the consequences of believing or saying a statement have no bearing on its truth value (except in unusual self-fulfilling prophecy edge cases), and it is often reasonable to say “Hey man, I don’t think you should say that here in this context where bystanders will overhear you.”
I’m afraid that those two might be getting conflated, or that one is being confused for the other (not in this dialogue, but in the world).
To be clear, I’m not sure that I’m disagreeing with you. I do have the feeling that we are missing each other somehow.
Yes, and Carter is arguing in a context where it’s easy to shift the discourse norms, since there are few people present in the conversation.
LW doesn’t have that many active users, it’s possible to write posts arguing for discourse norms, sometimes to convince moderators they are good, etc.
and it is often reasonable to say “Hey man, I don’t think you should say that here in this context where bystanders will overhear you.”
Sure, and also “that’s just your opinion, man, so I’ll keep talking” is often a valid response to that. It’s important not to bias towards saying exposing information is risky while hiding it is not.
It seems to me that they key issue here is the need for both public and private conversational spaces.
In public spaces, arguments are soldiers. They have to be, because others treat them that way, and because there are actual policies that we’re all fighting / negotiating over. In those contexts it is reasonable (I don’t know if it is correct, or not), to constrain what things you say, even if they’re true, because of their consequences. It is often the case that one piece of information, though true, taken out of context, does more harm than good, and often conveying the whole informational context to a large group of people is all but impossible.
But we need to be able to figure out which policies to support, somehow, separately from supporting them on this political battlefield. We also need private spaces, where we can think and our initial thoughts can be isolated from their possible consequences, or we won’t be able to think freely.
It seems like Carter thinks they are having a private conversation, in a private space, and Quinn thinks they’re having a public conversation in a public space.
(Strong-upvoted for making something explicit that is more often tacitly assumed. Seriously, this is an incredibly useful comment; thanks!!)
Can you unpack what you mean by “have to be” in more detail? What happens if you just report your actual reasoning (even if your voice trembles)? (I mean that as a literal what-if question, not a rhetorical one. If you want, I can talk about how I would answer this in a future comment.)
I can imagine creatures living in a hyper-Malthusian Nash equilibrium where the slightest deviation from the optimal negotiating stance dictated by the incentives just gets you instantly killed and replaced with someone else who will follow the incentives. In this world, if being honest isn’t the optimal negotiating stance, then honesty is just suicide. Do you think this is a realistic description of life for present-day humans? Why or why not? (This is kind of a leading question on my part. Sorry.)
The problem with this is that private deliberation is extremely dependent on public information; misinformation has potentially drastic ripple effects. You might think you can sit in your room with an encyclopedia, figure out the optimal cause area, and compute the optimal propaganda for that cause … but if the encyclopedia authors are following the same strategy, then your encyclopedia is already full of propaganda.
Huh. Can you say why?
You’re clearly and explicitly advocating for a policy I think is abhorrent. This is really valuable, because it gives me a chance to argue that the policy is abhorrent, and potentially change your mind (or those of others in the audience who agree with the policy).
I want to make sure you get socially-rewarded for clearly and explicitly advocating for the abhorrent policy (thus the strong-upvote, “thanks!!”, &c.), because if you were to get punished instead, you might think, “Whoops, better not say that in public so clearly”, and then secretly keep on using the abhorrent policy.
Obviously—and this really should just go without saying—just because I think you’re advocating something abhorrent doesn’t mean I think you’re abhorrent. People make mistakes! Making mistakes is OK as long as there exists enough optimization pressure to eventually correct mistakes. If we’re honest with each other about our reasoning, then we can help correct each other’s mistakes! If we’re honest with each other about our reasoning in public, then even people who aren’t already our closest trusted friends can help us correct our mistakes!
Well, I think the main thing is that this depends on onlookers having the ability, attention, and motivation to follow the actual complexity of your reasoning, which is often a quiet unreasonable assumption.
Usually, onlookers are going to round off what you’re saying to something simpler. Sometimes your audience has the resources to actually get on the same page with you, but that is not the default. If you’re not taking that dynamic into account, then you’re just shooting yourself in the foot.
Many of the things that I believe are nuanced, and nuance doesn’t travel well in the public sphere, where people will overhear one sentence out of context (for instance), and then tell their friends what “I believe.” So tact requires that I don’t say those things, in most contexts.
To be clear, I make a point to be honest, and I am not suggesting that you should ever outright lie.
This does not seem right to me, so it seems like one of us is missing the other somehow.
Okay, I was getting too metaphorical with the encyclopedia; sorry about that. The proposition I actually want to defend is, “Private deliberation is extremely dependent on public information.” This seems obviously true to me. When you get together with your trusted friends in private to decide which policies to support, that discussion is mostly going to draw on evidence and arguments that you’ve heard in public discourse, rather than things you’ve directly seen and verified for yourself. But if everyone in Society is, like you, simplifying their public arguments in order to minimize their social “attack surface”, then the information you bring to your private discussion is based on fear-based simplifications, rather than the best reasoning humanity has to offer.
In the grandparent comment, the text “report your actual reasoning” is a link to the Sequences post “A Rational Argument”, which you’ve probably read. I recommend re-reading it.
If you omit evidence against your preferred conclusion, people can’t take your reasoning at face value anymore: if you first write at the bottom of a piece of paper, ”… and therefore, Policy P is the best,” it doesn’t matter what you write on the lines above.
A similarly catastrophic, but not identical, distortion occurs when you omit evidence that “someone might take the wrong way.” If your actual bottom line is, “And therefore I’m a Good Person who definitely doesn’t believe anything that could look Bad if taken out of context,” well, that might be a safe life decision for you, but then it’s not clear why I should pay attention to anything else you say.
Alternative metaphor: the people punishing you for misinterpretations of what you actually said are the ones shooting you in the foot. Those bastards! Maybe if we strategize about it together, there’s some way to defy them, rather than accepting their tyrannical rule as inevitable?
It depends on what “honest” means in this context. If “honest” just means “not telling conscious explicit unambiguous outright lies” then, sure, whatever. I think intellectual honesty is a much higher standard than that.
(I’m not sure this comment is precisely a reply to the previous one, or more of a general reply to “things Zack has been saying for the past 6 months”)
I notice that I basically by this point agree with some kind of “something about the overton window of norms should change in the direction Zack is pushing in”, but it seems… like you’re pushing more for an abstract principle than a concrete change, and I’m not sure how to evaluate it. I’d find it helpful if you got more specific about what you’re pushing for.
I’d summarize my high-level understanding of the push you’re making as:
1. “Geez, the appropriate mood for ‘hmm, communicating openly and honestly in public seems hard’ is not ‘whelp, I guess we can’t do that then’. Especially if we’re going to call ourselves rationalists”
2. Any time that mood seems to cropping up or underlying someone’s decision procedure it should be pushed back against.
[is that a fair high level summary?]
I think I have basically come to agree (or at least take quite seriously), point #1 (this is a change from 6 months ago). There are some fine details about where I still disagree with something about your approach, and what exactly my previous and new positions are/were. But I think those are (for now) more distracting than helpful.
My question is, what precise things do you want changed from the status quo? (I think it’s important to point at missing moods, but implementing a missing mood requires actually operationalizing it into actions of some sort). I think I’d have an easier time interacting with this if I understood better what exact actions policies you’re pushing for.
I see roughly two levels of things one might operationalize:
Individual Action – Things that individuals should be trying to do (and, if you’re a participant on LessWrong or similar spaces, the “price for entry” should be something like “you agree that you are supposed to be trying to do this thing”
Norm Enforcement – Things that people should be commenting on, or otherwise acting upon, when they see other people doing
(you might split #2 into “things everyone should do” vs “things site moderators should do”, or you might treat those as mostly synonomous)
Some examples of things you might mean by Individual Action are things like:
“You[everyone] should be attempting to gain thicker skin” (or, different take: “you should try to cultivate an attitude wherein people criticizing your post doesn’t feel like an attack”)
“You should notice when you have avoided speaking up about something because it was inconvenient.” (Additional/alternate variants include: “when you notice that, speak up anyway”, or “when you notice that, speak up, if the current rate at which you mention the inconvenient things is proportionately lower than the rate at which you mention convenient things”)
Some examples of norm enforcement might be:
“When you observe saying something false, or sliding goalposts around in a way that seems dishonest, say so” (with sub-options for how to go about saying so, maybe you say they are lying, or motivated, or maybe you just focus on the falseness).
“When you observe someone systematically saying true-things that seem biased, say so”
Some major concerns/uncertainties of mine are:
1. How do you make sure that you don’t accidentally create a new norm which is “don’t speak up at all” (because it’s much easier to notice and respond to things that are happening, vs things that are not happening)
2. Which proposed changes are local strict improvements, that you can just start doing and having purely good effects, and which require multiple changes happening at once in order to have good effects. Or, which changes require some number of people to just be willing to eat some social cost until a new equilibrium is reached. (This might be fine, but I think it’s easier to respond to concretely to a proposal with a clearer sense of what that social cost is. If people aren’t willing to pay the cost, you might need a kickstarter for Inadequate Equilibria)
Both concerns seem quite addressable, just, require some operationalization to address.
For me to implement changes in myself (either as a person aspiring to be a competent truthseeking community member, or as a perhap helping to maintain a competent truthseeking culture), ideally need to be specified in some kind of Trigger-Action form. (This may not be universally true, some people get more mileage out of internal-alignment shifts rather than habit changes, but I personally find the latter much more helpful)
I mean, the abstract principle that matters is of the kind that can be proved as a theorem rather than merely “pushed for.” If a lawful physical process results in the states of physical system A becoming correlated with the states of system B, and likewise system B and system C, then observations of the state of system C are evidence about the state of system A. I’m claiming this as technical knowledge, not a handwaved philosophical intuition; I can write literal computer programs that exhibit this kind of evidential-entanglement relationship.
Notably, the process whereby you can use your observations about C to help make better predictions about A doesn’t work if system B is lying to make itself look good. I again claim this as technical knowledge, and not a political position.
The word “should” definitely doesn’t belong here. Like, that’s definitely a fair description of the push I’m making. Because I actually feel that way. But obviously, other people shouldn’t passionately advocate for open and honest discourse if they’re not actually passionate about open and honest discourse: that would be dishonest!
I mean, you don’t have to interact with it if you don’t feel like it! I’m not the boss of anyone!
The unpacked “should” I imagined you implying was more like “If you do not feel it is important to have open/honest discourse, you are probably making a mistake. i.e. it’s likely that you’re not noticing the damage you’re doing and if you really reflected on it honestly you’d probably ”
That part is technical knowledge (and so is the related “the observation process doesn’t work [well] if system B is systematically distorting things in some way, whether intentional or not.”). And I definitely agree with that part and expect Eli does to and generally don’t think it’s where the disagreement lives.
But, you seem to have strongly implied, if not outright stated, that this isn’t just an interesting technical fact that exists in isolation, it implies an optimal (or at least improved) policy that individuals and groups make make to improve their truthseeking capability. This implies we (at least, rationalists with roughly similar background assumptions as you) should be doing something differently than they currently are doing. And, like, it actually matters what that thing is.
There is some fact of the matter about what sorts of interacting systems can make the best predictions and models.
There is a (I suspect different) fact of the matter of what the optimal systems you can implement on humans look like, and yet another quite different fact of the matter of what improvements are possible on LessWrong-in-particular given our starting conditions, and what is the best way to coordinate on them. They certainly don’t seem like they’re going to come about by accident.
There is a fact of the matter of what happens if you push for “thick skin” and saying what you mean without regard for politeness – maybe it results in a community that converges on truth faster (by some combination of distorting less when you speak, or by spending less effort on communication or listening). Or maybe it results in a community that converges on truth slower because it selected more for people who are conflict-prone than people who are smart. I don’t actually know the answer here, and the answer seems quite important.
Early LessWrong had a flaw (IMO) regarding instrumental rationality – there is also a fact of the matter of what an optimal AI decisionmaker would do if they were running on a human-brain worth of compute. But, this is quite different from what kind of decisionmaking works best implemented on typical human wetware, and failure to understand this resulted in a lot of people making bad plans and getting depressed because the plans they made were actually impossible to run.
Sure, but, like, I want to interact with it (both individually and as a site moderator) because I think it’s pointing in an important direction. You’ve noted this as something I should probably pay special attention to. And, like, I think you’re right, so I’m trying to pay special attention to it.
This seems to me like you’re saying “people shouldn’t have to advocate for being open and honest, because people should be open and honest.”
And then the question becomes… If you think it’s true that people should be open and honest, do you have policy proposals that help that become true?
Not really? The concept of a “policy proposal” seems to presuppose control over some powerful central decision node, which I don’t think is true of me. This is a forum website. I write things. Maybe someone reads them. Maybe they learn something. Maybe me and the people who are better at open and honest discourse preferentially collaborate with each other (and ignore people who we can detect are playing a different game), have systematically better ideas, and newcomers tend to imitate our ways in a process of cultural evolution.
I separated out the question of “stuff individuals should do unilaterally” from “norm enforcement” because it seems like at least some stuff doesn’t require any central decision nodes.
In particular, while “don’t lie” is an easy injunction to follow, “account for systematic distortions in what you say” is actually quite computationally hard, because there are a lot of distortions with different mechanisms and different places one might intervene on one’s thought process and/or communication process. “Publicly say literally every inconvenient thing you think of” probably isn’t what you meant (or maybe it was?), and it might cause you to end up having a harder time thinking inconvenient thoughts.
I’m asking because I’m actually interested in improving on this dimension.
Some current best guesses of mine, at least for my own values, are:
“Practice noticing heretical thoughts you think, and actually notice what things you can’t say, without obligating yourself to say them, so that you don’t accidentally train yourself not to think them.”
“Practice noticing opportunities to exhibit social courage, either in low-stakes situations or in important ones. Allocate some additional attention towards practicing social courage as a skill/muscle.” (It’s unclear to me how much to prioritize this, because there are two separate potential models – ‘social/epistemic courage is a muscle’ and ‘social/epistemic courage is a resource you can spend, but you risk using up people’s willingness to listen to you’ – as well as a concern that ‘most things one might be courageous about actually aren’t important, and you’ll end up spending a lot of effort on things that don’t matter.’)
But, I am interested in what you actually do within your own frame/value setup.
I’m more interested – as someone who has been the powerful central decision node at multiple points in my life, will likely be again in the future, and is interested in institution design in general – in whether you have suggestions for how to make this work in new or existing institutions. For instance, some of the ideas I’ve shared elsewhere on radical transparency norms seem like one way to go about this.
I think cultural evolution and the marketplace of ideas seem like a good idea, but memetic selection unfortunately favors things other than just truth, and relying on memetics to propagate truth norms (if indeed the propagation of truth norms is good) feels insufficient.
I would love to see a summary of which particular arguments of Zach’s changed your mind, and how your view changed over time.
Most of the harm here comes not from public discourse being filtered in itself, but from people updating on filtered public discourse as if it were unfiltered. This makes me think it’s better to get people to realize that public discourse isn’t going to contain all the arguments than to get them to include all the arguments in public discourse.
I agree that that’s much less bad—but “better”? “Better”!? By what standard? What assumptions are you invoking without stating them?
I should clarify: I’m not saying submitting to censorship is never the right thing to do. If we live in Amazontopia, and there’s a man with a gun on the streetcorner who shoots anyone who says anything bad about Jeff Bezos, then indeed, I would not say anything bad about Jeff Bezos—in this specific (silly) hypothetical scenario with that specific threat model.
But ordinarily, when we try to figure out which cognitive algorithms are “better” (efficiently produce accurate maps, or successful plans), we tend to assume a “fair” problem class unless otherwise specified. The theory of “rational thought, except you get punished if you think about elephants” is strictly more complicated than the theory of “rational thought.” Even if we lived in a world where robots with MRI machines who punish elephant-thoughts were not unheard of and needed to be planned for, it would be pedagogically weird to treat that as the central case.
I hold “discourse algorithms” to the same standard: we need to figure out how to think together in the simple, unconstrained case before we have any hope of successfully dealing with the more complicated problem of thinking together under some specific censorship threat.
I am not able to rightly apprehend what kind of brain damage has turned almost everyone I used to trust into worthless cowards who just assume as if it were a law of nature that discourse is impossible—that rank and popularity are more powerful than intelligence. Is the man on the streetcorner actually holding a gun, or does he just flash his badge and glare at people? Have you even looked?
Depends on the problem you’re facing. If you just want accurate individual maps, sufficiently smart Bayesians can algorithmically “back out” the effects of censorship. But what if you actually need common knowledge for something?
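(To make the “backing out” step concrete, here is a minimal sketch under assumed, purely illustrative numbers: suppose 30% of underlying results on some question are positive, positive results always get published, and negative results get published only 20% of the time. A reader who knows the filtering rule can invert it; a reader who treats the published record as unfiltered cannot.)

```python
import random

P_TRUE, PUBLISH_NEG = 0.3, 0.2  # assumed base rate and filter strength (illustrative only)

def published_positive_fraction(n=200_000):
    """Fraction of *published* results that are positive, under the filter."""
    published = []
    for _ in range(n):
        positive = random.random() < P_TRUE
        if positive or random.random() < PUBLISH_NEG:
            published.append(positive)
    return sum(published) / len(published)

naive = published_positive_fraction()
# Invert the known filter: P(pos | published) = p / (p + (1 - p) * PUBLISH_NEG), solved for p.
corrected = naive * PUBLISH_NEG / (1 - naive + naive * PUBLISH_NEG)

print(round(naive, 2))      # ~0.68 -- what you'd conclude treating the record as unfiltered
print(round(corrected, 2))  # ~0.30 -- the underlying rate, recovered despite the censorship
```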
This is probably not the crux of our disagreement, but I think we already understand perfectly well how to think together and we’re limited by temperament rather than understanding. I agree that if we’re trying to think about how to think together we can treat no censorship as the default case.
If cowardice means fear of personal consequences, this doesn’t ring true as an ad hominem. Speaking without any filter is fun and satisfying and consistent with a rationalist pro-truth self-image and other-image. The reason why I mostly don’t do it is because I’d feel guilt about harming the discourse. This motivation doesn’t disappear in cases where I feel safe from personal consequences, e.g. because of anonymity.
I don’t know how you want me to respond to this. Obviously I think my sense that real discourse on fraught topics is impossible is based on extensively observing attempts at real discourse on fraught topics being fake. I suspect your sense that real discourse is possible is caused by you underestimating how far real discourse would diverge from fake discourse because you assume real discourse is possible and interpret too much existing discourse as real discourse.
Then that’s a reason to try to create common knowledge, whether privately or publicly. I think ordinary knowledge is fine most of the time, though.
Woah, can you explain this part in more detail?! Harming the discourse how, specifically? If you have thoughts, and your thoughts are correct, how does explaining your correct thoughts make things worse?
Consider the idea that the prospect of advanced AI implies the returns from stopping global warming are much smaller than you might otherwise think. I think this is a perfectly correct point, but I’m also willing to never make it, because a lot of people will respond by updating against the prospect of advanced AI, and I care a lot more about people having correct opinions on advanced AI than on the returns from stopping global warming.
I want to distinguish between “harming the discourse” and “harming my faction in a marketing war.”
When I say that public discourse is really important, what I mean is that if you tell the truth in public about what you believe and why (possibly investing a lot of effort and using a lot of hyperlinks to bridge the inferential distance), then other people who aren’t already your closest trusted friends have the opportunity to learn from the arguments and evidence that actually convinced you, combine it with their own knowledge, and potentially make better decisions. (“Discourse” might not be the right word here—the concept I want to point to includes unilateral truthtelling, as on a blog with no comment section, or where your immediate interlocutor doesn’t “reciprocate” in good faith, but someone in the audience might learn something.)
If you think other people can’t process arguments at all, but that you can, how do you account for your own existence? For myself: I’m smart, but I’m not that smart (IQ ~130). The Sequences were life-changingly great, but I was still interested in philosophy and argument before that. Our little robot cult does not have a monopoly on reasoning itself.
Sure. Those are the people who don’t matter. Even if you could psychologically manipulate [revised: persuade] them into having the correct bottom-line “opinion”, what would you do with them? Were you planning to solve the alignment problem by lobbying Congress to pass appropriate legislation?

I want to agree with the general point here, but I find it breaking down in some of the cases I’m considering. I think the underlying generator is something like “communication is a two-way street”, and it makes sense to not just emit sentences that compile and evaluate to ‘true’ in my ontology, but that I expect to compile and evaluate to approximately what I wanted to convey in their ontology.
Does that fall into ‘harming my faction in a marketing war’ according to you?
No, I agree that authors should write in language that their audience will understand. I’m trying to make a distinction between having intent to inform (giving the audience information that they can use to think with) vs. persuasion (trying to exert control over the audience’s conclusion). Consider this generalization of a comment upthread—
This makes perfect sense as part of a consequentialist algorithm for maximizing the number of people who believe X. The algorithm works just as well, and for the same reasons whether X = “superintelligence is an existential risk” and Y = “returns from stopping global warming are smaller than you might otherwise think” (when many audience members have global warming “cause-area loyalty”), or whether X = “you should drink Coke” and Y = “returns from drinking Pepsi are smaller than you might otherwise think” (when many audience members have Pepsi brand loyalty). That’s why I want to call it a marketing algorithm—the function is to strategically route around the audience’s psychological defenses, rather than just tell them stuff as an epistemic peer.
To be clear, if you don’t think you’re talking to an epistemic peer, strategically routing around the audience’s psychological defenses might be the right thing to do! For an example that I thought was OK because I didn’t think it significantly distorted the discourse, see my recent comment explaining an editorial choice I made in a linkpost description. But I think that when one does this, it’s important to notice the nature of what one is doing (there’s a reason my linked comment uses the phrase “marketing keyword”!), and track how much of a distortion it is relative to how you would talk to an epistemic peer. As you know, quality of discourse is about the conversation executing an algorithm that reaches truth, not just convincing people of the conclusion that (you think) is correct. That’s why I’m alarmed at the prospect of someone feeling guilty (!?) that honestly reporting their actual reasoning might be “harming the discourse” (!?!?).
I’m confused reading this.
It seems to me that you think routing around psychological defenses is a sometimes-reasonable thing to do with people who aren’t your epistemic peers.
But you said above that you thought the overall position of having private discourse spaces and public discourse spaces is abhorrent?
How do these fit together? The vast majority of people are not your (or my) epistemic peers; even the robot cult doesn’t have a monopoly on truth or truth-seeking. And so you would behave differently in private spaces with your peers than in public spaces that include the whole world.
Can you clarify?
It’s a fuzzy Sorites-like distinction, but I think I’m more sympathetic to trying to route around a particular interlocutor’s biases in the context of a direct conversation with a particular person (like a comment or Tweet thread) than I am in writing directed “at the world” (like top-level posts), because the more something is directed “at the world”, the more you should expect that many of your readers know things that you don’t, such that the humility argument for honesty applies forcefully.
FWIW, I have the opposite inclination. If I’m talking with a person one-on-one, we have high bandwidth. I will try to be skillful and compassionate in avoiding triggering them, while still saying what’s true, and depending on who I’m talking to, I may elect to remain silent about some of the things that I think are true.
But overall I am much more uncomfortable with anything less than straightforward statements of what I believe and why in contexts with fewer people, where there is the communication capacity to clarify misunderstandings, and where my declining to offer an objection to something that someone says more strongly implies agreement.
This seems right to me.
But it also seems right to me that the broader your audience, the lower their average level of epistemics and commitment to epistemic discourse norms. And your communication bandwidth is lower.
Which means there is proportionally more risk of 1) people mishearing you, and that damaging the prospects of the policies you want to advocate for (e.g., “marketing”), 2) people mishearing you, and that causing you personal problems of various stripes, and 3) people understanding you correctly, and that causing you personal problems of various stripes. [1]
So the larger my audience the more reticent I might be about what I’m willing to say.
[1] There’s obviously a fourth quadrant of that 2-by-2: “people hearing you correctly, and that damaging the prospects of the policies you want to advocate for.”
Acting to avoid that seems commons-destroying, and personally out of integrity. If my policy proposals have true drawbacks, I want to clearly acknowledge them and state why I think they’re worth it, not dissemble about them.
“Intent to inform” jibes with my sense of it much more than “tell the truth.”
On reflection, I think the ‘epistemic peer’ thing is close but not entirely right. Definitely if I think Bob “can’t handle the truth” about climate change, and so I only talk about AI with Bob, then I’m deciding that Bob isn’t an epistemic peer. But if I have only a short conversation with Bob, then there’s a Gricean implication point that saying X implicitly means I thought it was more relevant to say than Y, or is complete, or so on, and so there are whole topics that might be undiscussed because I don’t want to send the implicit message that my short thoughts on the matter are complete enough to reconstruct my position or that this topic is more relevant than other topics.
---
More broadly, I note that I often see “the discourse” used as a term of derision, I think because it is (currently) something more like a marketing war than an open exchange of information. Or, like a market left to its own devices, it has Goodharted on marketing. It is unclear to me whether it’s better to abandon it (like, for example, not caring about what people think on Twitter) or attempt to recapture it (by pushing for the sorts of ‘public goods’ and savvy customers that cause markets to Goodhart less on marketing).
Sharing reasoning is obviously normally good, but we obviously live in a world with lots of causally important actors who don’t always respond rationally to arguments. There are cases, like the grandparent comment, when one is justified in worrying that an argument would make people stupid in a particular way, and one can avoid this problem by not making the argument. Doing so is importantly different from filtering out arguments for causing a justified update against one’s side, and is even more importantly different from anything similar to what pops into people’s minds when they hear “psychological manipulation”. If I’m worried that someone with a finger on some sort of hypertech button may avoid learning about some crucial set of thoughts about what circumstances it’s good to press hypertech buttons under, because they’ve always vaguely heard that set of thoughts is disreputable and so never looked into it, I don’t think your last paragraph is a fair response to that. I think I should tap out of this discussion, because I feel like the more-than-one-sentence-at-a-time medium is nudging it more toward rhetoric than debugging, but let’s still talk some time.
That’s fair. Let me scratch “psychologically manipulate”, edit to “persuade”, and refer to my reply to Vaniver and Ben Hoffman’s “The Humility Argument for Honesty” (also the first link in the grandparent) for the case that generic persuasion techniques are (counterintuitively!) Actually Bad.
I don’t think it’s the long-form medium so much as it is the fact that I am on a personal vindictive rampage against appeals-to-consequences lately. You should take my vindictiveness into account if you think it’s biasing me!
Um. Yes: as of 2024, lobbying Congress for an AI scaling ban, to buy time to solve the technical problem, is now part of the plan.
2019 was a more innocent time. I grieve what we’ve lost.
One potential reason is Idea Inoculation + Inferential Distance.
This agrees with Carter:
Carter is arguing that appeals to consequences should be disallowed at the level of discourse norms, including public discourse norms. That is, in public, “but saying that has bad consequences!” is considered invalid.
It’s better to fight on a battlefield with good rules than one with bad rules.
Hmm...something about that seems not quite right to me. I’m going to see if I can draw out why.
The thing at stake for Quinn_Eli is not whether or not this kind of argument is “invalid”. It’s whether or not she has the affordance to make a friendly, if sometimes forceful, bid to bring this conversation into a private space, to avoid collateral damage.
(Sometimes, of course, the damage won’t be collateral. If in private discussion Quinn concludes, to the best of her ability to reason, that it would in fact be good if fewer people donated to PADP, she might then give that argument in public. And if others make bids to, say, explore that privately, at that stage she might respond, “No. I am specifically arguing that onlookers should donate less to PADP (or think that decreasing their donations is a reasonable outcome of this argument). That isn’t accidental collateral damage. It’s the thing that’s at stake for me right now.”)
I don’t know if you already agree with what I’m saying here.
. . .
I don’t think we get to pick the rules of the battlefield. The rules of the battlefield are defined only by what causes one to win. Nature alone chooses the rules.
Bidding to move to a private space isn’t necessarily bad but at the same time it’s not an argument. “I want to take this private” doesn’t argue for any object-level position.
It seems that the text of what you’re saying implies you think humans have no agency over discourse norms, regulations, rules of games, etc, but that seems absurd so I don’t think you actually believe that. Perhaps you’ve given up on affecting them, though.
(“What wins” is underdetermined, given that choice is involved in what wins; you can’t extrapolate from two-player zero-sum games (where there’s basically one best strategy) to multi-player zero-sum games (where there isn’t, at least because coalitional dynamics imply that a “weaker” player can win by getting more supporters).)
How much agency we have is proportional to how many other actors are in a space. I think it’s quite achievable (though requires a bit of coordination) to establish good norms for a space with 100 people. It’s still achievable, but… probably at least (10x?) as hard to establish good norms for 1000 people.
But “public searchable internet” immediately puts things in a context with at least millions if not billions of potentially relevant actors, many of whom don’t know anything about your norms. I’m still actually fairly optimistic about making important improvements to this space, but those improvements will have a lot of constraints for anyone with major goals on the world stage.
Yes. This, exactly. Thank you for putting it so succinctly.
Furthermore, you have a lot more ability to enforce norms regarding what people say, as opposed to norms about how people interpret what people say.
I do think that it is possible, and often correct, to push for some discourse norms over others. I will often reward moves that I think are good, and will sometimes challenge moves that I think are harmful to our collective epistemology.
But I don’t think that I have much ability to “choose” how other people will respond to my speech acts. The world is a lot bigger than me, and it would be imprudent to mis-model the fact that, for instance, many people will not or cannot follow some forms of argument, and will just round what you’re saying to the closest thing that they can understand. And that this can sometimes cause damage.
(I think that you must agree with this? Or maybe you think that you should refuse to engage in groups where the collective epistemology can’t track nuanced argument? I don’t think I’m getting you yet.)
I absolutely agree.
I think the main thing I want to stand for here is both that obviously the consequences of believing or saying a statement have no bearing on its truth value (except in unusual self-fulfilling-prophecy edge cases), and that it is often reasonable to say “Hey man, I don’t think you should say that here, in this context where bystanders will overhear you.”
I’m afraid that those two might be getting conflated, or that one is being confused for the other (not in this dialogue, but in the world).
To be clear, I’m not sure that I’m disagreeing with you. I do have the feeling that we are missing each other somehow.
Yes, and Carter is arguing in a context where it’s easy to shift the discourse norms, since there are few people present in the conversation.
LW doesn’t have that many active users, it’s possible to write posts arguing for discourse norms, sometimes to convince moderators they are good, etc.
Sure, and also “that’s just your opinion, man, so I’ll keep talking” is often a valid response to that. It’s important not to bias towards treating exposing information as risky while treating hiding it as safe.
I think you meant ‘do not think’?
Yep. Fixed.
Notably, many other commenters seem to be implicitly or explicitly pointing to the private vs. public distinction.