This is probably not the crux of our disagreement, but I think we already understand perfectly well how to think together and we’re limited by temperament rather than understanding. I agree that if we’re trying to think about how to think together we can treat no censorship as the default case.
worthless cowards
If cowardice means fear of personal consequences, this doesn’t ring true as an ad hominem. Speaking without any filter is fun and satisfying and consistent with a rationalist pro-truth self-image and other-image. The reason why I mostly don’t do it is because I’d feel guilt about harming the discourse. This motivation doesn’t disappear in cases where I feel safe from personal consequences, e.g. because of anonymity.
who just assume as if it were a law of nature that discourse is impossible
I don’t know how you want me to respond to this. Obviously I think my sense that real discourse on fraught topics is impossible is based on extensively observing attempts at real discourse on fraught topics being fake. I suspect your sense that real discourse is possible is caused by you underestimating how far real discourse would diverge from fake discourse because you assume real discourse is possible and interpret too much existing discourse as real discourse.
Then that’s a reason to try to create common knowledge, whether privately or publicly. I think ordinary knowledge is fine most of the time, though.
The reason why I mostly don’t do it is because I’d feel guilt about harming the discourse
Woah, can you explain this part in more detail?! Harming the discourse how, specifically? If you have thoughts, and your thoughts are correct, how does explaining your correct thoughts make things worse?
Consider the idea that the prospect of advanced AI implies the returns from stopping global warming are much smaller than you might otherwise think. I think this is a perfectly correct point, but I’m also willing to never make it, because a lot of people will respond by updating against the prospect of advanced AI, and I care a lot more about people having correct opinions on advanced AI than on the returns from stopping global warming.
I want to distinguish between “harming the discourse” and “harming my faction in a marketing war.”
When I say that public discourse is really important, what I mean is that if you tell the truth in public about what you believe and why (possibly investing a lot of effort and using a lot of hyperlinks to bridge the inferential distance), then other people who aren’t already your closest trusted friends have the opportunity to learn from the arguments and evidence that actually convinced you, combine it with their own knowledge, and potentially make better decisions. (“Discourse” might not be the right word here—the concept I want to point to includes unilateral truthtelling, as on a blog with no comment section, or where your immediate interlocutor doesn’t “reciprocate” in good faith, but someone in the audience might learn something.)
If you think other people can’t process arguments at all, but that you can, how do you account for your own existence? For myself: I’m smart, but I’m not that smart (IQ ~130). The Sequences were life-changingly great, but I was still interested in philosophy and argument before that. Our little robot cult does not have a monopoly on reasoning itself.
a lot of people will respond by updating against the prospect of advanced AI
Sure. Those are the people who don’t matter. Even if you could psychologically manipulate [revised: persuade] them into having the correct bottom-line “opinion”, what would you do with them? Were you planning to solve the alignment problem by lobbying Congress to pass appropriate legislation?
When I say that public discourse is really important, what I mean is that if you tell the truth in public about what you believe and why (possibly investing a lot of effort and using a lot of hyperlinks to bridge the inferential distance)
I want to agree with the general point here, but I find it breaking down in some of the cases I’m considering. I think the underlying generator is something like “communication is a two-way street”, and it makes sense not just to emit sentences that compile and evaluate to ‘true’ in my ontology, but to emit sentences that I expect to compile and evaluate to approximately what I wanted to convey in their ontology.
Does that fall into ‘harming my faction in a marketing war’ according to you?
No, I agree that authors should write in language that their audience will understand. I’m trying to make a distinction between having intent to inform (giving the audience information that they can use to think with) vs. persuasion (trying to exert control over the audience’s conclusion). Consider this generalization of a comment upthread—
Consider the idea that X implies Y. I think this is a perfectly correct point, but I’m also willing to never make it, because a lot of people will respond by concluding that not-X, because they’re emotionally attached to not-Y, and I care a lot more about people having correct beliefs about the truth value of X than Y.
This makes perfect sense as part of a consequentialist algorithm for maximizing the number of people who believe X. The algorithm works just as well, and for the same reasons whether X = “superintelligence is an existential risk” and Y = “returns from stopping global warming are smaller than you might otherwise think” (when many audience members have global warming “cause-area loyalty”), or whether X = “you should drink Coke” and Y = “returns from drinking Pepsi are smaller than you might otherwise think” (when many audience members have Pepsi brand loyalty). That’s why I want to call it a marketing algorithm—the function is to strategically route around the audience’s psychological defenses, rather than just tell them stuff as an epistemic peer.
To be clear, if you don’t think you’re talking to an epistemic peer, strategically routing around the audience’s psychological defenses might be the right thing to do! For an example that I thought was OK because I didn’t think it significantly distorted the discourse, see my recent comment explaining an editorial choice I made in a linkpost description. But I think that when one does this, it’s important to notice the nature of what one is doing (there’s a reason my linked comment uses the phrase “marketing keyword”!), and track how much of a distortion it is relative to how you would talk to an epistemic peer. As you know, quality of discourse is about the conversation executing an algorithm that reaches truth, not just convincing people of the conclusion that (you think) is correct. That’s why I’m alarmed at the prospect of someone feeling guilty (!?) that honestly reporting their actual reasoning might be “harming the discourse” (!?!?).
“Intent to inform” jibes with my sense of it much more than “tell the truth.”
On reflection, I think the ‘epistemic peer’ thing is close but not entirely right. Definitely if I think Bob “can’t handle the truth” about climate change, and so I only talk about AI with Bob, then I’m deciding that Bob isn’t an epistemic peer. But if I have only a short conversation with Bob, then there’s a Gricean implication point that saying X implicitly means I thought it was more relevant to say than Y, or is complete, or so on, and so there are whole topics that might be undiscussed because I don’t want to send the implicit message that my short thoughts on the matter are complete enough to reconstruct my position or that this topic is more relevant than other topics.
---
More broadly, I note that I often see “the discourse” used as a term of derision, I think because it is (currently) something more like a marketing war than an open exchange of information. Or, like a market left to its own devices, it has Goodharted on marketing. It is unclear to me whether it’s better to abandon it (like, for example, not caring about what people think on Twitter) or attempt to recapture it (by pushing for the sorts of ‘public goods’ and savvy customers that cause markets to Goodhart less on marketing).
To be clear, if you don’t think you’re talking to an epistemic peer, strategically routing around the audience’s psychological defenses might be the right thing to do!
I’m confused reading this.
It seems to me that you think routing around psychological defenses is sometimes a reasonable thing to do with people who aren’t your epistemic peers.
But you said above that you thought the overall position of having private discourse spaces and public discourse spaces was abhorrent?
How do these fit together? The vast majority of people are not your (or my) epistemic peers; even the robot cult doesn’t have a monopoly on truth or truth-seeking. And so you would behave differently in private spaces with your peers and in public spaces that include the whole world.
Can you clarify?
It’s a fuzzy Sorites-like distinction, but I think I’m more sympathetic to trying to route around a particular interlocutor’s biases in the context of a direct conversation with a particular person (like a comment or Tweet thread) than I am in writing directed “at the world” (like top-level posts), because the more something is directed “at the world”, the more you should expect that many of your readers know things that you don’t, such that the humility argument for honesty applies forcefully.
FWIW, I have the opposite inclination. If I’m talking with a person one-on-one, we have high bandwidth. I will try to be skillful and compassionate in avoiding triggering them, while still saying what’s true, and depending on who I’m talking to, I may elect to remain silent about some of the things that I think are true.
But overall I am much more uncomfortable with anything less than straightforward statements of what I believe and why in contexts with fewer people, where there is the communication capacity to clarify misunderstandings, and where my declining to offer an objection to something that someone says more strongly implies agreement.
the more you should expect that many of your readers know things that you don’t
This seems right to me.
But it also seems right to me that the broader your audience, the lower their average level of epistemics and commitment to epistemic discourse norms. And your communication bandwidth is lower.
Which means there is proportionally more risk of 1) people mishearing you, and that damaging the prospects of the policies you want to advocate for (e.g., “marketing”), 2) people mishearing you, and that causing you personal problems of various stripes, and 3) people understanding you correctly, and that causing you personal problems of various stripes. [1]
So the larger my audience, the more reticent I might be about what I’m willing to say.
[1] There’s obviously a fourth quadrant of that 2-by-2, “people hearing you correctly and that damaging the prospects of the policies you want to advocate for.”
Acting to avoid that seems commons-destroying, and personally out of integrity. If my policy proposals have true drawbacks, I want to clearly acknowledge them and state why I think they’re worth it, not dissemble about them.
Sharing reasoning is obviously normally good, but we obviously live in a world with lots of causally important actors who don’t always respond rationally to arguments. There are cases, like the grandparent comment, where one is justified in worrying that an argument would make people stupid in a particular way, and where one can avoid this problem by not making the argument. Doing so is importantly different from filtering out arguments for causing a justified update against one’s side, and is even more importantly different from anything similar to what pops into people’s minds when they hear “psychological manipulation”. If I’m worried that someone with a finger on some sort of hypertech button may avoid learning about some crucial set of thoughts about what circumstances it’s good to press hypertech buttons under, because they’ve always vaguely heard that set of thoughts is disreputable and so never looked into it, I don’t think your last paragraph is a fair response to that. I think I should tap out of this discussion because I feel like the more-than-one-sentence-at-a-time medium is nudging it more toward rhetoric than debugging, but let’s still talk some time.
even more importantly different from anything similar to what pops into people’s minds when they hear “psychological manipulation”
That’s fair. Let me scratch “psychologically manipulate”, edit to “persuade”, and refer to my reply to Vaniver and Ben Hoffman’s “The Humility Argument for Honesty” (also the first link in the grandparent) for the case that generic persuasion techniques are (counterintuitively!) Actually Bad.
I feel like the more-than-one-sentence-at-a-time medium is nudging it more toward rhetoric than debugging
I don’t think it’s the long-form medium so much as it is the fact that I am on a personal vindictive rampage against appeals-to-consequences lately. You should take my vindictiveness into account if you think it’s biasing me!
Were you planning to solve the alignment problem by lobbying Congress to pass appropriate legislation?
Um. Yes, as of 2024, lobbying Congress to get an AI scaling ban, to buy time to solve the technical problem, is now part of the plan.
2019 was a more innocent time. I grieve what we’ve lost.
One potential reason is Idea Inoculation + Inferential Distance.