(I’m not sure this comment is precisely a reply to the previous one, or more of a general reply to “things Zack has been saying for the past 6 months”)
I notice that by this point I basically agree with some version of “something about the Overton window of norms should change in the direction Zack is pushing”, but it seems… like you’re pushing more for an abstract principle than a concrete change, and I’m not sure how to evaluate it. I’d find it helpful if you got more specific about what you’re pushing for.
I’d summarize my high-level understanding of the push you’re making as:
1. “Geez, the appropriate mood for ‘hmm, communicating openly and honestly in public seems hard’ is not ‘whelp, I guess we can’t do that then’. Especially if we’re going to call ourselves rationalists”
2. Any time that mood seems to be cropping up or underlying someone’s decision procedure, it should be pushed back against.
[is that a fair high level summary?]
I think I have basically come to agree with (or at least take quite seriously) point #1 (this is a change from 6 months ago). There are some fine details about where I still disagree with something about your approach, and about what exactly my previous and new positions are/were. But I think those are (for now) more distracting than helpful.
My question is: what precise things do you want changed from the status quo? (I think it’s important to point at missing moods, but implementing a missing mood requires actually operationalizing it into actions of some sort.) I think I’d have an easier time interacting with this if I understood better what exact actions/policies you’re pushing for.
I see roughly two levels of things one might operationalize:
Individual Action – Things that individuals should be trying to do (and, if you’re a participant on LessWrong or similar spaces, the “price for entry” should be something like “you agree that you are supposed to be trying to do this thing”)
Norm Enforcement – Things that people should be commenting on, or otherwise acting upon, when they see other people doing them
(you might split #2 into “things everyone should do” vs “things site moderators should do”, or you might treat those as mostly synonymous)
Some examples of things you might mean by Individual Action:
“You[everyone] should be attempting to gain thicker skin” (or, different take: “you should try to cultivate an attitude wherein people criticizing your post doesn’t feel like an attack”)
“You should notice when you have avoided speaking up about something because it was inconvenient.” (Additional/alternate variants include: “when you notice that, speak up anyway”, or “when you notice that, speak up, if the current rate at which you mention the inconvenient things is proportionately lower than the rate at which you mention convenient things”)
Some examples of norm enforcement might be:
“When you observe someone saying something false, or sliding goalposts around in a way that seems dishonest, say so” (with sub-options for how to go about saying so: maybe you say they are lying, or motivated, or maybe you just focus on the falseness).
“When you observe someone systematically saying true-things that seem biased, say so”
Some major concerns/uncertainties of mine are:
1. How do you make sure that you don’t accidentally create a new norm which is “don’t speak up at all”? (Because it’s much easier to notice and respond to things that are happening than to things that are not happening.)
2. Which proposed changes are local strict improvements, that you can just start doing with purely good effects, and which require multiple changes happening at once in order to have good effects? Or, which changes require some number of people to just be willing to eat some social cost until a new equilibrium is reached? (This might be fine, but I think it’s easier to respond concretely to a proposal with a clearer sense of what that social cost is. If people aren’t willing to pay the cost, you might need a kickstarter for Inadequate Equilibria.)
Both concerns seem quite addressable; they just require some operationalization to address.
For me to implement changes in myself (either as a person aspiring to be a competent truthseeking community member, or as a person helping to maintain a competent truthseeking culture), the changes ideally need to be specified in some kind of Trigger-Action form. (This may not be universally true; some people get more mileage out of internal-alignment shifts than out of habit changes, but I personally find the latter much more helpful.)
you’re pushing more for an abstract principle than a concrete change
I mean, the abstract principle that matters is of the kind that can be proved as a theorem rather than merely “pushed for.” If a lawful physical process results in the states of physical system A becoming correlated with the states of system B, and likewise system B and system C, then observations of the state of system C are evidence about the state of system A. I’m claiming this as technical knowledge, not a handwaved philosophical intuition; I can write literal computer programs that exhibit this kind of evidential-entanglement relationship.
Notably, the process whereby you can use your observations about C to help make better predictions about A doesn’t work if system B is lying to make itself look good. I again claim this as technical knowledge, and not a political position.
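A minimal sketch of the kind of program this is gesturing at (the specific three-variable chain, the 0.9 reliability numbers, and the simulate helper are illustrative assumptions, not anything canonical): B noisily reports on the hidden state of A, C noisily observes B’s report, and we check how much an observation of C tells us about A.

```python
import random

def simulate(b_distorts: bool, trials: int = 100_000) -> None:
    # Toy chain A -> B -> C: estimate P(A good) and P(A good | C good) by sampling.
    a_good_total = 0
    c_good_total = 0
    a_good_when_c_good = 0

    for _ in range(trials):
        a = random.random() < 0.5                      # hidden state of system A
        if b_distorts:
            b = True                                   # B reports "good" regardless of A
        else:
            b = a if random.random() < 0.9 else not a  # B's noisy (90%) but honest report of A
        c = b if random.random() < 0.9 else not b      # C's noisy (90%) observation of B's report

        a_good_total += a
        if c:
            c_good_total += 1
            a_good_when_c_good += a

    print(f"B distorts: {b_distorts}")
    print(f"  P(A good)          ~ {a_good_total / trials:.3f}")
    print(f"  P(A good | C good) ~ {a_good_when_c_good / c_good_total:.3f}")

simulate(b_distorts=False)  # observing C shifts the estimate of A: evidential entanglement
simulate(b_distorts=True)   # B's report is decoupled from A: observing C tells you ~nothing
```

Under these assumptions, the honest-but-noisy run gives P(A good | C good) of about 0.82 against a 0.5 base rate, while the run where B always reports “good” collapses back to about 0.5: the C-to-A inference works exactly insofar as B’s report actually tracks A.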
Any time that mood seems to be cropping up or underlying someone’s decision procedure, it should be pushed back against.
The word “should” definitely doesn’t belong here. Like, that’s definitely a fair description of the push I’m making. Because I actually feel that way. But obviously, other people shouldn’t passionately advocate for open and honest discourse if they’re not actually passionate about open and honest discourse: that would be dishonest!
I think I’d have an easier time interacting with this if I understood better what exact actions/policies you’re pushing for.
I mean, you don’t have to interact with it if you don’t feel like it! I’m not the boss of anyone!
But obviously, other people shouldn’t passionately advocate for open and honest discourse if they’re not actually passionate about open and honest discourse: that would be dishonest!
The unpacked “should” I imagined you implying was more like: “If you do not feel it is important to have open/honest discourse, you are probably making a mistake. I.e., it’s likely that you’re not noticing the damage you’re doing, and if you really reflected on it honestly you’d probably …”
Notably, the process whereby you can use your observations about C to help make better predictions about A doesn’t work if system B is lying to make itself look good. I again claim this as technical knowledge, and not a political position.
That part is technical knowledge (and so is the related “the observation process doesn’t work [well] if system B is systematically distorting things in some way, whether intentional or not”). And I definitely agree with that part, expect Eli does too, and generally don’t think it’s where the disagreement lives.
But you seem to have strongly implied, if not outright stated, that this isn’t just an interesting technical fact that exists in isolation: it implies an optimal (or at least improved) policy that individuals and groups can adopt to improve their truthseeking capability. This implies that we (at least, rationalists with roughly similar background assumptions to yours) should be doing something differently than we currently are. And, like, it actually matters what that thing is.
There is some fact of the matter about what sorts of interacting systems can make the best predictions and models.
There is a (I suspect different) fact of the matter of what the optimal systems you can implement on humans look like, and yet another quite different fact of the matter of what improvements are possible on LessWrong-in-particular given our starting conditions, and what is the best way to coordinate on them. They certainly don’t seem like they’re going to come about by accident.
There is a fact of the matter of what happens if you push for “thick skin” and saying what you mean without regard for politeness – maybe it results in a community that converges on truth faster (by some combination of people distorting less when they speak and spending less effort on communication or listening). Or maybe it results in a community that converges on truth slower, because it selects more for people who are conflict-prone than for people who are smart. I don’t actually know the answer here, and the answer seems quite important.
Early LessWrong had a flaw (IMO) regarding instrumental rationality – there is also a fact of the matter of what an optimal AI decisionmaker would do if it were running on a human brain’s worth of compute. But this is quite different from what kind of decisionmaking works best when implemented on typical human wetware, and failure to understand this resulted in a lot of people making bad plans and getting depressed because the plans they made were actually impossible to run.
I mean, you don’t have to interact with it if you don’t feel like it! I’m not the boss of anyone!
Sure, but, like, I want to interact with it (both individually and as a site moderator) because I think it’s pointing in an important direction. You’ve noted this as something I should probably pay special attention to. And, like, I think you’re right, so I’m trying to pay special attention to it.
The word “should” definitely doesn’t belong here. Like, that’s definitely a fair description of the push I’m making. Because I actually feel that way. But obviously, other people shouldn’t passionately advocate for open and honest discourse if they’re not actually passionate about open and honest discourse: that would be dishonest!
This seems to me like you’re saying “people shouldn’t have to advocate for being open and honest, because people should be open and honest.”
And then the question becomes… If you think it’s true that people should be open and honest, do you have policy proposals that help that become true?
Not really? The concept of a “policy proposal” seems to presuppose control over some powerful central decision node, which I don’t think is true of me. This is a forum website. I write things. Maybe someone reads them. Maybe they learn something. Maybe me and the people who are better at open and honest discourse preferentially collaborate with each other (and ignore people who we can detect are playing a different game), have systematically better ideas, and newcomers tend to imitate our ways in a process of cultural evolution.
I separated out the question of “stuff individuals should do unilaterally” from “norm enforcement” because it seems like at least some stuff doesn’t require any central decision nodes.
In particular, while “don’t lie” is an easy injunction to follow, “account for systematic distortions in what you say” is actually quite computationally hard, because there are a lot of distortions with different mechanisms and different places one might intervene on their thought process and/or communication process. “Publicly say literally every inconvenient thing you think of” probably isn’t what you meant (or maybe it was?), and it might cause you to end up having a harder time thinking inconvenient thoughts.
I’m asking because I’m actually interested in improving on this dimension.
(some current best guesses of mine, at least for my own values, are:
“Practice noticing heretical thoughts you think, and actually notice which things you can’t say, without obligating yourself to say them, so that you don’t accidentally train yourself not to think them”
“Practice noticing opportunities to exhibit social courage, either in low-stakes situations or important ones. Allocate some additional attention towards practicing social courage as a skill/muscle” (it’s unclear to me how much to prioritize this, because there are two separate potential models – ‘social/epistemic courage is a muscle’ and ‘social/epistemic courage is a resource you can spend, but you risk using up people’s willingness to listen to you’ – as well as a concern that most things one might be courageous about actually aren’t important and you’ll end up spending a lot of effort on things that don’t matter)
But, I am interested in what you actually do within your own frame/value setup.
I’m more interested, as a person who has been the powerful central decision node multiple times in my life and will likely be in the future (and as someone who is interested in institution design in general), in whether you have suggestions for how to make this work in new or existing institutions. For instance, some of the ideas I’ve shared elsewhere on radical transparency norms seem like one way to go about this.
I think cultural evolution and the marketplace of ideas seem like a good idea, but memetics unfortunately selects for things other than just truth, and relying on memetics to propagate truth norms (if indeed the propagation of truth norms is good) feels insufficient.
I would love to see a summary of what particular arguments of Zack’s changed your mind, and how it changed over time.