But obviously, other people shouldn’t passionately advocate for open and honest discourse if they’re not actually passionate about open and honest discourse: that would be dishonest!
The unpacked “should” I imagined you implying was more like: “If you do not feel it is important to have open/honest discourse, you are probably making a mistake. I.e. it’s likely that you’re not noticing the damage you’re doing, and if you really reflected on it honestly you’d probably …”
Notably, the process whereby you can use your observations about C to help make better predictions about A doesn’t work if system B is lying to make itself look good. I again claim this as technical knowledge, and not a political position.
That part is technical knowledge (and so is the related “the observation process doesn’t work [well] if system B is systematically distorting things in some way, whether intentional or not”). And I definitely agree with that part, expect Eli does too, and generally don’t think it’s where the disagreement lives.
But, you seem to have strongly implied, if not outright stated, that this isn’t just an interesting technical fact that exists in isolation: it implies an optimal (or at least improved) policy that individuals and groups can adopt to improve their truthseeking capability. This implies we (at least, rationalists with roughly similar background assumptions to yours) should be doing something differently than we currently are. And, like, it actually matters what that thing is.
There is some fact of the matter about what sorts of interacting systems can make the best predictions and models.
There is a (I suspect different) fact of the matter of what the optimal systems you can implement on humans look like, and yet another quite different fact of the matter of what improvements are possible on LessWrong-in-particular given our starting conditions, and what is the best way to coordinate on them. They certainly don’t seem like they’re going to come about by accident.
There is a fact of the matter of what happens if you push for “thick skin” and saying what you mean without regard for politeness – maybe it results in a community that converges on truth faster (by some combination of distorting less when you speak and spending less effort on communication or listening). Or maybe it results in a community that converges on truth slower, because it selected more for people who are conflict-prone than for people who are smart. I don’t actually know the answer here, and the answer seems quite important.
Early LessWrong had a flaw (IMO) regarding instrumental rationality – there is also a fact of the matter of what an optimal AI decisionmaker would do if it were running on a human brain’s worth of compute. But this is quite different from what kind of decisionmaking works best when implemented on typical human wetware, and failure to understand this resulted in a lot of people making bad plans and getting depressed because the plans they made were actually impossible to run.
I mean, you don’t have to interact with it if you don’t feel like it! I’m not the boss of anyone!
Sure, but, like, I want to interact with it (both individually and as a site moderator) because I think it’s pointing in an important direction. You’ve noted this as something I should probably pay special attention to. And, like, I think you’re right, so I’m trying to pay special attention to it.