> Some speech acts lower the message length of proposals to attack some groups, or raise the message length of attempts to prevent such attacks.
Rationality is the common interest of many causes; the whole point of it is to lower the message-description-length of proposals that will improve overall utility, while (conversely, and inevitably) raising the message-description-length of other possible proposals that can be expected to worsen it. To be against rationality on such a basis would seem to be quite incoherent. Yes, in rare cases, it might be that this also involves an “attack” on some identified groups, such as theists. But I don’t know of a plausible case that theists have been put in physical danger because rationality has now made their distinctive ideas harder to express! (In this as in many other cases, the more rationality, the less religious persecution/conflict we see out there in the real world!) And I have no reason to think that substituting “trans advocacy that makes plausibly-wrong claims about how the real-world (in this case: human psychology) factually works” for “theism” would lead to a different conclusion. Both instances of this claim seem just as unhinged and plausibly self-serving, in a way that’s hard not to describe as involving bad faith.
> the whole point of it is to lower the message-description-length of proposals that will improve overall utility
I thought the point was to help us model the things we care about more accurately and efficiently, which doesn’t require utilitarianism to be an appealing proximate goal (it just requires caring about something that depends on objective reality).
Theists can have a hard time articulating the value they place on harmony and community building. Advancing “hard facts” can make them appear more like hateful ignoramuses, which can make it seem okay to be more socially confrontational with them, which might escalate to physical contact. The psychology of how racism creates dangerous situations for black people might be a good example of how something doesn’t need to be explicitly represented as dangerous in order to be, in fact, dangerous.
I live in a culture that treats “religion-oriented people” as more of a “whatever floats your boat privately” kind of people, and not as “people zealously pushing for false beliefs”. I feel that the latter kind of rhetoric makes it easier to paint them “as the enemy” and can be a contributing factor in legitimizing violence against them. Some of the “rationalist-inspired” work pushes harder on “truth vs. falsity” than on the “irrelevance of bullshit”, which has a negative impact on near-term security, while the positive impact on security is contingent on the strategy working out. Note that the danger rationalist-inspired work can create might end up materialising in the hands of people who are far from ideally rational. Yes, some fights are worth fighting, but people also usually agree that having to fight to accomplish something is worse than accomplishing it without fighting. And if you rally people to fight for truth, you are still rallying people to fight. Even if your explicit intention was to avoid rallying, you ended up doing it anyway.
The rationality community itself is far from static; it tends to steadily improve over time, even in the sorts of proposals that it tends to favor. If you go browse RationalWiki (a very early example indeed of something that’s at least comparable to the modern “rationalist” memeplex) you’ll in fact see plenty of content connoting a view of theists as “people who are zealously pushing for false beliefs (and this is bad, really really bad)”. Ask around now on LW itself, or even more clearly on SSC, and you’ll very likely see a far more nuanced view of theism, one that de-emphasizes the “pushing for false beliefs” side while pointing out the socially-beneficial orientation towards harmony and community building that might perhaps be inherent in theists’ way of life. But such change cannot and will not happen unless current standards are themselves up for debate! One cannot afford to reject debate simply on the view that this might make standards “hazy” or “fuzzy”, and thus less effective at promoting some desirable goals (including, perhaps, the goal of protecting vulnerable people from very real harm and from a low quality of life more generally). An ineffective standard, as the case of views-of-theism shows, is far more dangerous than one that’s temporarily “hazy” or “fuzzy”. Preventing all rational debate on the most “sensitive” issues is the very opposite of an effective, truth-promoting policy; it systematically pushes us towards having the wrong sorts of views, and away from having the right ones.
One should also note that it’s hard to predict how our current standards are going to change in the future. For instance, at least among rationalists, the more recent view “theism? meh, whatever floats your boat” tends to practically go hand-in-hand with a “post-rationalist” redefinition of “what exactly it is that theists mean by ‘God’ ”. You can see this very explicitly in the popularity of egregores like “Gnon”, “Moloch”, “Elua” or “Ra”, which are arguably indistinguishable, at least within a post-rationalist POV, from the “gods” of classical myths! But such a “twist” would be far beyond what the average RationalWiki contributor would have been able to predict as the consensus view about the issue back in that site’s heyday—even if he was unusually favorable to theists! Clearly, if we retroactively tried to apply the argument “we (RationalWiki/the rationalist community) should be a lot more pro-theist than we are, and we cannot allow this to be debated under any circumstances because that would clearly lead to very bad consequences”, we would’ve been selling the community short.
A case more troublesome than an ineffective standard is an actively harmful one. Part of the rationalist virtue sphere is recognising your actual impact even when it goes wildly against your expectations. Political speech being known to be a clusterfuck should orient us towards “getting it right” and not so much towards “applying solutions”. People who grow up into harmony (who optimise for harmony in their speech) while using epistemology as a dumpstat are more effective at conversation safety. Even if rationalists have more useful beliefs about other belief-groups, the rationalist memeplex being more distant from other memeplexes means meaningful interaction is harder. We run the risk of having our models of groups such as theists advocate for their interests rather than the persons themselves. Sure, we have our own reasons why we can’t implement group-interoperability the same way that they can and do. But if we emphasize how little we value safety versus accuracy, that doesn’t move us towards solving safety. And we are supposedly good at intentionally setting out to solve hard problems. And it should be permissible to try to remove unnecessary obstacles to people joining the conversation. If the plan is to come up with an awesome way to conduct business/conversation and then let that discovery benefit others, then a move that makes discovery easier but sharing of the results harder might not move us much closer to the goal than naively caring only about discovery.
I’m very sorry that we seem to be going around in circles on this one. In many ways, the whole point of that call to doing “post-rationality” was indeed an attempt to better engage with the sort of people who, as you say, “have epistemology as a dumpstat”. It was a call to understand that no, engaging in dark side epistemology does not necessarily make one a werewolf that’s just trying to muddy the surface-level issues, that indeed there is a there there. Absent a very carefully laid-out argument about what exactly it is that’s being expected of us, I’m never going to accept the prospect that the rationalist community should be apologizing for our incredibly hard work in trying to salvage something workable out of the surface-level craziness that is the rhetoric and arguments that these people ordinarily make. Because, as a matter of fact, calling for that would be the quickest way by far of plunging the community back to the RationalWiki-level knee-jerk reflex of shouting “werewolf, werewolf! Out, out, out, begone from this community!” whenever we see a “dark-side-epistemology” pattern being deployed.
(I also think that this whole concern with “safety” is something that I’ve addressed already. But of course, in principle, there’s no reason why we couldn’t simply encompass that into what we mean by a standard/norm being “ineffective”—and I think that I have been explicitly allowing for this with my previous comment.)
It does seem weird that so little communication is achieved with so many words.
I might be getting conflicted by interpreting the messages in opposite directions on different layers.
> Clearly, if we retroactively tried to apply the argument “we (RationalWiki/the rationalist community) should be a lot more pro-theist than we are, and we cannot allow this to be debated under any circumstances because that would clearly lead to very bad consequences”, we would’ve been selling the community short.
This seems like a statement that the argument “we should be pro-theist & cannot allow debate because of bad consequences” would have been an error. If it had been presented as a proposal, it would indeed have been an argument. “Cannot allow debate” would seem like a stance against being able to start arguments. It seems self-refuting, and in general amounts to wanting censorship of censorship, which I have a very tough time classifying as being for or against censorship. Now, the situation would be very different if there were a silent or assumed consensus that debate could not be had; but it’s rather different if the debate, and the decision not to have the debate, actually take place.
I’ve lost track of how exactly it relates to this, but I realised that the “look at these guys spreading known falsehoods” kind of attitude made me not want to engage socially, probably by pattern-matching the other person to a soul sufficiently lost that they couldn’t be reached within the timeframe of a discussion. And I realised that the standard for sanity I was using for that comparison came from my local culture, and that the “sanity waterline” situation here might be good enough that I don’t understand other people’s need for order. The funny thing is that there is enough “sanity seeking” within religious groups that I was used to veteran religious people guiding novice religious people away from those pitfalls. If someone prayed for a miracle for themselves, that would be punished and intervened upon, and I pretty much knew the guidance even if I didn’t really feel “team religion”: asking a mysterious power for your personal benefit is magic. It’s audacious to ask for that, and it would not be good for the moral development of the person praying to grant that prayer. That is phrased in terms of virtue instead of epistemology, but nevertheless it’s insanity under another conceptualization. The other avenue of argumentation, the one that focuses on prayer not working, seemed primitive by comparison. I was way too attuned to tracking how sane vs. insane religion works to really believe in a pitting of reason against religion. (I gather that kind of pitting is present in the parents of this comment; I think I’m assuming that other people might have all, or essentially all, of their local pool of religion be insane, so that their opinion of religion as insane is justified, and I’ve even come up with stories for why that could be so because of history.)
I guess part of what initially sparked me to write was that “increasing the description length” of things that worsen overall utility seemed like making nonsense harder to understand. My impression was that the goal is to make nonsense plainly and easily recognizable as such. There is some allusion to a kind of zero-sum game going on with description lengths. But my impression was that people have a hard time processing any option at all, and that shortening all of the options is on the table.
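To make that zero-sum allusion concrete, here is a minimal sketch under an assumption of my own (not something either parent comment spells out): that a proposal’s “description length” is its codeword length in a prefix-free code over the space of proposals, so that Kraft’s inequality governs the trade-off.

```latex
% Minimal sketch, assuming "description length" = codeword length \ell_i of a
% prefix-free code over the space of proposals. Kraft's inequality bounds the
% lengths jointly:
\[
  \sum_i 2^{-\ell_i} \;\le\; 1 .
\]
% At equality (an already-efficient code), shortening any one \ell_j increases
% its term 2^{-\ell_j}, so some other \ell_k must grow: the zero-sum reading.
% With strict inequality (a slack, inefficient code), every \ell_i can in
% principle be shortened at once: the "shorten all options" reading.
```

On this reading both intuitions can be right at once: there is slack to shorten everything while the shared vocabulary is still inefficient, and the trade-off only bites at the efficient frontier.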
I had some idea about how, if a decision procedure is too reflex-like, it never enters the conscious mind to be subject to critique. But simply negating an unreflective decision procedure is not intelligence either. So what you want is to have it enter conscious thought, where you can verify its appropriateness (and where it can be selectively allowed). If you are suffering from an optical illusion, you don’t close your eyes; you critically evaluate what you are seeing.