I think that what you’re saying here is mostly right, but I feel like it leaves out an important facet of the problem.
Some speech acts lower the message length of proposals to attack some groups, or raise the message length of attempts to prevent such attacks. This is a kind of meta-attack or threat, like concentrating troops on a country’s border.
The situation is often asymmetrical in particular contexts—given existing power structures & official narratives, some such meta-attacks are easier to perform than others—and in particular, proposals to alter the official narrative can look more “political” than moves in the opposite direction, even when the official narrative is obviously not a reasonable prior.
This problem is aggravated by a norm of avoiding “political” discourse—if one side of an argument is construed as political and the other isn’t, we get a biased result that favors & intensifies existing power arrangements. It’s also aggravated by norms of calm, impersonal discourse, since that’s easier to perform if you feel safe.
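The “message length” framing above can be made concrete with a toy information-theoretic sketch (my own illustration; the proposal names and probabilities are invented, not from the thread). Under an ideal code, a message with probability p costs −log₂ p bits, so a speech act modeled as shifting the shared prior shortens some proposals while lengthening others:

```python
import math

def code_length_bits(p: float) -> float:
    """Ideal (Shannon) code length, in bits, for a message with probability p."""
    return -math.log2(p)

# Two priors over the same three proposals. The "speech act" is modeled
# as moving probability mass toward proposal A.
before = {"A": 0.45, "B": 0.45, "C": 0.10}
after = {"A": 0.80, "B": 0.15, "C": 0.05}

for msg in before:
    delta = code_length_bits(after[msg]) - code_length_bits(before[msg])
    print(msg, round(delta, 2))  # A becomes cheaper to say; B and C costlier
```

In this sketch proposal “A” gets cheaper to express while “B” and “C” get more expensive, without anyone explicitly forbidding anything, which is one way to read the claim that such shifts can function as meta-attacks.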
> Some speech acts lower the message length of proposals to attack some groups, or raise the message length of attempts to prevent such attacks.

This is true; indeed, it’s difficult to see how it can fail to be true, even in the absence of any awareness or intention on anyone’s part. Yet it seems an exceedingly abstract basis on which to consider even censuring or discouraging certain sorts of speech, much less punishing or banning it.
I agree. I think this makes discouraging political or heated speech hard to do without introducing substantively harmful bias. That’s the context in which Zack’s speech can create a problem for Vanessa (and in which others’ speech created a structurally similar problem for Zack!).
Well, as for “heated” speech, I think discouraging that is easy enough. But where “political” is concerned, my point is exactly that the perspective you take makes it difficult to see where “political” ends, and “non-political” begins—indeed, it does not seem to me to be difficult to start from that view, and construct an argument that all speech is “political”! (And if I understand Zack’s point correctly, he seems to be saying that this has, in essence, already happened, on one particular topic.)
Obviously one can discourage heated speech. The problem is doing so without the harmful consequences specified above.
This complexity is in the territory, not just in the map.
Let me try to be a bit clearer with an example. I’m saying that in, for instance, a discussion of human decision-making that uses utilitarian frameworks, posts like _Totalitarian ethical systems_ and _Should Effective Altruism be at war with North Korea?_ ought to be considered on-topic, since they discuss patterns of thinking that this framework is likely to push us towards, and point to competing considerations that are harder to express in that frame, which we might want to make sure we don’t lose sight of. Right now, on LessWrong, such posts are ambiguously permissible, in ways that cause Vanessa Kosoy to be legitimately uncertain about whether and to what extent—if she extends the interpretive labor of explaining what she thinks the problems are with Zack’s points—her work will be judged admissible.
IMO, on-topic is a strict subset of what is allowable on LW. There are plenty of topics that are about rationality (especially about group rationality and social/peer norms) but don’t work here because they’re related to topics that tend to trigger tribal or social status problems.
I’m starting to see that “on LW” is different for me than for at least some readers and moderators—it may be that I’m too restrictive in my opinion of non-promoted posts. I’m still going to downvote them.
(Only speaking as a participant, not as a moderator. The rules are currently very clear that you can downvote and upvote whatever you like.)
I do think I would prefer it if you would not downvote personal blogposts if they feel off-topic to you. You can always just uncheck the “show personal blogposts” checkbox on the frontpage. I care a lot about people being able to just explore ideas freely on the site, and you can always downvote them if we do move them to frontpage.
I think that’s fair—I don’t want to discourage exploration of ideas not yet ready for publication, but I _AM_ concerned that people other than me may take the leniency as permission to discuss overtly political topics here. I think I’ll stop voting on non-promoted posts and comments for a bit and see if my worries get worse or better.
Is there a way to tell whether a post is promoted or not, on the page that contains the voting buttons?
Note for any GreaterWrong users who might have a similar question:
When viewing a post, you’ll see an icon under the post name, at the left. It indicates what kind of post it is, e.g.:
(In order, those are: personal, frontpage, curated, Meta, Alignment Forum.)
We just added the ability to easily identify a post as frontpage or personal on our test-server today. Should be out by early next week.
You can currently tell by hovering over a post in a list of posts, or by looking at the moderation guidelines at the bottom of the post (which will always include “frontpage moderation guidelines” if it’s a frontpage post).
Those posts are definitely permissible on LessWrong from the site-rule perspective, though there is a sense in which they are off-topic in that we didn’t promote them to the frontpage.
I do think that imbalance of frontpage vs. personal already creates some problems, though I think the distinction is doing a bunch of important work that I don’t know how to achieve in other ways.
> Some speech acts lower the message length of proposals to attack some groups, or raise the message length of attempts to prevent such attacks.

Rationality is the common interest of many causes; the whole point of it is to lower the message-description-length of proposals that will improve overall utility, while (conversely, and inevitably) raising the message-description-length of other possible proposals that can be expected to worsen it. To be against rationality on such a basis would seem to be quite incoherent. Yes, in rare cases, it might be that this also involves an “attack” on some identified groups, such as theists. But I don’t know of a plausible case that theists have been put in physical danger because rationality has now made their distinctive ideas harder to express! (In this as in many other cases, the more rationality, the less religious persecution/conflict we see out there in the real world!) And I have no reason to think that substituting “trans advocacy that makes plausibly-wrong claims about how the real-world (in this case: human psychology) factually works” for “theism” would lead to a different conclusion. Both instances of this claim seem just as unhinged and plausibly self-serving, in a way that’s hard not to describe as involving bad faith.
> the whole point of it is to lower the message-description-length of proposals that will improve overall utility

I thought the point was to help us model the things we care about more accurately and efficiently, which doesn’t require utilitarianism to be an appealing proximate goal (it just has to require caring about something which depends on objective reality).
Theists can have a hard time formulating the value they place on harmony and community-building. Advancing “hard facts” can make them appear more like hateful ignoramuses, which can make it seem okay to be more socially confrontational with them, which might escalate to physical contact. The psychology of how racism creates dangerous situations for black people may be a good example of how you don’t need explicit representations of acknowledged dangers for something to be, in fact, dangerous.
I live in a culture that treats “religion-oriented people” as “whatever floats your boat, privately” people rather than as “people zealously pushing for false beliefs”. I feel that the latter kind of rhetoric makes it easier to paint them “as the enemy” and can be a contributing factor in legitimizing violence against them. Some “rationalist-inspired” work pushes harder on “truth vs. falsity” than on “irrelevance of bullshit”, which has a negative impact on near-term safety, while its positive impact on safety is contingent on the strategy working out. Note that the danger rationalist-inspired work can create might end up materializing in the hands of people who are far from ideally rational. Yes, some fights are worth fighting, but people also usually agree that having to fight to accomplish something is worse than accomplishing it without fighting. And if you rally people to fight for truth, you are still rallying people to fight; even if your explicit intention was to avoid rallying, you ended up doing it anyway.
The rationality community itself is far from static; it tends to steadily improve over time, even in the sorts of proposals that it tends to favor. If you go browse RationalWiki (a very early example indeed of something that’s at least comparable to the modern “rationalist” memeplex) you’ll in fact see plenty of content connoting a view of theists as “people who are zealously pushing for false beliefs (and this is bad, really really bad)”. Ask around now on LW itself, or even more clearly on SSC, and you’ll very likely see a far more nuanced view of theism, one that de-emphasizes the “pushing for false beliefs” side while pointing out the socially-beneficial orientation towards harmony and community building that might perhaps be inherent in theists’ way of life. But such change cannot and will not happen unless current standards are themselves up for debate! One cannot afford to reject debate simply on the view that this might make standards “hazy” or “fuzzy”, and thus less effective at promoting some desirable goals (including, perhaps, the goal of protecting vulnerable people from very real harm and from a low quality of life more generally). An ineffective standard, as the case of views-of-theism shows, is far more dangerous than one that’s temporarily “hazy” or “fuzzy”. Preventing all rational debate on the most “sensitive” issues is the very opposite of an effective, truth-promoting policy; it systematically pushes us towards having the wrong sorts of views, and away from having the right ones.
One should also note that it’s hard to predict how our current standards are going to change in the future. For instance, at least among rationalists, the more recent view “theism? meh, whatever floats your boat” tends to practically go hand-in-hand with a “post-rationalist” redefinition of “what exactly it is that theists mean by ‘God’ ”. You can see this very explicitly in the popularity of egregores like “Gnon”, “Moloch”, “Elua” or “Ra”, which are arguably indistinguishable, at least within a post-rationalist POV, from the “gods” of classical myths! But such a “twist” would be far beyond what the average RationalWiki contributor would have been able to predict as the consensus view about the issue back in that site’s heyday—even if he was unusually favorable to theists! Clearly, if we retroactively tried to apply the argument “we (RationalWiki/the rationalist community) should be a lot more pro-theist than we are, and we cannot allow this to be debated under any circumstances because that would clearly lead to very bad consequences”, we would’ve been selling the community short.
A case more troublesome than an ineffective standard is an actively harmful one. Part of the rationalist virtue set is recognizing your actual impact even when it runs wildly against your expectations. Political speech being a known clusterfuck should orient us toward “getting it right”, not just toward “applying solutions”. People who grew up optimizing for harmony in speech, while treating epistemology as a dump stat, are more effective at conversational safety. Even if rationalists have more useful beliefs about other belief-groups, the rationalist memeplex being more distant from other memeplexes makes meaningful interaction with them harder. We run the risk of having our models of groups such as theists advocate those models’ interests rather than the interests of the people themselves. Sure, we have distinct reasons why we can’t implement group interoperability the way those groups do. But if we emphasize how little we value safety relative to accuracy, that doesn’t move us toward solving safety; and we are supposedly good at intentionally setting out to solve hard problems. It should be permissible to try to remove unnecessary obstacles to people joining the conversation. If the plan is to come up with an excellent way to conduct business and conversation and then let that discovery benefit others, a move that makes discovery easier but makes sharing the results harder might not bring us much closer to the goal than naively caring only about discovery.
I’m very sorry that we seem to be going around in circles on this one. In many ways, the whole point of that call to doing “post-rationality” was indeed an attempt to better engage with the sort of people who, as you say, “have epistemology as a dumpstat”. It was a call to understand that no, engaging in dark side epistemology does not necessarily make one a werewolf that’s just trying to muddy the surface-level issues, that indeed there is a there there. Absent a very carefully laid-out argument about what exactly it is that’s being expected of us I’m never going to accept the prospect that the rationalist community should be apologizing for our incredibly hard work in trying to salvage something workable out of the surface-level craziness that is the rhetoric and arguments that these people ordinarily make. Because, as a matter of fact, calling for that would be the quickest way by far of plunging the community back to the RationalWiki-level knee-jerk reflex of shouting “werewolf, werewolf! Out, out out, begone from this community!” whenever we see a “dark-side-epistemology” pattern being deployed.
(I also think that this whole concern with “safety” is something that I’ve addressed already. But of course, in principle, there’s no reason why we couldn’t simply encompass that into what we mean by a standard/norm being “ineffective”—and I think that I have been explicitly allowing for this with my previous comment.)
It does seem weird that so little communication is achieved with so many words.
I may be interpreting the messages in opposite directions on different layers.
> Clearly, if we retroactively tried to apply the argument “we (RationalWiki/the rationalist community) should be a lot more pro-theist than we are, and we cannot allow this to be debated under any circumstances because that would clearly lead to very bad consequences”, we would’ve been selling the community short.
This seems like a statement that the argument “we should be pro-theist & cannot allow debate because of bad consequences” would have been an error. Had it actually been presented as a proposal, it would indeed have been an argument. “Cannot allow debate” seems like a stance against being able to start arguments at all. It seems self-refuting: it amounts to wanting censorship of censorship, and I have a very hard time deciding whether that is for or against censorship. The situation would be very different if there were a silent or assumed consensus that debate could not be had; it is quite different when a debate, and a decision not to continue the debate, actually take place.
I’ve lost track of how exactly it relates to this, but I realized that the “look at these guys spreading known falsehoods” kind of attitude made me not want to engage socially, probably by pattern-matching the speaker to a soul too lost to be reachable within the timeframe of a discussion. I also realized that the standard of sanity I was using for that comparison came from my local culture, and that the “sanity waterline” situation here might be good enough that I don’t understand other people’s need for order. The funny thing is that there is enough “sanity-seeking” within religious groups that I was used to veteran religious people guiding novices away from those pitfalls. If someone prayed for a miracle for themselves, that would be corrected and intervened upon, and I more or less knew the guidance even though I never really felt like “team religion”: asking a mysterious power for your personal benefit is magic; it is audacious to ask, and it would not be good for the moral development of the one praying to grant that prayer. That is phrased in terms of virtue instead of epistemology, but it is nevertheless insanity by another conceptualization. The other avenue of argument, focusing on the claim that prayer doesn’t work, seemed primitive by comparison. I was far too attuned to tracking how sane vs. insane religion works to really believe in a pitting of reason against religion. (I do read that kind of pitting in the parents of this comment; I assume other people might have essentially all of their local pool of religion be insane, so that their opinion of religion as insane is justified, and I have even come up with stories for why history could have made it so.)
I guess part of what initially sparked me to write was that “increasing the description length” of things that worsen overall utility seemed like making nonsense harder to understand. My impression was that the goal is to make nonsense plainly and easily recognizable as such. There is some allusion to a kind of zero-sum game among description lengths, but my impression was that people have a hard time processing any option at all, and that shortening all of the options is on the table.
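The zero-sum intuition can be pinned down with a small sketch (my own, with invented numbers, not from the thread): within a single prefix-free code, the Kraft inequality makes codeword lengths trade off against one another, but expected length is not zero-sum across models, since a model closer to the true distribution shortens messages on average for everything at once:

```python
import math

def kraft_sum(lengths):
    """Kraft inequality: a prefix-free binary code with these codeword
    lengths exists iff this sum is <= 1."""
    return sum(2 ** -l for l in lengths)

# Within one code, lengths trade off against a fixed budget:
assert kraft_sum([1, 2, 2]) <= 1   # feasible, e.g. codewords 0, 10, 11
assert kraft_sum([1, 1, 2]) > 1    # shortening one codeword broke the budget

# Across models, it is not zero-sum: coding against a model closer to the
# true distribution lowers the average cost of every conversation.
p_true = {"x": 0.7, "y": 0.2, "z": 0.1}

def expected_bits(model):
    return sum(p_true[s] * -math.log2(model[s]) for s in p_true)

uniform = {"x": 1 / 3, "y": 1 / 3, "z": 1 / 3}
assert expected_bits(p_true) < expected_bits(uniform)
```

So both readings have something to them: at a fixed level of model quality, shortening one option taxes the others, while a better shared model can shorten all the options at once.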
I had an idea about how, if a decision procedure is too reflex-like, it never enters the conscious mind to be subject to critique. But merely negating an unreflective decision procedure is not intelligence. What you want is to have it enter conscious thought, where you can verify its appropriateness (and where it can be selectively allowed). If you are suffering from an optical illusion, you do not close your eyes; you critically evaluate what you are seeing.