Take two! [Note, the following may contain an infohazard, though I’ve tried to leave key details out while still getting what I want across]
I’ve been wondering if we should be more concerned about “pessimistic philosophy.” By this I mean the family of philosophical positions which lead to a seemingly-rationally-arrived-at conclusion that it is better not to exist than to exist. It seems quite easy, or at least conceivable, for an intelligent individual, perhaps one with significant power, to find themselves at such a conclusion and decide to “benevolently” try to act on it (perhaps Nick Land as interpreted by his critics is an example of this?). I’m not sure what, if anything, to do with this train of thought, and am concerned that with even light study of the subject I’ve run into a large body of infohazards, some of which may have negatively affected me slightly (as far as I’m aware they aren’t contagious, though, unless you count this post as a potential spreader; reminder to be responsible with your own mental health if you want to look into this further).
I have often come to a seemingly-rationally-arrived-at conclusion that 1+1=3 (or some other mathematical contradiction). I invariably conclude that my reasoning went astray, not that ZF is inconsistent.
I respond similarly to reasoning that it is better to die/never have existed/kill everyone and fill my future lightcone with copies of myself/erase my own identity/wirehead/give away everything I own/obsess over the idea that I might be a Boltzmann brain/go on an hour-long crying jag whenever I contemplate the sorrows of the world/be paralysed in terror at the octillions of potential future lives whose welfare and suffering hang on the slightest twitch of my finger/consider myself such a vile and depraved thing that one thousand pages by the most gifted writer could not express the smallest particle of my evilness/succumb to Power Word: Red Pill/respond to the zombie when it croaks “yes, but what if? what if?”/take the unwelcomeness of any of these conclusions as evidence of their truth.
I know not to trust my satnav when it tells me to drive off a cliff, and neither do I follow an argument when it leads into the abyss.
It’s great that you have that satnav. I worry about people like me. I worry about being incapable of leaving those thoughts alone until I’ve pulled the thread enough to be sure I should ignore it. In other words, if I think there’s a chance something like that is true, I do want to trust the satnav, but I also want to be sure my “big if true” discovery genuinely isn’t true.
Of course, a good inoculation against this has been reading some intense blogs by people who’ve adopted alternative decision theories which lead them down paths that are really scary to watch.
I worry “there but for the grace of chance go I.” But that’s not quite right, and being able to read that content and not go off the deep end myself is evidence that maybe my satnav is functioning just fine after all.
I suspect I’m talking about the same exact class of infohazard as mentioned here. I think I know what’s being veiled and have looked it in the eye.
Thanks for your excellent input! It’s not really the potential accuracy of such dark philosophies that I’m worried about here (though that is also an area of some concern, of course, since I am human and do have those anxieties on occasion), but rather how easy it seems to be for a certain subclass of extremely intelligent people to fall prey to, and subsequently act on, those infohazards. We’ve sadly had multiple cases in this community of smart people succumbing to thought-patterns which arguably (probably?) led to real-world deaths, but as far as I can tell, the damage has mostly been contained to individuals or small groups of people so far. The same cannot be said of some religious groups and cults, which have a history of falling prey to such ideologies (“everyone in outgroup x deserves death” is a popular one). How concerned should we be about, say, philosophical infohazards leading to x-risk level conclusions [example removed]? I suspect natural human satnav/moral intuition leads to very few people being convinced by such arguments, but due to the tendency of people in rationalist (and religious!) spaces to deliberately rethink their intuition, there seems to be a higher risk in those subgroups for perverse eschatological ideologies. Is that risk high enough that active preventative measures should be taken, or is this concern itself of the 1+1=3, wrong-side-of-the-abyss type?
I know what you mean, and I think that, similar to what Richard Kennaway says below, we need to teach people new to the sequences and to exotic decision theories not to drive off a cliff because of a thread they couldn’t resist pulling.
I think we really need something in the sequences about how to tell if your wild-seeming idea is remotely likely, i.e. a “How to Trust Your SatNav” post. The basic content of the post: remember to stay grounded, and ask how likely this wild new framework really is. Ask others who can understand and assess your theory, and if they say you’re getting some things wrong, take them very seriously. This doesn’t mean you can’t follow your own convictions; it just means you should do it in a way that minimises potential harm.
Now, having read the content you’re talking about, I think a person needs to already be pretty far gone epistemically before this infohazard can “get them,” and I mean both the original idea-haver and those who receive it via transmission. But I think it’s still going to help very new readers to not drive off so many cliffs. It’s almost like some of them want to, which is… its own class of concerns.