I have often come to a seemingly-rationally-arrived-at conclusion that 1+1=3 (or some other mathematical contradiction). I invariably conclude that my reasoning went astray, not that ZF is inconsistent.
I respond similarly to reasoning that it is better to die/never have existed/kill everyone and fill my future lightcone with copies of myself/erase my own identity/wirehead/give away everything I own/obsess over the idea that I might be a Boltzmann brain/go on an hour-long crying jag whenever I contemplate the sorrows of the world/be paralysed in terror at the octillions of potential future lives whose welfare and suffering hang on the slightest twitch of my finger/consider myself such a vile and depraved thing that one thousand pages by the most gifted writer could not express the smallest particle of my evilness/succumb to Power Word: Red Pill/respond to the zombie when it croaks “yes, but what if? what if?”/take the unwelcomeness of any of these conclusions as evidence of their truth.
I know not to trust my satnav when it tells me to drive off a cliff, and neither do I follow an argument when it leads into the abyss.
It’s great that you have that satnav. I worry about people like me. I worry about being incapable of leaving those thoughts alone until I’ve pulled the thread far enough to be sure I should ignore them. In other words, if I think there’s a chance something like that is true, I do want to trust the satnav, but I also want to be sure my “big if true” discovery genuinely isn’t true.
Of course, a good inoculation against this has been reading some intense blogs by people who’ve adopted alternative decision theories, which lead them down paths that are genuinely frightening to watch.
I worry “there but for the grace of chance go I.” But that’s not quite right, and being able to read that content and not go off the deep end myself is evidence that maybe my satnav is functioning just fine after all.
I suspect I’m talking about the exact same class of infohazard as mentioned here. I think I know what’s being veiled, and I have looked it in the eye.
Thanks for your excellent input! It’s not really the potential accuracy of such dark philosophies that I’m worried about here (though that is also an area of some concern, of course, since I am human and do have those anxieties on occasion), but rather how easy it seems to be for a certain subclass of extremely intelligent people to fall prey to, and subsequently act on, those infohazards. We’ve sadly had multiple cases in this community of smart people succumbing to thought-patterns which arguably (probably?) led to real-world deaths, but as far as I can tell, the damage has mostly been contained to individuals or small groups of people so far. The same cannot be said of some religious groups and cults, which have a history of falling prey to such ideologies (“everyone in outgroup x deserves death” is a popular one). How concerned should we be about, say, philosophical infohazards leading to x-risk-level conclusions [example removed]? I suspect natural human satnav/moral intuition leads to very few people being convinced by such arguments, but due to the tendency of people in rationalist (and religious!) spaces to deliberately rethink their intuitions, there seems to be a higher risk in those subgroups of perverse eschatological ideologies taking hold. Is that risk high enough that active preventative measures should be taken, or is this concern itself of the 1+1=3, wrong-side-of-the-abyss type?