hmm… this could have come down to spending time in different parts of MIRI? I mostly worked on the “world’s last decent logic department” stuff—maybe the more “global strategic” aspects of MIRI work, at least the parts behind closed doors I wasn’t allowed through, were more toxic? Still feels kinda unlikely but I’m missing info there so it’s just a hunch.
My guess is that it has more to do with willingness to compartmentalize than with which part of MIRI you were in per se. Compartmentalization is negatively correlated with “taking on responsibility” for more of the problem. I’m sure you can see why it would be appealing to avoid giving in to extortion in real life, not just on whiteboards, and attempting that with a skewed model of the situation can lead to outlandish behavior like Ziz resisting arrest as hard as possible.
I think this is a persistent difference between us, but it isn’t especially relevant to the difference in outcomes here.
I’d more guess that the reason you had psychoses and I didn’t had to do with you having anxieties about being irredeemably bad that I basically didn’t have at the time. Seems like this would be correlated with your feeling that you grew up in a Shin Sekai Yori world?
I clearly had more scrupulosity issues than you, and that contributed a lot. Relevantly, the original Roko’s Basilisk post puts AI sci-fi detail on a fear I’m pretty sure a lot of EAs feel or felt in their hearts: that something nonspecifically bad will happen to them because they are able to help a lot of people (by being pivotal to the future), know this, and don’t do nearly as much as they could. If you’re already carrying those sorts of fears, then the abstract math of extortion and so on can look really threatening.