I clearly had more scrupulosity issues than you, and that contributed a lot. Relevantly, the original Roko’s Basilisk post puts AI sci-fi detail on a fear I am pretty sure a lot of EAs feel or felt in their hearts: that something nonspecifically bad will happen to them because they are able to help a lot of people (due to being pivotal to the future), know this, and don’t do nearly as much as they could. If you’re already having these sorts of fears, then the abstract math of extortion and so on can look really threatening.