If a line of reasoning leads you toward doing something crazy, then that line of reasoning is probably incorrect. I think that is where you should draw the line. If the reasoning really is correct, then by learning more your intuitions will automatically fall in line with it, and it will no longer seem crazy.
In this case, I think your intuition correctly diagnoses the conclusion as crazy. Well-educated or not, the fact that you can recognize a crazy conclusion when you see one speaks well of you, though I think you are causing yourself far too much anxiety by worrying about whether you should accept the conclusion after all. Like I said, learning more will shrink the inferential distance you have to traverse in such arguments and let you better judge whether they are valid.
That being said, I still reject these sorts of existential risk arguments, mostly on intuition; I am also unwilling to do things with a high probability of failure, no matter how good the outcome would be in the event of success.
ETA: To clarify, I think existential risk reduction is a worthwhile goal, but I am uncomfortable with arguments advocating specific ways to reduce risk that rely on very abstract or low-probability scenarios.