I am >.9 confident that donating money to SI doesn’t significantly increase human existential risk.
(Edit: Which, on second read, I guess means I agree with Holden as you summarize him here. At least, the difference between “A doesn’t significantly affect B” and “A insignificantly affects B” seems like a difference I ought not care about.)
I also think Pascal’s Wager–type arguments are silly. More precisely, given how unreliable human intuition is when dealing with very low probabilities and very large utilities/disutilities, I think lines of reasoning that rely on human intuitions about very large, very-low-probability utility shifts are unlikely to be truth-preserving.
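To make that concrete, here is a toy expected-utility calculation with numbers I have invented purely for illustration (neither factor is a real estimate of anything):

$$\mathbb{E}[U] = p \cdot U = 10^{-10} \times 10^{20} = 10^{10} \text{ utils}$$

A term like that swamps every ordinary consideration in the calculation, yet neither the probability nor the utility is a quantity human intuition can meaningfully assess, which is exactly why I distrust conclusions that rest on such products.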
Why do you want to know?