To be clear: your argument is that every human being who has ever lived may suffer eternally after death, and there are good reasons for not caring...?
That requires an answer that, at the very least, you should be able to put in your own words. How does our subjective suffering improve anything in the worlds where you die?
It’s not my argument, but it follows from what I’m saying, yes. Even if people should care about this, there are probably good reasons not to, just not good enough to tilt the balance. There are good reasons for all kinds of wrong conclusions; it should be suspicious when there aren’t. Note that caring about this too much is the same as caring about other things too little. Also, as an epistemic principle, appreciation of arguments shouldn’t depend on the consequences of agreeing with them.
Focusing effort on the worlds where you’ll eventually die (as well as the worlds where you survive in a normal non-QI way) improves them at the cost of neglecting the worlds where you eternally suffer for QI reasons.
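To make that tradeoff explicit (a rough illustrative sketch; none of the symbols below appear in the thread): suppose a fixed effort budget is split as $e$ toward the worlds where you eventually die or survive normally, and $1-e$ toward the QI-suffering worlds, with subjective weights $p$ and $1-p$. The quantity being argued over is then roughly

$$V(e) = p\,U_{\text{die}}(e) + (1-p)\,U_{\text{QI}}(1-e),$$

where raising $e$ improves the first term at the cost of the second, which is the neglect the reply describes. The weights $p$, the split $e$, and the utility functions $U_{\text{die}}$ and $U_{\text{QI}}$ are assumptions added here for illustration, not claims made in the discussion.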
...and here’s about when I realize what a mistake it was setting foot in Lesswrong again for answers.
Rationalists love criticism that helps them improve their thinking. But this complaint is too vague to be any help to us. What exactly went wrong, and how can we do better?
Asking for an exact, complete error report might be a bit taunting in a challenging error state. I am sure partial hints would also be appreciated.