Inconsistency shows that there is at least one error; it does not imply (in fact, it gives some evidence against the claim) that either calculation is correct. You can’t just choose one to adjust to fit the other; you have to correct all the errors. Remember, consistency isn’t a goal in itself; it’s just a bit of evidence for correctness.
For the specific case in point, the error likely lies in not being numerical about the individual steps: how much better is the universe with one additional low-but-positive life added? How much (if at all) is the universe improved by a specific redistribution of life-quality? Without those numbers, you can’t know whether any of the steps are valid, and so you can’t know whether the conclusion is valid.
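To make that concrete, here is a minimal sketch in Python (the welfare numbers are invented, and the two aggregation rules are just the standard total and average candidates) showing how the same “add one low-but-positive life” step comes out differently depending on which numerical rule you commit to:

```python
# Toy sketch of making the "add one low-but-positive life" step numerical.
# A world is modeled as a list of per-person welfare levels (invented numbers).

def total_utility(world):
    return sum(world)

def average_utility(world):
    return sum(world) / len(world)

world_a = [10, 10, 10]        # three well-off lives
world_a_plus = world_a + [1]  # the same world plus one low-but-positive life

for name, rule in [("total", total_utility), ("average", average_utility)]:
    before, after = rule(world_a), rule(world_a_plus)
    verdict = "improvement" if after > before else "worsening"
    print(f"{name}: {before:.2f} -> {after:.2f} ({verdict})")

# total: 30.00 -> 31.00 (improvement)
# average: 10.00 -> 7.75 (worsening)
```

Until you commit to some such rule, the single step isn’t valid or invalid; it’s simply undefined.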
These are values we’re talking about—the proof is a proof of inconsistency between two value sets, and you have to choose which parts of your values to give up, and how. Your choice of how to be numerical in each step determines which values you’re keeping.
I think we agree on the basics: the specificity of calculation lets you identify exactly what you’re considering and find where the mismatch is (missing a step, making an incorrect step, and/or mis-stating the summation). This is true for values as well as factual beliefs.
It is only after this that you understand your proposed values well enough to know whether they are genuinely different value-sets, or there is just a calculation mistake in one or both. Once you know that, you can decide which, if either, applies to you.
I guess you should also separately decide whether it’s good and important for you to think of yourself as a unitary individual vs. a series of semi-connected experiences. Do you (singular you) want to have a single consistent set of values, or are all the future you-components content to behave somewhat randomly over time and context? This is mostly assumed in this kind of discussion, but probably worth stating explicitly if you’re questioning what (if anything) you learn from an inconsistency.