I’m not entirely convinced by the rest of your argument, but

“The idea that multiplying suffering by the number of sufferers yields a correct and valid total-suffering value is not a fundamental truth; it is just a naive extrapolation of our intuitions that should help guide our decisions.”

is, far and away, the most intelligent thing I have ever seen anyone write on this damn paradox.
Come on, people. The fact that naive preference utilitarianism picks torture over dust specks is not some result we have to live with; it’s an indication that the decision theory is horribly, horribly wrong.
It is beyond me how people can look at dust specks and torture and draw the conclusion they do. In my mind, the most obvious, immediate objection is that utility does not aggregate additively across people in any reasonable ethical system. This is true no matter how big the numbers are. Instead it aggregates by minimum, or maybe multiplicatively (especially if we normalize everyone’s utility function to [0,1]).
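To make the difference concrete, here is a minimal sketch comparing the three aggregation rules on toy numbers (the utility values and head count are invented for illustration, and 10**9 vastly understates 3^^^3):

```python
import math

# Utilities normalized to [0, 1], where 1.0 = unaffected.
# All specific values below are made up for illustration.
N = 10**9            # stand-in head count for 3^^^3 (vastly understated)
speck_u = 1 - 1e-9   # utility of one dust-speck victim (barely harmed)
torture_u = 0.01     # utility of the one torture victim

# Additive aggregation of disutility: total harm, so LOWER is better.
# At this N the specks already outweigh the torture.
add_torture = 1 - torture_u        # 0.99
add_specks = N * (1 - speck_u)     # ~1.0

# Minimum aggregation: the worst-off person decides, so HIGHER min is better.
# This verdict is independent of N: specks never beat torture here.
min_torture = torture_u            # 0.01
min_specks = speck_u               # ~1.0

# Multiplicative aggregation, computed in log space to avoid underflow.
# Unlike min, this rule does eventually flip for sufficiently enormous N.
log_prod_torture = math.log(torture_u)   # ~ -4.6
log_prod_specks = N * math.log(speck_u)  # ~ -1.0 at this N
```

Note that only the minimum rule is insensitive to the head count; the multiplicative rule, like the additive one, will eventually side with torture once N gets large enough.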
Sorry for all the emphasis, but I am sick and tired of supposed rationalists using math to reach the reprehensible conclusion and then claiming it must be right because math. It’s the epitome of Spock “rationality”.
This isn’t really true: clock speed is a good proxy for computing power. If your clock speed doubles, you get a 2x speedup in the amount of computation you can do without any algorithmic changes. If you instead increase chip complexity, e.g. by adding parallel units, you need to write new code to take advantage of them.
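A small sketch of that asymmetry (the workload is invented; `ThreadPoolExecutor` is used only to keep the example self-contained, since CPU-bound Python would need processes or a GIL-free runtime to see a real speedup):

```python
from concurrent.futures import ThreadPoolExecutor

def work(chunk):
    # CPU-bound stand-in for a real computation
    return sum(i * i for i in chunk)

N = 1_000_000

# Serial version: a 2x clock gives this a 2x speedup with zero code changes.
serial_result = work(range(N))

# Parallel version: same answer, but the work had to be explicitly
# partitioned into independent chunks and fed to an executor --
# a genuine change to the code, not a free ride.
def parallel(n_workers=4):
    chunks = [range(k, N, n_workers) for k in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(work, chunks))
```

The point is not the speedup itself but that the parallel version required restructuring: choosing a decomposition, managing workers, and recombining results.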