I’m surprised. Do you mean you wouldn’t trade off a dust speck in your eye (in some post-singularity future where x-risk is settled one way or another) to avert the torture of a billion frogs, or of some noticeable portion of all frogs? If we plotted your attitudes to progressively more intelligent entities, where’s the discontinuity or discontinuities?
Carl Shulman:
I think frogs are extremely unlikely to have moral worth, but one dust speck vs. a billion tortured frogs is enough to overcome that improbability, and I would accept the speck.
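The reply is an implicit expected-value comparison; as a minimal sketch (the symbols and illustrative magnitudes below are assumptions for exposition, not anything Shulman stated):

$$p \cdot N \cdot d_{\text{frog}} > d_{\text{speck}},$$

where $p$ is the probability that frogs have moral worth, $N = 10^9$ is the number of frogs, $d_{\text{frog}}$ is the per-frog disutility of torture conditional on moral worth, and $d_{\text{speck}}$ is the disutility of a dust speck. Even a very small $p$ is offset by the factor of $10^9$, so the expected harm on the left can still exceed the certain but tiny harm on the right.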
It's always a bit awkward to argue by way of the Sorites paradox.
I would advise you to be cautious in concluding that an argument is an instance of the Sorites paradox. There is a long tradition of dismissing arguments on these grounds that, upon closer inspection, have been found to be relevantly dissimilar to the canonical Sorites formulation. Two examples are Chalmers’s “fading qualia” argument and Parfit’s “psychological spectrum” argument.