Yeah, the sentiment expressed in that post is usually my instinct too.
But then again, that’s the problem: it’s an instinct. If my utilitarian impulse is just another impulse, then why does it automatically outweigh any other moral impulses I have, such as valuing human autonomy? If my utilitarian impulse is NOT just an impulse, but is somehow objectively more rational and outranks other moral impulses, then I have yet to see a proof of this.
“shut up and multiply” is, in principle, a way to weigh various considerations like the value of autonomy, etc etc etc...
It’s not “here’s shut up and multiply” vs “some other value here”, but “plug in your values + actual current situation including possible courses of action and compute”
Some of us are then saying “it is our moral position that human lives are so incredibly valuable that a measure of dignity for a few doesn’t outweigh the massively greater suffering/etc that would result from the battle implied by the ‘battle of honor’ route”
Ah, then I misunderstood. A better way of phrasing my challenge might be: it sounds like we might have different algorithms, so prove to me that your algorithm is more rational.
See Shut up and multiply.
No one has answered this challenge.