Let’s be clear: I do subscribe to utilitarianism, just not a naive one. (Long-range consequences and advanced decision theories make a big difference.) If I had magical levels of certainty about the problem statement, then I’d bite the bullet and pick torture. But in real life, that’s an impossible state for a human being to occupy on object-level problems.
Truly meta-level problems are perhaps different; given a genie that magically understands human moral intuitions and is truly motivated to help humanity, I would ask it to reconcile our contradictory intuitions in a utilitarian way rather than a deontological one. (It would take a fair bit of work to turn this hypothetical into something that makes real sense to ask, but one example is how to structure CEV, Coherent Extrapolated Volition.)
Does that make sense as a statement of where I stand?
This is a much more mature response to the debate.