Right. The problem was that the people on that side seemed prone to ridiculing the belief that it is not.
Yes, the ridicule was annoying, although I think many have learned their lesson.
The problem with our position is that it leaves us vulnerable to being Dutch-booked by opponents who are willing to be sufficiently cruel. (How much would you pay not to be tortured? Why not that amount plus $10?)
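To spell out the pump with illustrative numbers: suppose you'd pay $1,000 to avoid being tortured. By the same reasoning you'd also pay $1,010, then $1,020, and so on; since no finite price is ever too high, a sufficiently cruel opponent can keep raising the stakes, and your stated preferences never give you a reason to stop paying.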
Hmm … what examples of learning their lesson are you thinking of?
This is a much more mature response to the debate.
Let’s be clear: I do subscribe to utilitarianism, just not a naive one. (Long-range consequences and advanced decision theories make a big difference.) If I had magical levels of certainty about the problem statement, then I’d bite the bullet and pick torture. But in real life, that’s an impossible state for a human being to occupy on object-level problems.
Truly meta-level problems are perhaps different; given a genie that magically understands human moral intuitions and is truly motivated to help humanity, I would ask it to reconcile our contradictory intuitions in a utilitarian way rather than in a deontological way. (It would take a fair bit of work to turn this hypothetical into something that makes real sense to ask, but one example is how to structure CEV.)
Does that make sense as a statement of where I stand?