I should have given some examples of the kind of moral reasoning I’m referring to.
http://lesswrong.com/lw/n3/circular_altruism/
http://lesswrong.com/lw/1r9/shut_up_and_divide/
The first link is about ambiguity aversion.
Morality is commonly taken to describe what one will actually do when trading off private gains against other people’s losses. Take this as an example of moral judgement: suppose Roberts is smarter. He will quickly see that he can donate 10% to charity, while it will take him longer to reason about the value of the cash that was never given to him (reasoning that might stop him from pressing the button), so there will be a transient during which he presses the button, unless he somehow suppresses action during transients. It’s an open-ended problem, ‘unlike logic’, because the consequences are difficult to evaluate.
edit: I was in a hurry.
Ah, thank you, that is helpful.
In the case of ‘circular altruism’, I confess I’m quite at a loss; I’ve never really managed to pull an argument out of there. But if we’re just talking about the practice of quantifying goods in moral judgements, then I agree with you that there’s no strongly complete ethical calculus that’s going to render ethics a mathematical science. At least in ‘circular altruism’, though, EY doesn’t need quite so strong a view: so far as I can tell, he’s just saying that our moral passions conflict with our reflective moral judgements. And even if we don’t have a strongly complete moral system, we can still make logically coherent reflective moral judgements. I’d go so far as to say we can make logically coherent reflective literary-criticism judgements. Logic isn’t picky.
So while I’m also (as yet) unconvinced by EY’s ethics, I think it goes too far in the opposite direction to say that ethical reasoning is inherently fuzzy or illogical. Valid arguments are valid arguments, regardless.