All types of normative ethics run into trouble when taken to extremes, and utilitarianism is no exception. Whether Eliezer's idea is right that a super-smart intelligence could come up with some new CEV-based ethics that never leads to any sort of repugnant conclusion, or at least minimizes its repugnance by some acceptable criterion, is an open question.
From my experience dealing with ideas from people much smarter than me, I suspect that issues like dust specks vs. torture or the trolley problem result from our constrained and unimaginative thinking, and would never arise, or even make sense, in an ethical framework constructed by an FAI, assuming FAI is possible.