Moral philosophy in general is under-appreciated in FAI discussion in this community.
The LW Metaethics Sequence is to solving actual moral dilemmas as inventing Peano arithmetic is to inventing artificial intelligence: an important and insightful first step, but hardly a conclusive resolution of the outstanding issues.
But if we want Friendly AI, we need to be able to tell it how to resolve moral disputes somehow. I have no idea whether recent moral philosophy (post-1980) has the solutions, but I feel that even folks around here underestimate the severity of the problems implied by the Orthogonality Thesis.
Could you please be more specific and give one example of an actual moral dilemma that has been solved by moral philosophy and could serve as a useful lesson for metaethics?