I completely agree: there has been little discussion of Goalcraft since roughly 2010, when discussion of CEV and things like The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It (which I highly recommend to moral objectivists and moral realists) petered out. I would love to restart discussion of Goalcraft, CEV, rational approaches to ethics, and the deconfusion of ethical philosophy. Please take a look at (and comment on) my sequence AI, Ethics, and Alignment, in which I attempt to summarize my last ~10 years’ thinking on the subject. I’d love to have people discussing this and disagreeing with me, rather than thinking about it by myself! Another reasonable starting place is Nick Bostrom’s Base Camp for Mt. Ethics.
While I agree that things like AI-assisted Alignment and Value Learning mean we don’t need to get all the details worked out in advance, I think there’s quite a bit of basic deconfusion that needs to be done before you can even start on those (and that isn’t just solved by Utilitarianism). For example: how do you rationally decide rather basic questions like: the <utility|greatest good of the greatest number|CEV> of the members of which set? (Sadly, I’m not wildly impressed with what the philosophers of ethics have been doing in this area. Quite a bit of it carries x-risks.)
What makes this confusing is that you can’t use logical arguments based on ethical principles. Every ethical system automatically prefers itself, so any such argument immediately collapses into a tautology; it’s as hopeless as trying to use logic to choose between incompatible axiom systems in mathematics. So you have to ground your ethical-system engineering design decisions in something outside ethical philosophy, such as sociology, psychology, or evolutionary biology. I discuss this further in A Sense of Fairness: Deconfusing Ethics.
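To make the circularity vivid, here’s a deliberately toy sketch (my own illustration in Python, not anything from the sequence; the system names and “principles” are placeholders). If each ethical system evaluates candidates by agreement with its own principles, every system trivially ranks itself first, so the within-system comparison tells you nothing:

```python
# Toy model: an "ethical system" is a name plus a set of principles.
# Judging candidates by agreement with the judge's own principles is
# circular: every judge awards itself a perfect score.

def make_system(name, principles):
    """A candidate ethical system: a name plus a frozen set of principles."""
    return {"name": name, "principles": frozenset(principles)}

def endorsement(judge, candidate):
    """How strongly `judge` endorses `candidate`: the fraction of the
    judge's own principles that the candidate shares."""
    shared = judge["principles"] & candidate["principles"]
    return len(shared) / len(judge["principles"])

# Placeholder systems purely for illustration.
utilitarianism = make_system("utilitarianism", {"maximize welfare", "impartiality"})
deontology = make_system("deontology", {"follow duties", "impartiality"})

for judge in (utilitarianism, deontology):
    ranking = sorted((utilitarianism, deontology),
                     key=lambda c: endorsement(judge, c), reverse=True)
    # Each judge puts itself at the top: a tautology, not an argument.
    print(judge["name"], "ranks:", [c["name"] for c in ranking])
```

Each judge scores itself 1.0 and the rival 0.5, so "my system wins by my system’s lights" holds for both. That’s the sense in which the tiebreaker has to come from outside the systems being compared.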