I meant that there’s been little progress in the sense of generating theories precise enough to offer concrete recommendations, i.e. things that might be coded into an AI, such as formal criteria for identifying preferences, pains, and pleasures in the world (beyond pointing to existing humans and animals, which doesn’t pin down the content of utilitronium).
One could argue that until recently there has been little motivation amongst utilitarians to formulate such precise theories, so you can’t really count all of the past 60 years as evidence against this being doable in the next few decades. Some of the problems weren’t even identified until recently, and others, like how to identify pain and pleasure, could be informed by recent or ongoing science. And of course these difficulties have to be compared with the difficulties of EV. Perhaps I should just say that it’s not nearly as obvious that “hard-coding” is a bad idea, if “complexity of value” refers to the complexity of a precise formulation of utilitarianism, for example, as opposed to the complexity of “Godshatter”.
Even a little uncertainty along each of many dimensions adds up to a high probability of going wrong somewhere, and the reasonable uncertainty about several of these things (e.g. infinite worlds and their implications for probability and ethics) is in fact large.
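To make the first clause concrete, here is a minimal sketch of my own (not from the original comment), assuming the dimensions are independent and each is handled correctly with the same probability: even a 95% chance of getting each of twenty dimensions right leaves only about a 36% chance of getting all of them right.

```python
# Hypothetical illustration: probability of getting *every* dimension right,
# assuming independence and a uniform per-dimension success probability
# (both simplifying assumptions of mine).
def p_all_correct(p_per_dimension: float, n_dimensions: int) -> float:
    return p_per_dimension ** n_dimensions

for n in (5, 10, 20):
    print(n, round(p_all_correct(0.95, n), 3))
# 5  -> 0.774
# 10 -> 0.599
# 20 -> 0.358
```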
Is it plausible that someone could reasonably interpret a lack of applicable intuitions along some dimensions as indifference rather than uncertainty?