It’s (2), but there is no circularity problem. Idealized Eliezer does not make use of those words in any output, because Idealized Eliezer is a simplified model that only accepts input and outputs a goodness score; it’s a function IE(x): statement ⇒ goodness-score. It never outputs words, so it can’t use words like “moral” or “should” except inside its own thoughts. It might (but need not) use those words in its own thoughts, but if it does, then those words will mean “what I am eventually going to output”, in which case thinking “X is moral” while evaluating X is equivalent to a return statement, and asking whether some other statement Y is moral while evaluating X is equivalent to a recursive call.
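To make the control-flow analogy concrete, here is a minimal Python sketch. It is my own toy, not anything specified in the original discussion: the particular statement forms and scores are invented placeholders, and the only point is that thinking “X is moral” behaves like a return statement while asking about some other statement behaves like a recursive call.

```python
# Toy rendering of IE(x): statement -> goodness-score.
# The statement forms and scores below are invented placeholders.

def IE(statement: str) -> float:
    # "Is Y moral?" asked while evaluating X is just a recursive call:
    if statement.startswith("not "):
        return -IE(statement[len("not "):])

    # "X is moral" thought while evaluating X is just a return statement:
    if statement == "kindness is good":
        return 1.0
    return 0.0

# IE("not kindness is good") == -1.0; the word "moral" never appears in any output.
```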
The circularity you think you’ve noticed is simply the observation that IE(x)=IE(x). But that is not a computation that ever returns, so IE cannot be implemented that way; if IE is recursive, the recursion must be well-founded, that is, every chain of recursive calls arising from a finite input must have finite length. This is formalized in type theory, where we can prove that a particular definition does or does not terminate.
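The termination requirement is exactly what a proof assistant checks. A minimal Lean sketch, with a made-up function standing in for IE (nothing here comes from the original exchange):

```lean
-- Well-founded recursion: the argument strictly decreases on each call,
-- so every chain of recursive calls from a finite input is finite,
-- and Lean accepts the definition by proving termination.
def score : Nat → Nat
  | 0     => 1
  | n + 1 => score n + 1

-- The "circular" reading, by contrast, is rejected outright:
--   def IE (x : Nat) : Nat := IE x
-- fails to compile, because no decreasing measure exists.
```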
This is perhaps the most promising solution (if we want to stick with Eliezer’s approach). I’m not sure it really works though. How does your IE process meta-moral arguments, for example, arguments about whether average utilitarianism or total utilitarianism is right? (Presumably BE wants IE to be influenced by those arguments in roughly the same way that BE would.) What does “right” mean to it while it’s thinking about those kinds of arguments?
It could refer to the evaluation of potential self-improvements. What the agent does is not necessarily right, and even the action with the highest goodness-score (which the agent will fail to find) is not necessarily right, because the agent could self-improve instead and compute a right-er action using its improved architecture, in which there might no longer be any goodness score at all.