OK, I see what Paul probably meant. Let’s say “utility value”, not “utility function”, since that’s what we mean. I don’t think we should be talking about “running utility value”, because utility might be something given by an abstract definition, not the state of execution of any program. As I discussed in the grandparent, the distinction I’m making is between the outer AGI controlling the utility value (which it does) and the outer AGI controlling the simulated researchers who prepare the definition of the utility value (which it shouldn’t be allowed to do, for AI safety reasons). There is a map/territory distinction between the definition of the utility value prepared by the initial program and the utility value itself, which the outer AGI optimizes.
(Also, “utility function” might be confusing, especially for outsiders who are used to “utility function” meaning a mapping from world states to utility values, whereas Paul is using it to mean a parameterless computation that returns a utility value.)
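Concretely, the two senses, and the map/territory split, might be sketched like this (a rough illustration only; all the names are mine, not anything from Paul’s proposal):

```python
# Purely illustrative type sketch; the names are mine, not from Paul's proposal.
from typing import Any, Callable, Dict

WorldState = Dict[str, Any]   # placeholder for however world states get represented
Utility = float

# "Utility function" in the usual sense: a mapping from world states to utility values.
UtilityFunction = Callable[[WorldState], Utility]

# "Utility function" in the sense apparently intended here: a parameterless
# computation (or abstract definition) that denotes a single utility value.
UtilityDefinition = Callable[[], Utility]

def prepare_definition() -> UtilityDefinition:
    """Map side: the definition of the utility value, prepared by the initial
    program (the simulated researchers). The outer AGI should not be able to
    influence this preparation step."""
    def u_prime() -> Utility:
        # In the proposal this would be an abstract definition whose value
        # depends on what the outer AGI actually does; a concrete closure
        # can only gesture at that.
        return 0.0
    return u_prime

# Territory side: the utility value itself, which the outer AGI optimizes
# through its choice of actions.
u_prime = prepare_definition()
utility_value = u_prime()
```

The point of the sketch is just that the outer AGI gets to influence utility_value through what it does, but not the body of prepare_definition.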
I don’t think we should be talking about “running utility value”, because utility might be something given by an abstract definition, not the state of execution of any program.
I think Paul’s thought is that the utility definition the simulated humans come up with is not necessarily a definition of our actual values, but just something that causes the outer AGI to self-modify into an FAI; for that purpose it might be enough to define it using a programming language.
As I discussed in the grandparent, the distinction I’m making is between the outer AGI controlling the utility value (which it does) and the outer AGI controlling the simulated researchers who prepare the definition of the utility value (which it shouldn’t be allowed to do, for AI safety reasons).
I think Paul’s intuition here is that the simulated humans (or the enhanced humans and/or FAIs they build inside the simulation) may find it useful to “blur the lines”. In other words, the distinction you draw is not a fundamental one, but just a safety heuristic that the simulated researchers may decide to discard or modify once they become “powerful enough”. For example, they may decide to partially simulate the outer AGI, or otherwise try to reason about what it might do given the various definitions of U’ that the simulation might ultimately decide upon, once they understand enough theory to see how to do this in a safe way.
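Schematically, the kind of reasoning I’m imagining might look something like the following sketch (every name in it is a hypothetical placeholder, not something Paul has specified):

```python
# Schematic sketch of that "blur the lines" move; every name here is a
# hypothetical placeholder, not anything from Paul's writeup.

def predicted_outcome(outer_agi_model, u_prime_definition):
    """Use a (partial) model or simulation of the outer AGI to predict what it
    would do if it were maximizing the value given by u_prime_definition."""
    return outer_agi_model.predict_behavior(objective=u_prime_definition)

def acceptable(outcome) -> bool:
    """Whatever safety/desirability criterion the simulated researchers
    eventually trust enough to rely on."""
    return outcome.meets_safety_criteria()

def choose_definition(candidate_definitions, outer_agi_model):
    """Instead of treating the outer AGI as entirely off-limits, the simulated
    researchers evaluate candidate U' definitions by reasoning about how the
    outer AGI would respond to each, and only commit once some candidate looks
    acceptable. Presumably they would only do this after the theory tells them
    such reasoning is safe."""
    for u_prime in candidate_definitions:
        if acceptable(predicted_outcome(outer_agi_model, u_prime)):
            return u_prime
    return None  # nothing passes yet; keep working inside the simulation
```

Whether anything like predicted_outcome can be done safely is, of course, exactly the theoretical question they would have to settle first.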