UDT shows how an agent might be able to care about something other than an externally provided reward, namely how a computation, or a set of computations, turns out. It’s conjectured that arbitrary goals, such as “maximize the number of paperclips across this distribution of possible worlds” (and our actual goals, whatever they may turn out to be), can be translated into such preferences over computations and then programmed into an AI, which will then take actions that we’d consider reasonable in pursuit of such goals.
(Note that this is a simplification that ignores issues like preferences over uncomputable worlds, but hopefully it gives you an idea of what the “step” consists of.)
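To make “preferences over computations” a bit more concrete, here is a minimal toy sketch (not from the original text, and far cruder than real UDT): possible worlds are modeled as programs that take the agent’s policy as input, and the goal is just a utility function over how those world-computations turn out. The names `world_a`, `world_b`, and `paperclip_count` are illustrative assumptions; an actual UDT agent would reason about logical consequences of its own source code rather than enumerate a tiny explicit policy set.

```python
# Toy sketch: an agent whose "goal" is a utility over how world-computations
# turn out, evaluated across a distribution of possible worlds.
# All names here are hypothetical illustrations, not an established API.

def paperclip_count(outcome):
    # Utility over the result of a world-computation (here: paperclips made).
    return outcome["paperclips"]

def world_a(policy):
    # A world where the agent's one-shot choice directly produces paperclips.
    return {"paperclips": 10 if policy("factory") == "build" else 1}

def world_b(policy):
    # A world where building is wasteful and waiting does better.
    return {"paperclips": 2 if policy("factory") == "build" else 5}

# Prior over possible worlds (each world is a computation taking the policy).
world_distribution = [(0.7, world_a), (0.3, world_b)]

def expected_utility(policy):
    # Score a policy by how the whole distribution of worlds turns out under it.
    return sum(p * paperclip_count(world(policy))
               for p, world in world_distribution)

# Tiny policy space: a mapping from the single observation to an action.
policies = [lambda obs, a=a: a for a in ["build", "wait"]]

best = max(policies, key=expected_utility)
print(best("factory"), expected_utility(best))  # -> build 7.6
```

The point of the sketch is only that the “reward” never appears as an external signal: the agent’s preferences are stated directly over the outcomes of the world-computations themselves.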