It feels to me like this post is treating AIs as functions from one state of the universe to another. In a sense, anything is such a function… but I think the relevant simplification happens internally, where AIs operate more as functions from (digital) inputs to (digital) outputs. If you view an AI as a function from a digital input to a digital output, goals targeting specific configurations of the universe don't look simple at all, and I don't think decomposability over space/time/possible worlds is a criterion that would lead to something simple.