Utilitarians have a bounded concern with acting, or aspiring to act, or believing that they aspire to act, as though they have a concern with good consequences that is close to linear in the consequences.
I know this is not what you were suggesting, but this made me think of goal systems of the form “take the action that I think idealized agent X is most likely to take,” e.g. WWAIXID.
A huge problem with these goal systems is that the idealized agent will probably have very low-entropy probability distributions, while your own beliefs have very high entropy. So you’ll end up acting as if you believed with near-certainty the single most likely scenario you can think of.
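As a toy sketch of that collapse (the world probabilities, payoffs, and the idealized agent's policy here are made up for illustration, not from anything above): pushing your own high-entropy distribution over worlds through the idealized agent's world-knowing policy and then taking the mode picks the action suited to your single most likely world, whereas ordinary expected-utility maximization under the same beliefs would hedge.

```python
# Toy illustration (hypothetical numbers): imitating the *most likely* action of an
# idealized agent that knows the true world collapses onto your single most likely
# scenario, whereas expected-utility maximization under your own beliefs hedges.

# Your (high-entropy) beliefs over which world is actual.
world_probs = {"A": 0.4, "B": 0.35, "C": 0.25}

# The idealized agent knows the world, so its (low-entropy) policy picks the
# action that is best in that world.
idealized_action = {"A": "bet_on_A", "B": "bet_on_B", "C": "bet_on_C"}

# Your payoff for each (action, world) pair -- made-up numbers.
payoff = {
    ("bet_on_A", "A"): 10, ("bet_on_A", "B"): -5, ("bet_on_A", "C"): -5,
    ("bet_on_B", "A"): -5, ("bet_on_B", "B"): 10, ("bet_on_B", "C"): -5,
    ("bet_on_C", "A"): -5, ("bet_on_C", "B"): -5, ("bet_on_C", "C"): 10,
    ("hedge", "A"): 4,     ("hedge", "B"): 4,     ("hedge", "C"): 4,
}
actions = ["bet_on_A", "bet_on_B", "bet_on_C", "hedge"]

# "Take the action the idealized agent is most likely to take": push your world
# distribution through its policy and take the mode of the resulting action distribution.
action_probs = {}
for world, p in world_probs.items():
    a = idealized_action[world]
    action_probs[a] = action_probs.get(a, 0.0) + p
wwaixid_choice = max(action_probs, key=action_probs.get)

# Ordinary expected-utility maximization under your own uncertain beliefs.
def expected_utility(action):
    return sum(p * payoff[(action, w)] for w, p in world_probs.items())

eu_choice = max(actions, key=expected_utility)

print(wwaixid_choice)  # "bet_on_A": acts as if world A were near-certain
print(eu_choice)       # "hedge": the safe action wins, since there is a 60% chance A is false
```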
Another problem, of course, is that you’ll take actions that only make sense for an agent much more competent than you are. For example, AIXI would be happy to bet $1 million that it can beat Cho Chikun at Go.
In the relevant circumstances, I too might be happy to bet $1 million that AIXI can beat Cho Chikun at Go.