I think there may be wrapper-minds with very detailed utility functions, such that whatever qualities you attribute to agents that are not wrapper-minds, the wrapper-mind's behavior will match theirs with arbitrary precision on arbitrarily many evaluation parameters. I don't think this is practical or has a serious chance of happening, but I think it's a case that might be worth considering.
Like, maybe it's very easy to build a wrapper-mind that is a very good approximation of a very non-wrapper-mind. Who knows.