He could still accept superhappies or paperclippers caring about it, say.
I think he actually might intrinsically value their desires too. One can theoretically make the transition from “human” to “paperclip maximizer” one atom at a time; differences in kind are the best way for corrupted or insufficiently powerful software to think about it, but here we’re talking about logical impurity, which would contaminate even at sub-homeopathic doses.
Well, in that case it’s new information, and we can conclude that either his utility function DOES include things in those universes that he claims cannot exist, or it’s not physically possible to construct an agent that would care about them.
I would say “care dependent upon them.” An agent could care dependent upon them without caring about them; the converse is not true.
That’s even wider, although probably only by a very small amount. Thanks!