Do you expect learned ML systems to be updateless?
It seems plausible to me that the updatelessness of agents is just as “disconnected from the reality” of actual systems as EU maximization is. Would you disagree?
No, at least probably not at the time that we lose all control.
However, I expect that systems that are self-transparent and can easily self-modify might quickly converge to reflective stability (and thus updatelessness). They might not, but I think the same arguments that might make you think they would develop a utility function can also be used to argue that they would develop updatelessness (and thus possibly also not develop a utility function).