Another good thing about this paper is that it claims to prove that a utility maximiser can mimic any computable agent. That is an idea I have been banging on about for years, whenever people claim that the utility-maximiser framework is no good, or that it can't describe humans, or whatever. The proof looks essentially the same as the one I gave.
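For readers who want the gist, here is a minimal sketch of the usual construction for a deterministic agent policy $\pi$ (this is the standard argument, and may differ in details from the paper's exact proof): define a utility function over histories that rewards only histories consistent with $\pi$,

$$
U(h) \;=\;
\begin{cases}
1 & \text{if } a_t = \pi(h_{<t}) \text{ for every action } a_t \text{ in } h,\\[2pt]
0 & \text{otherwise.}
\end{cases}
$$

An agent maximising expected $U$ can only achieve utility $1$ by acting exactly as $\pi$ would at every step, so the utility maximiser reproduces the original agent's behaviour.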
Unless there are mistakes, this looks like a useful place to refer doubters to in the future.