I take it as an argument against making perfect decisions. If perfection is uncomputable, then any computable agent is not perfect in some way.
The question is which imperfection we want our agent to have. This might be the deep justification for choosing to scale probability by utility that I was looking for. Scaling the penalty linearly with utility corresponds to being willing to lose at most a fixed amount to muggings; scaling it superlinearly corresponds to never being fooled, at the cost of turning down some genuine offers; scaling it sublinearly corresponds to never passing up a genuine offer, at the cost of being exploitable without bound. Or something like that. The details need some work.
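A quick numerical sketch of that trichotomy (my own toy model, not anything the above commits to: the base credence p0 and the penalty exponent alpha are hypothetical names, and I'm assuming the penalized credence in an offer of utility U is p0 / U^alpha):

```python
# Toy model: penalized credence in an offer of utility U is
#   p(U) = p0 / U**alpha        (assumption, not the original claim)
# so the expected payoff of accepting is p(U) * U = p0 * U**(1 - alpha).

p0 = 1e-3  # hypothetical base credence in any given offer


def expected_payoff(utility: float, alpha: float) -> float:
    """Expected value of accepting an offer of `utility` under exponent `alpha`."""
    penalized_prob = p0 / utility**alpha
    return penalized_prob * utility


for alpha, label in [(1.0, "linear"), (2.0, "superlinear"), (0.5, "sublinear")]:
    payoffs = [expected_payoff(u, alpha) for u in (1e3, 1e6, 1e9)]
    print(f"{label:11s} (alpha={alpha}): {payoffs}")

# linear:      expected payoff is the constant p0 -- you lose at most a
#              fixed amount to any mugging, however large the claim.
# superlinear: expected payoff -> 0 as U grows -- you can't be mugged,
#              but you also turn down arbitrarily large genuine offers.
# sublinear:   expected payoff grows without bound -- you never pass up
#              a genuine offer, but a mugger can exploit you indefinitely.
```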