I would actually be very curious to hear how ‘utilitarianism’ could be rescued from this. Any ideas?
I don’t believe direct utilitarianism works as a foundation. Declaring that intelligence is about maximizing ‘utility’ just trades one thing (intelligence) that has not been reduced to elementary operations, but which we at least have good reason to believe is reducible (we are intelligent, and the laws of physics are, to the relevant approximation, computable), for something (“utility”) that not only hasn’t been shown to be reducible, but for which we have no good reason to think it is reducible or works on reductionist models. Observe how there is suddenly a problem with the utility of a life once I consider a mind upload simulated in a very straightforward way; observe how the number of paperclips in the universe is impossible, or incredibly difficult, to define as a mathematical function.
edit: Note: a model-based, utility-based agent does not have a real-world utility function. As such, no matter how awesomely powerful the solver it uses to find maxima of mathematical functions, it won’t ever care if its output gets disconnected from the actuators, unless that condition was explicitly included in the model; furthermore, it will break itself if the model includes the agent itself and the agent is to modify the model, once again no matter how powerful its solver is. The utility is defined within a very specific, non-reductionist model where, e.g., a paperclip is a high-level object, and ‘improving’ the model (e.g. finding out that a paperclip is in fact made of atoms) breaks the utility measurement: it was never defined how to recognize when those atoms, quarks, or whatever novel physics the intelligence came up with constitute a paperclip. This is not a deficiency when it comes to solving practical problems, other than ‘how do we destroy mankind by accident’.
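To make that last point concrete, here is a minimal Python sketch (all names here, such as HighLevelModel, AtomicModel, and utility, are invented for illustration, not taken from any real system): the utility function is defined only over the model’s own high-level ‘paperclip’ symbol, so once the model is ‘improved’ to track atoms instead, the utility measurement is simply undefined.

```python
# Hypothetical sketch: a model-based utility function is defined over the
# model's high-level objects, not over the real world.

from dataclasses import dataclass, field

@dataclass
class HighLevelModel:
    """World model in which 'paperclip' is a primitive, high-level object."""
    paperclips: int = 0

def utility(model: HighLevelModel) -> float:
    # Utility is a function of the model's symbols and nothing more.
    return float(model.paperclips)

@dataclass
class AtomicModel:
    """An 'improved' model that only tracks atoms; 'paperclip' is no longer a primitive."""
    atom_positions: list = field(default_factory=list)

refined = AtomicModel(atom_positions=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)])

# The original utility definition says nothing about which configurations of
# atoms count as a paperclip, so applying it to the refined model just fails.
try:
    print(utility(refined))
except AttributeError as e:
    print("utility is not defined over the refined model:", e)
```

Nothing in the sketch, however powerful a solver you bolt onto it, tells the agent how to carry its ‘paperclip’ concept over into the finer-grained model.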