The claim that superintelligences will more closely approximate rational utilitarian agents than current organisms rests on the expectation that they will be more rational, face fewer resource constraints, and be less prone to failure modes that cause them to pointlessly burn through their own resources. They will improve in these respects as time passes. Of course they will still use heuristics—nobody claimed otherwise.
I was referring to the single-minded, focussed utility maximizer that Eliezer often uses in his discussions about AI.
This still sounds needlessly derogatory. Paper-clip maximisers simply have a dumb utility function, that’s all. An expected utility maximiser is not necessarily “single-minded”: e.g. it may be able to weigh many things at once.
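To make that last point concrete, here is a minimal sketch (purely hypothetical; the actions, goals, probabilities, and weights are all made up for illustration) of an expected utility maximiser whose single utility function trades off several goals at once, rather than fixating on one:

```python
# Hypothetical sketch: an expected-utility maximiser whose utility
# function weighs several goals at once. All names and numbers here
# are invented for illustration only.

# Possible outcomes of each action, with made-up scores for three
# separate goals the agent cares about.
OUTCOMES = {
    "build_factory": [
        # (probability, paperclips, energy_reserve, self_repair)
        (0.7, 100, -20, 0),
        (0.3, 10, -5, 0),
    ],
    "gather_resources": [
        (0.9, 0, 30, 5),
        (0.1, 0, 0, 0),
    ],
}

# Weights over the goals: the agent is not "single-minded"; it trades
# off several concerns inside one utility function.
WEIGHTS = (1.0, 0.5, 2.0)


def utility(scores):
    """Weighted sum of the goal scores for one outcome."""
    return sum(w * s for w, s in zip(WEIGHTS, scores))


def expected_utility(action):
    """Probability-weighted utility over the action's possible outcomes."""
    return sum(p * utility(scores) for p, *scores in OUTCOMES[action])


best = max(OUTCOMES, key=expected_utility)
print(best, {a: round(expected_utility(a), 2) for a in OUTCOMES})
```

The point of the sketch is only that “expected utility maximiser” says nothing about how many concerns the utility function encodes; single-mindedness is a property of a particular (dumb) utility function, not of the maximisation framework itself.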
Optimisation is key to understanding intelligence. Criticising optimisers is criticising all intelligent agents. I don’t see much point in doing that.