14 years later, I notice that Eliezer missed the other reason why evolution didn’t design organisms with fitness maximization as an explicit motivation. It’s not just that evolution can’t plan well enough to get there; such a motivation would also be at a disadvantage compared to a set of heuristics: higher computational cost. A hypothetical mind concerned only with fitness maximization would probably have to rediscover a bunch of heuristics like “excessive pain is bad” just to survive in practice. (At that point, it would indeed have an advantage in that it could avoid many of the failure modes of heuristics.)
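To make the cost asymmetry concrete, here is a minimal toy sketch in Python (the actions, the world model, and the numbers are all hypothetical, purely for illustration, not anything from the post): a heuristic agent pays a constant number of rule checks per decision, while an explicit fitness maximizer that plans d steps ahead over k actions pays on the order of k^d simulated rollouts per decision.

```python
import itertools

ACTIONS = ["eat", "flee", "rest", "explore"]

# Heuristic agent: a few hard-coded stimulus-response rules.
# Cost per decision: O(number of rules), effectively constant.
def heuristic_policy(state):
    if state["pain"] > 0.8:   # "excessive pain is bad"
        return "flee"
    if state["hunger"] > 0.5:
        return "eat"
    return "explore"

# Crude world model and fitness function (handed to the planner for free;
# a real organism would have to learn these too).
def simulate(state, action):
    s = dict(state)
    if action == "eat":
        s["hunger"] = max(0.0, s["hunger"] - 0.3)
    elif action == "flee":
        s["pain"] = max(0.0, s["pain"] - 0.4)
    elif action == "rest":
        s["pain"] = max(0.0, s["pain"] - 0.1)
    return s

def fitness(state):
    return -state["hunger"] - 2.0 * state["pain"]

# Explicit maximizer: simulates every action sequence `horizon` steps deep
# and returns the first action of the best rollout.
# Cost per decision: O(len(ACTIONS) ** horizon), exponential in depth.
def planner_policy(state, horizon=4):
    best_first, best_value = None, float("-inf")
    for plan in itertools.product(ACTIONS, repeat=horizon):
        s = dict(state)
        for action in plan:
            s = simulate(s, action)
        value = fitness(s)
        if value > best_value:
            best_first, best_value = plan[0], value
    return best_first

state = {"hunger": 0.7, "pain": 0.2}
print(heuristic_policy(state))  # a couple of rule checks
print(planner_policy(state))    # 4**4 = 256 simulated rollouts
```

Even this crude planner gets a world model and a fitness function for free; an organism running explicit maximization would have to acquire those as well, which only widens the gap.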
Sort of covered here (“along with”):
Reading the post, I didn’t understand this:

1. Could evolution really build a consequentialist? The post itself kind of contradicts that.
2. Could a consequentialist really foresee all consequences without having any drives (such as curiosity)?

I think your critique about computational complexity is related to the first point.