It seems to me that any agent unable to solve this problem would be considerably less intelligent than a human.
It does seem unlikely that an “expected utility maximizer” that reasons this way would manage to build interstellar spaceships, but that insight doesn’t automatically help with building an agent that is immune to this and similar problems.