I could accept a world in which, for every pairing of a rational plan with a world, there is an irrational plan that does better in that world, but no plausible way for a cognitive algorithm to output that irrational plan.
We already live in that world.
(The following is not evidence, just an illustrative analogy.) Ever seen Groundhog Day? Imagine the protagonist skipping the bulk of the movie and going straight to the last day. His behavior is wall-to-wall WTF, but it's very optimal.