bambi: I think this would be related to Newcomb’s Problem? Just because the future is fixed relative to your current state (or decision-making strategy, or whatever) doesn’t mean that a successful rational agent shouldn’t try to optimize its current state (or decision-making strategy) so that it comes out on the desired side of future probabilities.
It all sorts itself out in the end, of course: if you’re the kind of agent that gets paralyzed when presented with a deterministic universe, then as your consciousness moves to a different part of the configuration, you’re not going to be as successful as agents that act as if they can change the future.