Unknown: your last question highlights the problem with your reasoning. It’s idle to ask whether I’d go and jump off a cliff if I found my future were determined. What does that question even mean?
Put a different way, why should we ask an “ought” question about events that are determined? If A will do X whether or not it is the case that a rational person will do X, why do we care whether or not it is the case that a rational person will do X? I submit that we care about rationality because we believe it’ll give us traction on our problem of deciding what to do. So assuming fatalism (which is what we must do if the AI knows what we’re going to do, perfectly, in advance) demotivates rationality.
Here’s the ultimate problem: our intuitions about these sorts of questions don’t work, because they’re fundamentally rooted in our self-understanding as agents. It’s really, really hard for us to say sensible things about what it might mean to make a “decision” in a deterministic universe, or to understand what that implies. That’s why Newcomb’s problem is a problem—because we have normative principles of rationality that make sense only when we assume that it matters whether or not we follow them, and we don’t really know what it would mean to matter without causal leverage.
(There’s a reason free will is one of Kant’s antinomies of reason. I’ve been meaning to write a post about transcendental arguments and the limits of rationality for a while now… it’ll happen one of these days. But in a nutshell… I just don’t think our brains work when it comes to comprehending what a deterministic universe looks like on any level other than just solving equations. And note that this might make evolutionary sense—a creature that gets the best results through a [determined] causal chain that includes rationality is going to be selected for the beliefs that make it easiest to use rationality, including the belief that it makes a difference.)