Obviously you can, and if you call that new idealization an X-agent (or, more likely, redefine the word "rationality" for that situation), then there may be a fact of the matter about how an X-agent will behave in such situations. What we can't do is assume that there is a fact of the matter about what a rational agent will do that outstrips the definition.
As such, it doesn't make sense to say that CDT or TDT or whatever is right before introducing a specific idealization relative to which we can prove that it gives the correct answer. That idealization has to come first, and it has to convince the reader that it is a good idealization.
But the rhetoric around these decision theories misleadingly tries to convince us that there is some pre-existing notion of a rational agent and that they have discovered that XDT gives the correct answer for that notion. That's what makes people view these claims as interesting. If the claim were nothing more than "here is one way you can make decisions corresponding to the following assumptions", it would be far more obscure and less interesting.
There are pre-formal facts about what words should mean, or about which meanings to pick out in the contexts where these words get used. You test a possible definition against the word's role in the story and see if it's apt. This makes use of facts outside any given definition, just as with the real world.
And here it's not even clear what the original definitions of agents should be capable of, if you step outside particular decision theories and look at the data agents could have available to them. Open-source game theory doesn't require anything fundamentally new that a straightforward idealization of an agent won't automatically represent; it's just that the classical decision theories discard that data in their abstraction of agents. In Newcomb's problem, this amounts to discarding part of the problem statement, which is a strange thing to expect of a good definition of an agent that needs to work on the problem.
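To make the "discarded data" point concrete, here is a minimal sketch; the payoff numbers and function names are my own illustration, not anything from the discussion above. The full Newcomb setup includes the fact that the prediction is produced by modelling the agent's own policy, while a classical abstraction treats the prediction as an exogenous fact and so drops exactly that part of the problem statement.

```python
# Minimal sketch; payoffs and names are illustrative assumptions.

def newcomb_payoff(action, prediction):
    """Payoffs for Newcomb's problem: the opaque box holds $1M iff the
    predictor predicted one-boxing; the transparent box always holds $1k."""
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000
    return opaque if action == "one-box" else opaque + transparent

def full_problem(policy):
    """Full problem statement: the prediction comes from running (a model of)
    the agent's own policy, so it is correlated with the agent's action."""
    prediction = policy()   # the predictor models the agent
    action = policy()       # the agent then acts the same way
    return newcomb_payoff(action, prediction)

def classical_abstraction(action, fixed_prediction):
    """Classical abstraction: the prediction is an exogenous fact, independent
    of the action being evaluated -- the correlation has been discarded."""
    return newcomb_payoff(action, fixed_prediction)

print(full_problem(lambda: "one-box"))   # 1000000
print(full_problem(lambda: "two-box"))   # 1000

# Holding the prediction fixed, two-boxing dominates -- the abstraction has
# thrown away the link between policy and prediction.
for p in ("one-box", "two-box"):
    print(p, classical_abstraction("one-box", p), classical_abstraction("two-box", p))
```

Under the full statement, one-boxing policies simply end up with more money; under the abstraction, two-boxing dominates for every fixed prediction, which is where the familiar disagreement comes from.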
Except that, if you actually go and try to do the work, people's pre-theoretic understanding of rationality doesn't correspond to a single precise concept.
Once you step into Newcomb-type problems, it's no longer clear how decision theory is supposed to correspond to the world. You might be tempted to say that decision theory tells you the best way to act... but it no longer does that, since it's not that the two-boxer should have picked one box. The two-boxer was incapable of so picking, and what EDT is telling you is something more like: you should have been the sort of being who would have been a one-boxer, not that *you* should have been a one-boxer.
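For concreteness, here is a hedged sketch of the two calculations being contrasted; the 0.99 accuracy figure and the dollar amounts are illustrative assumptions, not part of the original discussion. EDT lets the probability that the opaque box is filled depend on which action you condition on, while CDT holds that probability fixed.

```python
# Hedged sketch; the accuracy figure and payoffs are assumptions for illustration.

ACCURACY = 0.99  # assumed predictor accuracy

# (action, opaque box filled?) -> dollars
PAYOFF = {
    ("one-box", True): 1_000_000, ("one-box", False): 0,
    ("two-box", True): 1_001_000, ("two-box", False): 1_000,
}

def edt_value(action):
    """EDT-style evaluation: P(box filled | action) tracks the predictor's accuracy."""
    p_filled = ACCURACY if action == "one-box" else 1 - ACCURACY
    return p_filled * PAYOFF[(action, True)] + (1 - p_filled) * PAYOFF[(action, False)]

def cdt_value(action, p_filled):
    """CDT-style evaluation: the box's contents are causally fixed before you act,
    so the same p_filled is used whichever action is being evaluated."""
    return p_filled * PAYOFF[(action, True)] + (1 - p_filled) * PAYOFF[(action, False)]

print(edt_value("one-box"), edt_value("two-box"))            # 990000.0 11000.0
print(cdt_value("one-box", 0.5), cdt_value("two-box", 0.5))  # 500000.0 501000.0
```

The EDT numbers come out in favour of one-boxing, but, as above, what they really reward is having been the kind of agent the predictor would model as a one-boxer.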
Different people will disagree over whether their pre-theoretic notion of rationality is one on which it is correct to say that it is rational to be a one-boxer or a two-boxer. It's a classic example of working with an imprecisely defined concept.