There are pre-formal facts about what words should mean, or what meanings to place in the context where these words may be used. You test a possible definition against the word’s role in the story, and see if it’s apt. This makes use of facts outside any given definition, just as with the real world.
And here, it’s not even clear what the original definitions of agents should be capable of, once you step outside particular decision theories and look at the data the agents could have available to them. Open source game theory doesn’t require anything fundamentally new that a straightforward idealization of an agent wouldn’t automatically represent. It’s just that the classical decision theories discard that data in their abstraction of agents. In Newcomb’s problem, that amounts to discarding part of the problem statement, which is a strange thing to expect of a good definition of an agent that is supposed to work on the problem.
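To make “the data they could have available” concrete, here is a deliberately crude sketch (the bot names and the string check are my own invention, not anything from the literature): each strategy is just an ordinary function that is handed the opponent’s source code as extra input. Nothing exotic is needed to represent this; real open source game theory replaces the syntactic check with genuine reasoning about the program, e.g. proof search.

```python
import inspect

# Toy "open source" one-shot game: each strategy is an ordinary function,
# and it is handed the other player's source code as an extra argument.
# The source is just more data available to the agent.

def cooperate_bot(opponent_source: str) -> str:
    return "C"

def mirror_bot(opponent_source: str) -> str:
    # Cooperate iff the opponent's code looks like it cooperates.
    # (A crude syntactic check stands in for real open-source reasoning,
    # which would analyze the program rather than grep its text.)
    return "C" if 'return "C"' in opponent_source else "D"

def play(a, b):
    src_a, src_b = inspect.getsource(a), inspect.getsource(b)
    return a(src_b), b(src_a)

print(play(mirror_bot, cooperate_bot))  # ('C', 'C')
```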
Except that if you actually go and try to do the work, people’s pre-theoretic understanding of rationality doesn’t correspond to a single precise concept.
Once you step into Newcomb-type problems, it’s no longer clear how decision theory is supposed to correspond to the world. You might be tempted to say that decision theory tells you the best way to act... but it no longer does that, since it’s not that the two-boxer should have picked one box. The two-boxer was incapable of so picking, and what EDT is telling you is something more like: you should have been the sort of being who would have been a one-boxer, not that *you* should have been a one-boxer.
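A minimal sketch of how the two verdicts come apart, with illustrative numbers of my own choosing (the standard $1,000,000 / $1,000 boxes and an assumed 99%-accurate predictor): an evidential calculation conditions the prediction on the act, while a causal-style calculation holds the prediction fixed and so drops exactly the correlation the problem statement supplies.

```python
# Newcomb's problem, toy numbers: the opaque box holds $1,000,000 if the
# predictor foresaw one-boxing, else $0; the transparent box always holds $1,000.
ACCURACY = 0.99          # assumed predictor accuracy (illustrative)
MILLION, THOUSAND = 1_000_000, 1_000

def edt_value(action):
    """Evidential expected value: condition the prediction on the action."""
    p_predicted_one_box = ACCURACY if action == "one-box" else 1 - ACCURACY
    opaque = p_predicted_one_box * MILLION
    return opaque + (THOUSAND if action == "two-box" else 0)

def cdt_value(action, p_predicted_one_box):
    """Causal-style expected value: the prediction is fixed, whatever you do."""
    opaque = p_predicted_one_box * MILLION
    return opaque + (THOUSAND if action == "two-box" else 0)

for act in ("one-box", "two-box"):
    print(act, "EDT:", edt_value(act), "CDT (prediction fixed at 0.5):", cdt_value(act, 0.5))
# EDT ranks one-boxing higher; CDT ranks two-boxing higher for any fixed prediction.
```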
Different people will disagree over whether their pre-theoretic notion of rationality is one in which it is correct to say that it is rational to be a one-boxer or a two-boxer. It’s a classic example of working with an imprecisely defined concept.