How can a predictor be unfair to an algorithm that enumerates possible worlds and picks the best one, without any “decision theory” whatsoever? Unless by “unfair” you mean something like “you get a coin that always lands tails, but heads wins, while everyone else gets a fair coin”.
I don’t quite understand the question, but “unfair” refers to the environment requiring the agent’s internals to be a particular way. I actually think it is possible to allow some internal requirements to be considered fair, and I discuss this in one of my draft posts. Nonetheless, this works as a first approximation.
Say you have certain information about the world and calculate the odds of the different outcomes and their utilities. For example, in the twin prisoner’s dilemma the odds of DC and CD are zero, so the choice is between DD and CC. In Newcomb’s problem the odds of getting $1,001,000 are zero, so the choice is between $1,000,000 (one-box) and $1,000 (two-box). In the Death in Damascus problem the odds of escaping Death are zero, so the choice is whether or not to spend money on travel. What would be a concrete example of a problem that is unfair to this approach?
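Here is a rough sketch of what I mean (my own illustration, not a formal algorithm): list the outcomes each action can lead to with their odds and utilities, zero out the outcomes the predictor makes impossible, and take the best remaining action.

```python
# Enumerate candidate (probability, utility) branches per action and pick the
# action with the highest expected utility. Outcomes the predictor rules out
# simply get probability zero.

def best_choice(outcomes):
    """outcomes: dict mapping action -> list of (probability, utility) pairs."""
    eu = {a: sum(p * u for p, u in branches) for a, branches in outcomes.items()}
    return max(eu, key=eu.get)

# Newcomb's problem with a perfect predictor: the $1,001,000 and $0 branches
# are impossible, so the comparison is $1,000,000 vs $1,000.
newcomb = {
    "one-box": [(1.0, 1_000_000), (0.0, 0)],
    "two-box": [(0.0, 1_001_000), (1.0, 1_000)],
}
print(best_choice(newcomb))  # -> "one-box"
```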
It’s impossible to enumerate possible worlds and pick the best one without a decision theory, because your decision process gives the same output in every possible world where you have a given epistemic state. We obviously need counterfactuals to make decisions, and the different decision theories can be seen as different theories about how counterfactuals work.
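To make this concrete (a toy illustration with made-up numbers: a 99%-accurate predictor and a 50/50 prior over the box’s contents), the same enumerate-and-pick-the-best routine recommends opposite actions in Newcomb’s problem depending on whether the probability table is filled in evidentially or causally:

```python
# The same "enumerate and pick the best" routine, fed two different tables of
# counterfactual probabilities for Newcomb's problem with a 99%-accurate predictor.

def best_choice(outcomes):
    eu = {a: sum(p * u for p, u in branches) for a, branches in outcomes.items()}
    return max(eu, key=eu.get)

# Evidential-style counterfactuals: conditioning on the action shifts your
# credence about what the predictor did.
evidential = {
    "one-box": [(0.99, 1_000_000), (0.01, 0)],
    "two-box": [(0.01, 1_001_000), (0.99, 1_000)],
}

# Causal-style counterfactuals: the box is already filled, so (on these assumed
# numbers) the action can't change the 50/50 prior over its contents.
causal = {
    "one-box": [(0.5, 1_000_000), (0.5, 0)],
    "two-box": [(0.5, 1_001_000), (0.5, 1_000)],
}

print(best_choice(evidential))  # -> "one-box"
print(best_choice(causal))      # -> "two-box"
```

The enumeration step is identical in both cases; all the disagreement lives in how the table was produced, and that is exactly the theory of counterfactuals.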
I think this comment does a better job of explaining the notion of fairness you’re trying to point at than the other wording here.
BTW, I published the draft, although fairness isn’t the main topic and only comes up towards the end.