I don’t quite understand the question, but “unfair” refers to the environment requiring the agent’s internals to be a particular way. I actually think it is possible for some internal requirements to be considered fair, and I discuss this in one of my draft posts. Nonetheless, this works as a first approximation.
Say you have certain information about the world and calculate the odds of the different outcomes and their utilities. For example, in the twin prisoners’ dilemma the odds of DC and CD are zero, so the choice is between DD and CC. In Newcomb’s problem the odds of getting $1,001,000 are zero, so the choice is between $1,000,000 (one-boxing) and $1,000 (two-boxing). In the Death in Damascus problem the odds of escaping Death are zero, so the choice is whether or not to spend money on travel. What would be a concrete example of a problem that is unfair to this approach?
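To make the comparison concrete, here’s a minimal Python sketch of what I mean (the function, names, and numbers are purely illustrative, not anyone’s actual formalism): each action maps to a distribution over outcomes, zero-probability outcomes drop out, and the remaining options are compared by expected utility.

```python
# Hypothetical sketch: choose the action with the highest expected utility
# after zero-probability outcomes are eliminated.

def best_action(actions):
    """actions: dict mapping action name -> list of (probability, utility) pairs."""
    def expected_utility(outcomes):
        # Zero-probability outcomes (e.g. $1,001,000 in Newcomb's problem)
        # contribute nothing, so they drop out of the comparison.
        return sum(p * u for p, u in outcomes if p > 0)
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Newcomb's problem, assuming a perfect predictor: the $1,001,000 outcome
# of two-boxing has probability zero, as does the $0 outcome of one-boxing.
newcomb = {
    "one-box": [(1.0, 1_000_000), (0.0, 0)],
    "two-box": [(0.0, 1_001_000), (1.0, 1_000)],
}
print(best_action(newcomb))  # -> "one-box"
```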
I think this comment does a better job of explaining the notion of fairness you’re trying to point at than the other wording here.
BTW, I published the draft, although fairness isn’t the main topic and only comes up towards the end.