I don’t see how going from yes/no questions to simulated games helps. People will still exhibit preference reversals in their actions, or just melt down.
I wasn’t proposing a solution (and I wasn’t talking about simulating humans playing a game—I was referring to a formal object). The strategies that need to be compared are too big for a human to comprehend—that’s one of the problems with defining preference by asking questions (or by simulating humans playing games). When you construct questions about actual consequences in the world, you are simplifying, and through this simplification you lose precision. That a person can make mistakes, can be wrong, is a further step through which this process loses the original question, and a way in which you can get incoherent responses: that’s noise. The presence of noise doesn’t imply that noise is inherent in the signal, and it doesn’t make sense to define the signal as signal plus noise.
But you need at least a conceptual way to tell signal from noise. Maybe an analogy will help: do you also think that there’s an ideal Platonic market price that gets tainted by real-world “noise”?
I don’t understand market price enough to make this analogy. I don’t propose solutions, I merely say that considering noise as part of signal ignores the fact that it’s noise. There is even a strong human intuition that there are errors. If I understand that I made an error, I consider it preferable that my responses-in-error won’t be considered correct by definition.
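To make the signal/noise distinction concrete, here is a minimal sketch (the names, the error rate, and the error model are all hypothetical, invented purely for illustration): a respondent holds a fixed underlying preference but misreports it some fraction of the time. Defining "the preference" as whatever is actually answered would count the erroneous responses as correct by definition; treating them as noise lets even a trivial estimator recover the underlying signal.

```python
import random

random.seed(0)

# Stipulated underlying preference: the "correct answer" exists
# independently of any particular noisy response.
TRUE_PREFERENCE = "A"

def noisy_answer(error_rate=0.2):
    """A respondent who holds TRUE_PREFERENCE but errs some of the time."""
    if random.random() < error_rate:
        return "B"  # an error, not a different preference
    return TRUE_PREFERENCE

# Collect many actual answers; about 20% are errors.
answers = [noisy_answer() for _ in range(1000)]

# Treating the errors as noise, a majority vote recovers the signal.
# Defining the preference as "the answers actually given" would instead
# make the erroneous "B" responses correct by definition.
majority = max(set(answers), key=answers.count)
print(majority)
```

The point of the sketch is only that "answer actually given" and "correct answer" come apart as soon as there is any error model at all; nothing here depends on majority voting being the right estimator.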
The concept of the correct answer is distinct from the concept of the answer actually given. When we ask questions about preference, we are interested in correct answers, not in answers actually given. Furthermore, we are interested in correct answers to questions that cannot physically be asked of, or answered by, a human.
Formalizing the sense of correct answers is a big chunk of FAI, while formalizing the sense of actual answers, or even of counterfactual actual answers, is trivial if you start from physics. It seems clear that these concepts are quite different, and that the (available) formalization of the second doesn’t work for the first. Furthermore, “actual answers” would also need to be interfaced with a tool that restates complete states of the world, quarks and all, as human-readable questions.