But you need at least a conceptual way to tell signal from noise. Maybe an analogy will help: do you also think that there’s an ideal Platonic market price that gets tainted by real-world “noise”?
I don’t understand market prices well enough to make this analogy. I don’t propose solutions; I merely say that treating noise as part of the signal ignores the fact that it’s noise. There is even a strong human intuition that errors exist. If I understand that I made an error, I consider it preferable that my in-error responses not be counted as correct by definition.
The concept of a correct answer is distinct from the concept of the answer actually given. When we ask questions about preference, we are interested in correct answers, not in answers actually given. Furthermore, we are interested in correct answers to questions that can neither be physically asked of nor answered by a human.
Formalizing the sense of correct answers is a big chunk of FAI, while formalizing the sense of actual answers, or even of counterfactual actual answers, is trivial if you start from physics. It seems clear that these concepts are quite different, and the (available) formalization of the second doesn’t work for the first. Furthermore, “actual answers” would also need to be interfaced with a tool that restates complete states-of-the-world, with all the quarks and stuff, as human-readable questions.
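To make the asymmetry concrete, here is a minimal sketch assuming a toy deterministic world model; every name in it (`World`, `actual_answer`, `correct_answer`) is a hypothetical illustration for this comment, not anyone’s proposed formalism:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class World:
    """A complete physical state (stand-in for 'all the quarks and stuff')."""
    state: tuple

def actual_answer(world: World, question: str) -> str:
    """The (counterfactual) actual answer: run physics forward on a world
    in which the question was posed, and read off the human's reply.
    Trivial to define given the physics, even when the reply is in error."""
    # Toy dynamics: the reply is a deterministic function of state + question.
    return f"reply-{hash((world.state, question)) % 2}"

def correct_answer(world: World, question: str) -> str:
    """The correct answer: NOT reducible to actual_answer(). Formalizing
    this notion is the hard part referred to above as a big chunk of FAI."""
    raise NotImplementedError("no physics-level definition is available")
```

The point of the sketch is only that `actual_answer` has an obvious (if useless) definition once the physics is fixed, whereas `correct_answer` has no such reduction; substituting the first for the second bakes the errors in by definition.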