Say you are trying to figure out what the mass of an electron is. As you develop your experimental techniques, there will be better or worse approximate answers along the way. It makes sense to characterize the approximations to the mass you seek to measure as more or less accurate, and to characterize someone else’s wild guesses about this value as correct or not correct at all.
On the other hand, it doesn’t make sense to similarly characterize the actual mass of an electron. The actual mass of an electron can’t be correct or incorrect, can’t be more or less well-calibrated—talking this way would indicate a conceptual confusion.
When I talked about prior or preference in the above comments, I meant the actual facts, not particular approximations to those facts: the concepts that we might want to approximate, not the approximations themselves. Characterizing these facts as correct or incorrect doesn’t make sense for similar reasons.
Furthermore, since they are fixed elements of the ideal decision-making algorithm, it doesn’t make sense to ascribe preference to them (more or less useful, more or less preferable). This is a bit more subtle than with the example of the mass of an electron, since in that case we had a factual estimation process, while with decision-making we also have a moral estimation process. With factual estimation, the fact that we are approximating isn’t itself an approximation, and so can’t be more or less accurate. With moral estimation, we are approximating the true value of a decision (event), and the actual value of a decision (event) can’t be too high or too low.
I follow you up until you conclude that priors cannot be correct or incorrect. An agent with more accurate priors will converge toward the actual answer more quickly—I’ll grant that’s not a binary distinction, but it’s a useful one.
If you are an agent with a “less accurate prior”, then you won’t be able to recognize a “more accurate prior” as a better one. You are trying to look at the situation from the outside, but that isn’t possible when what we are discussing is your own decision-making algorithm.
If I’m blind, I won’t be able to recognize a sighted person by sight. That doesn’t change the fact that the sighted person can see better than the blind person.
There is no God’s view to define the truth, and no Faith to attain it. You only get to use your own eyes. If I predict a fair coin will come up “heads”, and you predict it’ll come up “tails”, and it does come up “tails”, who was closer to the truth? The truth of such a prediction lies not in how well it aligns with the outcome, but in how well it takes into account the available information, how well it processes the state of uncertainty. What should be believed given the available information and what is actually true are two separate questions, and the latter question is never the one you get to ask, as you never have all the information, only some state of uncertainty. Reality is not transparent: it’s not possible to glimpse the hidden truth, only to cope with uncertainty. Confuse the two at your own peril.
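To make the coin example concrete, here is a toy sketch (the 0.9/0.1 forecasts and the log scoring rule are my own illustrative choices, not anything claimed above). It scores each forecast two ways: against the single outcome that happened to occur, and against what the available information, a fair coin, actually supports.

```python
import math

# Illustrative sketch: two forecasts about a fair coin, scored two ways.
# Forecaster A reports 0.9 on "heads"; forecaster B reports 0.9 on "tails"
# (i.e. 0.1 on "heads"). Both know only that the coin is fair.

P_FAIR = 0.5  # what the available information supports for "heads"
forecasts = {"A (bets on heads)": 0.9, "B (bets on tails)": 0.1}

def log_score(p_heads, heads):
    """Log score of a forecast given a realized outcome (higher is better)."""
    return math.log(p_heads if heads else 1.0 - p_heads)

for name, p in forecasts.items():
    # Ex post: judged against the single outcome that occurred ("tails").
    ex_post = log_score(p, heads=False)
    # Ex ante: expected score under the fair-coin distribution, i.e. under
    # what the available information actually says.
    ex_ante = P_FAIR * log_score(p, True) + (1 - P_FAIR) * log_score(p, False)
    print(f"{name}: ex post {ex_post:+.3f}, ex ante {ex_ante:+.3f}")

# Ex post, B looks far better because "tails" happened to come up; ex ante,
# the two forecasts are exactly as good as each other, and both worse than
# simply reporting 0.5 (score log 0.5 = -0.693). That is the sense in which
# neither prediction was closer to the truth than the other.
```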
I’m so confused, I can’t even tell if we disagree. What I am thinking of is essentially the argument in Eliezer Yudkowsky’s “Inductive Bias”:
The more inductive bias you have, the faster you learn to predict the future, but only if your inductive bias does in fact concentrate more probability into sequences of observations that actually occur. If your inductive bias concentrates probability into sequences that don’t occur, this diverts probability mass from sequences that do occur, and you will learn more slowly, or not learn at all, or even—if you are unlucky enough—learn in the wrong direction.
Inductive biases can be probabilistically correct or probabilistically incorrect, and if they are correct, it is good to have as much of them as possible, and if they are incorrect, you are left worse off than if you had no inductive bias at all. Which is to say that inductive biases are like any other kind of belief; the true ones are good for you, the bad ones are worse than nothing. In contrast, statistical bias is always bad, period—you can trade it off against other ills, but it’s never a good thing for itself. Statistical bias is a systematic direction in errors; inductive bias is a systematic direction in belief revisions.
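A toy Beta-Bernoulli simulation makes the quoted point concrete (this is my own illustrative sketch; the true bias of 0.8, the particular priors, and the 50-flip horizon are arbitrary choices). Three learners differ only in where their prior concentrates probability, and are compared on how closely their posterior mean tracks the true frequency.

```python
import random

random.seed(0)

# Illustrative sketch: three Beta-Bernoulli learners estimating the
# frequency of a coin whose true bias is 0.8. They differ only in where
# their prior concentrates probability (all numbers are arbitrary choices).
TRUE_P = 0.8
priors = {
    "bias toward what occurs   Beta(16, 4)": (16.0, 4.0),   # centered near 0.8
    "no inductive bias         Beta(1, 1) ": (1.0, 1.0),    # flat
    "bias toward what doesn't  Beta(4, 16)": (4.0, 16.0),   # centered near 0.2
}

flips = [1 if random.random() < TRUE_P else 0 for _ in range(50)]

for name, (a, b) in priors.items():
    errors = []
    for x in flips:
        a, b = a + x, b + (1 - x)                  # conjugate Bayesian update
        errors.append(abs(a / (a + b) - TRUE_P))   # error of the posterior mean
    print(f"{name}: mean abs error over 50 flips = {sum(errors) / len(errors):.3f}")

# Typical result: the well-placed prior tracks 0.8 almost immediately, the
# flat prior takes longer, and the misplaced prior spends most of the run far
# from the truth. More inductive bias helps exactly when it concentrates
# probability on the sequences that actually occur.
```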
If you can inspect and analyze your own prior (using your own prior, of course), you can notice that your prior is not reflectively consistent: that you can come up with other priors which your prior expects to get better results than it does itself. Humans, who are not ideal Bayesians but have a concept of ideal Bayesians, have actually done this.
(Though reflective consistency does not guarantee effectiveness. Some priors are too ineffective to notice they are ineffective.)
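The idea of a prior “expecting” another prior to get better results can be made concrete with a small calculation (my own sketch; the three candidate coin biases, the weights, and the log-score grading are arbitrary choices). A prior grades each candidate by the log-probability the candidate assigns to data, averaged over the data distribution the grading prior itself believes in.

```python
from itertools import product
from math import log

# Illustrative sketch: a prior "grades" candidate priors by how much
# log-probability each assigns to data, averaged over the data distribution
# the grading prior itself believes in. (Hypotheses, weights, and the choice
# of log score are all arbitrary assumptions made for the example.)

BIASES = [0.2, 0.5, 0.8]  # hypotheses: possible biases of a coin

def marginal(prior, seq):
    """Probability a prior assigns to a particular observation sequence."""
    heads = sum(seq)
    return sum(w * b ** heads * (1 - b) ** (len(seq) - heads)
               for w, b in zip(prior, BIASES))

def expected_score(grading_prior, candidate_prior, n=4):
    """Expected log-probability that candidate_prior assigns to length-n data,
    with the expectation taken under grading_prior's own beliefs."""
    return sum(marginal(grading_prior, seq) * log(marginal(candidate_prior, seq))
               for seq in product([0, 1], repeat=n))

my_prior = [0.2, 0.5, 0.3]
candidates = {
    "itself":        [0.2, 0.5, 0.3],
    "uniform":       [1 / 3, 1 / 3, 1 / 3],
    "heads-leaning": [0.1, 0.2, 0.7],
}

for name, q in candidates.items():
    print(f"{name}: expected log score {expected_score(my_prior, q):.4f}")

# By Gibbs' inequality the grading prior always expects itself to score at
# least as well as any rival, so an ideal Bayesian prior is reflectively
# consistent in this sense. A prior that does expect some other prior to
# get better results is, in that same sense, not reflectively consistent.
```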
This might be a process of figuring out what your prior is, but the approximations along the way are not your prior (they might be some priors).

I see three priors to track here:

1. The prior I would counterfactually have had if I were not able to make this comparison.
2. The ideal prior I am comparing my approximation of prior (1) to.
3. My actual prior resulting from this comparison, reflecting that I try to implement prior (2), but cannot always compute/internalize it.

I have prior (3), but I believe prior (2) is better.

If you have a concept of prior (2), and wish to get better at acting according to it over time, then (2) is your real prior. It is what you (try to) use to make your decisions. (3) is just a tool you employ in the meantime, and you may pick a better tool, judging by (2). I don’t know what (1) means (or what (2) means when (1) is realized).
(1) is the prior I would have if I had never inspected and analyzed my prior. It is a path not taken from prior (3). The point of introducing it was to make clear that I really believe (2) is better than (3), as opposed to merely that (2) is better than (1) (which I also believe, but that isn’t the point).
Does “your prior” refer to (A) the prior you identify with, or (B) the prior that describes your actual beliefs as you process evidence, or something else?
If (A), I don’t understand:
This might be a process of figuring out what your prior is, but the approximations along the way are not your prior
If (B), I don’t understand:
If you have a concept of prior (2), and wish to get better at acting according to it over time, then (2) is your real prior.
I can’t tell what you’re talking about.