Here, Eliezer seems to be talking about more specified versions of a not-fully-specified hypothesis (case 1):
There are always better hypotheses than the hypotheses you’re using. Even if you could exactly predict the YES and NO outcomes, can you exactly predict timing? Facial expressions?
Here, Eliezer seems to be talking about hypotheses that aren’t subhypotheses of an existing hypothesis (case 2):
You could run that test to see if all of your hypotheses are scoring lower than they promised to score, for example.
Eliezer’s approach is:
in the end people don’t usually assign an explicit probability there. They steer by the relative odds of those models they actually have of the world.
For subhypotheses (case 1), we aren’t actually considering these further features yet, so this seems true but not in a particularly exciting way.
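To make case 1 concrete, here is a minimal sketch in Python. Everything in it (the numbers, the YES/NO framing with hypothetical timing buckets) is an illustrative assumption, not something from Eliezer's post: more specified subhypotheses split a parent hypothesis's predictions into finer outcomes, but they marginalize back to the same YES/NO probabilities, so not yet considering the finer features changes nothing at the YES/NO level.

```python
# Parent hypothesis: predicts YES with probability 0.8 (made-up number).
parent_p_yes = 0.8

# More specified subhypotheses: same YES/NO behaviour, but they also
# predict *when* the YES happens (hypothetical timing buckets).
subhypotheses = {
    "YES, early": 0.5,
    "YES, late": 0.3,
    "NO": 0.2,
}

# Marginalizing out the timing detail recovers the parent's prediction,
# so as long as we only score YES/NO outcomes, the finer structure is idle.
p_yes_from_subs = sum(p for outcome, p in subhypotheses.items()
                      if outcome.startswith("YES"))
assert abs(p_yes_from_subs - parent_p_yes) < 1e-12
print(p_yes_from_subs)  # 0.8
```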
I think it is rare for a hypothesis to truly lie outside of all existing hypotheses, because you can have very underspecified meta-hypotheses that you will implicitly be taking into account even if you don't enumerate them (examples of vague meta-hypotheses: supernatural vs. natural, realism vs. solipsism, etc.). And of course there are varying levels of vagueness, from very narrow to very broad.
But, OK, within these vague meta-hypotheses the true hypothesis is still often not a subhypothesis of any of your more specified hypotheses (case 2). A number for the probability of this happening might be hard to pin down, and in order to actually obtain instrumental value from this probability assignment, or to make a Bayesian adjustment of it, you need a prior for what happens in the world where all your specific hypotheses are false.
But, you actually do have such priors, and relevant information about that probability!
Eliezer mentions:
And yet there is advice you can derive, if you go sufficiently meta. You could run that test to see if all of your hypotheses are scoring lower than they promised to score, for example. That test is not motivated by any particular hypothesis you already did calculations for. It is motivated by your belief, in full generality, in ‘the set of all hypotheses I’m not considering’.
This is relevant data. Note also that the expectation that all of your hypotheses will score lower than promised if they are all false is, in itself, a prior on the predictions of the ‘all-other-hypotheses’ hypothesis.
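Here is what such a test could look like in code: a minimal sketch, assuming two made-up coin-flip-style hypotheses and made-up data, that compares each hypothesis's realized log score against the score it "promised", i.e. the distribution of its log score if it were the true process.

```python
import numpy as np

# Two toy hypotheses about a repeated binary outcome (YES = 1, NO = 0),
# each predicting YES with a fixed per-trial probability.  Both the
# hypotheses and the data below are made up for illustration.
hypotheses = {"H1: P(YES)=0.7": 0.7, "H2: P(YES)=0.3": 0.3}

# Suppose the observed data are half YES, half NO -- something neither
# hypothesis describes well.
data = np.array([1, 0] * 50)
n = len(data)

for name, p in hypotheses.items():
    # Realized log score of this hypothesis on the data.
    realized = np.sum(np.where(data == 1, np.log(p), np.log(1 - p)))

    # What the hypothesis "promised": the mean and spread of its log score
    # if it were the true data-generating process.
    mean_1 = p * np.log(p) + (1 - p) * np.log(1 - p)
    var_1 = p * np.log(p) ** 2 + (1 - p) * np.log(1 - p) ** 2 - mean_1 ** 2
    promised, sd = n * mean_1, np.sqrt(n * var_1)

    # Standard deviations below its own promise.
    z = (realized - promised) / sd
    print(f"{name}: realized {realized:.1f}, promised {promised:.1f}, z = {z:.1f}")

# If every hypothesis lands far below its promised score (large negative z),
# that is evidence for 'the set of all hypotheses I'm not considering'.
```

In this toy run both hypotheses land several standard deviations below their own promises, which is exactly the signal that none of the hypotheses under consideration is the true one.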
Likewise, when you do the adjustments mentioned in Eliezer’s last paragraph, you will do some specific amount of adjustment, and that specific adjustment amount will depend on an implicit value for the probability of the ‘all-other-hypotheses’ hypothesis and an implicit prior on its predictions.
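As a minimal sketch of what making that implicit adjustment explicit could look like (every number here is an illustrative assumption, as is the flat predictive distribution for the catch-all): give the 'all-other-hypotheses' component an explicit prior weight and an explicit, vague likelihood, and the adjustment falls out of Bayes' rule over the expanded hypothesis set.

```python
# Prior over three explicit hypotheses plus an 'all-other-hypotheses'
# catch-all.  All numbers are illustrative assumptions.
prior = {"H1": 0.5, "H2": 0.3, "H3": 0.1, "all-other-hypotheses": 0.1}

# Likelihood of the observed data under each explicit hypothesis
# (here: every explicit hypothesis explains the data poorly).
likelihood = {"H1": 1e-6, "H2": 3e-6, "H3": 2e-6}

# The catch-all needs a predictive prior too.  Assume something vague:
# a flat distribution over a coarse space of ~10,000 distinguishable outcomes.
likelihood["all-other-hypotheses"] = 1e-4

# Bayes' rule: posterior proportional to prior * likelihood.
unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: w / total for h, w in unnormalized.items()}

for h in prior:
    print(f"{h}: prior {prior[h]:.2f} -> posterior {posterior[h]:.3f}")
```

In this toy run most of the posterior mass moves onto the catch-all, and the size of that move is controlled entirely by the two quantities that usually stay implicit: the catch-all's prior weight and the vague predictive distribution we assumed for it.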
In my view, there is no reason in principle that these priors and probabilities cannot be quantified.
To be sure, people don't usually quantify their beliefs in the 'all-other-hypotheses' hypothesis. But, I see this as a special case of the general rule that people don't usually quantify beliefs in hypotheses with poorly specified predictions. And the predictions are not infinitely poorly specified, since we do have priors about them.