I just quoted the paper. It stated that N is the expected number of civilizations in the Milky Way. If that is the case, we have to account for the fact that at least one civilization exists, which the authors didn’t do. Otherwise N is just the expected number of civilizations in the Milky Way under the assumption that we didn’t know we existed.
The update we need to do is not equivalent to assuming N is at least one, because as I said, N being less than one is consistent with our experiences.
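To make the difference concrete, here is a minimal toy sketch (my own numbers, not from the paper), assuming the actual number of civilizations is Poisson-distributed with mean N under each hypothesis. Conditioning on “at least one civilization exists” (or SIA-weighting by N) leaves an N &lt; 1 hypothesis with small but nonzero weight, whereas “assume N ≥ 1” throws it out entirely:

```python
import numpy as np

# Toy illustration (my own numbers, not from the paper): three hypotheses
# about N, the *expected* number of civilizations, with equal prior weight.
Ns = np.array([0.01, 1.0, 100.0])
prior = np.array([1/3, 1/3, 1/3])

# The move I'm objecting to: read "we exist" as "N >= 1" and discard
# every hypothesis with N < 1.
naive = prior * (Ns >= 1)
naive /= naive.sum()

# Conditioning on "at least one civilization exists", assuming the actual
# count is Poisson with mean N under each hypothesis:
p_at_least_one = 1 - np.exp(-Ns)
cond = prior * p_at_least_one
cond /= cond.sum()

# SIA-style update: weight each hypothesis by the expected number of
# observers, i.e. proportionally to N itself.
sia = prior * Ns
sia /= sia.sum()

print(naive)  # [0.  0.5 0.5]  -- N = 0.01 is ruled out completely
print(cond)   # N = 0.01 keeps a small but nonzero weight
print(sia)    # likewise, though large N dominates even more strongly
```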
“before you learn any experience”? I.e. before you know you exist? Before you exist? Before the “my” refers to anything?
Yes, it gets awkward if you try to interpret the prior literally. Don’t do that, just apply the updating rules.
There are infinitely many possible priors. One would need a justification that the SIA prior is more rational than the alternatives.
SIA as a prior just says it’s equally likely for you to be one of two observers that are themselves equally likely to exist. Any alternative will necessarily say that in at least one such case, you’re more likely to be one observer than the other, which violates the indifference principle.
You might be certain that 100 observers exist in the universe. You are not sure which of them is you, but you regard one of the observers as twice as likely to be you as each of the others, so you weight it twice as strongly.
But you may also be uncertain about how many observers exist. Say you are equally uncertain about the existence of each of 99 observers and twice as certain about the existence of a hundredth one. Then you weight it twice as strongly.
I’m not sure where my formulation is supposed to diverge here.
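With toy numbers (my own, just to check the two cases above line up): if the SIA weight of an observer is proportional to the probability that the observer exists times the probability that you are that observer given that it exists, the hundredth observer comes out weighted twice as strongly in both scenarios.

```python
# Toy numbers (mine) for the two cases above. SIA weight of an observer
# ~ P(that observer exists) * P(you are that observer, given it exists).

# Case 1: all 100 observers certainly exist; you regard the 100th as twice
# as likely to be you as each of the others.
exists_1 = [1.0] * 100
is_you_1 = [1.0] * 99 + [2.0]          # relative, unnormalized
w1 = [e * y for e, y in zip(exists_1, is_you_1)]

# Case 2: you are indifferent about which existing observer is you, but the
# 100th observer is twice as likely to exist as each of the others.
exists_2 = [0.4] * 99 + [0.8]
is_you_2 = [1.0] * 100
w2 = [e * y for e, y in zip(exists_2, is_you_2)]

def normalize(w):
    total = sum(w)
    return [x / total for x in w]

print(normalize(w1)[-1] / normalize(w1)[0])  # 2.0
print(normalize(w2)[-1] / normalize(w2)[0])  # 2.0 -- same ratio either way
```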
“Infinity” then just means that for any real number there is another real number which is larger (or smaller).
Well, this is possible without even letting the reals be unbounded. For any real number under 2, there’s another real number under 2 that’s greater than it.
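Concretely (a standard construction, not in the original comment): if $x < 2$, the midpoint $y = (x+2)/2$ satisfies $x < y < 2$, so there is always a larger real still under 2 even though the set is bounded above.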
We can perfectly well (and do all the time) make probabilistic statements about the present or the past.
And those statements are meaningless except insofar as they imply predictions about the future.
Where is the supposed “incoherence” here?
The statement lacks informational content.
It is verified by just a single non-mental object.
I don’t know what this is supposed to mean. What experience does the statement imply?
Low generalization error seems to be for many theories what truth is for ordinary statements.
Sure, I have no problem with calling your theory true once it’s shown strong predictive ability. But don’t confuse that with there being some territory out there that the theory somehow corresponds to.
objective a priori probability distribution over hypotheses (i.e. all possible statements) based on information content
Yes, this is SIA + Solomonoff universal prior, as far as I’m concerned. And this prior doesn’t require calling any of the hypotheses “true”, the prior is only used for prediction. Solomonoff aggregates a large number of hypotheses, none of which are “true”.
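As a caricature of what I mean by aggregation (toy hypothesis set and numbers of my own, nothing like a real Solomonoff inductor): give each hypothesis a prior weight of 2^(−description length) and predict with the weighted mixture; no single hypothesis ever needs to be labeled “true”.

```python
# Toy caricature (my own numbers): prior weight 2**(-length) per hypothesis,
# prediction by a weighted mixture over all of them.
hypotheses = [
    # (name, description length in bits, probability it assigns to "rain tomorrow")
    ("always-rain",  3, 0.99),
    ("never-rain",   3, 0.01),
    ("seasonal",    10, 0.60),
    ("full-physics", 40, 0.70),
]

weights = {name: 2.0 ** -length for name, length, _ in hypotheses}
total = sum(weights.values())

# The mixture prediction aggregates every hypothesis; none of them is
# singled out as "the true one".
p_rain = sum((weights[name] / total) * p for name, _, p in hypotheses)
print(round(p_rain, 3))

# On observing data, each weight gets multiplied by the likelihood that
# hypothesis assigned to the data (Bayes), and the mixture is re-normalized.
```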
A barometer reading predicts a storm, but it doesn’t explain it.
The reading isn’t a model. You can turn it into a model, and then it would indeed explain the storm, while air pressure would explain it better, by virtue of explaining other things as well and being part of a larger model that explains many things simply (such as how barometers are constructed).
prediction is symmetric:
A model isn’t an experience, and can’t get conditioned on. There is no symmetry between models and experiences in my ontology.
The experience of rain doesn’t explain the experience of the wet street—rather, a model of rain explains / predicts both experiences.