So, by “mathematically unjustified” you meant something like “mathematically inconsistent” in the same way that “1 + 5 = 7” is inconsistent. However, now I’m puzzled, since why is it “mathematically inconsistent” for an insider to model his observation as a random sample from some population?
Provided the sampling model follows the Kolmogorov probability axioms, it is mathematically consistent. And this is true even if it is a totally weird and implausible sampling model (like being a random sample from the population {me now, Joan of Arc at the stake, Obama’s left shoe, the number 27} … ).
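To make that concrete, here is a minimal sketch (my own toy example in Python, not anything from the thread): a uniform sampling model over that odd population satisfies the Kolmogorov axioms, so it is mathematically consistent however implausible it is.

```python
# A deliberately weird "population": consistency doesn't care what the outcomes are.
population = ["me now", "Joan of Arc at the stake", "Obama's left shoe", 27]

# A uniform sampling model over that population.
P = {outcome: 1 / len(population) for outcome in population}

def prob(event):
    """Probability of an event (any subset of the population) under the model."""
    return sum(P[x] for x in event)

# Kolmogorov axioms, checked directly:
assert all(p >= 0 for p in P.values())                  # 1. non-negativity
assert abs(prob(set(population)) - 1.0) < 1e-12         # 2. whole space has probability 1
A, B = {"me now", 27}, {"Obama's left shoe"}            # 3. additivity for disjoint events
assert A.isdisjoint(B)
assert abs(prob(A | B) - (prob(A) + prob(B))) < 1e-12
```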
In the world of the unlucky physicists, they are assuming that their data is randomly selected. If I understand the thought experiment, this assumption is correct. If this were a computer game, we’d say the physicists have been cursed by the random number generator to receive extremely unlikely results given the true state of the universe. But that doesn’t mean the sample isn’t still random—unlikely occurrences can happen randomly.
Likewise, the doomsday argument assumes that the sample of human experiences is randomly selected. Yet there is no reason to think this is so. You are using your own experience as the sample because it is the only one truly available to you. To me, this looks like convenience sampling, with all the limitations on drawing conclusions that this implies. And if your assumption that your sample is random is wrong, then the whole doomsday argument falls apart.
In short, cursed by the random number generator != nonrandom sample.
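A toy simulation of the “cursed” situation (my own illustration, with made-up numbers): every flip below is produced by a genuinely random process, yet a few labs still see runs that are extremely unlikely given the true state of the coin.

```python
import random

rng = random.Random(0)  # seeded so the sketch is reproducible

N_LABS, N_FLIPS = 100_000, 12
cursed = 0
for _ in range(N_LABS):
    flips = [rng.random() < 0.5 for _ in range(N_FLIPS)]  # a fair coin every time
    if all(flips):                                         # 12 heads in a row
        cursed += 1

# Each cursed lab saw an outcome with probability 2**-12 (about 0.02%), yet every
# flip it observed really was a random sample; it was just unlucky.
print(f"{cursed} of {N_LABS} labs saw 12 heads in a row "
      f"(each such run has probability {0.5**N_FLIPS:.1e})")
```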
What I’m trying to understand is the difference between these two arguments:
Model A predicts that the vast majority of observations of the universe will conclude it has a background radiation with a temperature of 1K, whereas a tiny minority of observations will conclude it has a temperature of 3K. Model B predicts that the vast majority of observations of the universe will conclude a background radiation temperature of 3K. Our current observations conclude a temperature of 3K. This is evidence against model A and in favour of model B.
Model 1 predicts that the vast majority of observations of the universe will be in civilisations which have expanded away from their planet of origin and have made many trillion trillion person-years of observations so far; a tiny minority will be in civilisations which are still on their planet of origin and have made less than 10 trillion person-years of observations so far. Model 2 predicts that the vast majority of observations will be in civilisations which are still on their planet of origin and have made less than 10 trillion person-years of observations so far. Our current observations are in a civilisation which is still on its planet of origin, and has made less than 10 trillion person-years of observations so far. This is evidence in favour of Model 2.
Formally, these look identical, but it seems you accept the first argument yet reject the second. And the difference is… ?
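To spell out the formal parallel, here is the same Bayes-factor arithmetic for both cases (the likelihood numbers are made up purely for illustration and are not taken from either model):

```python
def posterior_odds(prior_odds, likelihood_ours, likelihood_rival):
    """Bayes: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * (likelihood_ours / likelihood_rival)

# Physics case: P(we observe 3K | B) is near 1, P(we observe 3K | A) is tiny.
odds_B_over_A = posterior_odds(prior_odds=1.0,
                               likelihood_ours=0.999,   # model B
                               likelihood_rival=0.001)  # model A

# Doomsday case: the same arithmetic, with "early, planet-bound observation"
# in place of "3K observation".
odds_2_over_1 = posterior_odds(prior_odds=1.0,
                               likelihood_ours=0.999,   # model 2
                               likelihood_rival=0.001)  # model 1

print(odds_B_over_A, odds_2_over_1)  # identical updates, hence the question
```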
In both cases, the inferences being drawn rely on the fact that the observation was randomly selected.
In the physics example, the physicist started with no observation, made a random observation, and made inferences from the random observation.
In the population example, we start with an observation (our own lives). You treat this observation as a random sample, but you have no reason to think that “random sample” is a real property of your observation. Certainly, you didn’t randomly select the observation. Instead, you are using your own experience essentially because it is the only one available.
But then why do you assume that the physicist made a “random observation”? The model A description just says that there are lots of observations, and only a tiny minority are such as to conclude 3K. If both model A and model B were of deterministic universes, so that there are strictly no “random” observations in either of them (because there are no random processes at all), would you then reverse your conclusion?
Is your basic objection to the application of probability theory when it concerns processes other than physically random processes?
If the physicists are not receiving random samples of the population of possible observations, then their inferences are also unjustified. And if random processes are impossible because the universe is deterministic . . . my head hurts, but I think raising that problem is changing the subject. I don’t really want to talk about whether counter-factuals (like scientists proposing a different theory than the one actually proposed) are a coherent concept in a deterministic universe.
That could be, but I’m not familiar with the technical vocabulary you are using. What’s an example of a non-physical random process?
Maybe take a look at the Wikipedia entry http://en.wikipedia.org/wiki/Randomness
That entry discusses lots of different interpretations of “random”. The general sense seems to be that a random process is unpredictable in detail, but has some predictable properties such that the process can be modelled mathematically by a random variable (or a sequence of random variables).
Here, the notion of “modelling by a random variable” means that if we take the actual outcome and apply statistical tests to check whether the outcome is drawn from the distribution defined by the random variable, then the actual outcome passes those tests. This doesn’t mean of course that it is in an objective sense a random process with that distribution, but it does mean that the model “fits”.
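As a rough sketch of what “the model fits” means in practice (the Kolmogorov–Smirnov test below is just my choice of test, not something specified by that definition):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# The "actual outcomes": here, pseudo-random draws standing in for whatever
# process we are trying to model.
outcomes = rng.normal(loc=0.0, scale=1.0, size=500)

# Candidate model: the outcomes are draws from a standard normal random variable.
statistic, p_value = stats.kstest(outcomes, "norm")

# A large p-value means the test does not reject the model: the outcomes look
# like draws from that distribution, i.e. the model "fits". That is all the
# claim amounts to; it says nothing about the process being objectively random.
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.3f}")
```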
Hope that helps...
P.S. For the avoidance of doubt, you can assume that models A and B involve pseudo-random processes, and these obey the usual frequency statistics of true random processes.
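As an illustration of that P.S. (again my own sketch, with an arbitrary choice of test): a pseudo-random stream passes the same frequency checks a true random process would.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Draw pseudo-random digits 0-9 and check that their empirical frequencies
# match the uniform frequencies a "true" random process would have.
digits = rng.integers(0, 10, size=10_000)
observed = np.bincount(digits, minlength=10)
expected = np.full(10, len(digits) / 10)

chi2, p_value = stats.chisquare(observed, expected)
# A non-tiny p-value: by this test, the pseudo-random digits are statistically
# indistinguishable from uniform random digits.
print(f"chi-square = {chi2:.1f}, p-value = {p_value:.3f}")
```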