A percentage-point difference in belief probability isn’t all that meaningful: going from 50% to 51% is a much smaller shift in confidence than going from 98% to 99%.
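(In bits-of-evidence terms: 50% → 51% is log2((0.51/0.49)/(0.50/0.50)) ≈ 0.06 bits, while 98% → 99% is log2((0.99/0.01)/(0.98/0.02)) ≈ 1.01 bits, roughly seventeen times as much.)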
69.4% probability means 3.27 odds; 41.2% probability means 1.70 odds.
That means that, in the aggregate, survey takers find (3.27/1.70) = 1.924 → 0.944 more bits of evidence for life somewhere in the universe than for life somewhere in the galaxy.
Is that unreasonably big or unreasonably small?
EDIT: Oops, I can’t convert properly (I took 1/(1−p) instead of p/(1−p)). That should be 2.27 odds and 0.70 odds, an odds ratio of 3.24, or 1.70 more bits.
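For concreteness, here’s a minimal sketch of the corrected conversion in Python, using the aggregate percentages quoted above:

```python
import math

def odds(p):
    """Convert a probability to odds in favor: p / (1 - p)."""
    return p / (1 - p)

def bits(odds_ratio):
    """Convert an odds ratio to bits of evidence: log2 of the ratio."""
    return math.log2(odds_ratio)

o_universe = odds(0.694)  # ~2.27 odds for life somewhere in the universe
o_galaxy = odds(0.412)    # ~0.70 odds for life somewhere in the galaxy

ratio = o_universe / o_galaxy
print(f"odds ratio {ratio:.2f} -> {bits(ratio):.2f} bits")  # ~3.24 -> ~1.7 bits
```

(The rounding works out to 1.69 bits from the unrounded odds, 1.70 from the rounded ones.)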
If we take the odds ratio for each individual respondent (instead of the aggregate), the median odds ratio is 10.1 → 3.3 more bits of evidence for life in the universe, compared to somewhere in the galaxy. 25th percentile odds ratio: 2.7 → 1.4 more bits; 75th percentile odds ratio: 75.7 → 6.2 more bits. (This is all using the publicly available data set; looking at the aggregate in that data set I’m getting an odds ratio of 3.6 → 1.8 more bits.)
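The per-respondent version is the same calculation mapped over the data set. A sketch, with made-up probabilities standing in for the survey’s actual per-respondent answers:

```python
import math
import statistics

def odds(p):
    return p / (1 - p)

# Hypothetical stand-ins for the survey's per-respondent answers; respondents
# who answered 0% or 100% would need special handling (zero or infinite odds).
p_universe = [0.95, 0.80, 0.99, 0.60, 0.90]
p_galaxy = [0.40, 0.50, 0.70, 0.30, 0.20]

# Per-respondent odds ratio (universe vs. galaxy), and the same in bits.
ratios = [odds(u) / odds(g) for u, g in zip(p_universe, p_galaxy)]
bits = [math.log2(r) for r in ratios]

q1, median, q3 = statistics.quantiles(ratios, n=4)  # quartile cut points
print(f"25th pct {q1:.1f}, median {median:.1f}, 75th pct {q3:.1f}")
print(f"median bits: {statistics.median(bits):.1f}")
```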
People who believe in God/religion/the supernatural tend to give a lower odds ratio, but otherwise the odds ratio doesn’t seem to be associated with any other variable on the survey.
I’m not comfortable with bit odds, especially in this context, so I dunno. How would you frame that in the opposite terms, for lack of existence?
That gives 0.44 odds of non-existence in the universe and 1.43 odds of non-existence in the galaxy: a ratio of 3.24, or 1.70 more bits of evidence for no (non-human) life in the galaxy compared to the universe in general.
And I forget why those two answers (0.944 bits in the first calculation vs. 1.70 bits here) are allowed to be different...
EDIT: I made an error in the first calculation; as I suspected, the values are symmetric (the odds against are just the reciprocals of the odds for, so the odds ratio and the bit count come out the same).
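A quick check of that symmetry, using the corrected aggregate odds from above:

```python
import math

def odds(p):
    return p / (1 - p)

o_universe = odds(0.694)  # odds for life in the universe, ~2.27
o_galaxy = odds(0.412)    # odds for life in the galaxy, ~0.70

# Odds against are the reciprocals of odds for...
against_universe = 1 / o_universe  # ~0.44
against_galaxy = 1 / o_galaxy      # ~1.43

# ...so the odds ratio is the same whichever way you frame it,
# and the bit count is identical.
ratio_for = o_universe / o_galaxy                  # ~3.24
ratio_against = against_galaxy / against_universe  # ~3.24

print(math.isclose(ratio_for, ratio_against))  # True
print(f"{math.log2(ratio_for):.2f} bits either way")
```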