How valid is it to assume that (approximately) everyone who got the heliocentrism question wrong got it wrong by “guessing”? If 18% got it wrong, then your model says that there’s 36% who had no clue and half guessed right, but at the other extreme there’s a model where everyone ‘knows’ the answer, but 18% ‘know’ the wrong answer. I’m not sure which is scarier, 36% clueless or 18% die-hard geocentrists, but I don’t think we have enough information here to tell where on that spectrum it is. (In particular, if “I don’t know” was an option and only 3% selected it, then I think this is some evidence against the extreme end of 36% clueless?)
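To make the two ends of that spectrum concrete, here is a rough sketch of the arithmetic, assuming a true/false question where, in the first model, everyone who doesn’t know flips a fair coin (the 18% is from the survey; the model and the function names are just for illustration):

```python
def coin_flip_model(wrong: float) -> dict:
    """Everyone either knows the answer or flips a fair coin."""
    guessers = 2 * wrong      # half of the guessers happen to land on the wrong answer
    knowers = 1 - guessers    # everyone else actually knew it
    return {"guessers": guessers, "knowers": knowers}

def firm_belief_model(wrong: float) -> dict:
    """Everyone 'knows' an answer; some just know the wrong one."""
    return {"guessers": 0.0, "wrong_believers": wrong}

print(coin_flip_model(0.18))    # {'guessers': 0.36, 'knowers': 0.64}
print(firm_belief_model(0.18))  # {'guessers': 0.0, 'wrong_believers': 0.18}
```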
My guess at the truth of the matter is that almost no one is 100% guessing, but some people are extremely confident in their answer (a lot of the correct folks and also a small number of die-hard geocentrists), and then there’s a range down to people who haven’t thought about it in ages and just have a vague recollection of some elementary school teacher. Which I think is also a more hopeful picture than either the 36% clueless or the 18% geocentrists models? Because for people who are right but not confident, I’m reasonably ok with that; ideally they’d “know” more strongly, but it’s not a disaster if they don’t. And for people who are wrong but not confident, there are not that many of them and also they would happily change their mind if you just told them the correct answer.
That’s a good point. Human intuitions are geocentric, so people who don’t know the answer wouldn’t split 50/50; they’d mostly default to the geocentric answer and get it wrong. That puts the number of genuine guessers at no more than the 18% who answered wrong, and probably less. From an expected-value perspective we can treat the full 18% as guessing, whereas from a default-geocentric perspective we can treat 0% as guessing, since those respondents aren’t flipping coins so much as falling back on intuition.
But it goes both ways. For questions where the correct answer matches human intuition, if p% guessed and got it wrong, then we should assume more than p% got it correct by guessing.
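A minimal sketch of that asymmetry, assuming guessers pick the intuitive answer with some probability q rather than flipping a fair coin (q is a made-up parameter here, not something the survey tells us):

```python
def correct_by_guessing(guessed_wrong: float, q_intuitive: float) -> float:
    """Fraction of respondents who guessed and got it right, given the fraction
    who guessed and got it wrong, when a guesser picks the intuitive (and here
    correct) answer with probability q_intuitive."""
    # guessers * (1 - q) got it wrong, guessers * q got it right
    return guessed_wrong * q_intuitive / (1 - q_intuitive)

print(correct_by_guessing(0.05, 0.5))  # 0.05  -- fair coin: as many right as wrong
print(correct_by_guessing(0.05, 0.8))  # ~0.20 -- intuition-friendly question: more than 5% guessed right
```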
This is where the word “belief” gets fuzzy. I think what’s actually going on with the laser question is that people read “Lasers work by focusing <mumble>”, which does match the truth, and answer accordingly. Due to bad heuristics like that, it’s possible for more than 50% of a survey population to guess wrong on a true-or-false question, which means the credit we give for the things they guess right needs to be adjusted downward, or else we get nonsensical results.
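A self-contained illustration of the nonsensical results (only the 18% comes from the survey; the 60% is invented for the example): under the fair-coin correction, a question that a majority gets wrong implies a negative fraction of knowers.

```python
def knowers_fair_coin(wrong: float) -> float:
    """Implied fraction who actually knew the answer, assuming non-knowers flip a fair coin."""
    return 1 - 2 * wrong

print(knowers_fair_coin(0.18))  # about  0.64 -- plausible
print(knowers_fair_coin(0.60))  # about -0.20 -- impossible; the fair-coin assumption breaks down here
```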