Done.
Pretext for posting here:
How are the redwood tree questions relevant? Don't they mostly test trivia knowledge?
Anchoring; hence the random number generator a question earlier.
I think I messed this part up: random.org was down when I took the survey, so I skipped that question and the next, answered my best guess for redwood height, and then realized I could generate a random number by other means (Python) and used that instead. I realized afterwards that it was probably about anchoring, but there was no obvious way to undo that section. Oh well; I was off by more than a factor of 2 regardless, despite having visited Redwood National Park.
Ah, I see.
So this question (CFAR 6) …
… just serves to reinforce the anchoring effect, I take it.
All a setup for CFAR question 7 then, “best guess about the height of the tallest redwood tree in the world (in feet)?”
If that is so, then unfortunately, for people who get a random number close to the actual height of that redwood tree and who also have some background information on redwood trees, the anchoring effect would be impossible to tell apart from actually knowing the answer (within bounds).
A number that is purposely far off would have discriminated knowledge from anchoring better, e.g. a random number from 500 to 1500 instead.
I think part of the point is to make it manifestly obvious that the number is not related to the question. One of the famous early anchoring experiments had subjects watch a wheel pick a random number and then guess what percentage of UN member countries are African.
I think using a random number gives samples with low and high anchoring, and statistical trickery allows them to distinguish, especially since the sample size will be relatively large. (One way would be: group the samples by random number (e.g. 0-333, 334-666, 667-999), then do a standard one-way ANOVA with those groups as the factor.)
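To make the grouping-plus-ANOVA idea concrete, here is a minimal sketch on simulated data. All numbers below are made up for illustration (a true height of about 380 feet and an assumed anchoring strength of 0.3), not real survey results:

```python
# Sketch: bin respondents by their random anchor, then one-way ANOVA.
# Simulated data only -- the anchoring coefficient (0.3) is an assumption.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
n = 900
anchor = rng.integers(0, 1000, size=n)  # the "three-digit random number"

# Simulate guesses pulled partway toward the anchor, plus noise,
# around a true tallest-redwood height of roughly 380 feet.
guess = 380 + 0.3 * (anchor - 380) + rng.normal(0, 60, size=n)

# Group by anchor tercile and test whether group means differ.
low = guess[anchor <= 333]
mid = guess[(anchor >= 334) & (anchor <= 666)]
high = guess[anchor >= 667]
f_stat, p_value = f_oneway(low, mid, high)
print(f_stat, p_value)  # small p-value -> evidence of anchoring
```

A significant F statistic here just says the three anchor groups guessed differently on average, which is exactly what anchoring predicts and what no-anchoring would rule out.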
What I would do is compute a linear (or otherwise) regression between random number and height guessed. It would have also helped to have a control group to answer the question without anchoring, to determine what sort of background information people have, but that’s not strictly necessary.
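The regression variant keeps the anchor as a continuous predictor instead of binning it. A minimal sketch, again on simulated data with an assumed anchoring coefficient of 0.3:

```python
# Sketch: regress guessed height on the random anchor.
# A slope significantly above zero indicates anchoring.
# Simulated data only -- the 0.3 coefficient is an assumption.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
anchor = rng.integers(0, 1000, size=900)
guess = 380 + 0.3 * (anchor - 380) + rng.normal(0, 60, size=900)

result = linregress(anchor, guess)
print(result.slope, result.pvalue)  # slope near 0.3 here by construction
```

Compared with the ANOVA, the regression also estimates the *size* of the effect (how many extra feet of guess per foot of anchor), not just whether one exists.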
I would only do that with respondents from the US—having to convert from metres to feet is likely to weaken the anchoring effect for respondents from elsewhere.
Of course, I’m a respondent from the US and I answered the question by converting from meters. So this approach isn’t foolproof.
For anyone whose random number is close to the correct answer, and who gives a number in that vicinity as his/her own answer, the information about whether that answer was picked because of anchoring or because of expertise in dendrology is lost.
The sample size is probably large enough to retain reasonable predictive power without these cases, but the problem could have been avoided by, e.g., providing deliberately biased numbers, both too low and too high.
Any statistical trickery can only yield a prediction about how likely people in the above scenario are to have chosen their answer based on anchoring versus based on knowledge. But that is using information from the other samples to speculate about the causal factors behind our special cases; the special cases themselves wouldn't have contributed any information.
Saying “From the data, I can speculate that person A, who chose a number close to the correct result and close to his random number, did so because of anchoring / knowing the answer” doesn't add to the strength of your result. It's like saying “Hypothetically, if a person A chose a number close to the correct result and close to his random number, I would expect that he did so for reason X.”