Hm, when I first read those findings, I found the first three to be as expected, the fourth to be surprising (why would blacks want racist officers?), and the fifth to make sense — though I suspect I would have thought the same had the opposite result been reported (soldiers don't want to abandon their friends in combat, but want to leave together afterwards). So this would seem to indicate a problem with my model, given that the findings were all false.
But is it possible that, in that demonstration, those particular findings were selected precisely because they ran opposite to what people would expect? If so, then my model isn't really in error, because when examining the statements I had no real reason to believe they were meant to fool me.