Robin, I observe that Nature also fails to live up to the usual standards of an economics experiment.
Stuart and Constant, in AI/machine learning we have a formal notion of “strictly more general concepts” as those with a strictly greater set of positive examples, and symmetrically for strictly more specific concepts. (This is not usually what I mean when I say “concept” but this is the term of art in machine learning.)
Positive bias implies that people look at a set of examples and a starting concept, and try to envision a strictly more specific concept: for example, “ascending by 2 but all numbers positive”. We seem to focus less on finding a strictly more general concept, such as “separated by equal intervals” or “in ascending order” or “any sequence not ending in 2”.
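The set-based definition above can be checked mechanically. A brief sketch (function names and the small universe of triples are my own illustration, not a standard library): treat each concept as a predicate, take its set of positive examples, and test for a strict subset relation.

```python
# Concepts as predicates over ordered number triples. Concept B is strictly
# more general than concept A iff A's positive examples are a strict subset
# of B's positive examples.

def ascending_by_2(seq):
    return all(b - a == 2 for a, b in zip(seq, seq[1:]))

def equal_intervals(seq):
    return len({b - a for a, b in zip(seq, seq[1:])}) == 1

# A small, arbitrary universe of triples for the empirical check.
universe = [(a, b, c) for a in range(10) for b in range(10) for c in range(10)]

def positives(concept):
    return {s for s in universe if concept(s)}

# Strict subset => "equal intervals" is strictly more general than
# "ascending by 2" over this universe.
print(positives(ascending_by_2) < positives(equal_intervals))
```

Running it prints `True`: every “ascending by 2” triple has equal intervals, while triples like (3, 3, 3) go the other way only.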
Why do we only look in the more-specific direction and see only half the universe of concepts? Instinct, one might simply say, and be done with it. One might try a Bayesian argument that any more general concept would concentrate its probability mass less, and do a poorer job of explaining the positive examples found—for 10-12-14 is a far less likely thing to see if the generator is “any sequence” than if it is “any sequence separated by intervals of 2”. But this is an invalid argument if you are the one generating the examples! As for the initial example being misleadingly specific, heck, people read nonexistent coincidences into Nature all the time. It may not be fair of the experimenter but it is certainly realistic as a test of a rationalist’s skill.
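The likelihood point can be made numerically. A hedged sketch, assuming for illustration a universe of triples drawn from 1..100 (the bound N is my arbitrary choice): the specific generator assigns 10-12-14 vastly more probability than the general one.

```python
# Likelihoods of observing the triple 10-12-14 under two generators,
# assuming each generator picks uniformly from its positive examples.
N = 100  # arbitrary illustrative bound on the numbers

# "Any sequence": all N**3 ordered triples are equally likely.
p_any = 1 / N**3

# "Ascending by intervals of 2": triples (k, k+2, k+4) with k+4 <= N,
# so there are N - 4 of them.
p_by_2 = 1 / (N - 4)

# The more specific concept concentrates its mass and so "explains"
# the observation far better.
print(p_by_2 / p_any)
```

The ratio comes out above ten thousand—which is exactly why the argument feels compelling, and exactly why it misleads when the examples are chosen by the subject rather than the generator.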
If you are testing examples in an oracle, “positive” and “negative” are symmetrical labels. This point alone should make it very clear that, from the standpoint of probability theory, we are dealing strictly with a bizarre quirk of human psychology.