“If everyone is thinking alike, then somebody isn’t thinking.”
—George S. Patton
Ideally, everyone should be thinking alike. How about
I think the intended meaning (phrased in LessWrong terminology) is something more along the lines of the following:
Humans are not perfect Bayesians, and even if they were, they don’t start from the same priors and encounter the same evidence. Therefore, Aumann’s Agreement Theorem does not hold for human beings; thus, if a large number of human beings is observed to agree on the truth of a proposition, you should be suspicious. It’s far more likely that they are signalling tribal agreement or, worse yet, accepting the proposition without thinking it through for themselves, than that they have each individually thought it through and independently reached identical conclusions. In general, then, civilized disagreement is a strong indicator of a healthy rationalist community; look at how often people disagree with each other on LW, for example. If everyone on LW were chanting, “Yes, Many Worlds is true, you should prefer torture to dust specks, mainstream philosophy is worthless,” then that would be worrying, even if it is true. (I am not claiming that it is, nor am I claiming that it is not; such topics are, I feel, beyond the scope of this discussion and were brought up purely as examples.)
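To make the “different priors” point concrete, here is a minimal sketch in Python (the numbers are invented for illustration): two agents apply Bayes’ rule to the same evidence, and because their priors differ, their posteriors never coincide.

```python
# A minimal sketch (invented numbers) of the "different priors" point:
# agents who see the SAME evidence but start from different priors end
# up with different posteriors, so perfect agreement among humans is
# evidence of something other than independent updating.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) from the prior P(H) and the two likelihoods."""
    p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h
    return prior_h * p_e_given_h / p_e

# Same evidence for everyone: a likelihood ratio of 4:1 in favor of H.
for prior in (0.1, 0.5, 0.9):
    print(prior, "->", round(posterior(prior, 0.8, 0.2), 3))
# prints 0.308, 0.8, 0.973: no agreement without shared priors
```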
Why? Thinking is not limited to answering well-defined questions about empirical reality.
As a practical matter, I think lack of diversity in thinking is a bigger problem than too much diversity.
Lack of diversity may be a problem because then you’ve got a lower chance of getting the right answer somewhere in there. It doesn’t mean that everyone is thinking correctly. Do you subscribe to truth relativism? Otherwise, what could be thought about that doesn’t have a correct answer?
If everyone is thinking in the same way, you have a good result only if that one way is the correct way. If there are a variety of different ways, all of which appear good, they will produce varying proposals which can be considered for their direct practical consequences, and when the different methods come into conflict, all can be tested and potentially improved.
You might object that you have deduced the correct way of thinking, and therefore you do not need to be concerned with this. Two counter-arguments: 1) You are most likely overconfident, and the consequences of overconfidently removing all other methods of thinking are likely to be a catastrophic Black Swan when an unseen flaw hits you. 2) To the best of our knowledge, the objectively correct way of thinking is AIXI, which is incomputable and hence literally unimplementable.
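For reference, and assuming the standard formulation from Hutter (the comment does not give one), AIXI’s action choice is the expectimax expression

$$ a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \bigl[ r_k + \cdots + r_m \bigr] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}, $$

where the final sum ranges over all programs $q$ for a universal Turing machine $U$ and $\ell(q)$ is program length. That sum over all programs is exactly what makes it incomputable.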
That, too, but there are other issues as well—e.g. risk management through diversification.
Is she pretty?
Should I be a vegetarian?
What’s the best way of tackling that problem?
A disagreement on any of those questions reduces to either incorrect reasoning or differing preferences. People having identical preferences may be uncommon, but I don’t think you can say it means someone isn’t thinking.
The issue discussed isn’t whether it is a problem that some people might think (or prefer) alike. The issue is (emphasis mine): “If everyone is thinking alike, then somebody *isn’t thinking*.”
Risk management through diversification is a totally different use of the word diversification, and it can be practiced by a single person too; I don’t have to hold two contradictory opinions to avoid putting all my money/resources/time in one basket.
Of the 3 examples you mentioned:
1 is not something people actively “think” about, but is in a sense “automatic”, although there is disagreement.
If you feel 2 doesn’t have a correct answer, then it seems you’re endorsing some form of moral nihilism, in which case the question is meaningless. (Note: this is the position which I myself hold.)
For 3, people are not actually looking for the “best” answer; they want a satisfactory answer. There is a best answer, but it’s usually not worth the effort to find. (For any sufficiently complicated problem, of course.) There may be multiple satisfactory answers, but it’s not a sign that someone isn’t thinking if everyone comes up with the same satisfactory answer.
Risk management through diversification is a totally different use of the word diversification
Totally different than what?
1 is not something people actively “think” about
Sure they do, but to make it more stark let me change it a bit: “Am I pretty?”
More generally, this example represents the whole class of subjective opinions.
If you feel 2 doesn’t have a correct answer, then it seems you’re endorsing some form of moral nihilism
Not quite; just rejecting moral realism is quite sufficient here. But in any case, people do think about it, in different ways, and I don’t know how one would determine what a “correct” answer is.
This example represents the distinction between descriptive and normative.
For 3, people are not actually looking for the “best” answer; they want a satisfactory answer.
Also, not quite. People do want the best answer; it’s just that they are often satisfied with a good-enough answer. However, the question of what is “best” is a hard one, and in many cases there is no single correct answer: the optimality is conditional on certain parameters.
This example represents the individuality of many “correct” answers—what is correct depends on the person.
We were talking about diversity of opinions, and you switched to talking about diversity for risk management.
Also, if you don’t know how to determine a correct answer, there’s not much to think about until you do.
Diversity of opinions is helpful for risk management, specifically the risk that you commit all your resources to the single idea that turns out to be wrong. This is commonly known as “don’t put all your eggs into one basket”. Risk management is not only about money.
if you don’t know how to determine a correct answer, there’s not much to think about until you do.
I strongly disagree. In fact, figuring out how you would recognize a correct answer if it happens to bite you on the ass is the major thing to think about for many problems.
The benefit I mentioned above of diversity (a higher chance of getting the right answer) is then the same thing as what you’re talking about, not, as you said, “That, too, but there are other issues as well”. If you can recognize the correct answer when you see it, then the use of diversity is to increase your chances of getting the right answer.
So are we down to this: the only correct use of the original quote is when people aren’t sure how to recognize a correct answer?
Nope. The thing is, it’s not a binary “correct”/“not correct” dilemma. For any reasonably complex problem there might be a single “correct” answer (provided what you consider optimal is fully specified) and a lot of different “technically not correct” answers. Those “technically not correct” answers are all different and will give rise to different consequences. They are not the same, and if getting the “technically correct” answer is unlikely, you do care about which “incorrect” answers you’ll end up using.
Basically, diversification helps with dealing with the risks of having to accept “technically not correct” answers because the technically correct one is out of reach.
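As a toy illustration of that point (all numbers invented), here is a short Python sketch: several “technically not correct” candidate answers with noisy quality estimates, comparing committing everything to the single best-looking one against spreading across the top three.

```python
# A toy sketch (all numbers invented) of diversifying across imperfect
# answers: going all-in on the best-looking candidate has the highest
# average payoff but much larger variance than spreading resources.

import random
import statistics

def run_trial(k):
    # Five candidate answers, none exactly "correct"; the true quality
    # of each is its estimate plus noise we cannot observe in advance.
    estimates = [0.9, 0.8, 0.7, 0.6, 0.5]
    true_quality = [e + random.gauss(0, 0.3) for e in estimates]
    # Split resources equally across the top-k candidates by estimate.
    top_k = sorted(range(5), key=lambda i: estimates[i], reverse=True)[:k]
    return sum(true_quality[i] for i in top_k) / k

all_in = [run_trial(k=1) for _ in range(10_000)]
spread = [run_trial(k=3) for _ in range(10_000)]

print(statistics.mean(all_in), statistics.stdev(all_in))  # ~0.90, ~0.30
print(statistics.mean(spread), statistics.stdev(spread))  # ~0.80, ~0.17
```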
Twenty art students are drawing the same life model. They are all thinking about the task; they will produce twenty different drawings. In what world would it be ideal for them to produce identical drawings?
Twenty animators apply for the same job at Pixar. They put a great deal of thought into their applications, and submit twenty different demo reels. In what world would it be ideal for them to produce identical demo reels?
Twenty designers compete to design the new logo for a company. In what world would it be ideal for them to come up with identical logos?
Twenty would-be startup founders come up with ideas for new products. In what world would it be ideal for them to come up with the same idea?
Twenty students take the same exam. In what world would it be ideal for them to give the same answers?
Twenty people thinking alike lynch an innocent man. Does this happen in an ideal world?
In 1 and 2, the thinking is not the type being referred to in the quote. In 3, assuming only one of theirs gets chosen, there are 19 failures, hence 19 non-thinkers or insufficient thinking. In 4, they’re not all trying to answer the same question “what’s the best way to make money”, but the question “what’s a good way to make money”. (That may also apply to 3.) I touched on the difference in another thread. In 5, yes, every test-taker should give the correct answer to every question. That’s obvious for multiple-choice tests, and even other tests usually have only one really correct answer, even if there may be more than one way to phrase it.
In 6, first of all, your example is isomorphic to its complement, in which 20 people decide not to lynch an innocent man. If you defend the original quote, then some of them must not be thinking. And the actual answer is that my quoted version is one-sided; agreement doesn’t imply idealism, idealism implies agreement.
I could add a disclaimer; everyone should be thinking alike in cases referred to by the first quote. I don’t have a good way to narrow down exactly what that is off-hand right now; it’s kind of intuitive. Do you have an example where my claim conflicts directly with what the first quote would say, and where you think it’s obvious in that scenario that they are right and not me?
You are invited by a friend to what he calls a “cool organization”. You walk into the building, and are promptly greeted by around twenty different people, all using variations on the same welcome phrase. You ask what the main point of the organization is, and several different people chime in at the same time, all answering, “Politics.” You ask what kind of politics. Every single one of them proceeds to endorse the idea that abortion is unconditionally bad. Now feeling rather creeped out, you ask them for their reasoning. Several of them give answers, but all of those answers are variations of the same argument, and the way in which they say it gives you the feeling that they are reciting this argument from memory.
Would you be inclined to stay at this “cool organization” a moment longer than you have to?
Now substitute “abortion is unconditionally bad” with “creationism should not be taught as science in public schools”.
If you would still be creeped out by that, then your creep detector is miscalibrated; that would mean nobody can have an organization dedicated to a cause without creeping you out.
If you would not be creeped out by that, then your initial reaction to the abortion example was probably being mindkilled by abortion, not being creeped out by the fact that a lot of people agreed on something.
Just because I agree with their ideas doesn’t mean I won’t find it creepy. A cult is a cult, regardless of what it promotes. If I wanted to join an anti-creationist community, I certainly wouldn’t join that one, and there are plenty of such communities that manage to get their message across without coming off as cultish.
The example is supposed to sound cultish because the people think alike. But I have a hard time seeing how a non-cultish anti-creationist group would produce different arguments against creationism.
The non-cultish group could of course not all use the same welcome phrase, but that’s not really the heart of what the example is supposed to illustrate.
There are multiple anti-creationist arguments out there, so if they all immediately jump to the same one, I’d be suspicious. But even beyond that, it’s natural for humans to disagree about stuff, because we’re not perfect Bayesians. If you see a bunch of humans agreeing completely, you should immediately think “cult”, or at the very least “these people don’t think for themselves”. (I’d be much less suspicious if we replace humans with Bayesian superintelligences, however, because those actually follow Aumann’s Agreement Theorem.)
Yes, actually, and I don’t see why it is creepy despite your repeated assertions that it is.
And if they gave completely different arguments, you’d complain about the remarkable coincidence that all these arguments suggest the same policy.
Difference of opinion, then. I would find it creepy as all hell.
I probably would, yes, but I would still prefer that world to the one in which they gave only one argument.
Now you’re just arguing from creepiness.
Just because people should reach the same conclusions does not imply they should always do the same thing; e.g. some versions of Chicken have an optimal solution in which both players have the same options but should do different things. (In a one-off with binding precommitments (or TDT done right), where the sum of outcomes when they do different things is higher than any symmetrical outcome, they should commit to choosing randomly in coordination.)
This example looks similar to me; the cool cultists don’t know how to assign turns. Even if I had several clones, we wouldn’t all be doing the same things; not because we would disagree on what was important, but because it’s unnecessary to do some things more than once.
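A minimal sketch of the Chicken point above, in Python with hypothetical payoffs (the comment names none): two identical players whose best joint plan, under a binding precommitment, is a shared coin flip that makes them do different things.

```python
# A minimal sketch (hypothetical payoffs) of Chicken-style coordination:
# identical players with identical options do better, jointly, by
# precommitting to a shared coin flip that assigns them DIFFERENT moves.

import random

# payoffs[(move1, move2)] = (payoff to player 1, payoff to player 2)
payoffs = {
    ("dare", "dare"):     (0, 0),
    ("dare", "swerve"):   (4, 1),
    ("swerve", "dare"):   (1, 4),
    ("swerve", "swerve"): (2, 2),
}

def expected_payoffs(strategy, trials=100_000):
    """Average payoffs of a joint strategy returning (move1, move2)."""
    total1 = total2 = 0.0
    for _ in range(trials):
        p1, p2 = payoffs[strategy()]
        total1 += p1
        total2 += p2
    return total1 / trials, total2 / trials

def both_swerve():
    # Thinking alike: both players pick the best symmetric move.
    return ("swerve", "swerve")

def coin_flip():
    # Coordinated asymmetry: a shared coin decides who dares this round.
    if random.random() < 0.5:
        return ("dare", "swerve")
    return ("swerve", "dare")

print(expected_payoffs(both_swerve))  # (2.0, 2.0)
print(expected_payoffs(coin_flip))    # about (2.5, 2.5): better for both
```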
Also, this organization sounds really cool! Where can I join? (Seriously, I’ve never been in a cult before and would love to have the experience.)
Seriously, I’ve never been in a cult before and would love to have the experience.
You really don’t want that.
edit: A concrete useful suggestion is to reorganize your life in such a way that you have better things to do with your time than be a tourist in other people’s misery and ruin.
Are you speaking from experience or general knowledge?
If I go in knowing it’s a cult, doesn’t that change a lot? I’d be interested in a comparison of survival rates (of general sanity) between people depending on their mindset upon joining.
If you join a cult, then even your physical survival will suddenly become a lot more perilous. You will likely have to conform, or die. Keep that in mind.
...Not that I know much about cults or their relationship to the law, but that seems kind of illegal.
The main problem with joining a cult isn’t physical danger, or even the chance of having your mind permanently changed (retention rates for cult membership are very low). It’s what they’ll get out of you while you’re in there. In most cases you can expect to see a lot of pressure to do things like handing over large sums of money, or donating large amounts of unpaid labor, or abandoning social links outside the organization, and those aren’t necessarily things you can get back once you’ve burned them.
I’d expect going in with eyes open to mitigate this to some extent, but not perfectly.
In 1 and 2, the thinking is not the type being referred to in the quote.
The quote is without a provenance that I can discover. If authentic, I presume that Patton was referring to military planning. I don’t see a line separating that type of thinking from cases (1)-(4) and some of (5). Ideas must be found or created to achieve results that are not totally ordered. Thinking better is helpful but thinking alike is not.
In 3, assuming only one of theirs gets chosen, there are 19 failures, hence 19 non-thinkers or insufficient thinking.
Only if you redefine “thinking better” to retroactively mean “won”. But that is not what the word “thinking” means.
In 4, they’re not all trying to answer the same question “what’s the best way to make money”, but the question “what’s a good way to make money”.
I doubt any of those entrepreneurs are indifferent between a given level of success and 10 times that level.
In 5, yes, every test-taker should give the correct answer to every question. That’s obvious for multiple-choice tests, and even other tests usually have only one really correct answer, even if there may be more than one way to phrase it.
Perhaps you are thinking only of a limited type of exam. There is only one correct answer to “what is 23 times 87?”[1] Not all exams are like that.
Philosophy:
Do we need a notion of innateness in order to explain how humans come to know about objects, causes, words, numbers, colours, actions or minds? (Your answer may focus on a single domain of knowledge.)
Ancient history:
“The mother of the King of Upper and Lower Egypt, follower of Horus, she who is in charge of the affairs of the Harem, whose every word is done for her, daughter of the god (begotten) of his body, Hetepheres.”
—Inscription from the tomb of Hetepheres
With reference to the quotation, discuss the power and influence of queens in this period [of ancient Egypt].
The source also provides the marking criteria for the question. The ideal result can only be described as “twenty students giving the same answer” if, as in case (3), “the same answer” is redefined to mean “anything that gets top marks”, in which case it becomes tautological.
In 6, first of all, your example is isomorphic to its complement, in which 20 people decide not to lynch an innocent man. If you defend the original quote, then some of them must not be thinking. And the actual answer is that my quoted version is one-sided; agreement doesn’t imply idealism, idealism implies agreement.
I reject both of those. Agreement doesn’t imply ideal, of course (case 6 was just a test to see if people were thinking). But neither does ideal imply agreement, except by definitional shenanigans. And your version of Patton’s quote doesn’t include the hypothesis of ideality anyway. Neither does Patton’s. We are, or should be, talking about the real world.
I could add a disclaimer; everyone should be thinking alike in cases referred to by the first quote. I don’t have a good way to narrow down exactly what that is off-hand right now; it’s kind of intuitive.
What are those cases? Military planning, I am assuming, on the basis of who Patton was. Twenty generals gather to decide how to address the present juncture of a war. All will have ideas; these ideas will not all be the same. They will bring different backgrounds of knowledge and experience to the matter. In that situation, if they all agree at once on what to do, I believe Patton’s version applies.
[1] How many people’s first thought on reading that was “aha, hexadecimal!” Just... don’t.
Humans have bounded rationality, different available data sets, and different sets of accumulated experience (which is frequently labeled as part of intuition).