we must often reject the well-argued ideas of intelligent people, sometimes more intelligent than we are, sometimes without giving them a detailed hearing, and instead stand by our intuitions, traditions and secular rules that are the stable fruit of millennia of evolution. We should not lightly reject those rules, certainly not without a clear testable understanding of why they were valid where they are known to have worked, and why they would cease to be in another context.
This seems to be the fulcrum point of your essay, the central argument that your anecdote builds up to and all of your conclusions depend on. But it is lacking in support—why should we stand by our intuitions and disregard the opinions of more intelligent people? Can you explain why this is true? Or at the very least, link to Hayek explaining it? Sure, there are obvious cases where one’s intuition can win over a more intelligent person’s arguments, such as when your intuition has been trained by years of domain-specific experience and the more intelligent person’s intuition has not, or if the intelligent person exhibits some obvious bias. But ceteris paribus, when thinking about a topic for the first time, I’d expect the more intelligent person to be at least as accurate as I am.
why should we stand by our intuitions and disregard the opinions of more intelligent people?
Because no matter how intelligent the people are, the amount of computation that went into their opinions will be orders of magnitude smaller than the amount of computation that went into our intuitions, as a result of evolutionary processes operating over centuries, millennia, and longer. So if there is a conflict, it’s far more probable that the intelligent people have made some mistake that we haven’t yet spotted.
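To make the structure of that claim concrete, here is a minimal Bayesian sketch in Python; the error rates are invented purely for illustration and are not taken from the essay or from anything above:

```python
# Toy model: an evolved intuition and a novel, clever argument answer the same
# yes/no question. Both error rates below are made-up illustrative assumptions.
p_intuition_wrong = 0.05  # assumed: evolved intuition rarely misfires in its home domain
p_argument_wrong = 0.30   # assumed: a long, novel chain of explicit reasoning often hides a flaw

# On a yes/no question, a conflict means exactly one of the two is wrong
# (if both were wrong, they would agree on the same wrong answer).
p_conflict = (p_argument_wrong * (1 - p_intuition_wrong)
              + p_intuition_wrong * (1 - p_argument_wrong))

p_argument_at_fault = p_argument_wrong * (1 - p_intuition_wrong) / p_conflict
p_intuition_at_fault = p_intuition_wrong * (1 - p_argument_wrong) / p_conflict

print(f"P(argument is the flawed one | conflict)  = {p_argument_at_fault:.2f}")   # ~0.89
print(f"P(intuition is the flawed one | conflict) = {p_intuition_at_fault:.2f}")  # ~0.11
```

The conclusion only follows to the extent that the assumed gap in error rates is real, which is exactly what the replies below dispute.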
I am reminded of a saying in programming (not sure who first said it) that goes something like this: It takes twice as much intelligence to debug a given program as to write it. Therefore, if you write the most complex program you are capable of writing, you are, by definition, not smart enough to debug it.
Because no matter how intelligent the people are, the amount of computation that went into their opinions will be orders of magnitude smaller than the amount of computation that went into our intuitions, as a result of evolutionary processes operating over centuries, millennia, and longer.
This doesn’t make sense to me. The intelligent people are still humans, and can default to their intuition just like we can if they think that using unfiltered intuition would be the most accurate. And, by virtue of being more intelligent, they presumably have better/faster System 2 (deliberate) thinking, so if the particular problem being worked on does end up favoring careful thinking, they would be more accurate. Hence, the intelligent person would be at least as good as you.
Moreover, if the claim “the amount of computation that went into their opinions will be orders of magnitude smaller than the amount of computation that went into our intuitions” actually implied that intuitions were orders of magnitude better, people would never use anything but their intuitions, because their intuitions would always be more accurate. This obviously is not how things work in practice.
I am reminded of a saying in programming (not sure who first said it) that goes something like this: It takes twice as much intelligence to debug a given program as to write it. Therefore, if you write the most complex program you are capable of writing, you are, by definition, not smart enough to debug it.
Not a good analogy, since the intelligent person would be able to write a program that is at least as good as yours, even if they aren’t able to debug yours. It doesn’t matter if the intelligent person can’t debug your program if they can write a buggy program that works better than your buggy program.
Hence, the intelligent person would be at least as good as you.
Yes, this reminds me of someone I talked to some years back, who insisted that she trusted people’s intuitions about weather more than the forecasts of the weatherman.
It was unhelpful to point out that the weatherman also has intuitions, and would report using those if they really had better results.
In this particular case, I agree with you that the weatherman is far more likely to be right than the person’s intuitions.
However, suppose the weatherman had said that since it’s going to be sunny tomorrow, it would be a good day to go out and murder people, and gave a logical argument to support that position? Should the woman still go with what the weatherman says, if she can’t find a flaw in his argument?
However, suppose the weatherman had said that since it’s going to be sunny tomorrow, it would be a good day to go out and murder people, and gave a logical argument to support that position? Should the woman still go with what the weatherman says, if she can’t find a flaw in his argument?
Well, I wouldn’t expect a weatherman to be an expert on murder, but he is an expert on weather, and due to the interdisciplinary nature of murder-weather-forecasting, I would not expect there to be many people in a better position to predict which days are good for murder.
If the woman is an expert on murder, or if she has conflicting reports from murder experts (e.g. “Only murder on dark and stormy nights”), she might have reason to doubt the weatherman’s claim about sunny days.
The intelligent people are still humans, and can default to their intuition just like we can if they think that using unfiltered intuition would be the most accurate.
But by hypothesis, we are talking about a scenario where the intelligent person is proposing something that violently clashes with an intuition that is supposed to be common to everyone. So we’re not talking about whether the intelligent person has an advantage in all situations, on average; we’re talking about whether the intelligent person has an advantage, on average, in that particular class of situations.
In other words, we’re talking about a situation where something has obviously gone wrong; the question is which is more likely to have gone wrong, the intuitions or the intelligent person. It doesn’t seem to me that your argument addresses that question.
if the claim “the amount of computation that went into their opinions will be orders of magnitude smaller than the amount of computation that went into our intuitions” actually implied that intuitions were orders of magnitude better
That’s not what it implies; or at least, that’s not what I’m arguing it implies. I’m only arguing that it implies that, if we already know that something has gone wrong, i.e. we have an obvious conflict between the intelligent person and the intuitions built up over the evolution of humans in general, then it’s more likely that the intelligent person’s arguments have some mistake in them.
Also, there seems to be a bit of confusion about how the word “intuition” is being used. I’m not using it, and I don’t think the OP was using it, just to refer to “unexamined beliefs” or something like that. I’m using it to refer specifically to beliefs like “mass murder is wrong”, which have obvious reasonable grounds.
Not a good analogy, since the intelligent person would be able to write a program that is at least as good as yours, even if they aren’t able to debug yours. It doesn’t matter if the intelligent person can’t debug your program if they can write a buggy program that works better than your buggy program.
We’re not talking about the intelligent person being able to debug “your” program; we’re talking about the intelligent person not being able to debug his own program. And if he’s smarter than you, then obviously you can’t either. Also, we’re talking about a case where there is good reason to doubt whether the intelligent person’s program “works better”—it is in conflict with some obvious intuitive principle like “mass murder is wrong”.
Yes, but OTOH the “evolutionary processes operating over centuries, millennia, and longer” took place in environments different from where we live nowadays.
I think more to the point is the question of what functions the evolutionary processes were computing. Those instincts did not evolve to provide insight into truth; they evolved to maximize reproductive fitness. Certainly these aren’t mutually exclusive goals, but to a certain extent, that difference in function is why we have cognitive biases in the first place.
Obviously that’s an oversimplification, but my point is that if we know something has gone wrong, and that there’s a conflict between an intelligent person’s conclusions and the intuitions we’ve evolved, then the high probability that the flaw is in the intelligent person’s argument depends on whether that instinct in some way produced more babies than its competitors.
This may or may not significantly decrease the probability assigned earlier to the error being on the intelligent person’s side, but I think it’s worth considering.
Can you explain why this is true? (...) But ceteris paribus, when thinking about a topic for the first time, I’d expect the more intelligent person to be at least as accurate as I am.
Intelligence as in “reasoning capability” does not necessarily lead to similar values. As such, arguments that reduce to different terminal values aren’t amenable to compromise. “At least as accurate” doesn’t apply, regardless of intelligence, if fare just states “because I prefer a slower delta of change”. This topic is an ought-debate, not an is-debate.
I’d certainly agree there is some correlation between intelligence and pursuing more “enlightened”/trimmed down (whatever that means) values, but the immediate advantage intelligence confers isn’t in setting those goals, it is in achieving them. If it turned out that the OP just likes his change in smaller increments (a la “I don’t like to constantly adapt”), there’s little that can be said against that, other than “well, I don’t mind radical course corrections”.
but the immediate advantage intelligence confers isn’t in setting those goals, it is in achieving them.
The goals that are sufficiently well defined for a lower intelligence may become undefined for a higher intelligence. Furthermore, in any accepted metric of intelligence, such as an IQ test, we do not consider a person’s tendency to procrastinate when trying to attain his stated goals to be part of ‘intelligence’. There is also more than one dimension to it: if you give a person some hallucinogenic drug, you’ll observe an outcome very distinct from a simple diminishment of intelligence.
Or, in an AI: if you rely on a self-contradictory axiomatic system whose minimum length of proof to a self-contradiction is L, the intelligences that cannot explore past L behave just fine, while those that explore past L end up being able to prove a statement and its opposite. That may be happening in humans with regard to morality. If the primal rules, or the rules of inference, are self-contradictory, that incapacitates the higher reasoning and leaves the decisions to much less intelligent subsystems, with the intelligence only able to rationalize any action. Or the decision ends up depending on which of A or ~A has the shorter proof, or on which proof invokes items that accidentally got cross-wired to some sort of feeling of rightness. Either way, the outcome looks bizarre and stupid.
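As a toy illustration of that proof-length point (the rule set here is made up and not from the comment above): a reasoner that chains at most two inferences finds the axioms perfectly usable, while one allowed a third step derives both P and its negation.

```python
# Hypothetical inconsistent "axioms": P is given, and three chained rules
# eventually derive not-P. A reasoner limited to two rounds never notices.
AXIOMS = {"P"}
RULES = [("P", "Q"), ("Q", "R"), ("R", "not_P")]  # (premise, conclusion) pairs

def derive(max_depth):
    """Forward-chain by modus ponens for at most max_depth rounds."""
    known = set(AXIOMS)
    for _ in range(max_depth):
        new = {concl for prem, concl in RULES if prem in known and concl not in known}
        if not new:
            break
        known |= new
    return known

for depth in (2, 3):
    theorems = derive(depth)
    contradiction = "P" in theorems and "not_P" in theorems
    print(f"depth {depth}: derived {sorted(theorems)}, contradiction reached: {contradiction}")
```

Once the contradiction is reachable, the system can in principle derive anything, which is the sense in which the higher reasoning is left only able to rationalize.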
Intelligence as in “reasoning capability” does not necessarily lead to similar values
Agreed. That’s why I said “ceteris paribus”—it’s clear that you shouldn’t necessarily trust someone with different terminal values to make a judgement about terminal values. I was mostly referring to factual claims.
Well, I wouldn’t expect a weatherman to be an expert on murder, but he is an expert on weather, and due to the interdisciplinary nature of murder-weather-forecasting, I would not expect there to be many people in a better position to predict which days are good for murder.
If the woman is an expert on murder, or if she has conflicting reports from murder experts (e.g. “Only murder on dark and stormy nights”), she might have reason to doubt the weatherman’s claim about sunny days.
You don’t get it. Murder is NOT an abstract variable in the previous comment. It’s a constant.
I thought I understood what I was saying, but I don’t understand what you’re saying. What?