You should still have a prior. “I don’t have enough detailed info” is not an excuse for not having a prior.
No, it’s not, but I think it’s a reasonable excuse for not having a more specific prior than ‘low and uncertain’. Being more specific in my prior would not be very useful without being more specific about what exactly the question is. I sometimes see a tendency here toward overconfidence in estimates, simply because a bunch of rather arbitrary priors have been multiplied together and produced a comfortingly precise number.
Why not just take the probability distribution of heritability coefficients for traits-in-general as your prior?
I don’t know what it is. I suspect it is not a well-established piece of information. I’m not convinced that heritability for ‘traits-in-general’ is a good basis for a prior about rationality in particular. Do you have a reference for a good estimate of this distribution?
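Here’s a start for you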
I feel that people who refuse to give a numerical prior and use protestations of ignorance (which can often be cured with a 5-second Google search) as an excuse to say “very low” are really engaging in motivated cognition, usually without realizing it.
Whenever one says “I don’t have much info so I think the probability of X is really low”, one should ask oneself:
(1) Would I apply the same argument to ~X? A “very low” prior for X implies a very high prior for ~X. Am I exploiting the framing of using X rather than ~X to engage in motivated skepticism? (A tiny numerical sketch follows this list.)
(2) Has it even crossed my mind to think about how easy it might be to find the info, and if not, am I willfully clinging to ignorance in order to avoid having to give up a cherished belief?
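To make (1) concrete, here is a trivial sketch; the 2% figure is an assumed example, not anyone’s actual estimate:

    # Complementarity: a "very low" prior on X forces a very high prior on ~X.
    p_x = 0.02           # assumed "very low" prior for X (illustrative only)
    p_not_x = 1 - p_x    # P(~X) = 1 - P(X)
    print(p_not_x)       # 0.98: would you assert 98% confidence in ~X as readily?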
I never qualified ‘low’ with ‘very’ or ‘really’. If numbers make you feel better, ‘low’ means roughly a 1-10% probability. I find it a little backwards when someone focuses so much on precisely quantifying an estimate like this before the exact question is even clarified. As a programmer, I see it a lot from non-technical managers.
I started this thread by asking for the information you were using to arrive at your (implied) high confidence in a genetic basis for rationality. There have been several recent articles about What Intelligence Tests Miss, and although I haven’t started reading it yet (it is now sitting on my Kindle), I was already thinking about whether rationality is a distinct, measurable trait separate from IQ. I haven’t seen enough evidence yet to convince me that it is, so your implication that it is, and is strongly heritable, made me wonder whether you were privy to some information that I lacked.
While assigning numerical probabilities to priors and doing some math can be useful for certain problems, I don’t think it is necessarily the best starting point when investigating complex issues like this, with actual human brains rather than ideal Bayesian reasoning agents. I’m still in data-gathering mode at the moment and don’t see a great benefit in throwing out exact priors as a badge of Bayesian honor.
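A 1-10% probability of what threshold heritability coefficient? Really, you should give a set of probabilities: P(H>0.01), P(H>0.02), P(H>0.03), …, P(H>0.98), P(H>0.99). A reasonable distribution might be some quasi-linear function?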
Given that memory, verbal intelligence, spatial reasoning and general intelligence all have values of H of around 0.4, it seems that P(H>0.3) ≈ 70%.
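A minimal sketch of what such a set of tail probabilities could look like, assuming, purely for illustration, a Beta(2,3) prior over H. The shape is my placeholder, not a figure from the heritability literature, though it happens to put P(H>0.3) in the ballpark of the estimate above:

    from scipy.stats import beta

    # Purely illustrative prior over the heritability coefficient H.
    # Beta(2, 3) has mean 0.4 and support [0, 1]; the shape is an assumption.
    prior = beta(2, 3)

    # Tail probabilities P(H > t) for a grid of thresholds.
    for t in [0.01, 0.1, 0.3, 0.5, 0.7, 0.9]:
        print(f"P(H > {t:.2f}) = {prior.sf(t):.2f}")  # sf(t) = 1 - cdf(t)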
This is what I meant about quantifying the probability estimate before clarifying the exact question. As I said originally, I’m skeptical of a strong heritability for rationality independent of IQ, though I’m not sure of the correct statistical terminology for this kind of question. I think there is a low probability that a targeted genetic modification could increase rationality independently of IQ in a significant and measurable way, and that belief doesn’t map in a straightforward way onto a claim about the heritability of rationality. I’m expecting What Intelligence Tests Miss to help clarify my thinking about what kind of test could even reliably separate a ‘rationality’ cognitive trait from IQ, which would be a necessary precondition for measuring the heritability of rationality.
I believe these all correlate significantly with IQ, however (correct me if you think I’m wrong on this). It’s at least plausible that targeted genetic modifications could improve, say, spatial or verbal reasoning significantly more than IQ (perhaps by lowering scores in other areas), since there is some evidence of sex differences in these traits. Rationality, however, seems more like a way of reasoning, a higher-level trait than these ‘specialized’ forms of intelligence.
Rationality, however, seems more like a way of reasoning, a higher-level trait than these ‘specialized’ forms of intelligence.
Maybe. Actually, I think the dominant theory around here is that rationality is the result of an atrophied motivated-cognition module, so perfect rationality is not a question of creating a new brain module but of subtracting off the distorting mechanisms we are blighted with.
I realize that “brain module” != “distinct patch of cortex real estate”, but have there been any cases of brain damage that have increased a person’s rationality in some areas? I am aware that depression and certain autism spectrum traits have this property, but I’m curious if physical trauma has done anything similar.
I don’t know, but without a standardized test for rationality (like there is for IQ), how would we even notice?
Googling for “can brain injury cause autism” leads to conflicting info:
“This is a question which arises again and again, it’s other form is ‘Can brain injury cause autism?’ Of course the answer is most definitely, yes!”
“Blows to the head, lack of oxygen, and other physical trauma can certainly cause brain damage. Brain damaged children may have behaviors similar to those of autistic children. But physical injury cannot cause accurately diagnosed autism. Certainly a few non-traumatic falls in infancy are NOT the cause of autism in a toddler.”
To test this, you’d need to somehow identify a group of patients who were going to receive some kind of very specific brain surgery, and give them pre- and post-surgery rationality tests.
At this point I was mostly wondering if there were any motivating anecdotes such as Phineas Gage or gourmand syndrome, except with a noticeable personality change towards rationality. Someone changing his political orientation, becoming less superstitious, or gambling less as a result of an injury could be useful (and, as a caveat, all could be caused by damage that has nothing to do with rationality).
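This paper on schizophrenia is interesting.
...because of the following insights ____.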
And even then, you would only expect 1 in 50 or so kinds of brain surgery to remove the part that causes (say) motivated cognition, and only 1 in 5 or so of those to do little enough damage that you could actually detect the positive effect; multiplied together, that’s roughly 1 in 250.
Better: use high-precision real-time brain imaging to image somebody’s brain while motivated cognition is happening, then use high-precision TMS to switch just that part off.
You can apply the laws of probability to intuitive notions of plausibility as well (and some informal arguments won’t be valid if they violate those laws, e.g. treating both X and ~X as unexpected). Specific numbers don’t have to be thought up to do that.