What is a “real rationality test”? A successful startup? Hunger Games? Whoever dies with the most toys wins?
As far as I understand, there are currently people associated with CFAR who are trying to build a rationality test. Maybe the test will successfully measure a new thing that’s distinct from IQ. If it does, it can be useful to call that thing rationality.
We had that debate a while ago, but to repeat it: I might use the word “real” in a sense that you aren’t used to. Here I’m a constructivist. If you can construct a new rationality test that measures something new, something that is distinct from IQ/g when you run PCA, then you are free to call that new variable “real rationality”.
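To make that concrete, here is a minimal sketch of such a PCA check on synthetic data. Everything in it (test names, factor structure, loadings) is a made-up illustration, not anyone’s actual methodology:

```python
# A minimal sketch of the PCA check described above, on synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
g = rng.normal(size=(n, 1))   # latent general intelligence factor
r = rng.normal(size=(n, 1))   # hypothetical latent "rationality" factor

# Rows = subjects; first three columns are IQ-style subtests loading on g,
# the last column is the candidate rationality test loading on r.
scores = np.hstack([
    g + 0.3 * rng.normal(size=(n, 1)),
    g + 0.3 * rng.normal(size=(n, 1)),
    g + 0.3 * rng.normal(size=(n, 1)),
    r + 0.3 * rng.normal(size=(n, 1)),
])

pca = PCA(n_components=2)
pca.fit(StandardScaler().fit_transform(scores))

# If the candidate test really measures something distinct from g, it should
# dominate a second component instead of folding into the first, g-like one.
print(pca.explained_variance_ratio_)
print(pca.components_)   # rows = components, columns = tests
```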
There are a lot of tests which output something distinct from g. The real (ahem) question is what the measured variable means, or, more specifically, what directly observable characteristics it correlates with.
For IQ there are a lot of studies showing how the number which comes out of a test correlates with a variety of interesting things (like wealth, health, educational achievements, etc.).
Let’s say you designed a test and it outputs a number. What does this number have to correlate with in order for you to claim it’s a measure of rationality?
Emotional intelligence would be one example: it turns out that having a notion of emotional intelligence is useful. If you take Gardner’s model of seven intelligences, by contrast, those seven values turn out not to produce distinct factors under PCA.
The name of the game is finding a new value that’s robust when you change the test around and that’s orthogonal to established psychometric measurements.
That new value might be something that holds up across various different tests of immunity to mental bias.
Ideally you find something that isn’t simply “a·EQ + b·IQ” but is really orthogonal. Then you can study whether your new measurement is useful for predicting things like wealth or educational achievement, and whether a linear model that has information about both IQ and your new rationality measurement predicts educational achievement better than a linear model with IQ alone. At that point you see what the measurement can really do in reality and whether it’s useful.
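To make the model-comparison step concrete, here is a hedged sketch with synthetic data; the coefficients, sample size, and the assumption that achievement depends on both scores are all illustrative:

```python
# A sketch of the incremental-validity comparison described above:
# does adding the new rationality score to IQ improve prediction of
# educational achievement? All data here is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 1000
iq = rng.normal(100, 15, size=n)
rationality = rng.normal(0, 1, size=n)   # hypothetical new score
achievement = 0.05 * iq + 0.5 * rationality + rng.normal(0, 1, size=n)

iq_only = LinearRegression().fit(iq.reshape(-1, 1), achievement)
both = LinearRegression().fit(np.column_stack([iq, rationality]), achievement)

# If R^2 barely moves when the rationality score is added, the new number
# carries no predictive information beyond IQ; if it jumps, it earns its keep.
print("IQ only:          R^2 =", iq_only.score(iq.reshape(-1, 1), achievement))
print("IQ + rationality: R^2 =", both.score(np.column_stack([iq, rationality]), achievement))
```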
I don’t think that you can say beforehand what success will look like. It’s a lot about trying something and seeing whether we get a new number that’s useful and that bears some relationship to what we call rationality.
I’m pretty sure an analysis found that EQ was fully explained by “a·IQ + b·Openness”.
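(For concreteness, checking a claim like that could look like the sketch below: regress EQ on IQ and Openness and see how much variance is left over. The data and coefficients are entirely made up.)

```python
# Sketch: is EQ "fully explained" as a*IQ + b*Openness? Fit the linear
# model and look at R^2; a value near 1 would support the claim.
# Synthetic, made-up data throughout.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 800
iq = rng.normal(size=n)
openness = rng.normal(size=n)
eq = 0.4 * iq + 0.6 * openness + 0.3 * rng.normal(size=n)  # toy structure

X = np.column_stack([iq, openness])
model = LinearRegression().fit(X, eq)
print("fitted (a, b):", model.coef_)
print("R^2:", model.score(X, eq))   # 1 - R^2 is the unexplained share
```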
Could you link to such an analysis? It would surprise me.
Didn’t have a particular source in mind, was going off memory.
Looks like there’s some debate over whether it has predictive power, but the consensus is that EQ is a collection of mostly unrelated traits, and is heavily entangled with the Big Five, particularly neuroticism and openness, in an overlapping way. (My memory overstated the case somewhat.) This looks like a relatively representative study, and here are the abstract and docx of a study which concluded that EQ had no meaningful predictive power.
As far as the first study goes, I don’t see why we should control for income and marital status. If EQ increases income in a way that increases life satisfaction, then EQ is a highly useful construct.
That said, there are political problems with treating openness as a variable to be maximized: openness correlates with voting left in US elections. Teaching people to increase their emotional-management abilities might be politically easier to communicate.
A lot of personality tests are also easy to game if a person wants to score highly. The notion of intelligence supposes that scoring high on the test requires actual skill.
That would be the second PCA component :-D
I am not asking what success will look like. I am asking what metrics you will be using to decide if something is successful or not.
I don’t think it’s useful to decide beforehand on which metric to use when doing exploratory research.
‘Cheshire Puss,’ she began, rather timidly, as she did not at all know whether it would like the name: however, it only grinned a little wider. ‘Come, it’s pleased so far,’ thought Alice, and she went on. ‘Would you tell me, please, which way I ought to go from here?’
‘That depends a good deal on where you want to get to,’ said the Cat.
‘I don’t much care where—’ said Alice.
‘Then it doesn’t matter which way you go,’ said the Cat.
‘—so long as I get SOMEWHERE,’ Alice added as an explanation.
‘Oh, you’re sure to do that,’ said the Cat, ‘if you only walk long enough.’
You confuse metrics to decide where to look with metrics to decide whether you found something. Those two aren’t the same thing.
You decide where to look based on what you know in the present, but you decide whether you found something based on information that you find in the future.