Want some brutally truthful tests designed to see how competent you are?
Take the SAT, which measures math and verbal ability. Find a few psychology tests that try to measure memorization ability (i.e., how quickly and how well you can memorize a topic).
Why? Because real-world success in intellectual endeavors is largely a combination of fluid intelligence, memorization ability, and work ethic. The rest is due to combinations of other factors.
Besides the SAT, you can take the LSAT and study for the few parts of it that require specific knowledge. The test is the single largest predictive factor of success in law school and in blind tests of competence.
There are a few other tests you can take. I recommend the AMC, which requires studying some topics outside of the regular school curriculum, but not too much.
Or you could play StarCraft for three months and see how high you end up ranking.
Beyond these, I don’t know of many good tests to see how competent you are.
Are you joking? StarCraft isn’t even a well-designed game; it has all kinds of crazy barriers to entry and elements that exist explicitly to make it unnecessarily difficult to pick up. Besides, it’s easy for even a bad gamer (see: me) to achieve a high-looking ranking (top 25 Diamond) in StarCraft II thanks to its nonintuitive rating and placement system, and true rankings are only maintained for the top 200 players in each region.
Yeah, I’ve done many of those. I took the SAT when I was 12. I’ve taken a few probably-inaccurate online IQ tests. I’ve done a few cognitive testing suites at SIAI. I’m in pretty good shape. In general though, there are better frameworks for cognitive testing. It’s probable that one could make a neat suite out of PEBL, which is free and very customizable. Fluid g seems over-emphasized. The limiting factor for most rationalists tends to be strong metadispositions for thought, reflection, and drive.
They have those?
They’re ad hoc; we used one for a dual n-back study, which ended up yielding insufficient data.
Any chance you could write up that study? I don’t believe I have seen any SIAI-related DNB study; certainly it’s not in my FAQ.
(Remember kids: only you can fight publication bias!)
We didn’t study long enough to get any statistically significant data. Like, not even close. And I think sending off the data (even without names attached) would sorta breach an implicit privacy agreement among those who took part in the tests.
Jaeggi 2008 didn’t necessarily train for very long either; some groups trained for around a week or less.
(I wouldn’t be asking this question, by the way, if you had written more concretely and said something like ‘We only studied for 2 days, not long enough to get any statistically significant data’.)
Hmm, most people would be OK with that sort of data being sent out in anonymized form. I’m surprised that you didn’t suggest that beforehand. Is there any chance you can contact the people in question and get their permission to release the anonymized data?
We could, but really, there’s no information there, no matter how much Bayes magic you use. It’s noise. If the data was at all significant then we’d send it out, of course. We might actually have gotten anonymized disclosure agreement from everyone; I don’t remember. But it didn’t end up mattering.
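To give a rough sense of scale (these numbers are illustrative assumptions, not figures from the actual study), here is a quick simulation of why a short study with a handful of participants mostly reads as noise: even a genuine training gain usually fails to reach significance at that sample size.

```python
import numpy as np

rng = np.random.default_rng(1)

def fraction_significant(n, true_effect, reps=5000, t_crit=2.262):
    """Share of simulated studies whose one-sample t-test on
    per-participant score gains clears |t| > t_crit
    (the two-sided p < .05 cutoff for df = n - 1 = 9)."""
    gains = true_effect + rng.normal(size=(reps, n))  # gains in SD units
    t = gains.mean(axis=1) / (gains.std(axis=1, ddof=1) / np.sqrt(n))
    return float((np.abs(t) > t_crit).mean())

# Hypothetical numbers: 10 participants, true training gain of 0.3 SD.
power = fraction_significant(n=10, true_effect=0.3)
print(power)  # well under half: the study usually can't distinguish the gain from noise
```

Even when the effect is real, most runs of such a study come back non-significant, which is the sense in which the collected data carries essentially no information.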
Shouldn’t doing something successfully in the real world be in there somewhere?
and wealth.
Read “The Bell Curve”.
Basically, smart parents were more likely to go to higher-ranking schools and move themselves up in the social hierarchy.
Smart people tend to have smart kids. Dumb people tend to have dumb kids. Hence, the scores.
For the race aspect of this, you can find stats showing that poor East Asian kids do better than rich white kids, at least on the math portion.
Yeah, but that only matters from a self-assessment standpoint if the causal graph is wealth --> score <-- ability, whereas for an uncoached entrant it’s almost purely wealth --> ability --> score.
Fair enough.
And coaching can’t make up a large part of the score difference, either. There’s more than 100 points discrepancy on Critical Reading or Math alone between the lowest and highest income groups, whereas coaching only creates improvements of 30 points in Reading and Math combined.
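The causal-graph distinction above can be illustrated with a toy simulation (all effect sizes are made-up assumptions): under wealth --> score <-- ability, two test-takers with the same score but different wealth have different expected ability, so the score is a distorted self-assessment signal; under wealth --> ability --> score, the score tracks ability roughly the same way at every wealth level.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
wealth = rng.normal(size=n)
noise = rng.normal(scale=0.3, size=n)  # test measurement noise

# Graph A: wealth --> score <-- ability (wealth boosts score directly, e.g. coaching)
ability_a = rng.normal(size=n)
score_a = ability_a + 0.5 * wealth + noise

# Graph B: wealth --> ability --> score (wealth acts only through ability)
ability_b = 0.5 * wealth + rng.normal(scale=0.87, size=n)
score_b = ability_b + noise

def rich_poor_ability_gap(score, ability):
    """Mean ability of rich minus poor test-takers, among those
    scoring in a narrow band around one SD above the mean."""
    band = np.abs(score - 1.0) < 0.1
    rich = wealth > 0
    return ability[band & rich].mean() - ability[band & ~rich].mean()

gap_a = rich_poor_ability_gap(score_a, ability_a)
gap_b = rich_poor_ability_gap(score_b, ability_b)
print(gap_a)  # clearly negative: rich high scorers are, on average, less able
print(gap_b)  # small: the score means nearly the same thing at every wealth level
```

In graph A the score systematically overstates the ability of wealthy test-takers, so it is a biased self-assessment tool for them; in graph B, wealth and score are still correlated, but the score remains an almost equally faithful readout of ability for everyone.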