Thanks for writing that.
Three thoughts that come to mind:
I feel like a more accurate claim would be something like “beyond a certain IQ, we don’t know what makes a good alignment researcher”, which I think is a substantially weaker claim than the one underlying your post. I also think that the fact that the probability of being a good alignment researcher increases with IQ is relevant if true (and I think it’s very likely to be true, as it is for most sciences, where Nobel laureates are usually outliers along that axis).
I would also expect predictors from other research fields to roughly apply (e.g. conscientiousness).
In this post you don’t cover what seems to me the most important reason why advice of the form “It seems like, given features X and Y, you’re more likely to be able to fruitfully contribute to Z” (which seems adjacent to the claims you’re criticizing) is sometimes given: someone’s opportunity cost.
How do you (presume to) know people’s IQ scores?
If I try to estimate someone’s IQ (which I don’t normally do, except for the message above, where I tried to pin down a specific number to make my claim precise), I feel like I can produce an ordering I’m not too uncertain about, on a scale that includes me, some common reference classes (e.g. the median student of school X has IQ Y), and a few people around me who have taken IQ tests. By the way, if you think my claim is wrong, I’d be happy to bet on anyone who agreed to reveal their IQ (e.g. from the list of SERI MATS’s mentors).
Also, I think it’s fine to have a lower chance of being an excellent alignment researcher for that reason. What matters is having impact, not being an excellent alignment researcher. E.g. I’m not going all-in on a technical career myself, essentially for that reason, combined with the fact that I have other traits that might let me go further into the impact tail in other relevant subareas.
Thanks for clarifying that.
I’m not very familiar with IQ scores and testing, but it seems reasonable that you could get rough estimates that way.
Good point, there are lots of ways to contribute to reducing AI risk besides just doing technical alignment research.