If I try to think about someone’s IQ (which I don’t normally do, except for the sake of the message above, where I tried to think of a specific number to make my claim precise), I feel like I can produce an ordering I’m not too uncertain about, on a scale that includes me, some common reference classes (e.g. the median student of school X has IQ Y), and a few people around me who have taken IQ tests. By the way, I’d be happy to bet on anyone who agreed to reveal their IQ (e.g. from the list of SERI MATS’s mentors) if you think my claim is wrong.
Also, I think it’s fine to have a lower chance of being an excellent alignment researcher for that reason. What matters is having impact, not being an excellent alignment researcher. E.g. I’m not going all-in on a technical career myself, essentially for that reason, combined with the fact that I have other traits that might let me go further into the impact tail in other relevant subareas.
If I try to think about someone’s IQ (which I don’t normally do, except for the sake of the message above, where I tried to think of a specific number to make my claim precise)
Thanks for clarifying that.
I feel like I can produce an ordering I’m not too uncertain about, on a scale that includes me, some common reference classes (e.g. the median student of school X has IQ Y), and a few people around me who have taken IQ tests.
I’m not very familiar with IQ scores and testing, but it seems reasonable that you could get rough estimates that way.
Also, I think it’s fine to have a lower chance of being an excellent alignment researcher for that reason. What matters is having impact, not being an excellent alignment researcher. E.g. I’m not going all-in on a technical career myself, essentially for that reason, combined with the fact that I have other traits that might let me go further into the impact tail in other relevant subareas.
Good point; there are lots of ways to contribute to reducing AI risk besides doing technical alignment research.