My answers to the IQ questions could seem inconsistent: I’d pay a lot to get a higher IQ, yet if it turned out LLM usage decreased my IQ, that would be a much smaller concern. The reason is that I expect AI Alignment work to be largely gated by high IQ, such that a higher IQ might allow me to contribute much more, while a lower IQ might just transform my contribution from negligible to very negligible.
Noted. I’m already expecting the marginal value of IQ to be weird, since IQ isn’t a linear scale in the first place.
I admit I’m testing a chain of conjectures with those questions and will probably only get weak evidence for my actual question. The feedback is really appreciated!