AFAIK, most don’t prepare at all since there isn’t much at stake.
Very few companies hire based on high IQ. When they do, it’s usually because the problems the employee will have to deal with are highly mathematical and/or logical in nature, such that a person with a low (real) IQ would do very poorly at them; and in any case they still require candidates to have specific skills, which matter more than the IQ. Even when such companies do take IQ into consideration, they usually do so not by requiring an official score, but by putting candidates through aptitude tests and puzzles and checking how they performed. Very few go for a fully certified score, and those that do tend to have requirements broad enough that they may well also demand a full personality evaluation, meaning a full Big Five assessment.
On the flip side, there are jobs with a maximum IQ score requirement that won’t hire people above it, the reasoning being that anyone with a higher IQ would get utterly bored at that job and leave at the first opportunity, wasting the company’s time and training investment. So they administer a test, and if you score too well on it you’re turned away.
Hence, if one were to try gaming the score, one would either end up in a job with such extreme mathematical and logical thinking requirements as to be constantly mentally exhausted, eventually leaving for being unable to cope with spending so much mental energy (and this is measurable: brain scans suggest the brains of high-IQ individuals expend comparatively little energy on complex tasks that, in average-IQ individuals, trigger long, constant, intense activity). Or, at the other extreme, one would end up in a job with requirements so far below one’s abilities that it would feel miserable until one did in fact jump ship for something more stimulating.
Now, one important thing to keep in mind is that IQ scores aren’t absolute values; they’re relative values based on how a population answers tests, and by construction they follow a Gaussian distribution.
If a test has 100 questions, and 50% of those taking it get fewer than 60 questions right while the other 50% get more than 60 right, then IQ 100 is defined as “getting 60 questions right”. If, 20 years later, 50% of those taking the same test get fewer than 70 questions right while the other 50% get more than 70 right, then IQ 100 is redefined as “getting 70 questions right”. Hence, IQ 100 always tracks the population median (which, for a Gaussian, coincides with the mean).
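As a minimal sketch of that renorming, assuming median-based norming (all the score samples and numbers here are made up for illustration):

```python
import numpy as np

def raw_score_for_iq100(raw_scores):
    """The raw score mapping to IQ 100 is the sample median: half the
    norming population scores below it, half above."""
    return float(np.median(raw_scores))

# Hypothetical norming samples for the same 100-question test, 20 years apart:
rng = np.random.default_rng(0)
scores_then = rng.normal(60, 10, 10_000)  # population once centered near 60 right
scores_now = rng.normal(70, 10, 10_000)   # the same test now centers near 70 right

print(raw_score_for_iq100(scores_then))  # ~60 correct answers == IQ 100 back then
print(raw_score_for_iq100(scores_now))   # ~70 correct answers == IQ 100 after renorming
```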
Then, for numbers above and below 100, every ‘n’ points (usually 15) are defined as “one standard deviation”. Since the distribution is Gaussian, this means that IQ 85 (1 standard deviation below the mean) is defined as the score that 84.1% of respondents meet or exceed; IQ 100 (the mean) as the score marking the aforementioned 50% split; IQ 115 (1 standard deviation above the mean) as the score only the top 15.9% of respondents reach; IQ 130 (2 standard deviations above the mean) as the score only the top 2.3% reach; IQ 145 (3 standard deviations) as the score only the top 0.1% reach; and so on and so forth, in both directions.
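Those percentages fall straight out of the normal survival function; a quick sanity check, assuming the usual mean-100, SD-15 convention:

```python
from scipy.stats import norm

MEAN, SD = 100, 15

def top_fraction(iq):
    """Fraction of the population scoring at or above the given IQ."""
    return norm.sf((iq - MEAN) / SD)  # survival function, i.e. 1 - CDF

for iq in (85, 100, 115, 130, 145):
    print(f"IQ {iq}: top {top_fraction(iq):.1%} of respondents")
# IQ 85: top 84.1%
# IQ 100: top 50.0%
# IQ 115: top 15.9%
# IQ 130: top 2.3%
# IQ 145: top 0.1%
```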
This means that, if people began gaming the score, the shape of the curve would change into a distorted Gaussian, introducing a perceptible skew that could be quantified by standard statistical procedures, which in turn would prompt a renormalization of the test so that it tracked averages and standard deviations correctly once again, rendering any such effort a one-time stunt.
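As a toy illustration of how such gaming would surface statistically (the sample size, cheating rate, and score boost are all invented for the example):

```python
import numpy as np
from scipy.stats import skewtest

rng = np.random.default_rng(42)
honest = rng.normal(100, 15, 50_000)  # a well-normed population

# Suppose 5% of test-takers game their way up by roughly 20 points:
gamed = honest.copy()
cheaters = rng.choice(gamed.size, size=gamed.size // 20, replace=False)
gamed[cheaters] += 20

for label, sample in (("honest", honest), ("gamed", gamed)):
    stat, p = skewtest(sample)
    print(f"{label}: skew statistic {stat:+.1f}, p = {p:.3g}")
# The gamed sample yields a large positive statistic with a vanishing
# p-value, exactly the kind of signal that would trigger renorming.
```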
The shape is perceptibly different from a Gaussian (at least in the distributions I found by googling “empirical distribution of IQ” and similar keywords). This is not surprising, because almost nothing in nature is an ideal Gaussian.
Not really. The IQ distribution is currently defined as a Gaussian, so if tests are constructed correctly and the proper transformation is applied, then the shape of the curve, for a large enough population, will literally be Gaussian “by definition”. Check this answer on Stack Exchange for details and references:
Wood, “Why are IQ test results normally distributed?”, URL (version: 2019-12-23).
Now, evidently, for smaller sub-samples of the population the shape will vary.
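The “proper transformation” amounts to a rank-based inverse normal transform of the raw scores; here is a sketch under that assumption (the function name is mine, not from the linked answer):

```python
import numpy as np
from scipy.stats import norm, rankdata

def raw_to_iq(raw_scores, mean=100.0, sd=15.0):
    """Map raw test scores to IQ via their percentile ranks, so the
    resulting scores are Gaussian by construction."""
    n = len(raw_scores)
    pct = rankdata(raw_scores) / (n + 1)  # percentile ranks, strictly inside (0, 1)
    return mean + sd * norm.ppf(pct)

# Even a deliberately non-Gaussian raw-score distribution comes out normal:
raw = np.random.default_rng(0).exponential(30, 100_000)
iq = raw_to_iq(raw)
print(round(iq.mean(), 1), round(iq.std(), 1))  # ~100.0 and ~15.0
```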
It’s designed to be a normal distribution, but actual implementations don’t work out exactly that way. For starters, the empirical distribution is skewed toward the low end (a fat left tail), because brain damage is a thing and brain augmentation isn’t (yet).