How many kittens would you eat to gain 1 point of IQ?
I should eat them for free, since I already pay money to eat pigs.
Assuming I don’t have to kill and clean them myself, and that I am not emotionally attached to any of the animals in question:
If the value is not cumulative, the answer is likely zero, because of the social penalties of being known to eat animals categorized as “pets”, “cute”, and “babies”. Beyond that, contingent on being able to do so without public knowledge, and depending on the animals’ age, likely at most 200 or so; that figure assumes young animals and that I eat only the muscle and little else until finished (id est, until the point at which the utility of a varied diet exceeds that of a point of IQ).
If the value is cumulative, with an expected gain of around one point a year, then roughly an average of two pounds of food per day, however many individual animals that works out to be; id est, up to the point at which the utility of not gaining excess weight exceeds that of gaining IQ, a threshold which may vary with time.
I suspect this comment will go a long way toward convincing others of the accuracy of the first word of my user name...
In this crowd? I don’t see why.
Voluptatis avidus, Magis quam salutis; Mortuus in anima, Curam gero cutis. (“Greedy for pleasure more than for salvation; dead in soul, I take care of my flesh.”)
Oh, I do value virtue, to be sure; but I have gradually convinced myself to internalize the value of a moral calculus, and I accept that my judgments may not align with most people’s instinctive emotional reactions.
Given that 1 point of IQ is 1/15th of a standard deviation, a “point” of IQ isn’t necessarily a consistent metric for cognitive function—depending on the shape of the actual population distribution used to norm the test, the actual performance delta between 125 and 130 may be VASTLY divergent from the performance delta between 145 and 150.
I think we need a different shorthand word for “quantified boost in cognitive performance” than “points of IQ”. Does anyone have any ideas?
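As a minimal illustration of the point about rank-based scoring (a sketch, assuming the standard norming of mean 100 and SD 15; the numbers are illustrative, not from the thread), equal five-point steps correspond to very different changes in population rarity:

```python
# Sketch: IQ points index position on a normal curve (mean 100, SD 15),
# so the same 5-point gap means very different changes in rarity
# depending on where on the curve you start.
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

for lo, hi in [(125, 130), (145, 150)]:
    above_lo = 1 - iq.cdf(lo)  # fraction of the population above lo
    above_hi = 1 - iq.cdf(hi)  # fraction of the population above hi
    print(f"{lo} -> {hi}: top {above_lo:.3%} -> top {above_hi:.3%} "
          f"(~{above_lo / above_hi:.1f}x rarer)")
```

The rarity ratio for the same five-point step grows toward the tail; what that implies about the underlying performance delta is exactly the open question.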
This implies you have another metric for cognitive function which an IQ point does not match. What is that other metric?
It implies no such thing; hence my asking for ideas rather than presenting them. The only thing we know for certain is that, due to how IQ tests are measured and calibrated, there is no particular reason why they SHOULD represent an actual, consistent metric—they merely note where on the bell curve of values you are, not what actual value that point on the bell curve represents. (At core, of course, it simply represents “number of questions on a particular IQ test that you got right”, and everyone agrees that that metric is measuring SOMETHING about intelligence, but it would be nice to have a more formal metric for “smartness” that actually has real-world consequences.)
ETA: I certainly have an intuitive idea for what “smartness” would mean as an actual quantifiable thing, which seems to have something to do with pattern-recognition / signal-extraction performance across a wide range of noisy media. This makes some sense to me, since IQ tests—especially the ones that attempt to avoid linguistic bias—typically involve pattern-matching and similar signal extraction/prediction tasks. So intuitively, I think intelligence will have units of Entropy per [Kolmogorov complexity x time], and any unit which measures “one average 100 IQ human” worth of Smartness will have some ungodly constant-of-conversion comparable to Avogadro’s number.
NOTE 2: Like I said, this is an intuitive sense, which I have not done ANY formal processing on.
Well, you need some framework. You said that IQ points are not “necessarily a consistent metric for cognitive function”. First, what is “cognitive function” and how do you want to measure it? If you have no alternate metrics then how do you know IQ points are inconsistent and what do you compare them to?
The usual answer is that it is measuring the g factor, the unobserved general-intelligence capability. It was originally formulated as the first principal component of the results of a variety of IQ tests. It is quantifiable (by IQ points) and it does have real-world consequences.
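For readers unfamiliar with that construction, here is a toy sketch (entirely synthetic data, not a real test battery) of extracting a g-like factor as the first principal component of a set of correlated test scores:

```python
# Toy sketch: recover a latent "general ability" as the first principal
# component of four synthetic, positively correlated test scores.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
g = rng.normal(size=n)                         # latent general ability
loadings = np.array([0.8, 0.7, 0.6, 0.5])      # how strongly each test taps g
scores = g[:, None] * loadings + 0.5 * rng.normal(size=(n, 4))

z = (scores - scores.mean(0)) / scores.std(0)  # standardize each test
eigvals, eigvecs = np.linalg.eigh(np.cov(z.T))
pc1 = z @ eigvecs[:, -1]                       # component with largest variance

print(abs(np.corrcoef(pc1, g)[0, 1]))          # PC1 recovers g (up to sign)
```

Note that this recovers a scalar summary of the correlations between tests; as the replies below point out, it says nothing about the mechanism producing them.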
Saying that IQ measures g is like saying that flow through a mountain creek measures snowmelt. More of one generally means more of the other, but there’s a bunch of fiddly little details (maybe someone’s airlifting water onto a forest fire upstream, or filling their swimming pool) that add up to a substantial deviation—and there are still a lot of unanswered questions about the way they relate to each other.
In any case, g is more a statement about the correlations between domain skills than the causes of intelligence or the shape of the ability curve. The existence of a g factor tells you that you can probably teach music more easily to someone who’s good at math, but it doesn’t tell you what to look for in a CT scan, or whether working memory, say, will scale linearly or geometrically or in some other way with IQ; those are separate questions.
g is an unobserved value, a scalar. It cannot say anything about “causes of intelligence” or shapes of curves. It doesn’t aim to.
g was observed as a correlation between test scores. That is by definition a scalar value, but we don’t know exactly how the underlying mechanism works or how it can be modeled; we just know that it’s not very domain-specific. It’s the underlying mechanism, not the correlation value, that I was referring to in the grandparent, and I’m pretty sure it’s what ialdabaoth is referring to as well.
To be more precise, the existence of g was derived from observing the correlation of test scores.
Moreover, g itself is not the correlation, it is the unobservable underlying factor which we assume to cause the correlation.
It is still a scalar-valued characteristic of a person, not a mechanism.
Absolutely, but +n g doesn’t necessarily mean +m IQ for all (n,m).

I don’t understand what that means.
Here’s a place where my intuition’s going to struggle to formulate good words for this.
An intelligent system receives information (which has fundamental units of Entropy) and outputs a behavior. A “proper” quantitative measure of intelligence should be a simple function of how much Utility it can expect from its chosen behavior, on average, given an input with n bits of Entropy, and t seconds to crunch on those bits. Whether “Utility” is measured in units similar to Kolmogorov complexity is questionable, but that’s what my naive intuition yanked out when grasping for units.
But the point is, whatever we actually choose to measure g in, the term “+1 g” should make sense, and should mean the same thing regardless of what our current g is. IQ, being merely a statistical fit onto a gaussian distribution, does NOT do that.
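One way to read the proposal above as a formula (a hedged sketch; the notation is mine, not the commenter’s):

$$\mathrm{Int}(n, t) = \mathbb{E}_{x \sim X_n}\big[\, U\big(a(x, t)\big) \,\big]$$

where $X_n$ ranges over inputs carrying $n$ bits of Shannon entropy, $a(x, t)$ is the behavior the system outputs given input $x$ and $t$ seconds to compute, and $U$ scores that behavior. For comparison, Legg and Hutter’s universal intelligence measure, $\Upsilon(\pi) = \sum_{\mu} 2^{-K(\mu)} V^{\pi}_{\mu}$, is an existing definition in which Kolmogorov complexity $K$ enters as a weighting over environments rather than as a unit of utility.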
but +n g doesn’t necessarily mean +m IQ for all (n,m)

This phrase implies that you have a metric for g (different from IQ points), because without it the expression “+n g” has no meaning.
An intelligent system receives information (which has fundamental units of Entropy) and outputs a behavior.

Okay. To be precise, we are talking about Shannon entropy, and these units are bits.
A “proper” quantitative measure of intelligence should be a simple function of how much Utility it can expect from its chosen behavior

Hold on. What is this Utility thing? I don’t see how it fits in the context in which we are talking. You are now introducing things like goals and values. Kolmogorov complexity is a measure of complexity; what does it have to do with utility?
the term “+1 g” should make sense, and should mean the same thing regardless of what our current g is

I don’t see this as obvious. Why?
IQ, being merely a statistical fit onto a gaussian distribution

Not so. IQ is a metric, presumably of g, that is rescaled so that the average IQ is 100. Rescaling isn’t a particularly distorting operation. It is not fit onto a gaussian distribution.
I’m afraid you’re mistaken here. IQ scores are generally derived from a set of raw test scores by fitting them to a normal distribution with mean 100 and SD of 15 (sometimes 16): IQ 70 is thus defined as a score two standard deviations below the mean. It’s not a linear rescaling, unless the question pool just happens to give you a normal distribution of raw scores.
Hm. A quick look around finds this, which says that raw scores are standardized by forcing them to a mean of 100 and a standard deviation of 15.
This is a linear transformation and it does not fit anything to a gaussian distribution.
Of course this is just stackexchange—do you happen to have links to how “proper” IQ tests are supposed to convert raw scores into IQ points?
If the difficulty of the questions can’t be properly quantified, what exactly do the raw scores tell you?
See the first sentence of the penultimate paragraph of this.
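To make the disagreement above concrete, here is a minimal sketch (deliberately skewed synthetic scores; all names and numbers are illustrative) comparing linear rescaling with fitting raw scores to a normal curve by percentile:

```python
# Sketch: linear z-score rescaling vs. mapping raw scores onto N(100, 15)
# by percentile (quantile normalization). They agree only when the raw
# scores are already normally distributed.
from statistics import NormalDist, mean, pstdev
import random

random.seed(0)
# Deliberately skewed raw scores, e.g. from an easy question pool.
raw = sorted(random.expovariate(1 / 20) for _ in range(1000))
mu, sd = mean(raw), pstdev(raw)
target = NormalDist(mu=100, sigma=15)

def linear_iq(x):
    # Rescale so the mean is 100 and the SD is 15; the shape is preserved.
    return 100 + 15 * (x - mu) / sd

def quantile_iq(x):
    # Map the score's empirical percentile onto the normal curve.
    pct = sum(r <= x for r in raw) / len(raw)
    pct = min(max(pct, 1e-6), 1 - 1e-6)  # keep inv_cdf's argument in (0, 1)
    return target.inv_cdf(pct)

for x in (raw[499], raw[949], raw[989]):  # ~50th, 95th, 99th percentiles
    print(f"raw={x:7.2f}  linear={linear_iq(x):6.1f}  quantile={quantile_iq(x):6.1f}")
```

On skewed data the two diverge badly in the tails, which is the substance of the disagreement: linear rescaling preserves the raw-score distribution’s shape, while the quantile fit imposes normality by construction.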
Lots. This contradicts my revealed preference, though, I suppose: I have a vague idea that fish oil increases intelligence, but I haven’t made a special effort to eat any.
I’m trying to anticipate how you’ll follow up on this in a way that’s relevant to my post and coming up blank.
The fish oil thing is quackery, I’m afraid.
I find it hard to be properly scope sensitive about the kittens thing.
“Scope sensitive”?
Referring to “scope insensitivity”.
I care a lot more about the boundary between eating kittens and not eating kittens than the number of kittens I eat, so the gain I’d need to eat two kittens is less than twice the gain I’d need to eat one kitten. Which indicates that I’m more concerned for myself than for the kittens...
I’ve seen other discussions here regarding varieties of Omega-3s which strongly indicated that fatty acids from fish are used to build brain-related cells and that these acids aren’t really available in any other foods. Casual googling fails to turn anything up, but the link you provided seems like the sort of site that might, for instance, dismiss cryonics as quackery, so I would like to see further discussion from someone better at researching than I am.
Assuming I somehow found a way to counteract taste-related problems, more than 10. Why value the life of a kitten?
EDIT: And given my social situation as autistic, I could get around the resulting problems without too much in the way of trouble.
Has this comment really gone entirely without explanation and still been upvoted multiple times? How is this remotely relevant to the post?
The post is about a tradeoff between epistemic rationality and instrumental rationality: you shouldn’t invest too much effort in precise knowledge, and in some circumstances, humans may find themselves at a disadvantage because of knowing more. The same clash appears in the metaphor where you trade the achievement of goals (not wanting to eat kittens) for precision of knowledge (gaining IQ points).
Ah… I think I see now. The comment assumed that one would not want to eat kittens, and that IQ is equivalent or isomorphic to epistemic rationality, and then mapped that to giving up instrumental rationality in favor of epistemic rationality. Definitely could’ve used some explanation.
I’d guess 1 point of IQ is somewhere around equivalent to the cognitive boost I’d get by sleeping half an hour longer every day, and it’d take quite a few hours for me to earn enough money to buy a single kitten, so… no more than a couple per month, I’d guess. (And that’s not even counting slaughtering and cooking the kittens or paying someone to do that.)
;-)