Single-metric conceptions of intelligence are going the way of the dinosaur. In practical contexts, it's much better to test for a bunch of specific skills and aptitudes and to build a predictive model of success at the desired task.
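For concreteness, here is a minimal sketch of what that could look like: fitting a per-task linear model over a vector of specific aptitude scores, instead of collapsing them into one composite number. Everything in it (the skill names, the scores, the outcome measure) is a hypothetical illustration, not real data or any particular published test battery.

```python
# A minimal sketch of the "vector, not scalar" idea: predict task success
# from several specific aptitude scores rather than one composite IQ.
# All skill names and numbers here are hypothetical illustrations.
import numpy as np

# Each row: one person's scores on specific aptitudes (hypothetical).
# Columns: [verbal, spatial, quantitative, working_memory]
skills = np.array([
    [0.9, 0.4, 0.7, 0.6],
    [0.5, 0.9, 0.8, 0.7],
    [0.7, 0.6, 0.9, 0.8],
    [0.4, 0.5, 0.5, 0.9],
    [0.8, 0.7, 0.6, 0.5],
])

# Observed success at some desired task (hypothetical outcome measure).
success = np.array([0.75, 0.85, 0.90, 0.60, 0.70])

# Fit a simple linear predictive model: success ≈ skills @ w + b.
X = np.hstack([skills, np.ones((len(skills), 1))])  # add intercept column
coef, *_ = np.linalg.lstsq(X, success, rcond=None)
w, b = coef[:-1], coef[-1]

# The learned weights show how much each aptitude matters for THIS task;
# a different task would yield a different weighting of the same vector.
print("per-skill weights:", np.round(w, 3), "intercept:", round(float(b), 3))

# Compare: collapsing the vector to one scalar (a mean "IQ-like" score)
# throws away exactly the per-task structure the weights capture.
print("scalar scores:", np.round(skills.mean(axis=1), 3))
```

The point of the sketch is that the weights are task-specific: the same skill vector gets weighted differently for different tasks, which is exactly the information a single scalar discards.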
I first read the book in the early nineties, though Howard Gardner had published the first edition of Frames of Mind in 1983. I was at first somewhat skeptical that it would rest too heavily on some form of "political correctness", but I found the concepts very compelling.
Most of the discussion I heard in subsequent years, including occasional conversations with psychology professors and grad-student friends, continued to be positive.
I should say that I had no ulterior motive in looking for reasons to agree with the book, since I myself always score in the genius range on standardized, traditional-style IQ tests.
So, it does seem to me that intelligence is a vector, not a scalar, if we have to call it by one noun.
As to Katja's follow-up question of whether it matters for Bostrom's arguments: not really, as long as one is clear (as it is from the context of his remarks) about which kind(s) of intelligence he is referring to.
I think there is a more serious vacuum in our understanding than the question of whether intelligence is a single property or comes in several irreducibly different (possibly context-dependent) forms. It is this: with respect to the sorts of intelligence we usually default to conversing about (the sort that helps a reader understand Bostrom's book, an explanation of special relativity, or RNA interference in molecular biology), do we even know what we think we know about what that is?
Explaining this purported "vacuum" in our understanding would take significant length; it is a set of new ideas that struck me, taken together, as related insights. I am working on a paper explaining the new perspective I think I have found, and why it might open up important new questions and strategies for AGI. When it is finished and clear enough to be useful, I will make it available as a PDF or on a blog.
(It is too lengthy to put in one post here, so I will post the link. If these ideas pan out, they may suggest some reconceptualizations with nontrivial consequences, and be informative in a scalable sense, which is what one in this area of research would hope for.)
I thought this had become a fairly dominant view over 20 years ago. See this PDF: http://www.learner.org/courses/learningclassroom/support/04_mult_intel.pdf