Single-metric versions of intelligence are going the way of the dinosaur. In practical contexts, it’s much better to test for a bunch of specific skills and aptitudes and to create a predictive model of success at the desired task.
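A minimal sketch of what this could look like, with entirely made-up data: instead of one scalar score, each person gets a vector of skill measurements, and we fit a simple linear model predicting success at one specific task. All names and numbers here are illustrative assumptions, not real measurements.

```python
import numpy as np

# Hypothetical skill scores for six people: math, verbal, social, planning.
# All data here are invented for illustration.
skills = np.array([
    [0.9, 0.4, 0.2, 0.7],
    [0.3, 0.8, 0.9, 0.5],
    [0.6, 0.6, 0.4, 0.9],
    [0.2, 0.3, 0.7, 0.4],
    [0.8, 0.7, 0.5, 0.6],
    [0.5, 0.5, 0.8, 0.8],
])
# Observed success at one particular task (again, made up).
success = np.array([0.7, 0.8, 0.9, 0.4, 0.8, 0.9])

# Fit a linear predictive model: success ~ skills @ w + b,
# via ordinary least squares with an intercept column.
X = np.column_stack([skills, np.ones(len(skills))])
w, *_ = np.linalg.lstsq(X, success, rcond=None)

# The learned weights estimate how much each aptitude matters for THIS task.
# A different task would yield different weights, which is the point:
# no single scalar captures it.
print(dict(zip(["math", "verbal", "social", "planning", "bias"], w.round(2))))
```

The weights are task-specific by construction, so a single composite "intelligence" number would throw away exactly the information the model needs.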
In addition, our measures of intelligence frequently give a high score to someone capable of making terrible decisions, or to someone reasoning brilliantly from a set of desperately flawed first principles.
Ok, does this matter for Bostrom’s arguments?
Yes, having high math or reading-comprehension ability does not always make people more effective or productive. They can still, for instance, become suicidal or sociopathic, or rebel against well-meaning authorities. They often do not go to their doctor when sick; they develop addictions; they may become introverted or arrogant when it is counterproductive, or fail to escape bad relationships.
We should not strictly be looking to enhance intelligence. If we're going down the enhancement route at all, we should wish to create good decision-makers without, for example, tendencies toward misreading people, sociopathy, or self-harm.
What’s wrong with that?
...and, presumably, without tendencies to rebel against well-meaning authorities?
I don’t think I like the idea of genetic slavery.
For instance, rebelling against well-meaning authorities has been known to lead someone to abandon a correct medication regimen or to start smoking.
Problems regularly rear their heads when it comes to listening to the doctor.
I guess I’ll add that the well-meaning authority is also knowledgeable.
Let me point out the obvious: the knowledgeable well-meaning authority is not necessarily acting in your best interests.
Not to mention that authority that’s both knowledgeable and well-meaning is pretty rare.
Really, what I am getting at is that, just like anyone else, smart people may rebel or conform as a knee-jerk reaction. Neither response involves using reason to reach an appropriate conclusion, yet I have seen smart people do both all the time.
One might think an agent who was sufficiently smart would, at some point, apply reason to the question of whether to follow their knee-jerk responses in decisions like these.
I thought that this had become a fairly dominant view, over 20 years ago. See this PDF: http://www.learner.org/courses/learningclassroom/support/04_mult_intel.pdf
I first read the book in the early nineties, though Howard Gardner had published the first edition in 1983. I was at first somewhat skeptical that it would rest too heavily on some form of "political correctness", but I found the concepts very compelling.
Most of the discussion I heard in subsequent years, occasionally from psychology-professor and grad-student friends, continued to be positive.
I might say that I had no ulterior motive in trying to find reasons to agree with the book, since I always score in the genius range myself on standardized, traditional-style IQ tests.
So, it does seem to me that intelligence is a vector, not a scalar, if we have to call it by one noun.
As to Katja’s follow-up question, does it matter for Bostrom’s arguments? Not really, as long as one is clear (which it is from the contexts of his remarks) which kind(s) of intelligence he is referring to.
I think there is a more serious vacuum in our understanding than the question of whether intelligence is a single property or comes in several irreducibly different (possibly context-dependent) forms. It is this: with respect to the sorts of intelligence we usually default to discussing (such as the sort that helps a reader understand Bostrom's book, an explanation of special relativity, or RNA interference in molecular biology), do we even know what we think we know about what that is?
I would have to explain this purported "vacuum" in understanding at significant length; it is a set of new ideas that struck me together, as a set of related insights. I am working on a paper explaining the new perspective I think I have found, and why it might open up some important new questions and strategies for AGI.
When it is finished and clear enough to be useful, I will make it available as a PDF or on a blog. (It is too lengthy to put in one post here, so I will put the link up. If these ideas pan out, they may suggest some reconceptualizations with nontrivial consequences, and be informative in a scalable sense, which is what one in this area of research would hope for.)