I agree there is no consensus on the meaning of intelligence, and this is probably because “intelligence” isn’t one thing but many things. In humans these things are often correlated but only weakly.
As Scott argues, IQ gives an okay measure of some of the things under the intelligence umbrella. However, the economic impact of an AI will be measured entirely by job performance, and the correlation between IQ-like intelligence and job competency is already pretty clearly broken by existing AI expert systems. We may be able to get a better understanding of the human correlations by doing experiments.
I expect with ~90% confidence that what IQ measures, and how it relates to competencies, will turn out to be a human coincidence. Since we don’t have a general notion of intelligence, IQ is a flawed measure applicable only to humans, and the connection between what IQ attempts to measure and the thing we care about is present only in humans, I don’t expect models of IQ to improve our ability to forecast AI.
Even if I agreed that IQ research measured a coherent notion of intelligence, and that there was a strong enough metaphor between human IQ and machine intelligence to give us better models of takeoff and help with strategy, this still would not imply that funding IQ research is an effective tactic. That requires a community of researchers with well-aligned motivations, capable of doing research aligned with these interests, who expect results to be strategically useful within a foreseeable time frame. This may be true, but it’s an important part of the argument and should be presented.
And some specific comments:
If intelligence is not correlated with job performance, or is correlated only up to a certain level of performance, superintelligences won’t be super at job performance and they won’t have as huge an impact as we might have expected.
If intelligence is not correlated with job performance, we should not expect economically disruptive AI to be “superintelligent.” This already seems clear, since the AI that caused various flash crashes was not superintelligent. By the same analogy, such AI may be either convenient for safety purposes (for example, very corrigible) or inconvenient (for example, able to commit grave errors without understanding them).
This research is unlikely to speed up the creation of intelligence directly, since it works from a descriptive view of intelligence rather than a constructive one. It may point toward a direction of research, though.
(Warning, epistemic status: hot take.) I think one of the major barriers in AI right now is a lack of clear functional analogies to existing intelligent systems. Building AI systems out of various pieces is very common (AlphaGo and self-driving cars both do this), and some of the most exciting recent advances, like the use of convnets in computer vision, are based on explicit analogies; a toy sketch of what I mean is below. I suspect that good descriptive models of human intelligence would be very powerful tools for advancing AI.
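To illustrate the “explicit analogy” point with my own toy example (not something from the post I’m responding to): a convnet hard-codes local receptive fields and weight sharing, an inductive bias loosely borrowed from descriptions of early visual processing, rather than learning arbitrary connectivity from scratch. A minimal sketch, assuming PyTorch:

```python
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    """Toy convnet: the architecture itself encodes the analogy
    (local receptive fields + weight sharing), not the training data."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # local receptive fields
            nn.ReLU(),
            nn.MaxPool2d(2),                              # tolerance to small translations
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# e.g. a batch of four 28x28 grayscale images -> logits of shape (4, 10)
logits = TinyConvNet()(torch.randn(4, 1, 28, 28))
```

The point is that the useful bias came from a descriptive model of an existing intelligent system, not from the learning algorithm itself.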
My overall takeaway here is:
Having better descriptive models of human intelligence would be valuable and interesting, though it might also carry some dangers. This does not mean EAs should fund it.
Being able to constrain takeoff scenarios would be very valuable for AI safety research. This does not mean there’s a connection between takeoff scenarios and IQ.
Thanks for the comments, they are very useful. I don’t have time to address all of them right now; I’ll try to get back to you sometime next week.
A few questions: I don’t think you engaged with the different models of what intelligence is (whether it is about speed, etc.), whether those things are relevant to job performance, or how they would relate to takeoff scenarios.
Was this because this section was unclear?
Can you think of any empirical study, other than examining the nature of intelligence/IQ, that might help us understand takeoff scenarios (apart from building strong AI directly)?
Even if I agreed that IQ research measured a coherent notion of intelligence, and that there was a strong enough metaphor between human IQ and machine intelligence to give us better models of takeoff and help with strategy, this still would not imply that funding IQ research is an effective tactic. That requires a community of researchers with well-aligned motivations, capable of doing research aligned with these interests, who expect results to be strategically useful within a foreseeable time frame. This may be true, but it’s an important part of the argument and should be presented.
Surely the expectation just has to be not obviously worse than trying to build or train a community of safety-focused ML researchers. I don’t have the background in psychology myself to point to “psychology people that do good work,” but maybe others in our community do.
We are a bunch of people wandering around in a dark cave, trying to find light. I think I see a twinkle, but we have a high prior that it is just a trick of my eyes; should we try to see if something is there? We have precious few clues to go on.
I didn’t engage with the different scenarios because they weren’t related to my cruxes for the argument; I don’t expect the nature of human intelligence to give us much insight into the nature of machine intelligence in the same way I don’t expect bird watching to help you identify different models of airplane.
I think there are good reasons to believe that safety-focused AI/ML research will have strong payoffs now; in particular, the flash crash example I linked above and things like self-driving car ethics are current practical problems. I do think this is very different from what MIRI does, for example, and I’m not confident that I’d endorse MIRI as an EA cause (though I expect their work to be valuable). But there is a ton of unethical machine learning going on right now, and I expect both that ML safety can address substantial problems that already exist and that this research will contribute to the social position and theoretical development of AI safety in the future.
In a sense, I don’t feel like we’re entirely in a dark cave. We’re in a cave with a bunch of glowing mushrooms, and they’re sort of arranged in a line, and we can try following that line in both directions because there are reasons to think it’ll lead us out of the tunnel. It might also be interesting to study the weird patterns they make along the way, but I think that requires a better outside-view argument, and the patterns they make once we leave the tunnel have a good chance of being totally different. Sorry if that metaphor got away from me.
Ah, okay. In that case I’m not sure you are my audience, or whether EAs are.
I’m interested in the people who care about takeoff and want to get more information about it. It seems that a more general AI is needed for programming (at least to do programming better than genetic programming, etc.), and only that will lead to takeoff. The only general intelligence we know of is human.
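To make the genetic programming point concrete, here is a minimal sketch (my own toy illustration, not from the thread) of genetic-programming-style search: blind mutation and selection over expression trees, with no model of what the program means, which is roughly why I’d expect programming well to need something more general.

```python
import random

OPS = ['+', '-', '*']
TERMS = ['x', 1.0, 2.0]

def random_tree(depth=2):
    # a tree is either a terminal or (op, left, right)
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    a, b = evaluate(left, x), evaluate(right, x)
    return a + b if op == '+' else a - b if op == '-' else a * b

def fitness(tree):
    # squared error against the target f(x) = x*x + x on a few sample points
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(tree):
    # replace a random subtree with a fresh random one
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree()
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

population = [random_tree() for _ in range(50)]
for _ in range(100):
    population.sort(key=fitness)
    parents = population[:10]                      # keep the fittest
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

best = min(population, key=fitness)
print(best, fitness(best))
```

The search may well find x*x + x, but only by shuffling syntax until the error drops; nothing in it understands the program it produced.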
I don’t expect the nature of human intelligence to give us much insight into the nature of machine intelligence in the same way I don’t expect bird watching to help you identify different models of airplane.
I hope that studying birds might give you insight into the nature of lift and aerodynamics, which would help with airplanes. Actually, it turns out the Wright brothers got inspiration for their turning mechanism from birds:
The Wrights developed their wing warping theory in the summer of 1899 after observing the buzzards at Pinnacle Hill twisting the tips of their wings as they soared into the wind.
The Wrights made the right decision by focusing on large birds. It turns out that small birds don’t change the shape of their wings when flying, rather they change the speed of their flapping wings. For example, to start a left turn, the right wing is flapped more vigorously.
I agree that ML safety will be useful in the short term as we integrate ML systems into our day-to-day lives.
I work on a simple system that could one day be more general, so I am very interested in getting more details about generality in intelligence and what is possible before it gets to that stage.