Thanks for the comments; they are very useful. I don't have time to address all of them right now, but I'll try to get back to you sometime next week.
A few questions: I don't think you engaged with the different models of what intelligence is (whether it is about speed, etc.), whether those factors are relevant to job performance, or how they would relate to takeoff scenarios.
Was this because that section was unclear?
Can you think of any empirical study, other than examining the nature of intelligence/IQ, that might help us understand takeoff scenarios (apart from building strong AI directly)?
Even if I agreed that IQ research measured a coherent notion of intelligence, and that there was a strong enough analogy between human IQ and machine intelligence to give us better models of takeoff and help with strategy, that would not imply that funding IQ research is an effective tactic. It would also require a community of researchers with well-aligned motivations, capable of doing research aligned with these interests, who expect their results to be strategically useful in a foreseeable time frame. This may be true, but it's an important part of the argument and should be presented.
Surely the expectation just has to be not obviously worse than trying to build/train a community of safety-focused ML researchers. I don't have the background in psychology to point to the psychology researchers who do good work, but maybe others in our community do.
We are a bunch of people wandering around in a dark cave, trying to find light. I think I see a twinkle, but we have a high prior that it is just a trick of my eyes. Should we try to see if something is there? We have precious few clues to go on.
I didn’t engage with the different scenarios because they weren’t related to my cruxes for the argument; I don’t expect the nature of human intelligence to give us much insight into the nature of machine intelligence in the same way I don’t expect bird watching to help you identify different models of airplane.
I think there are good reasons to believe that safety-focused AI/ML research will have strong payoffs now; in particular, the flash crash example I linked above and things like self-driving car ethics are current practical problems. I do think this is very different from what MIRI does, for example, and I'm not confident that I'd endorse MIRI as an EA cause (though I expect their work to be valuable). But there is a ton of unethical machine learning going on right now, and I expect both that ML safety can address substantial problems that already exist and that the research will contribute to both the social position and theoretical development of AI safety in the future.
In a sense, I don’t feel like we’re entirely in a dark cave. We’re in a cave with a bunch of glowing mushrooms, and they’re sort of going in a line, and we can try following that line in both directions because there are reasons to think that’ll lead us out of the tunnel. It might also be interesting to study the weird patterns they make along the way but I think that requires a better outside view argument, and the patterns they make when we leave the tunnel have a good chance of being totally different. Sorry if that metaphor got away from me.
Ah, okay. In that case I'm not sure you are my audience, or whether EAs are.
I'm interested in the people who care about takeoff and want more information about it. It seems that a more general AI is needed for programming (at least to do programming better than Genetic Programming, etc.), and only that will lead to takeoff. The only general intelligence we know of is human.
I don’t expect the nature of human intelligence to give us much insight into the nature of machine intelligence in the same way I don’t expect bird watching to help you identify different models of airplane.
I hope that studying birds might give you insight into the nature of lift and aerodynamics, which would help with airplanes. Actually, it turns out the Wright brothers got inspiration for their turning mechanism from birds:
The Wrights developed their wing warping theory in the summer of 1899 after observing the buzzards at Pinnacle Hill twisting the tips of their wings as they soared into the wind.
The Wrights made the right decision by focusing on large birds. It turns out that small birds don’t change the shape of their wings when flying, rather they change the speed of their flapping wings. For example, to start a left turn, the right wing is flapped more vigorously.
I agree that ML safety will be useful in the short term as we integrate ML systems into our day-to-day lives.
I work on a simple system that could one day become more general, so I am very interested in learning more about generality in intelligence and what is possible before it gets to that stage.