Thanks for putting all this stuff in one place!
It makes me kind of sad that we still have more or less no answer to so many big, important questions. Does anyone else share this worry?
Notice also that until now, we didn’t even have a summary of the kind given in this post! So yeah, we’re still at an early stage of strategic work, which is why SI and FHI are spending so much time on it.
I’ll note, however, that I expect significant strategic insights to come from the technical work (e.g. FAI math). Such work should give us insight into how hard the problems actually are, what architectures look most promising, to what degree the technical work can be outsourced to the mainstream academic community, and so on.
Interesting point. I’m worried that, while FAI math will help us understand what is dangerous or outsourceable on our particular path, many other paths to AGI are possible, and we won’t learn from FAI math which of those other paths are dangerous or likely.
I feel like one clear winning strategy is safety promotion. It seems that almost no harm can come from promoting safety ideas among AI researchers and investors. It also seems relatively easy, in that it requires only regular human skills of networking, persuasion, et cetera.
You’re probably right about safety promotion, but calling it “clear” may be an overstatement. A possible counterargument:
Existing AI researchers are likely predisposed to think that their AGI will naturally be both safe and powerful. If they are exposed to arguments that it will instead naturally be both dangerous and very powerful (the latter half of the argument can’t easily be omitted, since the potential danger comes in part from the high potential power), wouldn’t confirmation bias naturally lead them to disbelieve the preconception-contradicting “dangerous” half of the argument while believing the preconception-confirming “very powerful” half?
Half of the AI researcher interviews posted to LessWrong appear to be with people who believe that “Garbage In, Garbage Out” only applies to arithmetic, not to morality. If the end result of persuasion is that as many as half of them have that mistake corrected while the remainder are merely convinced that they should work even harder, that may not be a net win.
Catchy! Mind if I steal a derivative of this?
I’ve lost all disrespect for the “stealing” of generic ideas, and roughly 25% of the intended purpose of my personal quotes files is so that I can “rob everyone blind” if I ever try writing fiction again. Any aphorisms I come up with myself are free to be folded, spindled, and mutilated. I try to cite originators when format and poor memory permit, and receiving the same favor would be nice, but I certainly wouldn’t mind seeing my ideas spread completely unattributed either.
Relevant TED talk
Noted; thanks.
Yeah, quite possibly. But I wouldn’t want people to run into analysis paralysis; I still think safety promotion is very likely to be a great way to reduce x-risk.
Does ‘garbage in, garbage out’ apply to morality, or not?
Upvoted for the “Garbage in, Garbage Out” line.
Somehow I managed not to list AI safety promotion in the original draft! Added now.
Looking at many existing risky technologies, consumers and governments are the safety regulators, and manufacturers mostly cater to their demands. Consider the automobile, aeronautical, and computer industries as examples.
Unfortunately, AGI isn’t a “risky technology” for which “mostly” is going to cut it in any sense, including adhering to expectations for safety regulation.
All the more reason to use resources effectively. Relatively few safety campaigns have attempted to influence manufacturers. What you tend to see instead are F.U.D. campaigns and negative marketing, where organisations attempt to smear their competitors by spreading negative rumours about their products. For example, here is Apple’s negative marketing machine at work.
Are you suggesting that we encourage consumers to make safety demands? I’m not sure this will work. It’s possible that consumers are too reactionary for this to be helpful. Also, I think AI projects will be dangerous before they ever reach the consumer level. We want AGI researchers to think about safety before they even develop theory.
It isn’t clear that influencing consumer awareness of safety issues would have much effect. However, this suggests that influencing the designers may not be very effective either: they are often just giving users the safety level those users are prepared to pay for.
Yes! It’s crucial to have those questions in one place, so that people can locate them and start finding answers.