Shouldn’t an organization worried about the dangers of AI be very closely in touch with AI researchers in computer science departments? Sure, there’s room for pure philosophy and mathematics, but you’d need some grounding in actual AI research to understand what future AIs are likely to do.
Yes. It’s hardly urgent, since AI researchers are nowhere near a runaway intelligence. But on the other hand, control of AI is going to be crucial and difficult eventually, and it would be good for researchers to be aware of that, if they aren’t.
Right, it’s just (in my opinion, and that of most other AI researchers[*]) overwhelmingly likely that we are in fact nowhere near that capability. Although it’s interesting to me that I don’t feel there’s much difference between the probability of “AI good enough to run away, improving itself quickly past human level” arriving in the next year and in the next 10 years: “both extremely close to 0” is the most specific I can be at this point. That suggests I haven’t really quantified my beliefs yet.
[*] I actually only work on natural language processing using really dumb machine learning, i.e. not general AI.
Sadly, there’s no guarantee of that.