Are you in the camp of “we should make a benevolent dictator AI implementing CEV”, or “we can make task-limited-AGI-agents and coordinate to never make long-term-planning-AGI-agents”, or something else?
No idea. :-)
My general feeling is that having an opinion on the best approach would require knowing both what AGI will be like and what the state of the world will be when it is developed, but we currently know neither.
Lots of historical predictions about coming problems have been rendered completely irrelevant because something totally unexpected happened. And the other way around: it would have been hard for people to predict the problem of computer viruses before electricity had been invented, and harder yet to think about how to prepare for it. That might be a bit of an exaggeration—our understanding of AGI is probably better than the understanding that pre-electric people would have had of computer viruses—but it still feels impossible to reason effectively about at the moment.
My preferred approach is to just have people pursue many different kinds of basic research on AI safety, understanding human values, etc., while also engaging with near-term AI issues so that they get influence in the kinds of organizations which will eventually make decisions about AI. And then hope that we figure out something once the picture becomes clearer.