Agreeing with your post, I think it is important to offer the people you want to reach a specific alternative to work on instead (because otherwise we are basically just telling them to quit their job, which nobody likes to hear). One such alternative would be AI alignment, but maybe that is not optimal for impatient people. I assume that researchers at OpenAI and DeepMind are in it because of the possibilities of advanced AI, and that most of them are rather impatient to see those possibilities realized. Do you think it would be a good idea to advocate that those who don’t want to work on alignment work on shallow AI instead?
I am also thinking of this blog post, which argues that “It’s Our Moral Obligation to Make Data More Accessible” because there is a lot of proprietary data out there that only one company or institution has access to, which stifles innovation (and it’s possible to make it accessible while respecting privacy). This also means there is potentially a lot of data that no (or few) shallow, safe ML algorithms have been tried on, and that we might be able to get a substantial fraction of the benefits of AGI just by doing more with that data.
There are of course downsides to this. Making data more accessible increases the number of applications of AI and could thus lead to increased funding for AGI development.
EDIT: Just realized that this is basically the same as number 4 in The case for Doing Something Else.