You don’t really have to be that sneaky. People are usually happy to hear the case for AI safety if you put it in palatable terms.
Not that this will instantly enable them to pick the right technical issues to worry about, or make them willing to make sacrifices that affect their own personal comfort. But I bet that AI safety initiatives within companies and universities are mostly formed by going to someone in control of resources, making a reasonable-if-normie case for building AIs that do good things and not bad things, and then getting money or time allocated to you in a way that looks good for the person doing the allocating.
This is not the level of influence needed to start a Manhattan Project for value-aligned AI, so if that turns out to be necessary we’re probably S.O.L., but it seems like a lot more influence than you can exert while being sneaky.