The model is this: assume that if an AI is created, it’s because one researcher, chosen at random from the pool of all researchers, has the key insight; and humanity survives if and only if that researcher is careful and takes safety seriously.
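As a minimal sketch of that model (the researcher counts below are made up purely for illustration), survival probability reduces to a single ratio, which is why converting even one more researcher matters:

```python
import random

def survival_probability(num_researchers, num_careful, trials=100_000):
    """Estimate P(survival) under the toy model: one researcher, drawn
    uniformly at random, makes the key breakthrough; humanity survives
    iff that researcher takes safety seriously."""
    survivals = 0
    for _ in range(trials):
        discoverer = random.randrange(num_researchers)
        # Label researchers 0..num_careful-1 as the careful ones.
        if discoverer < num_careful:
            survivals += 1
    return survivals / trials

# The estimate converges to num_careful / num_researchers, so convincing
# one more researcher raises P(survival) by 1 / num_researchers.
print(survival_probability(10_000, 1_000))  # ~0.10
```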
A human alone can’t build a superintelligence, so companies and other organisations are what we should mostly be concerned with. Targeting the engineering talent with the message is probably the wrong approach; you mostly want the managers and directors, since they are more likely to be the ones who will decide what the machine wants.
I think the low-hanging fruit in getting corporations to behave better is reputation systems, which I discuss here. Merely telling corporations that what they are doing is risky seems unlikely to be very effective; corporations are just not that risk-averse.