At first my reaction was something like, “these teams have been acquired by large trillion-dollar technology companies, so a dollar moved away from those companies is probably a bit less than a penny moved away from AGI development. This sounds very inefficient.”
But as a publicly announced way to incentivize defunding DeepMind, it’s at least theoretically very efficient. If I controlled BlackRock I could say “I will divest $x from Google as a whole unless you move $x/y from DeepMind’s meta research toward legitimate AI safety,” and it would be pretty strongly in Google’s interest to comply. The difficulties lie in the details: you’d want to make the campaign extremely boring to everyone except the people you care about, evaluate the leadership of Google/Facebook/Microsoft to see how they’d react, coordinate the funds so that the shareholder activists have clear and almost auto-enforced terms, etc. The failure mode here ironically looks something like how the U.S. handles sanctions, where we do a very poor job of dictating clear terms and goals, and of increasing or decreasing pressure quickly and transparently in response to escalation and de-escalation.
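To make the efficiency comparison concrete, here is a minimal back-of-envelope sketch of the two strategies. Every number in it is an assumption chosen purely for illustration (the lab’s share of the parent company, the size of the threatened divestment, the ratio y), not sourced data:

```python
# Back-of-envelope comparison: blanket divestment vs. a conditional threat.
# All figures below are illustrative assumptions, not real budget or market data.

def blanket_divestment_impact(dollars_divested: float, lab_share: float) -> float:
    """Dollars of pressure that 'reach' the lab if impact scales with the lab's
    assumed share of the parent company (e.g. lab_share ~ 0.005)."""
    return dollars_divested * lab_share

def conditional_threat_impact(x: float, y: float) -> float:
    """Targeted cut demanded by 'I will divest $x unless you cut $x/y from the lab'."""
    return x / y

# Hypothetical figures: a $1B threatened divestment, a lab that is ~0.5% of the
# parent by the relevant measure, and a demanded cut of one tenth the threat (y = 10).
print(blanket_divestment_impact(1e9, lab_share=0.005))  # ~$5M of diffuse pressure
print(conditional_threat_impact(x=1e9, y=10))           # $100M of targeted cuts demanded
```

Under these assumed numbers, each dollar of blanket divestment maps onto roughly half a cent of lab funding, while the conditional threat asks for a targeted cut worth a tenth of the threatened amount without the divestment ever having to be executed.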
We also really wouldn’t want these strategies to backfire by giving any fired meta researchers a reason to hate us personally and become less sympathetic. Finding some way to “cancel” AGI researchers would honestly feel really good to me, but even under the best circumstances it’d be really ineffective. We don’t want them disgruntled and working on the same thing somewhere else; we want them to have something else to do that doesn’t lead to the collapse of the lightcone.