What do you think about offering an option to divest from companies developing unsafe AGI? For example, by creating something like an ESG index that would deliberately exclude AGI-developing companies (Meta, Google etc) or just excluding these companies from existing ESGs.
The impact: making AGI research a liability (being AGI-unsafe costs money), raising general awareness (everyone would see AGI-safe and AGI-unsafe options in their pension investment menu, and the decision itself would make noise), and social pressure on AGI researchers (equating them with the people who extract fossil fuels).
Do you think this is implementable short-term? Is there a shortcut from this post to whoever makes decisions at BlackRock & Co.?
At first my reaction was something like, “the teams have been acquired by large, trillion-dollar technology companies, so a dollar moved away from those companies is probably a bit less than a penny moved away from AGI development. This sounds very inefficient.”
But as a publicly announced way to incentivize defunding DeepMind, it’s at least theoretically very efficient. If I controlled BlackRock, I could say “I will divest $x from Google as a whole unless you redirect $x/y of DeepMind’s research budget toward legitimate AI safety,” and it would be pretty strongly in Google’s interest to comply. The difficulties lie in the details: you’d want to make the campaign extremely boring except to the people you care about, evaluate the leadership of Google/Facebook/Microsoft to see how they’d react, coordinate the funds so that the shareholder activists have clear and almost automatically enforced terms, etc. The failure mode here ironically looks something like how the U.S. runs its sanctions, where we do a very poor job of dictating clear terms and goals, and of increasing or decreasing pressure quickly and transparently in response to escalation and de-escalation.
We also really wouldn’t want these strategies to backfire by giving any fired researchers a reason to hate us personally and become less sympathetic. Finding some way to “cancel” AGI researchers would honestly feel really good to me, but even under the best circumstances it would be really ineffective. We don’t want them disgruntled and working on the same thing somewhere else; we want them to have something else to do that doesn’t lead to the collapse of the lightcone.
A suggestion from my brother: https://forum.effectivealtruism.org/posts/KigFfo4TN7jZTcqNH/the-future-fund-s-project-ideas-competition?commentId=hRJxxhKtbKj8fhDd5