Suppose you are passionate about the problem of AGI alignment. You want to research solutions to the problem, and you state it more or less like this:
We are on track for an AI catastrophe, most likely within the next few decades. We do not know how (or even if) we can prevent it. We need to be researching this right now and closely following new developments in industry. If there are enough of us dedicated to the problem, perhaps we can find a solution in time. Otherwise, no one will: the companies developing AI only care about short-term business goals, and won’t take the threat seriously until it’s too late. Come with us if you want to live.
This is a tough sell. Any organization granting research funds expects to hear two things: a) this problem is extremely important, and b) our research is the most promising way to solve it. I believe that making the case for a) is easier than for b). Why? Because the notable AI advances of the past few years came from companies like Facebook and Google. If you are not inside those companies, your job as a researcher is to react to whatever they decide to share with the rest of the world.
If I were completely convinced that this is the most pressing problem that exists, I would want to be as close as possible to the source of new developments so I could influence them directly. It would be a tough job, because I would have to walk a fine line. I could not afford to get myself fired by opposing the deployment of every new mechanism, but perhaps I could spend a significant portion of my time playing devil’s advocate. As someone involved in deploying production systems, I could insist on adding circuit breakers and other safety mechanisms. I would also be involved in the hiring process, so I could try to get like-minded people to join the team. Perhaps we might manage to nudge the company culture away from “move fast and break things (irreversibly).”
If you cannot beat them, join them and promote change from within.
What if the best path for a person who wants to work on AGI alignment is to join Facebook or Google?