I’m at over 50% chance that AI will kill us all. But consider the decision to quit from a consequentialist viewpoint. Most likely the person who replaces you will be almost as good as you at capability research but care far less than you do about AI existential risk. Humanity, consequently, probably has a better chance if you stay in the lab, ready for the day when, hopefully, lots of lab workers try to convince the bosses that now is the time for a pause, or at least that now is the time to shift a lot of resources from capabilities to alignment.
The time for a pause is now. Advancing AI capabilities now is immoral and undemocratic.
OK, then, here is another suggestion I have for the concerned people at AI labs: Go on strike and demand that capability research be dropped in favour of alignment research.
Your framework appears to be moral rather than practical. Right now going on strike would just get you fired, but in a year or two perhaps it could accomplish something. You should consider the marginal impact of the action of a few workers on the likely outcome with AI risk.
I am using a moral appeal to elicit a practical outcome.
Right now going on strike would just get you fired, but in a year or two perhaps it could accomplish something.
Two objections:
I think it will not get you fired now. If you are an expensive AI researcher (or, better, a bunch of AI researchers), your act will create a small media storm. Firing you will not be an acceptable option for optics. (Just don’t say you believe AI is conscious.)
A year or two might be a little late for that.
One recommendation: Unionise.
You should consider the marginal impact of the action of a few workers on the likely outcome with AI risk.
Great marginal impact, precisely because of the media effect: “AI researchers strike against the machines, demanding AI lab pause.”
I like the idea of an AI lab workers’ union. It might be worth talking to union organizers and AI lab workers to see how practical the idea is, and what steps would have to be taken. Although a danger is that the union would put salaries ahead of existential risk.
Great to see some support for these ideas. Well, if nothing else, a union will be a good distraction for the management and a drain on finances that would otherwise be spent on compute.
Demand an immediate indefinite pause. Demand that all other work be dropped and that you work only on alignment until it is solved. Demand that humanity live and not die.
I do not know how I can help personally with this, but here is a link for anyone who reads this and happens to work at an AI lab: https://aflcio.org/formaunion/4-steps-form-union