I think that in general, there aren’t many examples of large portions of a large company suddenly switching what they’re working on (on a timescale of days/weeks), and this seems pretty hard to pull off without very strong forces in play.
I guess one set of examples is how many companies had to shift their operations around at the start of COVID, but that case was heavily overdetermined, since the alternative was losing a large share of their profits.
For AGI labs, in a situation where it’s uncertain whether they should pause, it’s less clear that they could rally large parts of their workforce to suddenly work on safety. Planning for this scenario seems very valuable, including possibly having every employee hold not just their normal role but also a “pause role”, that is, a research project/team that they expect to join in case of a pause.
However, detailed planning for a pause is probably pretty hard, as the types of work you want to shift people to probably change depending on what caused the pause.
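As a purely illustrative sketch (all names, trigger categories, and role assignments below are hypothetical), a “pause plan” could be encoded as a simple mapping from each employee to a pre-assigned safety project per trigger type, since the right work to shift to depends on what caused the pause:

```python
from dataclasses import dataclass
from enum import Enum, auto


class PauseTrigger(Enum):
    """Hypothetical categories of events that might trigger a pause."""
    DANGEROUS_CAPABILITY_EVAL = auto()
    MISALIGNMENT_INCIDENT = auto()
    SECURITY_BREACH = auto()


@dataclass
class Employee:
    name: str
    normal_role: str
    # Pre-assigned safety project per trigger type, since the right work
    # to shift to depends on what caused the pause.
    pause_roles: dict[PauseTrigger, str]


def reorg_plan(staff: list[Employee], trigger: PauseTrigger) -> dict[str, str]:
    """Map each person to the team they join under this pause trigger."""
    return {e.name: e.pause_roles.get(trigger, e.normal_role) for e in staff}


staff = [
    Employee("alice", "pretraining research",
             {PauseTrigger.DANGEROUS_CAPABILITY_EVAL: "evals red team",
              PauseTrigger.MISALIGNMENT_INCIDENT: "interpretability"}),
    Employee("bob", "infrastructure",
             {}),  # support roles mostly keep doing what they were doing
]

print(reorg_plan(staff, PauseTrigger.MISALIGNMENT_INCIDENT))
# {'alice': 'interpretability', 'bob': 'infrastructure'}
```

The point is not the specific code, but that such a mapping can be drawn up, reviewed, and rehearsed in advance rather than improvised mid-crisis.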
> I think that in general, there aren’t many examples of large portions of a large company suddenly switching what they’re working on (on a timescale of days/weeks), and this seems pretty hard to pull off without very strong forces in play.
I think this is actually easier and more common than you’d expect for software companies (see “reorgs” in the non-layoff sense). People get moved between wildly different teams all the time, and sometimes large portions of the company do this at once. It’s unusual to reorg your biggest teams, but I think it could be done.
Note that you don’t even need to change what everyone is working on since support work won’t change nearly as much (if you have a sane separation of concerns). The people keeping servers running largely don’t care what the servers are doing, the people collecting and cleaning data largely don’t care what the data is being used for, etc.
The biggest change would be for your relatively small number of researchers, but even then I’d expect them to be able to come up with projects related to safety (if they’re good, they probably already have ideas and just weren’t prioritizing them).
Yes, in reality, as people approach the “danger zone”, safety work should gradually take up a larger and larger fraction of the overall effort.
I am impressed by OpenAI’s commitment of 20% of meaningful resources to AGI/ASI safety, and by the recent impressive diversification of their safety efforts (which is great, since we really need to cover all promising directions). But I expect that closer to the real danger zone, allocating 50% of resources or even more to safety would not be unreasonable. Of course, the separation between safety and capability is tricky: a lot of safety enhancers do boost capabilities, or have strong potential to boost capabilities, so those percentages might become more nuanced. While those allocations would need to be measured against “pure capability” roles, a lot of mixed roles are likely to be involved; people collaborating with advanced AI systems on how to make those systems safe for everyone is a quintessentially mixed role.
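As a toy illustration of why those percentages get nuanced (the numbers and role names below are made up): if each role is given a fractional safety weight instead of a binary safety/capability label, the headline allocation can look quite different.

```python
# Toy model with made-up numbers: score each role's headcount by a
# fractional "safety weight" instead of a binary safety/capability label.
roles = [
    # (role, headcount, safety weight in [0, 1])
    ("pure capability research",          40, 0.0),
    ("alignment research",                15, 1.0),
    ("evals / red teaming",               10, 0.9),
    ("human-AI collaboration on safety",  10, 0.6),  # quintessentially mixed
    ("infrastructure / data support",     25, 0.3),  # mostly role-agnostic support
]

total_headcount = sum(h for _, h, _ in roles)
weighted_safety = sum(h * w for _, h, w in roles)

print(f"Binary count (alignment research only): {15 / total_headcount:.0%}")
print(f"Weighted safety allocation:             {weighted_safety / total_headcount:.0%}")
# Prints roughly 15% vs. 38% for these made-up numbers.
```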
So, what I really think is that if a lab already has a robust ongoing safety effort measured in tens of percent of its overall workforce and resources, it should be relatively easy to gradually absorb the rest of the workforce and resources into that safety effort, if needed.
Whereas if the safety effort is tiny or next to non-existent, then this is unlikely to work (and in such a case I would not expect the lab to be safety-conscious enough to pause anyway).