Longtermist X-Risk Cases for working in Semiconductor Manufacturing
Two separate pitches for jobs/roles in semiconductor manufacturing for people who are primarily interested in x-risk reduction.
Securing Semiconductor Supply Chains
This is basically the “computer security for x-risk reduction” argument applied to semiconductor manufacturing.
Briefly restating: it seems exceedingly likely that technologies crucial to x-risks run on computers or are connected to computers. Improving computer security reduces the likelihood that those machines are compromised or controlled by malicious actors. In general, this should make things like governance and control strategy more straightforward.
This argument also applies to making sure that there isn’t any tampering with the semiconductor supply chain. In particular, we want to make sure that the designs from the designer are not modified in ways that make it easier for outside actors to steal or control information or technology.
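As a toy illustration of what "no tampering with designs" could mean in practice, one basic ingredient is end-to-end integrity checking: the fab verifies that the design artifact it received matches, bit for bit, what the design house released. A minimal sketch, assuming the designer publishes a trusted manifest of SHA-256 hashes (the file names and manifest format here are purely illustrative, not any real foundry's process):

```python
import hashlib


def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_artifact(path: str, trusted_manifest: dict) -> bool:
    """Return True iff the file's hash matches the designer's manifest.

    `trusted_manifest` maps artifact paths to expected hex digests and is
    assumed to arrive over a channel the attacker cannot modify (e.g. a
    signed release). Any single-bit change to the artifact fails the check.
    """
    expected = trusted_manifest.get(path)
    return expected is not None and sha256_of_file(path) == expected
```

Real supply-chain security involves much more than this (signed manifests, hardware attestation, detecting malicious changes made *before* release), but the sketch shows the basic shape of ensuring the design was not modified in transit.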
One of the primary objections to working in semiconductor manufacturing for longtermist reasons is that it accelerates semiconductor progress. I think security work is not nearly as much of a direct driver of progress as other roles are, so I would argue it is differentially x-risk reducing.
Diversifying Semiconductor Manufacturing
This one is more controversial in mainline longtermist x-risk reduction, so I’ll try to clearly signpost the hypotheses that this is based on.
The reasoning is basically:
- Right now, most prosaic AI alignment techniques require access to a lot of compute.
- It's possible that some prosaic AI alignment techniques (like interpretability) will require much more compute in the future.
- So AI alignment research is currently at least partially gated on access to compute, and it seems plausible this will remain the case.
- So if we want these research efforts to continue to have access to compute, we need to ensure both that they have enough money to buy compute and that there is compute to be sold.
Normally this wouldn't be much of an issue, since we can generally trust markets to meet demand. However, semiconductor manufacturing is increasingly becoming part of international conflict strategy.
In particular, most of the compute accelerators used in AI research (including AI alignment research) are manufactured in Taiwan, which seems to be coming under increasing threat.
My argument is that it is possible to increase the chances that AI alignment research labs will continue to have access to compute, even in cases of large-scale geopolitical conflict, and that this can be done in ways that do not dramatically increase global semiconductor manufacturing capacity.