I think that there are two questions one could ask here:
Is this job bad for x-risk reasons? I would say the answer is “probably not”: if you’re not pushing the frontier but only commercialising already-available technology, your contribution to x-risk is negligible at most. Maybe you’re very slightly adding to the generative AI hype, but that ship has somewhat sailed at this point.
Is this job bad for other reasons? That seems like something you’d have to answer for yourself based on the particulars of the job. It also involves some philosophical/political priors that are probably pretty specific to you. Like—is automating away jobs good most of the time? Argument for yes—it frees up people to do other work, it advances the amount of stuff society can do in general. Argument for no—it takes away people’s jobs, disrupts lives, some people can’t adapt to the change.
I’ll avoid giving my personal answer to the above, since I don’t want to bias you. I think you should ask how you feel about this category of thing in general, and then decide how picky to be about these AI jobs accordingly. If they’re mostly good, you can just avoid particularly scummy fields and otherwise go for it. If they’re mostly bad, you shouldn’t take one unless there’s a particularly ethical area you can contribute to.
I suppose I’m mostly also looking for aspects of this I might have overlooked, or an inside perspective on any details from someone with relevant experience. I tend to err a bit on the side of caution, but ultimately I believe that “staying pure” is rarely a road to doing good (at most it’s a road to not doing bad, which is relatively easy if you just do nothing at all). Some of the problems with automation would have applied to many of the previous rounds of it, and those ultimately came out mostly good, I think. But it somehow feels like This Time It’s Different (then again, I do tend to skew towards pessimism and seeing all the possible ways things can go wrong...).
I guess my way of thinking of it is—you can automate tasks, jobs, or people.
Automating tasks seems probably good. You’re able to remove busywork from people, but their job comprises many more things than that task, so people aren’t at risk of losing their jobs. (Unless you only need 10 units of productivity, and each person is now producing 1.25 units, so you end up with 8 people instead of 10. But a lot of teams could also put 12.5 units of productivity to good use.)
Automating jobs is...contentious. It’s basically the tradeoff I talked about above.
Automating people is bad right now. Not only are you eliminating someone’s job, you’re eliminating most other things this person could do at all. This person has had society pass them by, and I think we should either not do that or make sure they still have sufficient resources and social value to thrive in society despite being automated out of an economic position. (If I were confident society would do this, I might change my tune about automating people.)
So, I would ask myself—what type of automation am I doing? Am I removing busywork, replacing jobs entirely, or replacing entire skillsets? (Note: You are probably not doing the last one. Very few, if any, are. The tech does not seem there atm. But maybe the company is setting themselves up to do so as soon as it is, or something)
And when you figure out what type you’re doing, you can ask how you feel about that.
A fair point. I suppose part of my doubt is exactly this: are most of these applications going to automate jobs, or merely tasks? And to what extent does contributing to either advance the know-how that might eventually help automate people?
I don’t know about the first one—I think you’ll have to analyse each job and decide about that. I suspect the answer to your second question is “Basically nil”. I think that unless you are working on state-of-the-art advances in:
A) Frontier models
B) Agent scaffolds, maybe.
You are not speeding up the knowledge required to automate people.