I think maybe a bit of the confusion here is nostalgebraist reading “corporate management” to mean something like “a regular job in industry”, whereas you’re pointing at “middle- or upper-management in sufficiently large or maze-like organizations”? Because those seem very different to me and I could imagine the second being much worse for people’s mental health than the first.
Separately, I’m confused about the claim that “people who were really ok wouldn’t have reason to build unfriendly AI”; it sounds like you don’t agree with the idea that UFAI is the default outcome of building AGI without a specific effort to make it friendly? (This is probably a distraction from this thread’s subject, but I’d be interested to read your thoughts on that if you’ve written them up somewhere.)
I think maybe a bit of the confusion here is nostalgebraist reading “corporate management” to mean something like “a regular job in industry”, whereas you’re pointing at “middle- or upper-management in sufficiently large or maze-like organizations”?
Yes, that seems likely. I did some internships at Google as a software engineer, and they didn’t seem better than working at MIRI on average, although they had less intense psychological effects, since things didn’t break out into fractal betrayal while I was there.
Separately, I’m confused about the claim that “people who were really ok wouldn’t have reason to build unfriendly AI”
People might think they “have to be productive”, which points toward increasing automation detached from human value, which points toward UFAI. Alternatively, they might think there isn’t a need to maximize productivity, and that they can do things that would benefit their own values, which wouldn’t include UFAI. (I acknowledge there could be coordination problems where selfish behavior leads to cutting corners, but I don’t think that’s the main driver of existential-risk failure modes.)