As far as I can tell, normal corporate management is much worse than Leverage. People from that world will sometimes, when prompted in private conversations, say things like:
Standard practice is to treat negotiations with other parties as zero-sum games.
“If you look around the table and can’t tell who the sucker is, it’s you” is a description of a common, relevant social dynamic in corporate meetings.
They have PTSD symptoms from working in corporate management, and are very threat-sensitive in general.
They learned from experience to treat social reality in general as fake, everything as an act.
They learned to accept that “there’s no such thing as not being lost”, like they’ve lost the ability to self-locate in a global map (I’ve experienced losing this to a significant extent).
Successful organizations get to be where they are by committing crimes, so copying standard practices from them is copying practices for committing crimes.
This is, to a large extent, them admitting to being bad actors, with them and others having been made so by their social context. (This puts the possibility of “Geoff Anders being a bad actor” into perspective.)
MIRI is, despite the problems noted in the post, as far as I can tell the most high-integrity organization doing AI safety research. FHI contributes some, but overall lower-quality research; Paul Christiano does some relevant research; OpenAI’s original mission was actively harmful, and it hasn’t done much relevant safety research as far as I can tell. MIRI’s public output in the past few years since I left has been low, which seems like a bad sign for its future performance, but what it’s done so far has been quite a large portion of the relevant research. I’m not particularly worried about scandals sinking the overall non-MIRI AI safety world’s reputation, given the degree to which it is of mixed value.
As far as I can tell, normal corporate management is much worse than Leverage
Your original post drew a comparison between MIRI and Leverage, the latter of which has just been singled out for intense criticism.
If I take the quoted sentence literally, you’re saying that “MIRI was like Leverage” is a gentler critique than “MIRI is like your regular job”?
If the intended message was “my job was bad, although less bad than the jobs of many people reading this, and instead only about as bad as Leverage Research,” why release this criticism on the heels of a post condemning Leverage as an abusive cult? If you believe the normally-employed among LessWrong readers are being abused by sub-Leverage hellcults, all the time, that seems like quite the buried lede!
Sorry for the intense tone, it’s just … this sentence, if taken seriously, reframes the entire post for me in a big, weird, bad way.
I thought I was pretty clear, at the end of the post, that I wasn’t sad that I worked at MIRI instead of Google or academia. I’m glad I left when I did, though.
The conversations I’m mentioning with corporate management types were surprising to me, as were the contents of Moral Mazes and Venkatesh Rao’s writing. So “like a regular job” doesn’t really communicate the magnitude of the harms to someone who doesn’t know how bad normal corporate management is. It’s hard for me to have strong opinions given that I haven’t worked in corporate management, though. Maybe a lot of places are pretty okay.
I’ve talked a lot with someone who got pretty high in Google’s management hierarchy, who seems really traumatized (and says she is) and who has a lot of physiological problems, which seem overall worse than mine. I wouldn’t trade places with her, mental health-wise.
MIRI wouldn’t make sense as a project if most regular jobs were fine; people who were really ok wouldn’t have reason to build unfriendly AI. I discussed with some friends the benefits of working at Leverage vs. MIRI vs. the US Marines, and we agreed that Leverage and MIRI were probably overall less problematic, but the fact that the US Marines signal that they’re going to dominate/abuse people is an important advantage relative to the alternatives, since it sets expectations more realistically.
MIRI wouldn’t make sense as a project if most regular jobs were fine, people who were really ok wouldn’t have reason to build unfriendly AI.
I just want to note that this is a contentious claim.
There is a competing story, and one much more commonly held among people who work for or support MIRI, that the world is heading towards an unaligned intelligence explosion due to the combination of a coordination problem and very normal motivated reasoning about the danger posed by lucrative and prestigious projects.
One could make the claim that “healthy” people (whatever that means) wouldn’t exhibit those behaviors, i.e. that they would be able to coordinate and avoid rationalizing. But that’s a non-standard view.
I would prefer that you specifically flag it as a non-standard view, and then either make the argument for that view over the more common one, or highlight that you’re not going into detail on the argument and that you don’t expect others to accept the claim.
As it is, it feels a little like this is being slipped in as if it is a commonly accepted premise.
I agree this is a non-standard view. Yes, I would! Any pointers? (To avoid miscommunication, I’m reading this to say that people are more likely to build UFAI because of a traumatizing environment vs. the normal reasons Eli mentioned.)
Note that there’s an important distinction between “corporate management” and “corporate employment”—the thing where you say “yeesh, I’m glad I’m not a manager at Google” is substantially different from the thing where you say “yeesh, I’m glad I’m not a programmer at Google”, and the audience here has many more programmers than managers.
[And also Vanessa’s experience matches my impressions, tho I’ve spent less time in industry.]
[EDIT: I also thought it was clear that you meant this more as a “this is what MIRI was like” than “MIRI was unusually bad”, but I also think this means you’re open to nostalgebraist’s objection, that you’re ordering things pretty differently from how people might naively order them.]
My experience was that if you were T-5 (Senior), you had some overlap with PM and management games, and at T-6 (Staff), you were often in them. I could not handle the politics to get to T-7. Programmers below T-5 are expected to earn promotions or to leave.
Google’s a big company, so it might have been different elsewhere internally. My time at Google certainly traumatized me, but probably not to the point of anything in this or the Leverage thread.
Programmers below T-5 are expected to earn promotions or to leave.
This changed something like five years ago [edit: August 2017], to where people at level four (one level above new grad) no longer needed to get promoted to stay long term.
I think maybe a bit of the confusion here is nostalgebraist reading “corporate management” to mean something like “a regular job in industry”, whereas you’re pointing at “middle- or upper-management in sufficiently large or maze-like organizations”? Because those seem very different to me and I could imagine the second being much worse for people’s mental health than the first.
Separately, I’m confused about the claim that “people who were really ok wouldn’t have reason to build unfriendly AI”; it sounds like you don’t agree with the idea that UFAI is the default outcome of building AGI without a specific effort to make it friendly? (This is probably a distraction from this thread’s subject, but I’d be interested to read your thoughts on that if you’ve written them up somewhere.)
I think maybe a bit of the confusion here is nostalgebraist reading “corporate management” to mean something like “a regular job in industry”, whereas you’re pointing at “middle- or upper-management in sufficiently large or maze-like organizations”?
Yes, that seems likely. I did some internships at Google as a software engineer and they didn’t seem better than working at MIRI on average, although they had less intense psychological effects, as things didn’t break out into fractal betrayal during the time I was there.
Separately I’m confused about the claim that “people who were really ok wouldn’t have reason to build unfriendly AI”
People might think they “have to be productive”, which points at increasing automation detached from human value, which points towards UFAI. Alternatively, they might think there isn’t a need to maximize productivity, and that they can do things that would benefit their own values, which wouldn’t include UFAI. (I acknowledge there could be coordination problems where selfish behavior leads to cutting corners, but I don’t think that’s the main driver of existential-risk failure modes.)
I worked for 16 years in the industry, including management positions, including (briefly) having my own startup. I talked to many, many people who worked in many companies, including people who had their own startups and some with successful exits.
The industry is certainly not a rose garden. I encountered people who were selfish, unscrupulous, megalomaniacal, or just foolish. I’ve seen lies, manipulation, intrigue, and plain incompetence. But I also encountered people who were honest, idealistic, hardworking, and talented. I’ve seen teams trying their best to build something actually useful for some corner of the world. And it’s pretty hard to avoid reality checks when you need to deliver a real product for real customers (although some companies do manage to just get more and more investment without delivering anything until the eventual crash).
I honestly think most of them are not nearly as bad as Leverage.