I’ve been holding off on sharing a take about the Nonlinear situation, but I think I’m settled enough on one to share it now. This post is as good a place as any to write it up in response to.
As far as I can tell, no one had malicious intent, but Nonlinear put themselves in a precarious situation by offering weird employment terms, and then didn’t have strong enough filters to only hire people who would actually be happy with those terms. The situation predictably blew up, and now, depending on how you frame it, the Nonlinear folks look like either assholes or victims, because we don’t have a standard way to interpret the facts.
Nonlinear seems to have fallen prey to a well-worn story in the startup world: naive, earnest founders try to do something novel with their organizational structure, and because most changes make things worse, things get worse, and the whole thing blows up in some weird way that lawyers haven’t previously figured out how to deal with, so sorting everything out becomes a huge mess. For many startups, these kinds of failures kill companies that otherwise might have had successful products.
Successful small orgs know they can innovate on the org and its processes, but they have to do it mostly within standard bounds. Too much innovation is a liability: you’re not here to build a new type of organization, you’re here to build a successful product. Or, applied to Nonlinear: they’re not here to innovate on how non-profits should operate, they’re here to carry out a mission to make the world better, and the best way to do that is usually to have a boring, standard org and just focus on the mission, doing something funky with the org only when there’s literally no other way to achieve the mission.
Now, as to the investigation: it was probably gone about in the wrong way. Digging into facts and trying to present something that looks like a research report was never going to get at the issue. The problem, which EAs and rationalists are sometimes afraid to admit, was that Nonlinear tried to be weird in unhelpful ways, and saying so is tough for high-openness folks like us. And if you exclude the possibility that the problem was being too weird, you’re left trying to make sense of the details on the assumption that being weird was fine, when in fact it was a huge source of liability.
You would think the ecosystem would have learned this lesson by now, but it has not. I don’t want to drag up old drama, but other EA and rationalist orgs have had issues over the years for similar reasons: trying to do something weird with the org structure, having it blow up, and then being poorly equipped to handle the blowup because the org structure wasn’t designed to handle it.
It would be great if we could all just be friends and hang out and do cool stuff together without worrying about things like org structures and liability, but you need those things because sometimes friends fall out. I’m reminded of a saying I love about contracts: you only write and sign contracts with people you trust. If you don’t trust someone, you don’t do business with them. If you do trust them, then you both trust each other to benefit from formalizing what to do when conflicts arise. Contracts are a key social technology that prevents chaos when trust breaks down. Standard org structures and employment relationships serve a similar purpose: you have the formal, standard stuff precisely because you like your friends and trust them and want to make sure things won’t go sideways should something change, as it sometimes does.
It’s perhaps sad that the world is this way and we can’t have nicer things, but we must make do with the humans we have.