I think I go with “Nonlinear is a hard but rewarding place to work, with *edit* slightly aggressive tendencies that suppress bad stories. Some people will get broken over this. Both parties can try and avoid that happening.”
I think this is a very reasonable view, and I think it deserves less karma-hate than it is getting.
I’m not sure what the magnitudes involved are, and I might not endorse “slightly aggressive”, but at present, my max-probability guess is that some version of this hypothesis is the true one.
I think this overlaps with what I said, except I’m more specific/concrete? E.g. whereas you say “Nonlinear is … rewarding”, I am more specific about what the rewards are, namely “entry into … utilitarian world conquest, a pyramid scheme and a pseudocharity focused on providing comfort for elites”. There’s no doubt that Nonlinear provided lots of connections to high-ranking EAs and lots of luxurious vacations, and it also sounds like Nonlinear’s response to the accusations of not providing enough reward was to mention precisely these things, so these seem like the main rewards Nonlinear provided.
Possibly other Nonlinear employees can chime in and mention non-luxury-vacation, non-EA-connection rewards they’ve gotten. In terms of pay, it seems Alice/Chloe earned far less than I do in my job, and I don’t think this is disputed.
I don’t think there is enough evidence here for the pyramid-scheme pseudocharity bit. Why have such a complex hypothesis when my simpler one will do?
One should definitely be cautious about concluding for certain that these are the rewards, but I think the hypothesis is worth promoting to attention because multiple reasons converge:
If it is a pyramid scheme and a pseudocharity, that seems important to know, so that it can be stopped or at least doesn’t distract from genuine charities.
I think the general prior probability of the pyramid-scheme/comfort-pseudocharity pattern is high enough to promote the hypothesis to attention even without direct evidence, and the mechanism seems clear enough: pyramid schemes are incentivized because they let founders extract resources from their followers, and turning those resources into luxury for top members helps incentivize those members to participate in and strengthen the scheme.
Explicitly stating the details of alternative hypotheses makes it easier to figure out what would constitute evidence for or against them.
In the context of the Nonlinear drama, I find that this hypothesis resolves several things that might otherwise seem confusingly contradictory. For instance, assuming the hypothesis is true, here’s a model for how Alice could feel simultaneously isolated and excited:
If, in order to run their pyramid scheme, Nonlinear generated a lot of excitement, this would raise the hypothesis “EA/Nonlinear is excellent!” to consideration. But if Alice/Chloe never saw any definite good consequences of EA/Nonlinear for themselves, what they saw would also be compatible with the hypothesis “EA/Nonlinear is a pyramid scheme”. They would thus have a very wide range of uncertainty about how good EA/Nonlinear is. If the upper range were true, it would of course be logical to be excited about joining; if the lower range were true, it would be equally logical to be sad about not developing non-EA resources, i.e. to feel isolated by the time spent on EA/Nonlinear. Hence feeling simultaneously excited about the connections and isolated.
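To make the “wide range of uncertainty” point concrete, here is a toy numerical sketch (all numbers are invented for illustration, not estimates of the actual situation) of how excitement and dread can both be rational under the same mixed credence:

```python
# Toy model (invented numbers): an agent who is genuinely unsure whether the
# org is excellent or a pyramid scheme can rationally feel both excited about
# the upside and isolated about the sunk non-EA resources.

p_excellent = 0.5              # credence that "EA/Nonlinear is excellent"
p_pyramid = 1 - p_excellent    # credence that it is a pyramid scheme

value_if_excellent = 100       # payoff if the upper range is true
value_if_pyramid = -40         # lost non-EA resources if the lower range is true

expected_value = (p_excellent * value_if_excellent
                  + p_pyramid * value_if_pyramid)

print(f"Expected value of joining: {expected_value}")  # 30.0: justifies excitement
print(f"Downside if pyramid:      {value_if_pyramid}")  # -40: justifies feeling isolated
```

The expectation is positive, so joining and being excited is logical; but the pyramid branch stays live, so sadness about the foregone non-EA resources is logical too.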
Surely, “it was a big gamble that didn’t pay off” works as well as an explanation here. They didn’t realise how bad it would be and regret taking up the offer.
I guess, but it doesn’t explain why it was so bad.
I should say, I don’t mean “it doesn’t explain why” as a knockdown debunking, because usually when stuff happens, one doesn’t find out why. (But it can still be desirable to know why, to be better educated about how it will generalize; hence I post a hypothesis that I think is worthy of investigation and which I suspect helps me make sense of things.)