The other mindset I have is something like: as long as I act in such a way that an alternative version of me who landed in the right community does great, that would be enough? Like, if I throw a dart at the dartboard and miss, but I act the way I would want everyone who happened to work on the right things to act, then I don’t have to worry too much about getting good at darts.
There’s a sense I’m getting that Alkjash wants to give up on having a great map of the communities that are around, and just get on with nailing the work of the one that he’s in, so that if he’s ended up in one of the good ones, the payoff is large, and so that he isn’t wasting most of his time trying to evaluate which one to be in (an adversarial game in which he basically expects to either lose or waste most of his resources).
A key point to me that seemed not to be mentioned is that this takes the distribution of communities as static. Perhaps my life so far has seen the rise and fall of more communities than others get to experience, but I think one of the effects of modeling the communities and figuring out your own principles is not just that you figure out which is a good one to be part of right now, but that you set the standards and the incentives for what new ones can come into existence. If most people will basically go along with the status quo, then the competition for new games to play is weak.
I’ll try to make up an example that points at the effect. Suppose you’re a good academic who has harshly fought back against the bureaucratic attempts to make you into a manager. You’ve picked particular departments, universities, and sub-fields where you can win this fight and actually do research. Nonetheless, this has come with major costs for your career. Then suppose someone starts a new university and wants to attract you (a smart academic who is underpriced by the current system because of your standards) to join. Compared to the version of you that just did what the system wanted, whom they could recruit merely by offering slightly more pay, they actually have a reason to build the sort of place that attracts people with higher standards. They can say: “Look, the pay is lower, and we’re just getting started with the department, but I will ensure that the majority of professors have complete control over the number of PhDs they take (including 0).” You doing the work of (a) noticing the rot in your current institution and (b) clearly signaling that you will not accept the rot, sends a clear message to whoever builds the next institution that the rot is worth avoiding, and furthermore that they can attract good people by avoiding it.
This is a general heuristic I’ve picked up: it’s good to figure out which principles you care about and act according to them, so people know what standards you will hold them to in future situations that you weren’t thinking about and couldn’t have predicted in advance.
I see this as an argument for the “broad map” over the “detailed map” side of the debate.
I don’t have a complete reply to this yet, but wanted to clarify, in case it wasn’t clear, that the position in this dialogue was written with the audience (a particularly circumspect, broad-map-building audience) in mind. I certainly think that the vast majority of young people outside this community would benefit from spending more time building broad maps of reality before committing to career/identity/community choices. So I certainly don’t prescribe giving up entirely.
ETA: Maybe a useful analogy is that for Amazon shopping, I have found doing serious research into products (beyond looking at purchase volume and average ratings) largely unhelpful. Usually if I read reviews carefully, I end up more confused than anything else, as a large list of tail risks and second-order considerations are brought to my attention. I suspect career choice is similar, with much higher stakes.