I think there’s a different sort of conversation where this sort of comment might be helpful (there are plenty of perspectives from which EA, or “A”, doesn’t make sense, and those are worth talking about). But it feels a bit outside the scope of this conversation.
(Not 100% sure about toonalfrink’s goals for the conversation.)
I have no idea what toonalfrink’s goals for the conversation are. But when someone writes something like,
>So you find yourself in this volunteering opportunity with some EA’s and they tell you some stuff you can do, and you do it, and you’re left in the dark again. Is this going to steer you into safe waters? Should you do more? Impress more? Maybe spend more time on that Master’s degree to get grades that set you apart, maybe that’ll get you invited with the cool kids?
then the only sensible option from my perspective is to take a step back and consider why you’re seeking status from this community in the first place, and what motivations go into this behavior. At this point, I think it’s well worth reflecting on:
1) Why altruism in the first place?
2) Given 1, why EA?
3) Given 2, why seeking status?
Community norms tend to be self-reinforcing. It’s worth pointing out that there are people with a genuinely different perspective, and that this perspective is held for reasons.
I do think it makes sense to step back, but in the opposite order (you can’t rederive your entire ontology and goal structure every time something doesn’t make sense—it’s too much work and you’d never get anything done).
“Why am I seeking status?” and “Why are EA and/or EA organizations the right way to go about A?” seem like plausible steps backward to take, given the questions toon is raising here.
“Why altruism?” is a question every altruist should take seriously at least once, but none of the dilemmas raised in toon’s post seem like the sort of thing that warrants questioning the entire underpinning of your goal structure. (I realize that if you think the entire structure is flawed, you’re going to disagree, but I think it’s important at the meta level for people to be able to think through problems within a given paradigm without every conversation turning into a re-evaluation of that paradigm.)
Happy to talk more in a different top-level post, but not really interested in talking more in this particular comment section.