This is especially important in light of the fairly recent, massive grass-roots effort in academia (originated by academics in multiple disciplines volunteering their spare time) to do the work that led to the replication crisis: academics in many fields are actually still trying to get the right answer along some dimensions, and are willing to endure material costs (including reputational damage to their own fields) to do so. So that's not actually a proposal to decline to initiate a stag hunt; it's a proposal to unilaterally choose Rabbit in a context where close to a critical quorum might be choosing Stag.
Another distinction I think is important, for the specific example of “scientific fraud vs. cow suffering” as a hypothetical:
Science is a terrible career for almost any goal other than actually contributing to the scientific endeavor.
I have a guess that “science, specifically” as a career-with-harmful-impacts in the hypothetical was not specifically important to Ray, but that it was very important to Ben. And that if the example career in Ray’s “which harm is highest priority?” thought experiment had been “high-frequency-trading” (or something else that some folks believe has harms when ordinarily practiced, but is lucrative and thus could have benefits worth staying for, and is not specifically a role of stewardship over our communal epistemics) that Ben would have a different response. I’m curious to what extent that’s true.
You're right that I'd respond to different cases differently. Doing high-frequency trading in a way that causes some harm, if you think you can do something very good with the money, seems basically sympathetic to me in a sufficiently unjust society, such as ours.
Any info good (including finance and trading) is on some level pretending to involve stewardship over our communal epistemics, but the simulacrum level of something like finance is pretty high in many respects.
I think your final paragraph is getting at an important element of the disagreement. To be clear, *I* treat science and high-frequency trading differently, too, but yes: to me it registers as "very important," whereas to Ben it seems closer to "sacred" (which, to be clear, seems like a quite reasonable outlook to me).
> Science is a terrible career for almost any goal other than actually contributing to the scientific endeavor.
Small background tidbit that's part of this: I think many scientists have goals that seem more like "do what their parents want" and "be respectable" or something. Which isn't about traditional financial success, but looks like opting into a particular weird sub-status-hierarchy that one might plausibly be well suited to win at.
Another background snippet informing my model:
Recently I was asking an academic friend “hey, do you think your field could benefit from better intellectual infrastructure?” and they said “you mean like LessWrong?” and I said “I mean a meta-level version of it that tries to look at the local set of needs and improve communication in some fashion.”
And they said something like “man, sorry to disappoint you, but most of academia is not, like, trying to solve problems together, the way it looks like the rationality or AI alignment communities are. They wouldn’t want to post clearer communications earlier in the idea-forming stage because they’d be worried about getting scooped. They’re just trying to further their own career.”
This is just one datapoint, and again I know very little about academia overall. Ben's comments about how the replication crisis happened via an organic grassroots process seem quite important and quite relevant.
Reiterating from my other post upthread: I am not making any claims about what people in science and/or academia should do. I’m making conditional claims, which depend on the actual state of science and academia.