I’d want to separate considerations of impact on [LW as collective epistemic process] from [LW as outreach to ML researchers]
Yeah, I put those in one sentence in my comment, but I agree that they are two separate points.
RE impact on the ML community: I wasn't thinking about anything in particular; I just think the ML community should have more respect for LW/x-safety, and stuff like that doesn't help.