Mostly I agree with this. I have more thoughts, but it's probably better to put them in a top-level post—largely because I think this is important and I'd be interested in getting more input on what a good balance looks like.
A few thoughts on LW endorsing invalid arguments:
I'd want to separate considerations of impact on [LW as collective epistemic process] from [LW as outreach to ML researchers]. E.g. it doesn't necessarily seem like much of a problem for the former to rely on unstated assumptions. I wouldn't formally specify an idea before sketching it, and it's not clear to me that there's anything wrong with collective sketching (so long as we know we're sketching—and this part could certainly be improved).
I'd first want to optimize the epistemic process, and only then worry about looking foolish (granted, there are instrumental reasons not to look foolish).
On the ML community's view: are you mainly thinking of people who might do research on an important x-safety sub-problem without necessarily buying the x-risk arguments? It seems unlikely to me that anyone gets persuaded of x-risk from the bottom up, whether or not the paper/post in question is rigorous—but perhaps that isn't required for a lot of useful research?
I’d want to separate considerations of impact on [LW as collective epistemic process] from [LW as outreach to ML researchers]
Yeah, I put those in one sentence in my comment, but I agree that they're two separate points.
RE impact on the ML community: I wasn't thinking of anything in particular; I just think the ML community should have more respect for LW/x-safety, and stuff like that doesn't help.