On the point of peer review: many AI safety researchers already get peer review by circulating their drafts among other researchers.
It seems to me that this is only a good use of your time if the journal becomes respectable. (Otherwise you barely increase the visibility of the field, no one will care about publishing in the journal, and it doesn’t help academics’ careers much.) There can even be a negative effect where AI safety is perceived as “that fringe field that publishes in <journal>”, which makes AI researchers more reluctant to work on safety.
I don’t know how a journal becomes respectable, but I would expect that it’s hard and takes a lot of work (and probably luck). I would want to see a good plan for how the journal will become respectable before I’d be excited to see this happen, and I would guess that it wouldn’t be doable without the effort of a senior AI/ML researcher.
> On the point of peer review: many AI safety researchers already get peer review by circulating their drafts among other researchers.
I expect this to become less feasible as the field grows, especially as new researchers enter who do not yet have strong connections. For example, in my own work it has been useful to share drafts with colleagues for early feedback, but (based on my experience publishing results in mathematics some years ago) there is also value in peer-review comments that point out things like related work that none of us was aware of.
Yeah, I think I agree with this. You can still get peer review from the people you work with if you are at an organization, but more varied feedback is preferable, and some people may not work at an organization.
> I don’t know how a journal becomes respectable, but I would expect that it’s hard and takes a lot of work (and probably luck). I would want to see a good plan for how the journal will become respectable before I’d be excited to see this happen, and I would guess that it wouldn’t be doable without the effort of a senior AI/ML researcher.
I agree with all of this, except that it suggests to me that this is worth trying, in the spirit of attempting potentially high-impact projects even when they have a low chance of success, since the expected value is probably enough to overcome the opportunity costs. Yes, there are many challenges and maybe only a 10% chance of success, but that seems good enough to try if the idea is otherwise valuable.
Sorry for the super late response, I only just discovered notifications.
When the things you are trying are meta-level projects that affect other people, I think it’s worth trying them _well_ even if they have a low chance of success, but quite costly to try them in an okayish way when the chance of success is low.
One major downside of trying new things is that failure makes future attempts at the same thing less likely to work: people become less enthusiastic and expect it to fail, or you get a proliferation of competing versions where half the people are on one and half on the other, and you lose out on network effects and economies of scale. This means that when you try new things, especially ones that make asks of other people, you want to put a _lot_ of effort into getting it right quickly. If you do the 20% effort version and it fails, maybe the 90% effort version would have succeeded beforehand, but now it simply can’t be done, and you’ve lost that value entirely. Whereas if you do the 90% effort version from the start and it fails, you can be reasonably confident that it was just not doable.
In this particular case, there’s also an object-level downside in the case of failure, namely that AI safety is thought of as “that fringe group that publishes in <journal>”.