I don’t know how a journal becomes respectable, but I would expect that it’s hard, takes a lot of work, and probably requires luck. I would want to see a good plan for how the journal will become respectable before I’d be excited to see this happen, and I would guess that it wouldn’t be doable without the effort of a senior AI/ML researcher.
I agree with all of this, except that to me it suggests the project is worth trying, in the spirit of attempting potentially high-impact projects even when they have a low chance of success, since the expected utility is probably enough to overcome the opportunity costs. Yes, there are many challenges and maybe only a 10% chance of success, but that seems good enough to try if the idea is otherwise valuable.
Sorry for the super late response; I only just discovered notifications.
In cases where the things you are trying out are meta-level things that affect other people, I think it’s worth trying them _well_ even if they have a low chance of success, but quite costly to try them in an okayish way when they have a low chance of success.
One major downside of trying new things is that it makes future attempts to do the same thing less likely to work (because people become less enthusiastic about it and expect it to fail, or you get a proliferation of new things where half of the people are on one and half are on the other, and you lose out on network effects and economies of scale). This means that when you try new things, especially ones that make asks of other people, you want to put a _lot_ of effort into getting it right quickly. If you do the 20% effort version and it fails, maybe the 90% effort version would have succeeded beforehand but now simply can’t be done, and you’ve lost that value entirely. Whereas if you do the 90% effort version from the start and it fails, you can be reasonably confident that it just wasn’t doable.
In this particular case, there’s also an object-level downside in the case of failure, namely that AI safety is thought of as “that fringe group that publishes in <journal>”.