If some biologists started a journal that dealt with physics (because they thought they had some reason to believe they had a unique and valuable take on Physics And Biology) that might be weird, perhaps bad. But it wouldn't be "decide what physics things get published." It'd be "some biologists start a weird Physics Journal with its own kinda weird submission criteria."
I in fact meant "decide what physics things get published"; in this counterfactual every physics journal / conference sends its submissions to biologists for peer review and a decision on whether they should be published. I think that points more correctly at the problems I am worried about than "some biologists start a new physics journal."
Like, it is not the case that there already exists a public evaluation mechanism for work coming out of CHAI / OpenAI / DeepMind. (I guess you could look at whether the papers they produce are published in some top conference, but this isn't something OpenAI and DeepMind try very hard to do, and in any case that's a pretty bad evaluation mechanism because it evaluates by the standards of the regular AI field, not the standards of AI safety.) So creating a public evaluation mechanism where none exists is automatically going to get some of the legitimacy, at least for non-experts.