Really interesting work! I have two questions:
1. In the "model organisms of misalignment" section, it is stated that AI companies might be nervous about researching model organisms because it could increase the likelihood of new regulation, since it would provide more evidence of concerning properties in AI systems. Doesn't this depend on what kind of model organisms the company expects to be able to develop? If model organisms turn out to be difficult to find, that would be evidence that alignment is easier than feared, and thus there would be less need for regulation.
2. Why didn't you list AI control work as one of the areas that may be slow to progress without efforts from outside the labs? According to your incentives analysis, AI companies don't seem to have many incentives to pursue this kind of work, and there were zero papers on AI control.