Not being able to directly communicate with the others would be an issue in the beginning, but I’m guessing you would be able to use the setup to work out what the others think.
A bigger issue is that this would probably result in a very homogeneous group of minds. They’re optimizing not for correct answers, but for consensus answers. It’s the equivalent of studying for the exam rather than for understanding. A fun example is the Polish equivalent of the SAT exams (this probably generalizes, but I don’t know about other countries). I know quite a few people who went on to study biology and then decided to retake the biology exam (as one can do). Most retakers did worse the second time around, precisely because they had more up-to-date knowledge: the exam is at least 10 years behind the current state of knowledge, so they gave answers that are correct as of today and had them marked as wrong. I’d expect the group of AIs to eventually converge on a set of acceptable beliefs rather than correct ones.