The two concrete examples you gave weren't what I had in mind. I was addressing the problem of an AI "losing" values during extrapolation, and that looks like a real concern to me. If you want to prevent an AI undergoing value drift during extrapolation, keep an extrapolated one as a reference. Two is, minimally, a group.
There may well be other advantages to doing rationality and ethics in groups, and yes, that needs research, and no, that isn't a showstopper.