I’m not aware of anything other than history that is a viable candidate to be evidence that a particular moral theory is correct.
It's possible that you're using "correct" to mean something completely different than I would use it to mean, but I don't see how history is supposed to be evidence that a moral theory is correct. Are you saying that historically widespread moral theories are likely to be correct?
Further, a FAI would need the capacity to predict how a human society would react to various circumstances or interventions.
This is something that the AI is supposed to figure out for itself, not something that would be hardcoded in (at least not in currently favored designs).