The FAI project is about finding the moral theory that is correct,(1) then building potential AGIs so that they make decisions according to that theory. I’m not aware of anything other than history that is a viable candidate to be evidence that a particular moral theory is correct.
Further, a FAI would need the capacity to predict how a human society would react to various circumstances or interventions. Again, history is the only data on how human societies react.
(1) I acknowledge the need to taboo “correct” in this context in order to make progress on this front.
I’m not aware of anything other than history that is a viable candidate to be evidence that a particular moral theory is correct.
It’s possible that you’re using “correct” to mean something completely different from what I would use it to mean, but I don’t see how history is supposed to be evidence that a moral theory is correct. Are you saying that historically widespread moral theories are likely to be correct?
Further, a FAI would need the capacity to predict how a human society would react to various circumstances or interventions.
This is something that the AI is supposed to figure out for itself, not something that would be hardcoded in (at least not in currently favored designs).