From the linked post, the part where you discuss a form of “weak moral realism”:
… in addition to the straightforward approach of programming an AI to adopt some value system (such as utilitarianism), we could also program the AI to hold the correct moral system.
What can this mean? Utilitarianism is a moral system. (What is a “value system”, as you use the term?)