I think what Eliezer is saying is that our evolutionary psychology, memetic history, and reactions to current moral arguments form the computational trajectory of our moral judgment. All the points on this trajectory are acceptable moral judgments, but when new experiences are fed back through the base program, the trajectory can shift. The shift takes place at the base of the line as it extends from the program, rather than the line curving in the middle to include all current moral values. Moral values the line contacts are good, and any it does not contact are not good, like an on-off switch. This is because current moral judgments flow backwards.
The aggregate moral trajectory adds up to humanity's morality when the function is filtered through the base program once again, so it continuously performs an update loop. Now, if we edit the base program, it no longer produces consistent answers. This would be like taking a pill that makes it 'morally right' to kill people. What I am stuck on is how we could edit the base program and still have it produce consistent answers.
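To make the picture I have in mind concrete, here is a toy sketch of the "fixed base program plus feedback loop" reading. Everything here is my own illustrative invention, not anything from Eliezer: `base_judgment` stands in for the fixed program, and the loop stands in for new experiences being fed back through it. The point of the sketch is just that the trajectory of endorsed values shifts at each step while the program itself stays fixed; the "pill" case would correspond to swapping out `base_judgment` mid-loop, after which the outputs no longer cohere with the earlier trajectory.

```python
# Toy model of a fixed "base program" updating a set of endorsed values.
# All names (base_judgment, experiences) are hypothetical illustrations.

def base_judgment(values, experience):
    """The fixed 'base program': maps current values plus a new
    experience to an updated set of endorsed values."""
    updated = set(values)
    if experience.endswith("suffering"):
        # A crude stand-in for a stable evaluative disposition.
        updated.add("reduce " + experience)
    return updated

values = {"keep promises"}
for experience in ["witnessing suffering", "animal suffering"]:
    # Each pass through the base program can shift the trajectory,
    # but the program itself never changes.
    values = base_judgment(values, experience)

print(sorted(values))
```

On this reading, consistency comes from `base_judgment` being held fixed across every iteration; editing it is not one more point on the trajectory but a change to the thing that generates the trajectory, which is why the question of editing it while preserving consistent answers seems hard.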