Actually, I favor the “response to the Euthyphro Dilemma” theory—if God is Good, then what God loves must be Good by definition, not by contrivance, and the dilemma collapses.
That is, if you ignore the contrivance that is “God is Good”.
Could you elaborate on how the second part follows from God = good?
As I understand it, “goodness” is being used in a somewhat confusing way there, referring not only to outcomes in the world and the actions which lead to those outcomes, but also to the preferences or personality traits which lead to those actions. In other words, there is a set of mental qualities which results in preferable outcomes (good) and a set which does not (evil); God has all of the former qualities and none of the latter, therefore similarity to God can be taken as evidence of goodness and vice versa.
The main practical difference from Friendly AI is that God is presumed to already exist.
I assume it’s supposed to work like mass-energy equivalence or something, but I don’t actually believe it, so I can’t say.
Heh, fair enough.