Where is the cultural context in all of this? How does that play in? Pain and pleasure in the West are different than in the East, just as value systems are different. When it comes to creating AGI, I think a central set of agreed-upon tenets is important. What is valuable? How can we quantify that in a way that makes sense for creating AGI? If we want to reward it for doing good things, we have to consider cultural validation. We don't steal, murder, or assault people because we have significant cultural incentive not to do so, especially if you live in a stable country. I think that could help. If we can somehow show group approval of the AGI (favorable opinions, verbal validation, and other things it intrinsically values as we do), we could use our own culture to reinforce norms within its architecture.
A rigorous theory of valence wouldn’t involve cultural context, much as a rigorous theory of electromagnetism doesn’t involve cultural context.
Cultural context may matter a great deal in terms of how to build a friendly AGI that preserves what’s valuable about human civilization—or this may mostly boil down to the axioms that ‘pleasure is good’ and ‘suffering is bad’. I’m officially agnostic on whether value is simple or complex in this way.
One framework for dealing with the stuff you mention is Coherent Extrapolated Volition (CEV); it's not the last word on anything, but it seems like a good intuition pump.
And I guess I'm saying that the sooner we think about these sorts of things, the better off we'll be. Going for 'pleasure good / suffering bad' reduces the mindset of the AI to that of a two-year-old. Cultural context gives us a sense of maturity, valence or no.