Moral relativism doesn’t seem to require any assumptions at all because moral objectivism implies I should ‘just know’ that moral objectivism is true, if it is true. But I don’t.
Not at all. Nothing guarantees that discovering the objective nature of morality is easy. If it's derived from game theory, there's specific reason to believe it would be hard to compute. Evolution has had time to discover good patterns in games, though, which hardcodes those patterns into living creatures. That said, this also implies there's no particular reason to expect fast convergence back to human morality after a system becomes superintelligent, so it's not terribly reassuring; it only claims that some arbitrarily long time later, the ASI eventually gets really sad that it killed humanity.
The math behind game theory shaped our evolution in such a way as to create emotions, because that was a faster solution for evolution to stumble on than making us all mathematical geniuses who would immediately deduce game theory from first principles as toddlers. Either way would have worked.
ASI wouldn’t need to evolve emotions for rule-of-thumbing game theory.
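For what it's worth, here's a minimal sketch of that "rule of thumb instead of explicit computation" point, with toy payoffs and strategies I picked myself rather than anything from the thread: in an iterated prisoner's dilemma, a crude reciprocity rule (tit-for-tat, roughly what emotions like gratitude and resentment implement) stays competitive without computing any equilibria.

```python
# Illustrative sketch only: toy payoffs and strategies, not a claim about any
# specific model. A hardcoded reciprocity heuristic does nearly as well as the
# best strategy here without knowing any game theory.

PAYOFFS = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's last move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def always_cooperate(my_history, their_history):
    return "C"

def play(strategy_a, strategy_b, rounds=200):
    """Return total payoffs (a_score, b_score) for one repeated match."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    strategies = {"tit_for_tat": tit_for_tat,
                  "always_defect": always_defect,
                  "always_cooperate": always_cooperate}
    # Round-robin (including self-play): tit_for_tat lands within a few points
    # of always_defect and well ahead of always_cooperate.
    for name_a, strat_a in strategies.items():
        total = sum(play(strat_a, strat_b)[0] for strat_b in strategies.values())
        print(f"{name_a:16s} total payoff: {total}")
```

The point isn't that tit-for-tat is optimal, just that a cheap hardcoded heuristic captures most of the game-theoretic value without the agent doing any game theory.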
Game theory has little interesting to say about a situation where one party simply has no need for the other at all and can squish them like a bug, anyway.
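To make that concrete with made-up numbers: if the stronger party's payoff doesn't depend on anything the weaker party can offer or withhold, then "squish" strictly dominates "spare", and repetition gives reciprocity nothing to work with.

```python
# Illustrative sketch only: invented payoffs for an interaction where the strong
# party gains nothing from the weak party's cooperation. "Squish" then strictly
# dominates "spare" no matter what the weak side does.

# strong_payoff[strong_move][weak_move]
strong_payoff = {
    "squish": {"cooperate": 10, "resist": 10},  # weak side's choice is irrelevant
    "spare":  {"cooperate": 9,  "resist": 8},   # sparing costs a little either way
}

def strictly_dominates(payoffs, move_a, move_b):
    """True if move_a beats move_b against every opposing move."""
    return all(payoffs[move_a][w] > payoffs[move_b][w] for w in payoffs[move_a])

print(strictly_dominates(strong_payoff, "squish", "spare"))  # True
```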
Yup. A super-paperclipper wouldn't realize the loss until probably billions of years later, after its preferred paperclip shapes have had time to evolve enough for it to realize it's sad it can't decorate the edges of its paperclips with humans.
It does not imply that, any more than thinking the sun has some particular temperature implies that everyone knows what that temperature is.