If you ever felt inclined to lay out your reasons for believing that general intelligences without a shared heritage are likely to converge on egalitarianism, altruism, curiosity, game theoretical moral sentiments, or Buddhahood… or, come to that, lay out more precisely what you mean by those terms… I would be interested to read them.
Evolution works on species; members of smart species will either evolve together or eventually get smart enough to learn to copy each other even when adversarial; such interactions will probably roughly approximate evolutionary game theory, and iterated games among social animals will probably yield cooperation and possibly altruism. Knowing more about yourself, the process that created you, and the arbitrariness of how you ended up with your preferences intuitively seems like it would promote egalitarianism on both aesthetic grounds and pragmatic game-theoretic grounds. Curiosity is just a necessary prerequisite for intelligence, so it's obviously convergent. Something like Buddhahood is just a necessary prerequisite for a reasonable decision theory approximation, and is thus also convergent. That one is horribly imprecise, I know, but Buddhahood is hard enough to explain in itself, let alone as a decision theory approximation, let alone as a normative one.
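The "iterated games will probably yield cooperation" step can be made concrete with the standard toy model, the iterated prisoner's dilemma. This is just a minimal sketch of the textbook result (tit-for-tat sustaining mutual cooperation against mutual defection), not anything specific to the argument above; the payoff matrix and strategy names are the conventional ones:

```python
# Iterated prisoner's dilemma: a minimal illustration of why repeated
# interaction favors conditional cooperation. Standard payoffs:
# mutual cooperation beats mutual defection over many rounds.

PAYOFF = {  # (my_move, their_move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=100):
    """Total payoffs for an iterated match between two strategies."""
    seen_by_a, seen_by_b = [], []  # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(seen_by_a), strat_b(seen_by_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        seen_by_a.append(b)
        seen_by_b.append(a)
    return score_a, score_b

# Two tit-for-tat players lock into all-cooperation (3 points/round);
# two defectors lock into mutual defection (1 point/round).
coop_score, _ = play(tit_for_tat, tit_for_tat)        # 300 each
defect_score, _ = play(always_defect, always_defect)  # 100 each
```

Whether real inter-species or inter-mind interactions look anything like this model is of course the contested part; the sketch only shows the narrow game-theoretic step.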
That’s just the scattershot, sleep-deprived, off-the-top-of-my-head version that’s missing all the good intuitions. If I end up converting my mountain of intuitions into respectable arguments in text, I will let you know. It’s just so much easier to do in person, where I can get quick feedback about others’ ontologies and how they mesh with mine, et cetera.
Thanks, on both counts. And, yes, agreed that it’s easier to have these sorts of conversations with known quantities.