I can’t tell if you think kindness is rare amongst aliens, or if you think it’s common amongst aliens but rare amongst AIs. Either way, I would like to understand why you think that. What is it that makes humans so weird in this way?
Can’t speak for Nate and Eliezer, but I expect kindness to be somewhat rare among evolved aliens (I think Eliezer’s wild guess is 5%? That sounds about right to me), and the degree to which they are kind will vary, possibly from only very slightly kind (or kind only under a very cosmopolitan view of kindness) to as kind as or kinder than humans.
For AIs that humans are likely to build soon, I think there is a significant probability (more than 50%, less than 99%? 90% seems fair) that they will have literally zero kindness. One reason is that I expect there is a significant chance that there will be nothing within the first superintelligent AI systems that cares about kindness, or about anything else, in the way that humans and aliens might care about something. If an AI system is superintelligent, then by assumption some component of the system will necessarily have a deep and correct understanding of kindness (and of many other things), and will be capable of manipulating that understanding to achieve some goals. But the system understanding kindness is different from the system itself valuing kindness, or from there being anything at all “there” to have values of any kind whatsoever.
I think that current AI systems don’t provide much evidence on this question one way or the other, and, as I’ve said elsewhere, arguments which rely on pattern-matching human cognition to structures in current AI systems often fail to draw the understanding/valuing distinction sharply enough, in my view.
So a 90% chance of ~0 kindness is mostly just a made-up guess, but it still feels like a better guess to me than a shaky, overly optimistic argument for why AI systems designed by processes which look nothing like human (or alien) evolution will produce minds which, very luckily for us, just so happen to share an important value with minds produced by evolution.