It’s worth noting that “Humanity” ≠ “Human-like (or better) intelligences that largely share our values” ≠ “Civilization.” This gives us three different kinds of existential risk.
Robin Hanson, as I understand him, seems to expect that only the third will survive, and seems to be okay with that. Many Less Wrongers, on the other hand, seem less concerned with humanity per se than with the survival of human-like intelligences that share our values. And someone could care an awful lot about humanity per se, and want to put a lot of effort into making sure humans aren’t largely replaced by AIs of any kind.
I’m not a huge reader of blog comment threads, so it’s possible these debates have already been done to death in the comments without my noticing, but it would be nice to see some OPs on this issue.