But as Eliezer has pointed out in Humans in Funny Suits, we should be wary of irrationally anthropomorphizing aliens. Even if there’s a tendency for intelligent life on other planets to be sort of like humans, such intelligent life may (whether intentionally or inadvertently) create a really powerful optimization process.
The creation of a powerful optimization process is a distraction here—as Eliezer points out in the article you link, and in others like the “Three Worlds Collide” story, aliens are quite unlikely to share much of our value system.
Yes, the creation of unfriendly AI is an important topic here on Earth, but for an alien civilization, all we need to know is that they’re starting off with a different value system and that they might become very powerful. Meeting a powerful entity with a different value system is more likely to be bad news than good news, regardless of whether the “power” comes from creating an “alien-friendly” AI, an “alien-unfriendly” AI (destroying their old alien value system in the process), self-modification, uploading, or whatnot.
Berating Stephen Hawking for not mentioning this detail (“Maybe the aliens will create an unfriendly AI!”) is unnecessary. His view looks much less naive than this:
My feeling is that if human civilization advances to the point where we can explore outer space in earnest, it will be because humans have become much more cooperative and pluralistic than presently existing humans.