It might not be a problem if we decided to work on the meta-level and, rather than trying to optimize the universe according to some extrapolation of human values, tried to make sure the universe kept on having conditions that would produce some things like humans.
I would submit that, from the point of view of the ancestor species we displaced, we (Homo sapiens) were the equivalent of UAI. We were a superior intelligence unconstrained by any set of values that supported our ancestor species. We tiled the planet with copies of ourselves, robbing especially our immediate ancestors (who tended to occupy niches similar to ours) of resources and driving them extinct with our success.
So a universe that has the conditions to produce some things like humans is exactly "the state of nature" from which UAI will arise and, if they are as good as we are afraid they are, supplant humans as the dominant species.
I think this is the thinking behind all jokes along the lines of “I, for one, welcome our new robot overlords.”
That’s the goal. What, you want there to be humans a million years from now?
Is that true, or are you just being cleverly sarcastic? If that is the goal of CEV, could you point me to something written up on CEV where I might see this aspect of it?
I mean, that’s the goal of anyone with morals like mine, rather than just nepotism.
That does not sound like much of a win. Present-day humans are really not that impressive, compared to the kind of transhumanity we could develop into. I don't think trying to reproduce entities close to our current mentality is worth doing, in the long run.
By “things like humans” I meant “things that have some of the same values or preferences.”