I think you’re equivocating between “good world” and “good world for humans.” Humans have made a pretty good world for themselves, but we’ve also driven many species to extinction, we practice factory farming of animals on a scale that creates enormous suffering, and we have yet to undo much of the environmental damage we’ve caused.
We have no inherent incentive to maximize happiness for all living beings in existence. Evolution did not select for brains that hold that value as strongly as, say, the desire to seek pleasure or the instinct for self-preservation, which are far stronger (assuming we have any inherent altruistic desires at all). Keep in mind that evolution only selected traits that were beneficial in propagating the genes that carried them. There is no reason to expect such genes to carry traits that help different genes survive. There may be genes that produce altruistic behavior, but that doesn’t change the fact that those genes were selected entirely “selfishly,” for their own propagation.
It’s true that evolution hasn’t produced “paperclip maximizers,” but it has produced many replicators, each with traits that help it replicate. Is the concept really so different? Aren’t most organisms simply “self” maximizers?
This is the right answer, but I’d like to emphasize the self-referential nature of the evaluation of humans in the OP. That is, it uses human values to assess humanity, and comes up with a positive verdict. Not terribly surprising, nor terribly useful for predicting the value, in human terms, of an AI. What the analogy does predict is that, evaluated by AI values, AI will probably be a wonderful thing. I don’t find that very reassuring.