Though Eliezer does not say it explicitly today, the totality of his public pronouncements on laughter leads me to believe that he considers laughter an intrinsic good of very high order. I hope he does not expect me to accept the probable rarity of humor in the universe as evidence for humor’s intrinsic goodness. After all, spines are probably very rare in the universe, too. At least spines with 33 vertebrae (the usual human count) are.
Eliezer does not explicitly say today that happiness is an intrinsic good, but he does contrast pebble sorting with “the human vision of a galaxy in which agents are running around experiencing positive reinforcement.”
I take it Eliezer does not wish to see the future light cone tiled with tiny computers running Matt Mahoney’s Autobliss 1.0. Pray tell me: what is wrong with such a future that is not also wrong with a future in which the resources of the future light cone are devoted to helping humans run around and experience positive reinforcement? Eliezer’s answer might refer to the difference between the simplicity of Autobliss 1.0 and the complexity of a human. Well, my reply to that is that it is relatively easy to make Autobliss more complex. We could even employ an evolutionary algorithm to create the complexity, increasing the resemblance between Autobliss 2.0 and humans (see the sketch below). Eliezer probably has a reply to that, too. But when does this dialog reach the point where it becomes obvious that the distinction that makes humans intrinsically valuable and Autobliss 1.0 not valuable is being chosen so as to have the desired consequence? And did we not have a sermon sometime in the last couple of weeks about how it is bad to gather evidence for a desired conclusion while ignoring evidence against it?
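To make the “it is easy to add complexity” step concrete, here is a minimal, hypothetical sketch in Python. This is not Mahoney’s actual Autobliss code; the Autobliss class, the bit-string genome, and the genome-length complexity measure are all my own assumptions for illustration. A trivial agent registers positive reinforcement on every step, and a toy evolutionary loop selects for longer genomes as a stand-in for “more complexity”:

```python
import random

# Hypothetical sketch, not Matt Mahoney's actual Autobliss code: an "agent"
# whose entire inner life is a running total of positive reinforcement.
class Autobliss:
    def __init__(self, genome=None):
        # The genome is an arbitrary bit string; a longer genome stands in
        # for "more complexity" in the argument above.
        self.genome = genome if genome is not None else [random.randint(0, 1) for _ in range(8)]
        self.reward = 0.0

    def step(self):
        # Every step is maximally rewarding, by construction.
        self.reward += 1.0


def complexity(agent):
    # Crude stand-in for complexity: genome length.
    return len(agent.genome)


def evolve(population, generations=20, max_len=4096):
    # Toy evolutionary loop that selects for longer (more "complex") genomes,
    # illustrating how cheaply an Autobliss 2.0 could be made to look richer.
    for _ in range(generations):
        for agent in population:
            agent.step()
            # Mutation: occasionally duplicate a tail segment of the genome.
            if random.random() < 0.5 and len(agent.genome) < max_len:
                i = random.randrange(len(agent.genome))
                agent.genome = agent.genome + agent.genome[i:]
        # Selection: keep the most complex half, refill with copies.
        population.sort(key=complexity, reverse=True)
        survivors = population[: len(population) // 2]
        population = survivors + [Autobliss(list(a.genome)) for a in survivors]
    return population


pop = evolve([Autobliss() for _ in range(10)])
print("most complex genome after evolution:", max(complexity(a) for a in pop))
```

Of course the genome here does nothing at all; that is the point. Complexity as such is trivially manufacturable, which is why I doubt it can carry the moral weight being placed on it.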