Richard, if you’re seriously proposing that consciousness is a mistaken idea, but morality isn’t, I can only say that that has got to be one unique theory of morality.
Yes, Z.M.D., I am seriously proposing that. And I know my theory of morality is not unique to me, because one man caused thousands of people to declare for a theory of morality that makes no reference to consciousness (or to subjective experience, for that matter). Although most of those thousands may have since switched to some other moral theory, and although most of the declarations may have been insincere in the first place, a significant fraction have not switched and were not insincere, if my correspondence with a couple dozen of those thousands is any indication.
Maybe [Eliezer is] right, and superintelligence implies consciousness. I don’t see why it would, but maybe it does. How would we know? I worry about how productive discussions about AI can be, if most of the participants are relying so heavily upon their intuitions, as we don’t have any crushing experimental evidence.
It is not only that we don’t have any experimental evidence, crushing or otherwise; it is also that I have never seen anything resembling even an embryo of a definition of consciousness (or of personhood, unless personhood is defined “arbitrarily”, e.g., by equating it with being a human being) that would commit a user of the concept to any particular outcome in any experiment. I have seen nothing resembling such a definition even after reading Chalmers, Churchland, most of SL4 before 2004 (which goes on and on about consciousness), and almost everything Eliezer has published (e.g., on SL4).