I’ll break this down into two responses, because of the length.
-Assuming the locust-thing is an apologetic gloss doesn’t seem warranted. Locusts have been a common food source in many parts of Asia and Africa for thousands of years, and the fact that the Torah permits the consumption of certain locusts strongly implies that they were being eaten. It seems fair to estimate that the people eating these locusts would have known how many legs they really had, regardless of illiteracy and poor knowledge of animal biology.
-I’m not claiming that the Tanakh itself contains clear, obvious passages expressing wonder at the universe; in fact, I pointed out that the text itself generally doesn’t. I’m claiming that the legal tradition that derives from it necessitates the study of nature and makes it inevitable, and that the study of nature became a part of Jewish oral tradition as a consequence. While I used the Kuzari for easy citation, the necessity for scientific study can be seen from the text of the Mishnah itself. How would the Tannaim have fixed a calendar without studying astronomy, established rules for identifying sick animals without studying animal disease, established rules for eruvin without studying plane geometry, and so on? Simply reiterating that the written Tanakh itself doesn’t express much wonder for the universe ignores the fact that both oral tradition and written law had an equal stake in how Judaism began and in how it developed. It also ignores the fact that Judaism has always been much more concerned with the morality of concrete, physical activity than with scientific speculation, the latter having been appropriately subordinated and sublimated to the cause of the former.
-You’re right, I am admitting that certain aspects of Jewish thought occupy distinct magisteria. What I am disputing is that rational, scientific methodology is synonymous with reason itself. Many schools of philosophy use methods of logic other than the scientific process. As an example (and I don’t mean this to be below the belt), one could claim, as Peter Singer does, that an adult baboon has more utility and moral value than a human infant, since the baboon would have a more developed brain and therefore greater consciousness. By extrapolation, one could similarly claim that a super-intelligent computer would have more utility and moral value than a contemporary adult human, since the former would have a more developed mind and therefore greater consciousness. If ethics are to be understood through the prism of the scientific process as we know it, these ideas could actually be argued for pretty effectively, which is exactly why I don’t think such methods of reasoning are appropriate for the discussion of such issues.
It seems fair to estimate that the people eating these locusts would have known how many legs they really had
Any large text that makes scientific claims contains errors. A modern science textbook averages about 14 errors. Ancient Greek texts are full of erroneous factual claims that their authors could easily have checked. Aristotle claimed that men had more teeth than women. Had such a claim been in the Torah, there would be later commentary explaining that in women, certain teeth don’t count as teeth.
Being fair to Aristotle, it may be that, empirically, in Ancient Greece or in whatever sample he used to check his claim, the women actually did have fewer teeth on average: worse nutrition, more stress on the body from pregnancy, and so on. If you check ten women and ten men in a non-modern community, you might easily get such a result by sheer chance.

I don’t think that Aristotle did check empirically, though.

Since a large part of what he did was checking empirically, I don’t think your opinion is justified. Really, the most likely explanation is that he checked empirically, the same way he observed that the kidneys produce urine and that some sharks give birth to live young, and made numerous other biological discoveries, in part through first-hand dissection.
Why would this be below the belt? If “greater consciousness” is what you value, it seems self-evidently true.

Is there a reason for this other than disapproval of the conclusions?

-I say “below the belt,” because I imagine that there are individuals of the Less Wrong community who strongly support SIAI’s work and goals concerning AI, but who simultaneously would not consider such AI creations to be of greater moral value than humans, and I didn’t want these individuals to think that I was making an assumption about their ethical opinions based on their support of AI research.
-Yes, it is largely because of disapproval of the conclusions, but I disapprove of the conclusions because the conclusions are not rational in the face of other intellectual considerations. The failure to see a qualitative difference between humans, baboons and computers suggests an inability to distinguish between living and non-living entities, and I think that is irrational.
there are individuals of the Less Wrong community who strongly support SIAI’s work and goals concerning AI, but who simultaneously would not consider such AI creations to be of greater moral value than humans
I normally hate to do this, but Nonsentient Optimizers says it better than I could. If you’re building an AI as a tool, don’t make it a person.
The failure to see a qualitative difference between humans, baboons and computers suggests an inability to distinguish between living and non-living entities, and I think that is irrational.
That’s a question of values, though. I don’t value magnitude of consciousness; if baboons were uplifted to be more intelligent than humans on average, I would still value humans more.

How do you define a living entity?