I say “below the belt” because I imagine that there are individuals of the Less Wrong community who strongly support SIAI’s work and goals concerning AI, but who simultaneously would not consider such AI creations to be of greater moral value than humans, and I didn’t want those individuals to think that I was making an assumption about their ethical opinions based on their support of AI research.
Yes, it is largely because of disapproval of the conclusions, but I disapprove of them because they are not rational in the face of other intellectual considerations. The failure to see a qualitative difference between humans, baboons and computers suggests an inability to distinguish between living and non-living entities, and I think that is irrational.
> there are individuals of the Less Wrong community who strongly support SIAI’s work and goals concerning AI, but who simultaneously would not consider such AI creations to be of greater moral value than humans
I normally hate to do this, but Nonsentient Optimizers says it better than I could. If you’re building an AI as a tool, don’t make it a person.
> The failure to see a qualitative difference between humans, baboons and computers suggests an inability to distinguish between living and non-living entities, and I think that is irrational.
That’s a question of values, though. I don’t value magnitude of consciousness; if baboons were uplifted to be more intelligent than humans on average, I would still value humans more.
Why would this be below the belt? If “greater consciousness” is what you value, it seems self-evidently true.
Is there a reason for this other than disapproval of the conclusions?
How do you define a living entity?