Thanks for the link! I think the earlier chapters that I skimmed were actually more interesting to me than chapter 5 that I… well, skimmed in more detail. I’ll make some comments anyway, and if I’m wrong it’s probably my own fault for not being sufficiently scholarly.
Some general issues with the entire “robot rights” genre (justifications for me not including more such papers in this post), which I don’t think you evaded:
Rights-based reasoning isn’t very useful for questions like what entities to create in the first place.
AI capabilities are not going to reach the level of useful personal assistants and then plateau. They’re going to keep growing. The useful notion of rights relies on the usefulness of certain legal and social categories, but sufficiently capable AI might be able to get what it wants in ways that undermine those categories (in the extreme case, without acting as a relevant member of society or relevant subject of the legal system).
Even in the near term, for those interested in mental properties as a basis for how we should treat AIs, the literature is too anthropomorphic, and reality (e.g. “what’s it like to be GPT-3”) is very, very not anthropomorphic. I would say your book is above average here because it focuses on social / legal reasons for rights.
Thanks for taking the time to look through my book. It’s an important first step to having a fair dialogue about tricky issues. I’ll say from the outset that I initially sought to answer two questions in my book: (1) could robots have rights (I showed that this could easily be the case in terms of legal rights, which is already happening in the US in the form of pedestrian rights for personal delivery devices); and (2) should robots have rights (here I also answered in the affirmative by taking a broad view of the insights provided by the Anthropocene, New Materialism, and critical environmental law). As to your points, see my responses below.
I disagree. In fact, some of the most vocal opponents of robot rights, like Joanna Bryson, argue that if we have robots worthy of rights, we will have designed them unjustly. Her point is that if we figure out the rights question, it can help us to avoid designing robots that might qualify for rights. My position on this is that roboticists are going to do what they want (unless the government stops them), so we are headed for maximally human-like robots (see: Ishiguro and Hanson).
I can sort of see where you’re going with this, but I might say I agree in part and disagree in part. I agree that designers will not stop at useful personal assistants, but I don’t see how robots that eventually advocate for themselves will nullify the usefulness of rights. Keep in mind that rights are a two-way street; they require responsibilities as well. If for some reason robots take it upon themselves to seize what they want, that might only strengthen the need for new human rights and perhaps responsibilities on behalf of the companies creating these machines. It will still be important (perhaps more so) for society to determine what kind of moral status autonomous systems might warrant.
I appreciate the positive feedback. However, I would say that the cognitivist approach to moral status is a dead end. David Gunkel, whose terrific book Robot Rights inspired my own, discusses the objections to such a properties-based approach to moral status in a forthcoming chapter in an edited volume. Basically, there are a lot of problems, but perhaps the most important is the “problem of other minds” in philosophy, which says we can never really know what is going on in another entity’s mind (see Nagel’s iconic article, “What is it like to be a bat?”).
Thanks again for your comments and I am grateful for your willingness to engage.