I’m not sure the question means anything, nor am I sure exactly what it would mean if it did.
An easier thought experiment for me to imagine, which seems to relate to this question, is how I would expect cultural attitudes towards unattractive people to evolve when technology allows individuals to choose their appearances. A still easier one is how I would expect such attitudes to be projected into an online environment where people choose the appearances of their avatars. My intuition is that the “ugly”/“attractive” scale starts to mean very different things in such cultures, and ultimately ceases to mean much of anything at all, and questions like “should I be allowed to choose an ugly avatar?” and “should my avatar be forcibly upgraded to be less ugly?” start to feel like silly questions to which the correct answer is “who cares?” Sure, I may understand intellectually that newcomers to this culture come from a world where attractiveness matters a great deal, and may have a hard time acclimatizing themselves to the idea that my world is different; I may even sympathize with their need to ask such silly questions, but that won’t make me respect the questions any more.
Relatedly, one thing I might expect in a transhumanist culture is that the whole idea of a linear scale of ability might wither in the face of a myriad of incommensurable but mutually exclusive abilities and a general presumption of as much competence as an individual desires. That is, “disabled” would come to mean something very different, and ultimately would cease to mean much of anything.
Put another way: if I’m a nonverbal autistic in such a culture who has the choice of installing the ability to express affection verbally, but chooses not to, and you’re a something-else who has the choice of installing the ability to understand how I communicate affection but chooses not to, and both of us have the choice of installing the ability to communicate telepathically but have chosen not to, it’s not clear to me that either of us is in a position even remotely like the autistic and his mother you describe.
Another example: in such a culture, if I plug real-time information about others’ preferences directly into my own motivational framework to become maximally social, while you artificially compensate for the natural cognitive biases that would ordinarily cause your motives to be influenced by others’ expressed preferences in order to become maximally independent (or vice versa, if you prefer), it’s not at all clear that it makes sense to talk about either of us as disabled, even though each of us lacks an ability the other possesses, and even though someone in my culture who approximated either of those states might be considered disabled in various ways.
So, I dunno. I agree that a rational, clear dialogue about disability is desirable, but more because of its actual effect on the present than its hypothetical effect on a potential transhumanist future.