What proposition are you looking for an argument against?
Transhumanism can mean a lot of things: the transcending of various heretofore-human limits, conditions, or behaviors, which are many and different from one another.
And for any of those, the proposition in question might be that it is possible, likely, or inevitable; (un)desirable or neutral; ethically (in)permissible or obligatory; and so on.
I’m looking for utilitarian arguments against the desirability of changing human nature by direct engineering. Basically, I’m wondering if there’s any utilitarian case for the “it’s fundamentally wrong to play God” position in bioethics. (I’m being vague in order to maximize my chance of encountering something.)
A while back, I made the argument that the ability to remove fundamental human limits will eventually lead to the loss of everything we value.
How long have you been this pessimistic about the erasure of human value?
Not sure. I’ve been pessimistic about the Singularity for several years, but the general argument that human value is doomed with very high probability only really clicked sometime late last year.
This seems to assume a Hansonesque competitive future rather than an FAI singleton; is that right?
Pretty much.
Please be more specific and define “changes to human nature”.
We already make many deliberate changes to people. We raise them in a culture, educate them, train them, and fit them into jobs and social roles. We make social norms and expectations into second nature for most people, make them strongly believe many things without evidence, indoctrinate them into cults, religions, and causes, and make them do almost anything we like.
We also make medical interventions that change human nature: it was human nature to die of diseases that are easily treated today. We restore sight to the myopic and hearing to the hard of hearing, and lately even to the blind and deaf. We even transplant complex organs.
We have changed the experience of human life beyond all recognition compared with the ancestral state, and we have grown used to it.
Where does the line between human and transhuman lie? We can discuss any specific proposed change, and some will be bad and some will be good. But any argument that all such changes are inherently bad would equally condemn all the changes that have already occurred.