I don’t have anything in my moral framework that makes it acceptable to tinker with future conscious AIs and not with future conscious humans. Do you?
Sure I do. I’m a speciesist :-)
Besides, we’re not discussing what to do or not to do with hypothetical future conscious AIs. We’re discussing whether “we should be looking for ways to engineer friendliness into humans”. Humans are not hypothetical and “ways to engineer into humans” are not hypothetical either. They are usually known by the name of “eugenics” and have a… mixed history. Do you have reasons to believe that future attempts to “engineer humans” will be much better?
For the most part, eugenics does not have a mixed history. Eugenics has a bad name because it has historically been carried out by eliminating people from the gene pool, through murder or sterilization. As far as I am aware, no significant eugenics movement has avoided this, and therefore the history would not qualify as mixed.
We should expect future attempts to be better when they involve well-developed, well-understood, well-tested, and widely (preferably universally) available changes made to humans before they are born; that is, changes that do not take anyone out of the gene pool.
I probably am too, but I don’t much like it. I want to be a consciousness-ist.
Most humans are hypothetical, just like all AIs are. They haven’t existed yet, and may not exist in the forms we imagine them. Much like MIRI is not recommending termination of any existing AIs, I am not recommending termination of existing humans.
I am merely pointing out that most of what I’ve read about FAI goals seems to apply to future humans as much or more as to future AIs.