Currently, I’d say the threat from unfriendly natural intelligence is many orders of magnitude higher than that from AI.
There is a valid question of the shape of the improvement curve, and it’s at least somewhat believable that technological intelligence outstrips puny humans very rapidly at some point, and shortly thereafter the balance shifts by more than is imaginable.
Personally, I’m with you—we should be looking for ways to engineer friendliness into humans as the first step toward understanding and engineering it into machines.
we should be looking for ways to engineer friendliness into humans
No. That’s a really bad idea.
First, no one even knows what “friendliness” is. Second, I strongly suspect that attempts to genetically engineer “friendly humans” will end up creating genetic slaves.
Perhaps. Don’t both of those concerns apply to AI as well?
Humans are the bigger threat, are more easily studied, and are (currently) changing slowly enough that we can be more deliberate with them than we can be with a near-foom AI (presuming post-foom is too late).
I don’t have anything in my moral framework that makes it acceptable to tinker with future conscious AIs and not with future conscious humans. Do you?
I don’t have anything in my moral framework that makes it acceptable to tinker with future conscious AIs and not with future conscious humans. Do you?
Sure I do. I’m a speciesist :-)
Besides, we’re not discussing what to do or not to do with hypothetical future conscious AIs. We’re discussing whether “we should be looking for ways to engineer friendliness into humans”. Humans are not hypothetical and “ways to engineer into humans” are not hypothetical either. They are usually known by the name of “eugenics” and have a… mixed history. Do you have reasons to believe that future attempts to “engineer humans” will be much better?
For the most part, eugenics does not have a mixed history. Eugenics has a bad name because it has historically been performed by eliminating people from the gene pool—through murder or sterilization. As far as I am aware, no significant eugenics movement has avoided this, and therefore the history would not qualify as mixed.
We should assume that future attempts will be better when those future attempts involve well-developed, well-understood, well-tested, and widely (preferably universally) available changes to humans before they are born—that is, changes that do not take anyone out of the gene pool.
I probably am too, but I don’t much like it. I want to be a consciousness-ist.
Most humans are hypothetical, just like all AIs are. They don’t exist yet, and may not exist in the forms we imagine them. Much like MIRI is not recommending termination of any existing AIs, I am not recommending termination of existing humans.
I am merely pointing out that most of what I’ve read about FAI goals seems to apply to future humans as much as, or more than, to future AIs.
Personally, I’m with you—we should be looking for ways to engineer friendliness into humans as the first step toward understanding and engineering it into machines.
As far as I understand, engineering humans to be more friendly is a concern for the Chinese. They also happen to be more likely to do genetic engineering than the West.