> The problem with FAI is that it is nearly impossible for human minds, even highly intelligent ones, to get good results through philosophy alone, without experimental feedback.
I do not understand how this has anything to do with FAI.
> Because of our limited intellects, our best bet is to take the one intelligent system that we know of, the human brain, and simply replicate it in an artificial manner.
This is not in fact “simple” to do. It’s not even clear what level of detail will be needed: just a neural network? Hormones? Glial cells? Modelling of the actual neurons?
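To make the point concrete, here is a minimal sketch (in Python, with made-up parameters; nothing here is a serious emulation proposal) of how much the answer to “what level of detail?” changes the model. The same neuron can be described as a stateless rate unit, as in ordinary neural networks, or as a leaky integrate-and-fire unit whose output depends on membrane dynamics over time; still finer levels (ion channels, hormones, glia) would diverge further.

```python
import math

def rate_unit(weighted_input: float) -> float:
    """Level 1: a stateless sigmoid nonlinearity, as in standard neural nets."""
    return 1.0 / (1.0 + math.exp(-weighted_input))

def lif_spike_count(input_current: float, steps: int = 1000, dt: float = 0.001,
                    tau: float = 0.02, v_thresh: float = 1.0) -> int:
    """Level 2: leaky integrate-and-fire; output emerges from voltage dynamics."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt * (-v / tau + input_current)  # leaky integration of input current
        if v >= v_thresh:                     # threshold crossing produces a spike
            spikes += 1
            v = 0.0                           # voltage resets after each spike
    return spikes

print(rate_unit(0.5))         # level 1: a single activation value
print(lif_spike_count(60.0))  # level 2: a spike count over one simulated second
```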
> So obviously the most promising way to create Friendly AI at this point in time is to replicate the brain of a Friendly Human.
Are you sure you understand what FAI actually refers to? In particular, with p ≈ 1, no living human qualifies as Friendly; and even if one did, we would still need to solve several open problems also needed for FAI (like ensuring that value systems remain unchanged during self-modification) for a Friendly Upload to remain Friendly.
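To illustrate that parenthetical point, here is a toy sketch (entirely hypothetical Python, not anyone’s actual design): the agent “self-improves” by compressing its utility function, each rewrite looks locally harmless, and yet the successor’s preferred action stops matching the original’s. A Friendly Upload that edits its own mind would need a guarantee that nothing like this can happen.

```python
def original_utility(x: float) -> float:
    # The intended value system: outcomes near 1.0 are best.
    return -(x - 1.0) ** 2

def self_modify(utility):
    # "Improvement" step: swap the utility for a coarser, cheaper version.
    # Nothing here checks that preferences are preserved.
    return lambda x: round(utility(x), 1)

def best_action(utility, candidates):
    return max(candidates, key=utility)

candidates = [i / 10 for i in range(31)]  # possible actions: 0.0 .. 3.0
u = original_utility
for generation in range(3):
    print(f"generation {generation}: prefers x = {best_action(u, candidates):.1f}")
    u = self_modify(u)  # the successor inherits the rewritten values

# Generation 0 prefers x = 1.0. After one rewrite the coarsened utility
# ties a whole band of actions at the top, and the tie-break silently
# settles on x = 0.8: a value drift nobody asked for.
```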
Regarding your claims about HBD, eugenics, etc.: evolution is a lot weaker than you think it is, and we know a lot less about genetic influence on intelligence than you seem to think. (See e.g. here or here.) Such a program would be incredibly difficult to get implemented, and so is probably not worth it.
> I do not understand how this has anything to do with FAI.
It is relevant because FAI is currently a branch of pure philosophy, and without constant experimental feedback and contact with reality, philosophy simply cannot deliver useful results the way science can.
> This is not in fact “simple” to do. It’s not even clear what level of detail will be needed: just a neural network? Hormones? Glial cells? Modelling of the actual neurons?
Are there any other current proposals to build AGI that don’t start from the brain? From what I can tell, people don’t even know where to begin with those.
> Are you sure you understand what FAI actually refers to? In particular, with p ≈ 1, no living human qualifies as Friendly; and even if one did, we would still need to solve several open problems also needed for FAI (like ensuring that value systems remain unchanged during self-modification) for a Friendly Upload to remain Friendly.
At some point you have to settle for “good enough” and “friendly enough”. Keep in mind that simply stalling AI until you have your perfect FAI philosophy in place may have a serious cost in terms of human lives lost due to inaction.
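As a back-of-envelope illustration of that trade-off (every number below is made up; only the shape of the comparison matters), compare the lives lost to a decade of stalling against the expected lives lost to a small added risk of unfriendly AI from rushing:

```python
YEARLY_DEATHS = 55_000_000      # assumed: deaths per year a mature AI could prevent
DELAY_YEARS = 10                # assumed: length of the "stall for theory" period
EXTRA_RISK_IF_RUSHED = 0.05     # assumed: added probability of an unfriendly outcome
LIVES_AT_STAKE = 7_000_000_000  # assumed: lives lost in the unfriendly case

cost_of_stalling = YEARLY_DEATHS * DELAY_YEARS
expected_cost_of_rushing = EXTRA_RISK_IF_RUSHED * LIVES_AT_STAKE

print(f"cost of stalling:         {cost_of_stalling:,} lives")
print(f"expected cost of rushing: {expected_cost_of_rushing:,.0f} lives")

# With these made-up inputs the two sides land within the same order of
# magnitude, which is why "good enough" versus "provably Friendly" is a
# live dispute rather than an obvious call either way.
```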
> (like ensuring that value systems remain unchanged during self-modification)
But what if the AI is programmed with a faulty value system by its human creators?
> Such a program would be incredibly difficult to get implemented, and so is probably not worth it.
Fair enough; I was giving it as an example because it is possible to implement now, at least technically, though obviously not politically. Things like genome repair seem more distant in time. Cloning brilliant scientists seems like a better course of action in the long run, and one with fewer controversies. However, this would still leave the problem of what to do with those who are genetically more prone to violence and are a net drag on society.