My expectation is that the presently small fields of machine ethics and the neuroscience of morality will grow rapidly and come into contact, producing a distributed research subculture consciously focused on determining the optimal AI value system in light of biological human nature.
Is SIAI working on trying to cause that?
It seems like it would do more good than harm, since it does a lot of work toward FAI and almost none toward AI in general.
Without speaking to its plausibility, I'm pretty happy with a scenario where we err on the side of figuring out FAI before we figure out seed AIs.