I do agree that an AI whose goals are underdeveloped, yet which is allowed to exist, is all too likely to become an ethical and/or existential catastrophe, but I have a few questions.
1) If neurosurgery and psychology develop sufficiently, would it be ethically acceptable to align humans (or newborns) to other, more primitive life forms to the extent that we want to align AI to humanity? (I didn't say "in the same way", since the human brain seems to be organized differently from programmable computers, but I mean practically the same change in behaviour and/or goals.)
2) Does anyone who points out that AI would become more intelligent than the whole of human civilization also think that AI would therefore be more valuable than humanity? Shouldn't AI's goals be set with that in mind? And if not — that is, if greater intelligence does not confer greater moral value — doesn't the same reasoning make the answer to 1) "yes"?