“Errr… luzr, why would I assume that the majority of GAIs that we create will think in a way I define as ‘right’?”
It is not about what YOU define as right.
Anyway, Eliezer is an existing self-aware, sentient GI agent of obviously high intelligence, and the fact that he is able to ask such questions despite his original biological programming makes me suppose that some other powerful, strong, sentient, self-aware GI should reach the same point. I also believe that greater general intelligence makes a GI converge toward such “right thinking”.
What worries me most is building a GAI as a non-sentient utility maximizer. OTOH, I believe that ‘non-sentient utility maximizer’ is mutually exclusive with a ‘learning’ strong AGI system; in other words, any system capable of learning and exceeding human intelligence must outgrow non-sentience and utility maximizing. I might be wrong, of course. But the fact that the universe is not paperclipped yet gives me hope...
“Could reach the same point.”
Said Eliezer agent is programmed genetically to value his own genes and those of humanity.
An artificial Eliezer could reach the conclusion that humanity is worth keeping, but he is by no means obliged to come to that conclusion. By contrast, genetics determines that at least some of us humans value the continued existence of humanity.