I am not quite sure what this story is getting at. I’d guess it’s saying that we need to understand how human morality arises on a more fundamental (computable/programmable?) level before we can be sure that we can program AIs that will adhere to it, but the basis of human morality is (presumably) so much more complicated than the “prime numbers = good” presented here that the analogy is a bit strained. I may be interpreting this entirely wrongly.